id | text | source
---|---|---
a42b73913211-3 | Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_issue(issue: dict) → Document[source]¶
Create a Document object from a single GitHub issue.
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property headers: Dict[str, str]¶
property query_params: str¶
Create query parameters for GitHub API.
property url: str¶
Create URL for GitHub API.
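A minimal usage sketch, assuming a personal access token with read access to the repository; the repo name, token, and filter arguments below are placeholders:

```python
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="owner/repository",    # placeholder "owner/name" string
    access_token="ghp_...",     # placeholder personal access token
    state="closed",             # optional filter, included via query_params
    include_prs=False,          # skip pull requests, keep only issues
)
docs = loader.load()
print(docs[0].metadata["title"], docs[0].metadata["url"])
```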
Examples using GitHubIssuesLoader¶
GitHub | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
1b49ddbc0b6e-0 | langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader¶
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)[source]¶
Load blobs in the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
exclude – patterns to exclude from results, use glob syntax
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. “.txt”
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
# Recursively load all files in a directory, except for py or pyc files.
loader = FileSystemBlobLoader(
"/path/to/directory",
glob="**/*.txt", | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
1b49ddbc0b6e-1 | "/path/to/directory",
glob="**/*.txt",
exclude=["**/*.py", "**/*.pyc"]
)
Methods
__init__(path, *[, glob, exclude, suffixes, ...])
Initialize with a path to directory and how to glob over it.
count_matching_files()
Count files that match the pattern without loading them.
yield_blobs()
Yield blobs that match the requested pattern.
__init__(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False) → None[source]¶
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
exclude – patterns to exclude from results, use glob syntax
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. “.txt”
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
# Recursively load all files in a directory, except for py or pyc files. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
1b49ddbc0b6e-2 |
loader = FileSystemBlobLoader(
"/path/to/directory",
glob="**/*.txt",
exclude=["**/*.py", "**/*.pyc"]
)
count_matching_files() → int[source]¶
Count files that match the pattern without loading them.
yield_blobs() → Iterable[Blob][source]¶
Yield blobs that match the requested pattern. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
1680271c9fe4-0 | langchain.document_loaders.unstructured.satisfies_min_unstructured_version¶
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool[source]¶
Check if the installed Unstructured version exceeds the minimum version
for the feature in question. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.satisfies_min_unstructured_version.html |
04d9cdf0b4c6-0 | langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader¶
class langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader(file_path: str, *, transcript_format: TranscriptFormat = TranscriptFormat.TEXT, config: Optional[assemblyai.TranscriptionConfig] = None, api_key: Optional[str] = None)[source]¶
Loader for AssemblyAI audio transcripts.
It uses the AssemblyAI API to transcribe audio files
and loads the transcribed text into one or more Documents,
depending on the specified format.
To use, you should have the assemblyai python package installed, and the
environment variable ASSEMBLYAI_API_KEY set with your API key.
Alternatively, the API key can also be passed as an argument.
Audio files can be specified via a URL or a local file path.
Initializes the AssemblyAI AudioTranscriptLoader.
Parameters
file_path – A URL or a local file path.
transcript_format – Transcript format to use.
See class TranscriptFormat for more info.
config – Transcription options and features. If None is given,
the Transcriber’s default configuration will be used.
api_key – AssemblyAI API key.
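A minimal sketch, assuming the assemblyai package is installed and ASSEMBLYAI_API_KEY is set in the environment; the audio URL is a placeholder:

```python
from langchain.document_loaders import AssemblyAIAudioTranscriptLoader
from langchain.document_loaders.assemblyai import TranscriptFormat

# file_path may be a URL or a local path (placeholder below).
loader = AssemblyAIAudioTranscriptLoader(
    file_path="https://example.com/sample_audio.mp3",
    transcript_format=TranscriptFormat.TEXT,
)
docs = loader.load()  # blocks until the transcription is finished
print(docs[0].page_content[:200])
```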
Methods
__init__(file_path, *[, transcript_format, ...])
Initializes the AssemblyAI AudioTranscriptLoader.
lazy_load()
A lazy loader for Documents.
load()
Transcribes the audio file and loads the transcript into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, transcript_format: TranscriptFormat = TranscriptFormat.TEXT, config: Optional[assemblyai.TranscriptionConfig] = None, api_key: Optional[str] = None)[source]¶
Initializes the AssemblyAI AudioTranscriptLoader.
Parameters
file_path – A URL or a local file path. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader.html |
04d9cdf0b4c6-1 |
transcript_format – Transcript format to use.
See class TranscriptFormat for more info.
config – Transcription options and features. If None is given,
the Transcriber’s default configuration will be used.
api_key – AssemblyAI API key.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Transcribes the audio file and loads the transcript into documents.
It uses the AssemblyAI API to transcribe the audio file and blocks until
the transcription is finished.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AssemblyAIAudioTranscriptLoader¶
AssemblyAI Audio Transcripts | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader.html |
0a4449a81ee2-0 | langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured.
The file loader
uses the unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredFileIOLoader(f, mode="elements", strategy="fast")
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
0a4449a81ee2-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileIOLoader¶
Google Drive | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
741343e573eb-0 | langchain.document_loaders.dataframe.DataFrameLoader¶
class langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Pandas DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
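A short sketch of the column mapping described above; the DataFrame contents are made up:

```python
import pandas as pd
from langchain.document_loaders import DataFrameLoader

df = pd.DataFrame(
    {
        "text": ["First document body.", "Second document body."],
        "author": ["alice", "bob"],
    }
)

# The "text" column becomes page_content; the remaining columns land in metadata.
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()
print(docs[0].page_content, docs[0].metadata)
```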
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DataFrameLoader¶
Pandas DataFrame | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html |
60c45d83b5c5-0 | langchain.document_loaders.notebook.concatenate_cells¶
langchain.document_loaders.notebook.concatenate_cells(cell: dict, include_outputs: bool, max_output_length: int, traceback: bool) → str[source]¶
Combine cells information in a readable format ready to be used.
Parameters
cell – A dictionary
include_outputs – Whether to include the outputs of the cell.
max_output_length – Maximum length of the output to be displayed.
traceback – Whether to return a traceback of the error.
Returns
A string with the cell information. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.concatenate_cells.html |
a3e08bad8ab1-0 | langchain.document_loaders.parsers.pdf.extract_from_images_with_rapidocr¶
langchain.document_loaders.parsers.pdf.extract_from_images_with_rapidocr(images: Sequence[Union[Iterable[ndarray], bytes]]) → str[source]¶
Extract text from images with RapidOCR.
Parameters
images – Images to extract text from.
Returns
Text extracted from images.
Raises
ImportError – If rapidocr-onnxruntime package is not installed. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.extract_from_images_with_rapidocr.html |
9397b3583850-0 | langchain.document_loaders.pdf.AmazonTextractPDFLoader¶
class langchain.document_loaders.pdf.AmazonTextractPDFLoader(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None, headers: Optional[Dict] = None)[source]¶
Load PDF files from a local file system, HTTP or S3.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Amazon Textract service.
Example
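A minimal sketch, assuming AWS credentials are available through the default boto3 chain; the sample path matches the one used in the AmazonTextractPDFParser section later in this reference:

```python
from langchain.document_loaders import AmazonTextractPDFLoader

# A local file, HTTP(S) URL, or s3:// path; multi-page PDFs must reside on S3.
loader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")
documents = loader.load()
print(documents[0].page_content[:200])
```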
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
Attributes
source
Methods
__init__(file_path[, textract_features, ...])
Initialize the loader.
lazy_load()
Lazy load documents
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
9397b3583850-1 |
__init__(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None, headers: Optional[Dict] = None) → None[source]¶
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
lazy_load() → Iterator[Document][source]¶
Lazy load documents
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AmazonTextractPDFLoader¶
Amazon Textract | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
d17c4cfe304d-0 | langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured.
The file loader uses the
unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("example.pdf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
d17c4cfe304d-1 |
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileLoader¶
Unstructured
Unstructured File | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
895801f50466-0 | langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Base Loader class for PDF files.
If the file is a web path, it will download it to a temporary file, use it, then clean up the temporary file after completion.
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html |
da3095eb36cc-0 | langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader¶
class langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load PDF files as HTML content using PDFMiner.
Initialize with a file path.
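A minimal sketch, assuming pdfminer.six is installed; the file path is a placeholder:

```python
from langchain.document_loaders import PDFMinerPDFasHTMLLoader

loader = PDFMinerPDFasHTMLLoader("example.pdf")  # placeholder path
docs = loader.load()

# The PDF comes back as a single Document whose page_content is HTML markup.
html_content = docs[0].page_content
print(html_content[:200])
```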
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html |
0ab446d04231-0 | langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raise an error if the Unstructured version does not exceed the
specified minimum. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html |
4f741b82fc39-0 | langchain.document_loaders.parsers.pdf.PDFMinerParser¶
class langchain.document_loaders.parsers.pdf.PDFMinerParser(extract_images: bool = False, *, concatenate_pages: bool = True)[source]¶
Parse PDF using PDFMiner.
Initialize a parser based on PDFMiner.
Parameters
extract_images – Whether to extract images from PDF.
concatenate_pages – If True, concatenate all PDF pages into a single
document. Otherwise, return one document per page.
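A minimal sketch of using the parser directly on a Blob, assuming pdfminer.six is installed; the file path is a placeholder:

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PDFMinerParser

parser = PDFMinerParser(concatenate_pages=False)  # one Document per page
blob = Blob.from_path("example.pdf")              # placeholder path

for doc in parser.lazy_parse(blob):
    print(doc.metadata, len(doc.page_content))
```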
Methods
__init__([extract_images, concatenate_pages])
Initialize a parser based on PDFMiner.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(extract_images: bool = False, *, concatenate_pages: bool = True)[source]¶
Initialize a parser based on PDFMiner.
Parameters
extract_images – Whether to extract images from PDF.
concatenate_pages – If True, concatenate all PDF pages into a single
document. Otherwise, return one document per page.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFMinerParser.html |
b9f0c9428031-0 | langchain.document_loaders.dropbox.DropboxLoader¶
class langchain.document_loaders.dropbox.DropboxLoader[source]¶
Bases: BaseLoader, BaseModel
Load files from Dropbox.
In addition to common files such as text and PDF files, it also supports
Dropbox Paper files.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dropbox_access_token: str [Required]¶
Dropbox access token.
param dropbox_file_paths: Optional[List[str]] = None¶
The file paths to load from.
param dropbox_folder_path: Optional[str] = None¶
The folder path to load from.
param recursive: bool = False¶
Flag to indicate whether to load files recursively from subfolders.
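A minimal sketch, assuming the dropbox package is installed; the access token is a placeholder:

```python
from langchain.document_loaders import DropboxLoader

loader = DropboxLoader(
    dropbox_access_token="<your-dropbox-access-token>",  # placeholder
    dropbox_folder_path="",  # "" loads from the root folder
    recursive=False,
)
docs = loader.load()
```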
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
b9f0c9428031-1 |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
b9f0c9428031-2 |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DropboxLoader¶
Dropbox | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
8e926cc3e5db-0 | langchain.document_loaders.pubmed.PubMedLoader¶
class langchain.document_loaders.pubmed.PubMedLoader(query: str, load_max_docs: Optional[int] = 3)[source]¶
Load from the PubMed biomedical library.
query¶
The query to be passed to the PubMed API.
load_max_docs¶
The maximum number of documents to load.
Initialize the PubMedLoader.
Parameters
query – The query to be passed to the PubMed API.
load_max_docs – The maximum number of documents to load.
Defaults to 3.
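A minimal sketch; the query string is arbitrary:

```python
from langchain.document_loaders import PubMedLoader

loader = PubMedLoader("chatgpt", load_max_docs=3)
docs = loader.load()
print(len(docs))
print(docs[0].metadata)
```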
Methods
__init__(query[, load_max_docs])
Initialize the PubMedLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, load_max_docs: Optional[int] = 3)[source]¶
Initialize the PubMedLoader.
Parameters
query – The query to be passed to the PubMed API.
load_max_docs – The maximum number of documents to load.
Defaults to 3.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PubMedLoader¶
PubMed | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pubmed.PubMedLoader.html |
1a21a80d2e02-0 | langchain.document_loaders.youtube.GoogleApiYoutubeLoader¶
class langchain.document_loaders.youtube.GoogleApiYoutubeLoader(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]¶
Load all Videos from a YouTube Channel.
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally, you have to provide either a channel name or a list of video ids.
https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
docs = loader.load()
Attributes
add_video_info
captions_language
channel_name
continue_on_failure
video_ids
google_api_client
Methods
__init__(google_api_client[, channel_name, ...])
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
validate_channel_or_videoIds_is_set(values)
Validate that either channel_name or video_ids is set, but not both. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html |
1a21a80d2e02-1 |
__init__(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False) → None¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]¶
Validate that either channel_name or video_ids is set, but not both.
Examples using GoogleApiYoutubeLoader¶
YouTube
YouTube transcripts | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html |
5b081da61ddd-0 | langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser¶
class langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None)[source]¶
Send PDF files to Amazon Textract and parse them.
For parsing multi-page PDFs, they have to reside on S3.
The AmazonTextractPDFLoader calls the
[Amazon Textract Service](https://aws.amazon.com/textract/)
to convert PDFs into a Document structure.
Single and multi-page documents are supported with up to 3000 pages
and 512 MB of size.
For the call to be successful an AWS account is required,
similar to the
[AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
requirements.
Besides the AWS configuration, it is very similar to the other PDF
loaders, while also supporting JPEG, PNG and TIFF and non-native
PDF formats.
from langchain.document_loaders import AmazonTextractPDFLoader
loader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")
documents = loader.load()
One feature is the linearization of the output.
When using the features LAYOUT, FORMS or TABLES together with Textract
```python
from langchain.document_loaders import AmazonTextractPDFLoader
# you can mix and match each of the features
loader = AmazonTextractPDFLoader(
    "example_data/alejandro_rosalez_sample-small.jpeg",
    textract_features=["TABLES", "LAYOUT"],
)
documents = loader.load()
```
it will generate output that formats the text in reading order and
try to output the information in a tabular structure or
output the key/value pairs with a colon (key: value). | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
5b081da61ddd-1 |
This helps most LLMs to achieve better accuracy when
processing these texts.
Initializes the parser.
Parameters
textract_features – Features to be used for extraction, each feature
should be passed as an int that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client
Methods
__init__([textract_features, client])
Initializes the parser.
lazy_parse(blob)
Iterates over the Blob pages and returns an Iterator with a Document for each page, like the other parsers. For a multi-page document, blob.path has to be set to the S3 URI; for single-page documents, blob.data is used.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None) → None[source]¶
Initializes the parser.
Parameters
textract_features – Features to be used for extraction, each feature
should be passed as an int that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Iterates over the Blob pages and returns an Iterator with a Document
for each page, like the other parsers. For a multi-page document, blob.path
has to be set to the S3 URI; for single-page documents, blob.data is used.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
597d54c93086-0 | langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Load local Airbyte json files.
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
Attributes
file_path
Path to the directory containing the json files.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteJSONLoader¶
Airbyte
Airbyte JSON | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html |
5799583ccf0c-0 | langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS directory.
Initialize with bucket and key name.
Parameters
project_name – The ID of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
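A minimal sketch, assuming the google-cloud-storage package is installed and the active credentials can read the bucket; project, bucket, and prefix are placeholders:

```python
from langchain.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(
    project_name="my-gcp-project",  # placeholder project ID
    bucket="my-bucket",             # placeholder bucket name
    prefix="reports/",              # only load blobs under this prefix
)
docs = loader.load()
```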
Methods
__init__(project_name, bucket[, prefix, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with bucket and key name.
Parameters
project_name – The ID of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
5799583ccf0c-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
c0b5e56ee524-0 | langchain.document_loaders.github.BaseGitHubLoader¶
class langchain.document_loaders.github.BaseGitHubLoader[source]¶
Bases: BaseLoader, BaseModel, ABC
Load GitHub repository Issues.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param github_api_url: str = 'https://api.github.com'¶
URL of GitHub API
param repo: str [Required]¶
Name of repository
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html |
c0b5e56ee524-1 |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html |
c0b5e56ee524-2 |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property headers: Dict[str, str]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html |
619fdb59fe90-0 | langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Convert a string or list of strings to a list of Documents with metadata. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html |
5f8b93294a92-0 | langchain.document_loaders.conllu.CoNLLULoader¶
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)[source]¶
Load CoNLL-U files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CoNLLULoader¶
CoNLL-U | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.conllu.CoNLLULoader.html |
36f158257b4c-0 | langchain.document_loaders.larksuite.LarkSuiteDocLoader¶
class langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]¶
Load from LarkSuite (FeiShu).
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to load the LarkSuite.
access_token – The access_token to use.
document_id – The document_id to load.
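A minimal sketch; the domain, access token, and document id below are placeholders:

```python
from langchain.document_loaders.larksuite import LarkSuiteDocLoader

DOMAIN = "https://open.larksuite.com"           # placeholder tenant domain
ACCESS_TOKEN = "<tenant-or-user-access-token>"  # placeholder
DOCUMENT_ID = "<document-id>"                   # placeholder

loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)
docs = loader.load()
```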
Methods
__init__(domain, access_token, document_id)
Initialize with domain, access_token (tenant / user), and document_id.
lazy_load()
Lazy load LarkSuite (FeiShu) document.
load()
Load LarkSuite (FeiShu) document.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(domain: str, access_token: str, document_id: str)[source]¶
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to load the LarkSuite.
access_token – The access_token to use.
document_id – The document_id to load.
lazy_load() → Iterator[Document][source]¶
Lazy load LarkSuite (FeiShu) document.
load() → List[Document][source]¶
Load LarkSuite (FeiShu) document.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using LarkSuiteDocLoader¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html |
36f158257b4c-1 |
LarkSuite (FeiShu) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html |
12920ccf165c-0 | langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured API.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredAPIFileIOLoader(
        f, mode="elements", strategy="fast", api_key="MY_API_KEY",
    )
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__(file[, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load() | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
12920ccf165c-1 |
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
7b79db825667-0 | langchain.document_loaders.ifixit.IFixitLoader¶
class langchain.document_loaders.ifixit.IFixitLoader(web_path: str)[source]¶
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A’s
and wikis from devices on iFixit using their open APIs and web scraping.
Initialize with a web path.
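A minimal sketch; the URL below is a placeholder for any iFixit guide, device, or answers page:

```python
from langchain.document_loaders import IFixitLoader

loader = IFixitLoader("https://www.ifixit.com/Guide/iPad+9+Battery+Replacement/151279")
docs = loader.load()
print(docs[0].metadata, docs[0].page_content[:200])
```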
Methods
__init__(web_path)
Initialize with a web path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_device([url_override, include_guides])
Loads a device
load_guide([url_override])
Load a guide
load_questions_and_answers([url_override])
Load a list of questions and answers.
load_suggestions([query, doc_type])
Load suggestions.
__init__(web_path: str)[source]¶
Initialize with a web path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html |
7b79db825667-1 |
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[Document][source]¶
Loads a device
Parameters
url_override – A URL to override the default URL.
include_guides – Whether to include guides linked to from the device.
Defaults to True.
Returns:
load_guide(url_override: Optional[str] = None) → List[Document][source]¶
Load a guide
Parameters
url_override – A URL to override the default URL.
Returns: List[Document]
load_questions_and_answers(url_override: Optional[str] = None) → List[Document][source]¶
Load a list of questions and answers.
Parameters
url_override – A URL to override the default URL.
Returns: List[Document]
static load_suggestions(query: str = '', doc_type: str = 'all') → List[Document][source]¶
Load suggestions.
Parameters
query – A query string
doc_type – The type of document to search for. Can be one of “all”,
“device”, “guide”, “teardown”, “answer”, “wiki”.
Returns:
Examples using IFixitLoader¶
iFixit | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html |
f2f0b3f04b49-0 | langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
row – dictionary containing message information. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html |
3b17823e9312-0 | langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load online PDF.
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)¶
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html |
3619357ce9f9-0 | langchain.document_loaders.brave_search.BraveSearchLoader¶
class langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Load with Brave Search engine.
Initializes the BraveLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
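A minimal sketch; the API key is a placeholder, and passing a result count through search_kwargs is an assumption about the Brave Search API:

```python
from langchain.document_loaders import BraveSearchLoader

loader = BraveSearchLoader(
    query="obama middle name",
    api_key="<brave-search-api-key>",  # placeholder
    search_kwargs={"count": 3},        # assumption: forwarded as Brave API query params
)
docs = loader.load()
print([doc.metadata for doc in docs])
```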
Methods
__init__(query, api_key[, search_kwargs])
Initializes the BraveLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Initializes the BraveLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BraveSearchLoader¶
Brave Search | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html |
8ae5c0149203-0 | langchain.document_loaders.pdf.UnstructuredPDFLoader¶
class langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load PDF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("example.pdf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pdf
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
Load TOML files.
It can load a single source file or several files in a single
directory.
Initialize the TomlLoader with a source file or directory.
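Example
A minimal usage sketch; the file path below is a placeholder, and a directory containing .toml files works the same way.
from langchain.document_loaders.toml import TomlLoader

loader = TomlLoader("pyproject.toml")
docs = loader.load()
# Or stream documents one at a time without loading everything up front:
for doc in loader.lazy_load():
    print(doc.metadata)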
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(source: Union[str, Path])[source]¶
Initialize the TomlLoader with a source file or directory.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TomlLoader¶
TOML
langchain.document_loaders.image.UnstructuredImageLoader¶
class langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load PNG and JPG files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredImageLoader
loader = UnstructuredImageLoader(
    "example.png", mode="elements", strategy="fast"
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-image
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredImageLoader¶
Images
langchain.document_loaders.pdf.DocumentIntelligenceLoader¶
class langchain.document_loaders.pdf.DocumentIntelligenceLoader(file_path: str, client: Any, model: str = 'prebuilt-document', headers: Optional[Dict] = None)[source]¶
Loads a PDF with Azure Document Intelligence
Initialize the object for file processing with Azure Document Intelligence
(formerly Form Recognizer).
This constructor initializes a DocumentIntelligenceParser object to be used
for parsing files using the Azure Document Intelligence API. The load method
generates a Document node including metadata (source blob and page number)
for each page.
Parameters
file_path (str) – The path to the file that needs to be parsed.
client (Any) – A DocumentAnalysisClient to perform the analysis of the blob.
model (str) – The model name or ID to be used for form recognition in Azure.
>>> obj = DocumentIntelligenceLoader(
... file_path="path/to/file",
... client=client,
... model="prebuilt-document"
... )
Attributes
source
Methods
__init__(file_path, client[, model, headers])
Initialize the object for file processing with Azure Document Intelligence (formerly Form Recognizer).
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, client: Any, model: str = 'prebuilt-document', headers: Optional[Dict] = None) → None[source]¶
Initialize the object for file processing with Azure Document Intelligence
(formerly Form Recognizer).
This constructor initializes a DocumentIntelligenceParser object to be used
for parsing files using the Azure Document Intelligence API. The load method
generates a Document node including metadata (source blob and page number)
for each page.
Parameters
file_path (str) – The path to the file that needs to be parsed.
client (Any) – A DocumentAnalysisClient to perform the analysis of the blob.
model (str) – The model name or ID to be used for form recognition in Azure.
>>> obj = DocumentIntelligenceLoader(
... file_path="path/to/file",
... client=client,
... model="prebuilt-document"
... )
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DocumentIntelligenceLoader¶
Azure Document Intelligence
langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader¶
class langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]¶
Load from Azure Blob Storage files.
Initialize with connection string, container and blob name.
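Example
A minimal usage sketch; the connection string, container name, and blob name below are placeholders.
from langchain.document_loaders import AzureBlobStorageFileLoader

loader = AzureBlobStorageFileLoader(
    conn_str="<azure_storage_connection_string>",
    container="<container_name>",
    blob_name="report.pdf",
)
docs = loader.load()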
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
blob
Blob name.
Methods
__init__(conn_str, container, blob_name)
Initialize with connection string, container and blob name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, blob_name: str)[source]¶
Initialize with connection string, container and blob name.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AzureBlobStorageFileLoader¶
Azure Blob Storage
Azure Blob Storage File
langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Merge documents from a list of loaders
Initialize with a list of loaders
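Example
A minimal usage sketch; the file path and URL are placeholders, and any document loader instances can be merged.
from langchain.document_loaders import TextLoader, WebBaseLoader
from langchain.document_loaders.merge import MergedDataLoader

loader_all = MergedDataLoader(
    loaders=[
        TextLoader("notes.txt"),
        WebBaseLoader("https://example.com"),
    ]
)
docs = loader_all.load()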
Methods
__init__(loaders)
Initialize with a list of loaders
lazy_load()
Lazy load docs from each individual loader.
load()
Load docs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(loaders: List)[source]¶
Initialize with a list of loaders
lazy_load() → Iterator[Document][source]¶
Lazy load docs from each individual loader.
load() → List[Document][source]¶
Load docs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MergedDataLoader¶
MergeDocLoader
langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False)[source]¶
Load PDF using pypdfium2 and chunks at character level.
Initialize with a file path.
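Example
A minimal usage sketch; the file path is a placeholder.
from langchain.document_loaders import PyPDFium2Loader

loader = PyPDFium2Loader("example.pdf")
pages = loader.load()  # one Document per page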
Attributes
source
Methods
__init__(file_path, *[, headers, extract_images])
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.pdf.PDFMinerLoader¶
class langchain.document_loaders.pdf.PDFMinerLoader(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False, concatenate_pages: bool = True)[source]¶
Load PDF files using PDFMiner.
Initialize with file path.
Parameters
extract_images – Whether to extract images from PDF.
concatenate_pages – If True, concatenate all PDF pages into a single
document. Otherwise, return one document per page.
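Example
A minimal usage sketch; the file path is a placeholder.
from langchain.document_loaders import PDFMinerLoader

# Return one Document per page instead of a single concatenated Document.
loader = PDFMinerLoader("example.pdf", concatenate_pages=False)
docs = loader.load()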
Attributes
source
Methods
__init__(file_path, *[, headers, ...])
Initialize with file path.
lazy_load()
Lazily load documents.
load()
Eagerly load the content.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False, concatenate_pages: bool = True) → None[source]¶
Initialize with file path.
Parameters
extract_images – Whether to extract images from PDF.
concatenate_pages – If True, concatenate all PDF pages into a single
document. Otherwise, return one document per page.
lazy_load() → Iterator[Document][source]¶
Lazily load documents.
load() → List[Document][source]¶
Eagerly load the content.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.discord.DiscordChatLoader¶
class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Load Discord chat logs.
Initialize with a Pandas DataFrame containing chat logs.
Parameters
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to “ID”.
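Example
A minimal usage sketch; the toy DataFrame below is illustrative, and the "Content" column name is an assumption rather than a required schema.
import pandas as pd

from langchain.document_loaders.discord import DiscordChatLoader

chat_log = pd.DataFrame(
    {"ID": ["user_1", "user_2"], "Content": ["hello", "hi there"]}  # "Content" is an assumed column
)
loader = DiscordChatLoader(chat_log, user_id_col="ID")
docs = loader.load()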
Methods
__init__(chat_log[, user_id_col])
Initialize with a Pandas DataFrame containing chat logs.
lazy_load()
A lazy loader for Documents.
load()
Load all chat messages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Initialize with a Pandas DataFrame containing chat logs.
Parameters
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to “ID”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load all chat messages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DiscordChatLoader¶
Discord
langchain.document_loaders.helpers.FileEncoding¶
class langchain.document_loaders.helpers.FileEncoding(encoding: Optional[str], confidence: float, language: Optional[str])[source]¶
File encoding as the NamedTuple.
Create new instance of FileEncoding(encoding, confidence, language)
Attributes
confidence
The confidence of the encoding.
encoding
The encoding of the file.
language
The language of the file.
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
langchain.document_loaders.directory.DirectoryLoader¶
class langchain.document_loaders.directory.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Type[~langchain.document_loaders.text.TextLoader], ~typing.Type[~langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: ~typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4, *, sample_size: int = 0, randomize_sample: bool = False, sample_seed: ~typing.Optional[int] = None)[source]¶
Load from a directory.
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory.
glob – Glob pattern to use to find files. Defaults to “**/[!.]*”
(all files except hidden).
silent_errors – Whether to silently ignore errors. Defaults to False.
load_hidden – Whether to load hidden files. Defaults to False.
loader_cls – Loader class to use for loading files.
Defaults to UnstructuredFileLoader.
loader_kwargs – Keyword arguments to pass to loader_cls. Defaults to None.
recursive – Whether to recursively search for files. Defaults to False.
show_progress – Whether to show a progress bar. Defaults to False.
use_multithreading – Whether to use multithreading. Defaults to False.
max_concurrency – The maximum number of threads to use. Defaults to 4.
sample_size – The maximum number of files you would like to load from the
directory.
randomize_sample – Shuffle the files to get a random sample.
sample_seed – Set the seed of the random shuffle for reproducibility.
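Example
A minimal usage sketch; the directory path and glob are placeholders.
from langchain.document_loaders import DirectoryLoader, TextLoader

# Recursively load every Markdown file under ./docs with TextLoader.
loader = DirectoryLoader(
    "./docs",
    glob="**/*.md",
    loader_cls=TextLoader,
    recursive=True,
    show_progress=True,
)
docs = loader.load()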
Methods
__init__(path[, glob, silent_errors, ...])
Initialize with a path to directory and how to glob over it.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_file(item, path, docs, pbar)
Load a file.
__init__(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Type[~langchain.document_loaders.text.TextLoader], ~typing.Type[~langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: ~typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4, *, sample_size: int = 0, randomize_sample: bool = False, sample_seed: ~typing.Optional[int] = None)[source]¶
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory.
glob – Glob pattern to use to find files. Defaults to “**/[!.]*”
(all files except hidden).
silent_errors – Whether to silently ignore errors. Defaults to False.
load_hidden – Whether to load hidden files. Defaults to False.
loader_cls – Loader class to use for loading files.
Defaults to UnstructuredFileLoader.
loader_kwargs – Keyword arguments to pass to loader_cls. Defaults to None.
recursive – Whether to recursively search for files. Defaults to False.
show_progress – Whether to show a progress bar. Defaults to False.
use_multithreading – Whether to use multithreading. Defaults to False.
max_concurrency – The maximum number of threads to use. Defaults to 4.
sample_size – The maximum number of files you would like to load from the
directory.
randomize_sample – Shuffle the files to get a random sample.
sample_seed – Set the seed of the random shuffle for reproducibility.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_file(item: Path, path: Path, docs: List[Document], pbar: Optional[Any]) → None[source]¶
Load a file.
Parameters
item – File path.
path – Directory path.
docs – List of documents to append to.
pbar – Progress bar. Defaults to None.
Examples using DirectoryLoader¶
StarRocks
langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', post_processors: Optional[List[Callable]] = None, **unstructured_kwargs: Any)[source]¶
Base Loader that uses Unstructured.
Initialize with file path.
Methods
__init__([mode, post_processors])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mode: str = 'single', post_processors: Optional[List[Callable]] = None, **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.embaas.EmbaasBlobLoader¶
class langchain.document_loaders.embaas.EmbaasBlobLoader[source]¶
Bases: BaseEmbaasLoader, BaseBlobParser
Load Embaas blob.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parses the blob lazily.
Parameters
blob – The blob to parse.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasBlobLoader¶
Embaas
langchain.document_loaders.parsers.docai.DocAIParsingResults¶
class langchain.document_loaders.parsers.docai.DocAIParsingResults(source_path: str, parsed_path: str)[source]¶
A dataclass to store Document AI parsing results.
Attributes
source_path
parsed_path
Methods
__init__(source_path, parsed_path)
__init__(source_path: str, parsed_path: str) → None¶
langchain.document_loaders.airbyte.AirbyteHubspotLoader¶
class langchain.document_loaders.airbyte.AirbyteHubspotLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Hubspot using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
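Example
A minimal usage sketch; the config shape follows the Airbyte Hubspot source connector spec, and all values (date, token, stream name) are placeholders.
from langchain.document_loaders.airbyte import AirbyteHubspotLoader

config = {
    "start_date": "2023-01-01T00:00:00Z",
    "credentials": {
        "credentials_title": "Private App Credentials",
        "access_token": "<your_private_app_access_token>",
    },
}
loader = AirbyteHubspotLoader(config=config, stream_name="products")
docs = loader.load()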
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteHubspotLoader¶
Airbyte Hubspot
langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False, extract_images: bool = False)[source]¶
Load a directory with PDF files using pypdf and chunks at character level.
Loader also stores page numbers in metadata.
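Example
A minimal usage sketch; the directory path is a placeholder.
from langchain.document_loaders import PyPDFDirectoryLoader

loader = PyPDFDirectoryLoader("reports/")
docs = loader.load()  # one Document per page, with the page number in metadata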
Methods
__init__(path[, glob, silent_errors, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False, extract_images: bool = False)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
Load MediaWiki dump from an XML file.
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
namespaces (List[int],optional) – The namespace of pages you want to parse.
See https://www.mediawiki.org/wiki/Help:Namespaces#Localisation
for a list of all common namespaces
skip_redirects (bool, optional) – True to skip pages that redirect to other pages,
False to keep them. False by default
stop_on_error (bool, optional) – False to skip over pages that cause parsing errors,
True to stop. True by default
Methods
__init__(file_path[, encoding, namespaces, ...])
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MWDumpLoader¶
MediaWikiDump
langchain.document_loaders.obs_file.OBSFileLoader¶
class langchain.document_loaders.obs_file.OBSFileLoader(bucket: str, key: str, client: Any = None, endpoint: str = '', config: Optional[dict] = None)[source]¶
Load from the Huawei OBS file.
Initialize the OBSFileLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
key (str) – The name of the object in the OBS bucket.
client (ObsClient, optional) – An instance of the ObsClient to connect to OBS.
endpoint (str, optional) – The endpoint URL of your OBS bucket. This parameter is mandatory if client is not provided.
config (dict, optional) – The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if client is provided. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
Raises
ValueError – If the esdk-obs-python package is not installed.
TypeError – If the provided client is not an instance of ObsClient.
ValueError – If client is not provided, but endpoint is missing.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSFileLoader with a new client:
```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config)
```
To create a new OBSFileLoader with an existing client:
```
from obs import ObsClient
# Assuming you have an existing ObsClient object 'obs_client'
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client)
```
To create a new OBSFileLoader without an existing client:
`
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url")
`
Methods
__init__(bucket, key[, client, endpoint, config])
Initialize the OBSFileLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, key: str, client: Any = None, endpoint: str = '', config: Optional[dict] = None) → None[source]¶
Initialize the OBSFileLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
key (str) – The name of the object in the OBS bucket.
client (ObsClient, optional) – An instance of the ObsClient to connect to OBS.
endpoint (str, optional) – The endpoint URL of your OBS bucket. This parameter is mandatory if client is not provided.
config (dict, optional) – The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if client is provided. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
Raises
ValueError – If the esdk-obs-python package is not installed.
TypeError – If the provided client is not an instance of ObsClient.
ValueError – If client is not provided, but endpoint is missing.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSFileLoader with a new client:
```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config)
```
To create a new OBSFileLoader with an existing client:
```
from obs import ObsClient
# Assuming you have an existing ObsClient object 'obs_client'
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client)
```
To create a new OBSFileLoader without an existing client:
`
obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url")
`
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OBSFileLoader¶
Huawei OBS File
langchain.document_loaders.xorbits.XorbitsLoader¶
class langchain.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Xorbits DataFrame.
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
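Example
A minimal usage sketch with a toy in-memory DataFrame.
import xorbits.pandas as pd

from langchain.document_loaders import XorbitsLoader

df = pd.DataFrame({"text": ["first document", "second document"], "author": ["a", "b"]})
loader = XorbitsLoader(df, page_content_column="text")
docs = loader.load()  # each row becomes a Document whose page_content comes from the "text" column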
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using XorbitsLoader¶
Xorbits Pandas DataFrame
langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Load from IUGU.
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
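Example
A minimal usage sketch; "invoices" is an illustrative resource name and the token is a placeholder.
from langchain.document_loaders import IuguLoader

loader = IuguLoader("invoices", api_token="<IUGU_API_TOKEN>")
docs = loader.load()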
Methods
__init__(resource[, api_token])
Initialize the IUGU resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, api_token: Optional[str] = None) → None[source]¶
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using IuguLoader¶
Iugu
langchain.document_loaders.google_speech_to_text.GoogleSpeechToTextLoader¶
class langchain.document_loaders.google_speech_to_text.GoogleSpeechToTextLoader(project_id: str, file_path: str, location: str = 'us-central1', recognizer_id: str = '_', config: Optional[RecognitionConfig] = None, config_mask: Optional[FieldMask] = None)[source]¶
Loader for Google Cloud Speech-to-Text audio transcripts.
It uses the Google Cloud Speech-to-Text API to transcribe audio files
and loads the transcribed text into one or more Documents,
depending on the specified format.
To use, you should have the google-cloud-speech python package installed.
Audio files can be specified via a Google Cloud Storage uri or a local file path.
For a detailed explanation of Google Cloud Speech-to-Text, refer to the product
documentation.
https://cloud.google.com/speech-to-text
Initializes the GoogleSpeechToTextLoader.
Parameters
project_id – Google Cloud Project ID.
file_path – A Google Cloud Storage URI or a local file path.
location – Speech-to-Text recognizer location.
recognizer_id – Speech-to-Text recognizer id.
config – Recognition options and features.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognitionConfig
config_mask – The list of fields in config that override the values in the
default_recognition_config of the recognizer during this
recognition request.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognizeRequest
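Example
A minimal usage sketch; the project ID and Cloud Storage URI are placeholders, and a local audio file path also works.
from langchain.document_loaders.google_speech_to_text import GoogleSpeechToTextLoader

loader = GoogleSpeechToTextLoader(
    project_id="my-gcp-project",
    file_path="gs://my-bucket/meeting.wav",
)
docs = loader.load()
print(docs[0].page_content)  # the transcript text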
Methods
__init__(project_id, file_path[, location, ...])
Initializes the GoogleSpeechToTextLoader.
lazy_load()
A lazy loader for Documents.
load()
Transcribes the audio file and loads the transcript into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_id: str, file_path: str, location: str = 'us-central1', recognizer_id: str = '_', config: Optional[RecognitionConfig] = None, config_mask: Optional[FieldMask] = None)[source]¶
Initializes the GoogleSpeechToTextLoader.
Parameters
project_id – Google Cloud Project ID.
file_path – A Google Cloud Storage URI or a local file path.
location – Speech-to-Text recognizer location.
recognizer_id – Speech-to-Text recognizer id.
config – Recognition options and features.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognitionConfig
config_mask – The list of fields in config that override the values in the
default_recognition_config of the recognizer during this
recognition request.
For more information:
https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v2.types.RecognizeRequest
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Transcribes the audio file and loads the transcript into documents.
It uses the Google Cloud Speech-to-Text API to transcribe the audio file
and blocks until the transcription is finished.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GoogleSpeechToTextLoader¶
Google Cloud Speech-to-Text
langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Load from Tencent Cloud COS file.
Initialize with COS config, bucket and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
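Example
A minimal usage sketch; the region, credentials, bucket, and key are placeholders.
from qcloud_cos import CosConfig

from langchain.document_loaders import TencentCOSFileLoader

conf = CosConfig(
    Region="ap-guangzhou",
    SecretId="<your_secret_id>",
    SecretKey="<your_secret_key>",
)
loader = TencentCOSFileLoader(conf=conf, bucket="examplebucket-1250000000", key="docs/report.pdf")
docs = loader.load()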
Methods
__init__(conf, bucket, key)
Initialize with COS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, key: str)[source]¶
Initialize with COS config, bucket and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TencentCOSFileLoader¶
Tencent COS File
langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader¶
class langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Load from TensorFlow Dataset.
dataset_name¶
the name of the dataset to load
split_name¶
the name of the split to load.
load_max_docs¶
a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function¶
a function that converts a dataset sample
into a Document
Example
from langchain.document_loaders import TensorflowDatasetLoader
def mlqaen_example_to_document(example: dict) -> Document:
return Document(
page_content=decode_to_str(example["context"]),
metadata={
"id": decode_to_str(example["id"]),
"title": decode_to_str(example["title"]),
"question": decode_to_str(example["question"]),
"answer": decode_to_str(example["answers"]["text"][0]),
},
)
tsds_client = TensorflowDatasetLoader(
dataset_name="mlqa/en",
split_name="test",
load_max_docs=100,
sample_to_document_function=mlqaen_example_to_document,
)
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
Attributes
load_max_docs
The maximum number of documents to load.
sample_to_document_function
Custom function that transforms a dataset sample into a Document.
Methods
__init__(dataset_name, split_name[, ...])
Initialize the TensorflowDatasetLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TensorflowDatasetLoader¶
TensorFlow Datasets
langchain.document_loaders.rocksetdb.ColumnNotFoundError¶
class langchain.document_loaders.rocksetdb.ColumnNotFoundError(missing_key: str, query: str)[source]¶
Column not found error.
langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str, *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Load from Amazon AWS S3 file.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether or not to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether or not to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
uses. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
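Example
A minimal usage sketch; the bucket name and object key are placeholders.
from langchain.document_loaders import S3FileLoader

# When credentials are not passed explicitly, they resolve through the standard
# boto3 chain (environment variables, shared config, instance profile).
loader = S3FileLoader("my-bucket", "reports/2023/summary.pdf")
docs = loader.load()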
Methods
__init__(bucket, key, *[, region_name, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, key: str, *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether or not to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether or not to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
uses. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
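For example, reusing the loader from the sketch above with an explicit splitter (the chunk sizes are illustrative):
from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = loader.load_and_split(text_splitter=splitter)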
Examples using S3FileLoader¶
AWS S3 Directory
AWS S3 File | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html |
langchain.document_loaders.cube_semantic.CubeSemanticLoader¶
class langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
Load Cube semantic layer metadata.
Parameters
cube_api_url – REST API endpoint.
Use the REST API of your Cube’s deployment.
Please find out more information here:
https://cube.dev/docs/http-api/rest#configuration-base-path
cube_api_token – Cube API token.
Authentication tokens are generated based on your Cube’s API secret.
Please find out more information here:
https://cube.dev/docs/security#generating-json-web-tokens-jwt
load_dimension_values – Whether to load dimension values for every string
dimension or not.
dimension_values_limit – Maximum number of dimension values to load.
dimension_values_max_retries – Maximum number of retries to load dimension
values.
dimension_values_retry_delay – Delay between retries to load dimension values.
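Example (a sketch; the deployment URL and API secret are placeholders, and the token is minted with PyJWT from the API secret as described in the Cube security docs linked above):
import jwt  # PyJWT
from langchain.document_loaders.cube_semantic import CubeSemanticLoader
# Placeholder deployment URL and API secret.
api_url = "https://<deployment>.cubecloudapp.dev/cubejs-api/v1/meta"
api_secret = "<cube-api-secret>"
api_token = jwt.encode({}, api_secret, algorithm="HS256")
loader = CubeSemanticLoader(api_url, api_token, load_dimension_values=False)
docs = loader.load()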
Methods
__init__(cube_api_url, cube_api_token[, ...])
lazy_load()
A lazy loader for Documents.
load()
Makes a call to Cube's REST API metadata endpoint.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Makes a call to Cube’s REST API metadata endpoint. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
Returns
page_content=column_title + column_description
metadata
table_name
column_name
column_data_type
column_member_type
column_title
column_description
column_values
cube_data_obj_type
Return type
A list of documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CubeSemanticLoader¶
Cube Semantic Layer | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
langchain.document_loaders.airbyte.AirbyteGongLoader¶
class langchain.document_loaders.airbyte.AirbyteGongLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Gong using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
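Example (a sketch; the config keys and the "calls" stream name are placeholders, and the airbyte-source-gong connector spec defines the authoritative config schema):
from langchain.document_loaders.airbyte import AirbyteGongLoader
# Placeholder credentials; check the airbyte-source-gong connector spec for
# the exact schema expected by your connector version.
config = {
    "access_key": "<gong access key>",
    "access_key_secret": "<gong access key secret>",
    "start_date": "2023-01-01T00:00:00Z",
}
loader = AirbyteGongLoader(config=config, stream_name="calls")
docs = loader.load()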
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteGongLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteGongLoader¶
Airbyte Gong | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteGongLoader.html |
langchain.document_loaders.base_o365.O365BaseLoader¶
class langchain.document_loaders.base_o365.O365BaseLoader[source]¶
Bases: BaseLoader, BaseModel
Base class for all loaders that use the O365 package.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param chunk_size: Union[int, str] = 5242880¶
Number of bytes to retrieve from each API call to the server. int or ‘auto’.
param settings: langchain.document_loaders.base_o365._O365Settings [Optional]¶
Settings for the Office365 API client.
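Since the class is abstract, it is used only through subclasses. A minimal sketch of the contract a subclass fulfils (the class name and document contents are hypothetical; real subclasses authenticate against Office365 and download files):
from typing import List
from langchain.document_loaders.base_o365 import O365BaseLoader
from langchain.schema import Document
class MyO365Loader(O365BaseLoader):
    """Hypothetical subclass; real ones target a specific O365 drive or folder."""
    def load(self) -> List[Document]:
        # A real implementation would fetch and parse remote files here;
        # this stub only illustrates the abstract interface.
        return [Document(page_content="example", metadata={"source": "o365"})]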
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base_o365.O365BaseLoader.html |
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base_o365.O365BaseLoader.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base_o365.O365BaseLoader.html |
langchain.document_loaders.airbyte.AirbyteSalesforceLoader¶
class langchain.document_loaders.airbyte.AirbyteSalesforceLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Salesforce using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
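Example (a sketch; the OAuth values and the "asset" stream name are placeholders, and the airbyte-source-salesforce connector spec defines the authoritative config schema):
from langchain.document_loaders.airbyte import AirbyteSalesforceLoader
# Placeholder OAuth credentials; see the airbyte-source-salesforce connector
# spec for the full config schema.
config = {
    "client_id": "<oauth client id>",
    "client_secret": "<oauth client secret>",
    "refresh_token": "<oauth refresh token>",
    "start_date": "2023-01-01T00:00:00Z",
    "is_sandbox": False,
}
loader = AirbyteSalesforceLoader(config=config, stream_name="asset")
docs = loader.load()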
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteSalesforceLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteSalesforceLoader¶
Airbyte Salesforce | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteSalesforceLoader.html |
langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Load from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
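Example (the zip path and workspace URL are placeholders):
from langchain.document_loaders.slack_directory import SlackDirectoryLoader
loader = SlackDirectoryLoader(
    zip_path="/path/to/slack_export.zip",
    workspace_url="https://my-workspace.slack.com",
)
docs = loader.load()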
Methods
__init__(zip_path[, workspace_url])
Initialize the SlackDirectoryLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the Slack directory dump.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the Slack directory dump.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SlackDirectoryLoader¶
Slack | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html |
langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Parse HTML files using Beautiful Soup.
Initialize a bs4 based HTML parser.
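Example (a sketch; the file path is a placeholder, and the beautifulsoup4 and lxml packages are assumed to be installed):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.html.bs4 import BS4HTMLParser
parser = BS4HTMLParser(get_text_separator=" ")
blob = Blob.from_path("/path/to/page.html")  # placeholder path
for doc in parser.lazy_parse(blob):
    print(doc.metadata, doc.page_content[:100])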
Methods
__init__(*[, features, get_text_separator])
Initialize a bs4 based HTML parser.
lazy_parse(blob)
Load HTML document into document objects.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any) → None[source]¶
Initialize a bs4 based HTML parser.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load HTML document into document objects.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html |
langchain.document_loaders.arcgis_loader.ArcGISLoader¶
class langchain.document_loaders.arcgis_loader.ArcGISLoader(layer: Union[str, arcgis.features.FeatureLayer], gis: Optional[arcgis.gis.GIS] = None, where: str = '1=1', out_fields: Optional[Union[List[str], str]] = None, return_geometry: bool = False, result_record_count: Optional[int] = None, lyr_desc: Optional[str] = None, **kwargs: Any)[source]¶
Load records from an ArcGIS FeatureLayer.
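Example (a sketch; the layer URL is illustrative, and the arcgis package must be installed and able to reach the service):
from langchain.document_loaders.arcgis_loader import ArcGISLoader
# Illustrative public FeatureLayer URL; any layer your GIS account can query works.
layer_url = "https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7"
loader = ArcGISLoader(layer_url, return_geometry=False)
docs = loader.load()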
Methods
__init__(layer[, gis, where, out_fields, ...])
lazy_load()
Lazy load records from FeatureLayer.
load()
Load all records from FeatureLayer.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(layer: Union[str, arcgis.features.FeatureLayer], gis: Optional[arcgis.gis.GIS] = None, where: str = '1=1', out_fields: Optional[Union[List[str], str]] = None, return_geometry: bool = False, result_record_count: Optional[int] = None, lyr_desc: Optional[str] = None, **kwargs: Any)[source]¶
lazy_load() → Iterator[Document][source]¶
Lazy load records from FeatureLayer.
load() → List[Document][source]¶
Load all records from FeatureLayer.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ArcGISLoader¶
ArcGIS | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arcgis_loader.ArcGISLoader.html |