id | text | source |
---|---|---|
cb487ba8a490-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Load from Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
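A minimal usage sketch; the access token and resource name below are placeholders, not values from this page:
from langchain.document_loaders import SpreedlyLoader
# Hypothetical credentials and resource name - substitute your own Spreedly values.
loader = SpreedlyLoader(access_token="<SPREEDLY_ACCESS_TOKEN>", resource="gateways_options")
docs = loader.load()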
Methods
__init__(access_token, resource)
Initialize with an access token and a resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: str, resource: str) → None[source]¶
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SpreedlyLoader¶
Spreedly | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
fe1c46fd6831-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Parameters for the embaas document extraction API.
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
The file name of the document.
should_chunk
Whether to chunk the document into pages.
chunk_size
The maximum size of the text chunks.
chunk_overlap
The maximum overlap allowed between chunks.
chunk_splitter
The text splitter class name for creating chunks.
separators
The separators for chunks.
should_embed
Whether to create embeddings for the document in the response.
model
The model to pass to the Embaas document extraction API.
instruction
The instruction to pass to the Embaas document extraction API.
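A sketch of how these parameters might be assembled. The class behaves like a dict, so the keys below mirror the attributes listed above; the concrete values are illustrative assumptions only:
from langchain.document_loaders.embaas import EmbaasDocumentExtractionParameters
# Illustrative subset of keys; omit or add keys as your extraction call requires.
params: EmbaasDocumentExtractionParameters = {
    "mime_type": "application/pdf",
    "should_chunk": True,
    "chunk_size": 1000,
    "chunk_overlap": 100,
    "should_embed": False,
}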
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
fe1c46fd6831-2 | If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
8212f1616d9d-0 | langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, patterns: Sequence[str] = ('*.htm', '*.html'), exclude_links_ratio: float = 1.0, **kwargs: Optional[Any])[source]¶
Load ReadTheDocs documentation directory.
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates html tags in the order of
custom html tags (if provided) and then the default html tags. If any of the tags is not
empty, the loop breaks and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
patterns – The file patterns to load, passed to glob.rglob.
exclude_links_ratio – The ratio of links:content to exclude pages from.
This is to reduce the frequency at which index pages make their
way into retrieved results. Recommended: 0.5
kwargs – named arguments passed to bs4.BeautifulSoup.
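A usage sketch, assuming the documentation has already been mirrored into a local folder; the folder name is a placeholder, and the custom_html_tag value follows the declared Tuple[str, dict] form:
from langchain.document_loaders import ReadTheDocsLoader
# "rtdocs" is a hypothetical directory containing the downloaded HTML pages.
loader = ReadTheDocsLoader("rtdocs", encoding="utf-8", custom_html_tag=("div", {"role": "main"}))
docs = loader.load()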
Methods
__init__(path[, encoding, errors, ...])
Initialize ReadTheDocsLoader
lazy_load() | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, patterns: Sequence[str] = ('*.htm', '*.html'), exclude_links_ratio: float = 1.0, **kwargs: Optional[Any])[source]¶
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates html tags in the order of
custom html tags (if provided) and then the default html tags. If any of the tags is not
empty, the loop breaks and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
patterns – The file patterns to load, passed to glob.rglob.
exclude_links_ratio – The ratio of links:content to exclude pages from.
This is to reduce the frequency at which index pages make their
way into retrieved results. Recommended: 0.5
kwargs – named arguments passed to bs4.BeautifulSoup.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ReadTheDocsLoader¶
ReadTheDocs Documentation | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
314228b6f6d3-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser(extract_images: bool = False)[source]¶
Parse PDF with PyPDFium2.
Initialize the parser.
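A sketch of running the parser directly on a blob; the file path is a placeholder:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFium2Parser
parser = PyPDFium2Parser(extract_images=False)
# "example.pdf" is a hypothetical path; lazy_parse is preferred for production use.
docs = list(parser.lazy_parse(Blob.from_path("example.pdf")))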
Methods
__init__([extract_images])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(extract_images: bool = False) → None[source]¶
Initialize the parser.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
0d0ab5fde645-0 | langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Load Python files, respecting any non-default encoding if specified.
Initialize with a file path.
Parameters
file_path – The path to the file to load.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html |
c7e027cec286-0 | langchain.document_loaders.telegram.TelegramChatFileLoader¶
class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)[source]¶
Load from Telegram chat dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TelegramChatFileLoader¶
Telegram | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatFileLoader.html |
655bbaf49e7a-0 | langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Load Telegram chat json directory dump.
Initialize with API parameters.
Parameters
chat_entity – The chat entity to fetch data from.
api_id – The API ID.
api_hash – The API hash.
username – The username.
file_path – The file path to save the data to. Defaults to
“telegram_data.json”.
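A usage sketch; every identifier below is a placeholder - real API credentials come from https://my.telegram.org/ and the chat entity is one you have access to:
from langchain.document_loaders import TelegramChatApiLoader
loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL_OR_USERNAME>",  # hypothetical chat entity
    api_id=12345,                          # placeholder API ID
    api_hash="<API_HASH>",                 # placeholder API hash
    username="<USERNAME>",
)
docs = loader.load()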
Methods
__init__([chat_entity, api_id, api_hash, ...])
Initialize with API parameters.
fetch_data_from_telegram()
Fetch data from Telegram API and save it as a JSON file.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Initialize with API parameters.
Parameters
chat_entity – The chat entity to fetch data from.
api_id – The API ID.
api_hash – The API hash.
username – The username.
file_path – The file path to save the data to. Defaults to
“telegram_data.json”.
async fetch_data_from_telegram() → None[source]¶
Fetch data from Telegram API and save it as a JSON file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TelegramChatApiLoader¶
Telegram | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
ffd4a07effda-0 | langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load text file.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
Initialize with file path.
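A minimal sketch; the file path is a placeholder:
from langchain.document_loaders import TextLoader
loader = TextLoader("example.txt", autodetect_encoding=True)
docs = loader.load()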
Methods
__init__(file_path[, encoding, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TextLoader¶
Cohere Reranker
Confident
Elasticsearch
Chat Over Documents with Vectara
Vectorstore
LanceDB
sqlite-vss
Weaviate
DashVector
ScaNN
Xata
Vectara
PGVector
Rockset
DingoDB
Zilliz
SingleStoreDB
Annoy
Typesense
Atlas | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
Activeloop Deep Lake
Neo4j Vector Index
Tair
Chroma
Alibaba Cloud OpenSearch
Baidu Cloud VectorSearch
StarRocks
scikit-learn
Tencent Cloud VectorDB
DocArray HnswSearch
MyScale
ClickHouse
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
BagelDB
Azure Cognitive Search
Cassandra
USearch
Milvus
Marqo
DocArray InMemorySearch
Postgres Embedding
Faiss
Epsilla
AnalyticDB
Hologres
your local model path
MongoDB Atlas
Meilisearch
Conversational Retrieval Agent
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Graph QA
Caching
MultiVector Retriever
Parent Document Retriever
Combine agents and vector stores
Loading from LangChainHub | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
a5221828fdbc-0 | langchain.document_loaders.airbyte.AirbyteStripeLoader¶
class langchain.document_loaders.airbyte.AirbyteStripeLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Stripe using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
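A usage sketch; the config keys follow the Airbyte Stripe source connector and every value below is a placeholder:
from langchain.document_loaders.airbyte import AirbyteStripeLoader
config = {
    "client_secret": "<STRIPE_API_KEY>",
    "account_id": "<STRIPE_ACCOUNT_ID>",
    "start_date": "2023-01-01T00:00:00Z",
}
loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()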
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteStripeLoader¶
Airbyte Question Answering
Airbyte Stripe | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
bf9b26600add-0 | langchain.document_loaders.datadog_logs.DatadogLogsLoader¶
class langchain.document_loaders.datadog_logs.DatadogLogsLoader(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100)[source]¶
Load Datadog logs.
Logs are written into the page_content and into the metadata.
Initialize Datadog document loader.
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to 20 minutes ago.
to_time – Optional. The end of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to now.
limit – The maximum number of logs to return.
Defaults to 100.
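A usage sketch with placeholder credentials and an illustrative query:
from langchain.document_loaders import DatadogLogsLoader
loader = DatadogLogsLoader(
    query="service:agent status:error",  # illustrative query
    api_key="<DD_API_KEY>",              # placeholder Datadog API key
    app_key="<DD_APP_KEY>",              # placeholder Datadog application key
    limit=100,
)
docs = loader.load()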
Methods
__init__(query, api_key, app_key[, ...])
Initialize Datadog document loader.
lazy_load()
A lazy loader for Documents.
load()
Get logs from Datadog.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_log(log)
Create Document objects from Datadog log items.
__init__(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100) → None[source]¶
Initialize Datadog document loader. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to 20 minutes ago.
to_time – Optional. The end of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to now.
limit – The maximum number of logs to return.
Defaults to 100.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Get logs from Datadog.
Returns
A list of Document objects, with the log content written into page_content
and the metadata containing id, service, status, tags, and timestamp.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_log(log: dict) → Document[source]¶
Create Document objects from Datadog log items.
Examples using DatadogLogsLoader¶
Datadog Logs | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
0126710426a6-0 | langchain.document_loaders.epub.UnstructuredEPubLoader¶
class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load EPub files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredEPubLoader
loader = UnstructuredEPubLoader("example.epub", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-epub
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEPubLoader¶
EPub | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
ac4ce66f8e3f-0 | langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load HTML files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example.html", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-html
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
7820cdcdf6fb-0 | langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RTF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRTFLoader
loader = UnstructuredRTFLoader("example.rtf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rtf
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
a7e5edfee4d8-0 | langchain.document_loaders.airbyte.AirbyteTypeformLoader¶
class langchain.document_loaders.airbyte.AirbyteTypeformLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Typeform using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteTypeformLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteTypeformLoader¶
Airbyte Typeform | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteTypeformLoader.html |
1b46cf9ee7ad-0 | langchain.document_loaders.bilibili.BiliBiliLoader¶
class langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]¶
Load BiliBili video transcripts.
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
Methods
__init__(video_urls)
Initialize with bilibili url.
lazy_load()
A lazy loader for Documents.
load()
Load Documents from bilibili url.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(video_urls: List[str])[source]¶
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents from bilibili url.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BiliBiliLoader¶
BiliBili | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html |
e6b729064fa9-0 | langchain.document_loaders.pdf.MathpixPDFLoader¶
class langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'md', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]¶
Load PDF files using Mathpix service.
Initialize with a file path.
Parameters
file_path – a file for loading.
processed_file_format – a format of the processed file. Default is “md”.
max_wait_time_seconds – a maximum time to wait for the response from
the server. Default is 500.
should_clean_pdf – a flag to clean the PDF file. Default is False.
**kwargs – additional keyword arguments.
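A minimal sketch, assuming Mathpix credentials are available (for example via environment variables or keyword arguments); the file path is a placeholder:
from langchain.document_loaders import MathpixPDFLoader
loader = MathpixPDFLoader("example.pdf", processed_file_format="md", should_clean_pdf=True)
docs = loader.load()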
Attributes
data
source
url
Methods
__init__(file_path[, processed_file_format, ...])
Initialize with a file path.
clean_pdf(contents)
Clean the PDF file.
get_processed_pdf(pdf_id)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
send_pdf()
wait_for_processing(pdf_id)
Wait for processing to complete.
__init__(file_path: str, processed_file_format: str = 'md', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any) → None[source]¶
Initialize with a file path.
Parameters
file_path – a file for loading.
processed_file_format – a format of the processed file. Default is “md”.
max_wait_time_seconds – a maximum time to wait for the response from
the server. Default is 500.
should_clean_pdf – a flag to clean the PDF file. Default is False.
**kwargs – additional keyword arguments. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html |
clean_pdf(contents: str) → str[source]¶
Clean the PDF file.
Parameters
contents – The PDF file contents.
Returns
The cleaned contents.
get_processed_pdf(pdf_id: str) → str[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
send_pdf() → str[source]¶
wait_for_processing(pdf_id: str) → None[source]¶
Wait for processing to complete.
Parameters
pdf_id – a PDF id.
Returns: None | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html |
ff80065efdf2-0 | langchain.document_loaders.youtube.GoogleApiClient¶
class langchain.document_loaders.youtube.GoogleApiClient(credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json'))[source]¶
Generic Google API Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api, and google
python packages installed.
As the Google API expects credentials, you need to set up a Google account and
register your service: https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
Attributes
credentials_path
service_account_path
token_path
Methods
__init__([credentials_path, ...])
validate_channel_or_videoIds_is_set(values)
Validate that either folder_id or document_ids is set, but not both.
__init__(credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json')) → None¶
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]¶
Validate that either folder_id or document_ids is set, but not both.
Examples using GoogleApiClient¶
YouTube transcripts | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiClient.html |
3908e70ec594-0 | langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None, extract_images: bool = False)[source]¶
Parse PDF using PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kwargs, extract_images])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None, extract_images: bool = False) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html |
4d7d34bd3e4b-0 | langchain.document_loaders.news.NewsURLLoader¶
class langchain.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Load news articles from URLs using Unstructured.
Parameters
urls – URLs to load. Each is loaded into its own document.
text_mode – If True, extract text from URL and use that for page content.
Otherwise, extract raw HTML.
nlp – If True, perform NLP on the extracted contents, like providing a summary
and extracting keywords.
continue_on_failure – If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, pip install tqdm.
**newspaper_kwargs – Any additional named arguments to pass to
newspaper.Article().
Example
from langchain.document_loaders import NewsURLLoader
loader = NewsURLLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
Newspaper reference: https://newspaper.readthedocs.io/en/latest/
Initialize with file path.
Methods
__init__(urls[, text_mode, nlp, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any) → None[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
4d7d34bd3e4b-1 | Initialize with file path.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NewsURLLoader¶
News URL | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
96ec8655892f-0 | langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load XML file using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredXMLLoader
loader = UnstructuredXMLLoader("example.xml", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-xml
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredXMLLoader¶
XML | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html |
1836ab36e67e-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
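A small illustration of the call; the argument values are made up:
from langchain.document_loaders.whatsapp_chat import concatenate_rows
row = concatenate_rows(date="1/23/23, 10:00", sender="Alice", text="Hello!")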
5e038de32947-0 | langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, extract_images: bool = False)[source]¶
Parse PDF with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
dedupe – If True, avoids the issue of duplicated characters in the extracted text.
Methods
__init__([text_kwargs, dedupe, extract_images])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, extract_images: bool = False) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
dedupe – If True, avoids the issue of duplicated characters in the extracted text.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html |
203654ebefb3-0 | langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Load weather data with Open Weather Map API.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
Initialize with parameters.
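A sketch using the from_params constructor, which builds the OpenWeatherMap client for you; the places and API key below are placeholders (the key can typically also be supplied via the environment):
from langchain.document_loaders import WeatherDataLoader
loader = WeatherDataLoader.from_params(
    places=["chennai", "vellore"],            # illustrative locations
    openweathermap_api_key="<OWM_API_KEY>",   # placeholder key
)
docs = loader.load()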
Methods
__init__(client, places)
Initialize with parameters.
from_params(places, *[, openweathermap_api_key])
lazy_load()
Lazily load weather data for the given locations.
load()
Load weather data for the given locations.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: OpenWeatherMapAPIWrapper, places: Sequence[str]) → None[source]¶
Initialize with parameters.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶
lazy_load() → Iterator[Document][source]¶
Lazily load weather data for the given locations.
load() → List[Document][source]¶
Load weather data for the given locations.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WeatherDataLoader¶
Weather | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html |
8759c4e0e27b-0 | langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Load from Snowflake API.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database
schema – Snowflake schema
parameters – Optional. Parameters to pass to the query.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
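A usage sketch; every connection value below is a placeholder for your own Snowflake account, and the query is illustrative:
from langchain.document_loaders import SnowflakeLoader
loader = SnowflakeLoader(
    query="SELECT * FROM <TABLE> LIMIT 10",
    user="<USER>",
    password="<PASSWORD>",
    account="<ACCOUNT>",
    warehouse="<WAREHOUSE>",
    role="<ROLE>",
    database="<DATABASE>",
    schema="<SCHEMA>",
)
docs = loader.load()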
Methods
__init__(query, user, password, account, ...)
Initialize Snowflake document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html |
__init__(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database
schema – Snowflake schema
parameters – Optional. Parameters to pass to the query.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SnowflakeLoader¶
Snowflake | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html |
1107835b4654-0 | langchain.document_loaders.notiondb.NotionDBLoader¶
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]¶
Load from Notion DB.
Reads content from pages within a Notion Database.
Parameters
integration_token – Notion integration token.
database_id – Notion database id.
request_timeout_sec – Timeout for Notion requests in seconds.
Defaults to 10.
Initialize with parameters.
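A minimal sketch; both identifiers are placeholders for an integration token and database id created in your own Notion workspace:
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token="<NOTION_INTEGRATION_TOKEN>",
    database_id="<NOTION_DATABASE_ID>",
    request_timeout_sec=30,
)
docs = loader.load()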
Methods
__init__(integration_token, database_id[, ...])
Initialize with parameters.
lazy_load()
A lazy loader for Documents.
load()
Load documents from the Notion database.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_page(page_summary)
Read a page.
__init__(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10) → None[source]¶
Initialize with parameters.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from the Notion database.
Returns
List of documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_page(page_summary: Dict[str, Any]) → Document[source]¶
Read a page.
Parameters
page_summary – Page summary from Notion API. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
Examples using NotionDBLoader¶
Notion DB
Notion DB 2/2 | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
ddbf3a85a89d-0 | langchain.document_loaders.sharepoint.SharePointLoader¶
class langchain.document_loaders.sharepoint.SharePointLoader[source]¶
Bases: O365BaseLoader
Load from SharePoint.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param chunk_size: Union[int, str] = 5242880¶
Number of bytes to retrieve from each api call to the server. int or ‘auto’.
param document_library_id: str [Required]¶
The ID of the SharePoint document library to load data from.
param folder_path: Optional[str] = None¶
The path to the folder to load data from.
param object_ids: Optional[List[str]] = None¶
The IDs of the objects to load data from.
param settings: _O365Settings [Optional]¶
Settings for the Office365 API client.
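A minimal sketch, assuming the O365 client credentials are already configured in the environment as described on the Microsoft SharePoint integration page; the library id is a placeholder:
from langchain.document_loaders.sharepoint import SharePointLoader
loader = SharePointLoader(document_library_id="<DOCUMENT_LIBRARY_ID>")
docs = loader.load()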
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using SharePointLoader¶
Microsoft SharePoint | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
f67554232a0b-0 | langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶
class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Markdown files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredMarkdownLoader
loader = UnstructuredMarkdownLoader("example.md", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-md
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
List of Documents.
Examples using UnstructuredMarkdownLoader¶
StarRocks | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
0b06b498702e-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶
Transcribe and parse audio files.
Audio transcription uses the OpenAI Whisper model.
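A sketch of transcribing a local audio file; the path is a placeholder, and the API key may instead come from the environment:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser
parser = OpenAIWhisperParser(api_key="<OPENAI_API_KEY>")
# "podcast.mp3" is a hypothetical path to an audio file.
docs = list(parser.lazy_parse(Blob.from_path("podcast.mp3")))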
Methods
__init__([api_key])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(api_key: Optional[str] = None)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using OpenAIWhisperParser¶
Loading documents from a YouTube url | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html |
b505d77cd250-0 | langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively remove newlines, no matter the data structure they are stored in. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html |
0b011c86cb06-0 | langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Load from Azure Blob Storage container.
Initialize with connection string, container and blob prefix.
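A minimal usage sketch (illustrative; the connection string, container name and prefix below are placeholders, and the azure-storage-blob package is assumed to be installed):
from langchain.document_loaders import AzureBlobStorageContainerLoader

loader = AzureBlobStorageContainerLoader(
    conn_str="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>",
    container="my-container",
    prefix="reports/",  # only blobs whose names start with this prefix are loaded
)
docs = loader.load()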
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
prefix
Prefix for blob names.
Methods
__init__(conn_str, container[, prefix])
Initialize with connection string, container and blob prefix.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, prefix: str = '')[source]¶
Initialize with connection string, container and blob prefix.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AzureBlobStorageContainerLoader¶
Azure Blob Storage
Azure Blob Storage Container | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html |
7e4a760dbb12-0 | langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page.
content_selector – The CSS selector for the content to load.
Defaults to “main”.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
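A minimal usage sketch (illustrative; the site URL is a placeholder). With load_all_paths=True the loader discovers the relative paths in the navbar and loads each page:
from langchain.document_loaders import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
docs = loader.load()  # one Document per discovered page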
Attributes
web_path
Methods
__init__(web_page[, load_all_paths, ...])
Initialize with web page and whether to load all paths.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Fetch text from one single GitBook page.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
7e4a760dbb12-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page.
content_selector – The CSS selector for the content to load.
Defaults to “main”.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Fetch text from one single GitBook page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
7e4a760dbb12-2 | List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using GitbookLoader¶
GitBook | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
095dcdf9e8fe-0 | langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Load notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
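A minimal usage sketch (illustrative; the access token is a placeholder and a local Joplin instance with the Web Clipper service enabled is assumed):
from langchain.document_loaders import JoplinLoader

loader = JoplinLoader(access_token="<joplin-access-token>")  # host/port default to localhost:41184
docs = loader.load()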
Methods
__init__([access_token, port, host])
param access_token
The access token to use.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost') → None[source]¶
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
095dcdf9e8fe-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using JoplinLoader¶
Joplin | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
1eaa1db52fe0-0 | langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False, **kwargs: Any)[source]¶
Load PDF files using PyMuPDF.
Initialize with a file path.
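A minimal usage sketch (illustrative; the file name is a placeholder and the pymupdf package is assumed to be installed):
from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example.pdf")
docs = loader.load()  # one Document per page, with PDF metadata attached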
Attributes
source
Methods
__init__(file_path, *[, headers, extract_images])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False, **kwargs: Any) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(**kwargs: Any) → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html |
4de4580e645e-0 | langchain.document_loaders.confluence.ConfluenceLoader¶
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, session: Optional[Session] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶
Load Confluence pages.
Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into
Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments; this
is set to False by default. If set to True, all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format
is the raw XML representation used for storage. The view format is the HTML
representation used for viewing, with macros rendered as a user would see them.
You can pass an enum content_format argument to load() to specify the content
format; this is set to ContentFormat.STORAGE by default. The supported values are:
ContentFormat.EDITOR, ContentFormat.EXPORT_VIEW,
ContentFormat.ANONYMOUS_EXPORT_VIEW, ContentFormat.STORAGE,
and ContentFormat.VIEW. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-1 | and ContentFormat.VIEW.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
# Server on prem
loader = ConfluenceLoader(
url="https://confluence.yoursite.com/",
username="me",
api_key="your_password",
cloud=False
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) – Base URL of the Confluence instance (e.g. "https://yoursite.atlassian.com/wiki")
api_key (str, optional) – Confluence API key used together with username, defaults to None
username (str, optional) – Confluence username, defaults to None
oauth2 (dict, optional) – OAuth2 credentials, defaults to None
token (str, optional) – Personal access token, defaults to None
cloud (bool, optional) – Whether the instance is Confluence Cloud, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Methods
__init__(url[, api_key, username, session, ...])
is_public_page(page)
Check if a page is publicly accessible.
lazy_load()
A lazy loader for Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-2 | Check if a page is publicly accessible.
lazy_load()
A lazy loader for Documents.
load([space_key, page_ids, label, cql, ...])
param space_key
Space key retrieved from a confluence URL, defaults to None
load_and_split([text_splitter])
Load Documents and split into chunks.
paginate_request(retrieval_method, **kwargs)
Paginate the various methods to retrieve groups of pages.
process_attachment(page_id[, ocr_languages])
process_doc(link)
process_image(link[, ocr_languages])
process_page(page, include_attachments, ...)
process_pages(pages, ...[, ocr_languages, ...])
Process a list of pages into a list of documents.
process_pdf(link[, ocr_languages])
process_svg(link[, ocr_languages])
process_xls(link)
validate_init_args([url, api_key, username, ...])
Validates proper combinations of init arguments
__init__(url: str, api_key: Optional[str] = None, username: Optional[str] = None, session: Optional[Session] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶
is_public_page(page: dict) → bool[source]¶
Check if a page is publicly accessible.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-3 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, content_format: ContentFormat = ContentFormat.STORAGE, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None, keep_markdown_format: bool = False, keep_newlines: bool = False) → List[Document][source]¶
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archived_content (bool, optional) – Whether to include archived content,
defaults to False
include_attachments (bool, optional) – defaults to False
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to
ContentFormat.STORAGE, the supported values are:
ContentFormat.EDITOR, ContentFormat.EXPORT_VIEW,
ContentFormat.ANONYMOUS_EXPORT_VIEW,
ContentFormat.STORAGE, and ContentFormat.VIEW.
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults 1000 | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-4 | ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you’ll first need to install the appropriate
Tesseract language pack.
keep_markdown_format (bool) – Whether to keep the markdown format, defaults to
False
keep_newlines (bool) – Whether to keep the newlines format, defaults to
False
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Returns
List of loaded Document objects
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]¶
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn’t match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don’t get the “next” values from the “_links” key because
they only return the value from the result key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-5 | Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]¶
process_doc(link: str) → str[source]¶
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_page(page: dict, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False, keep_newlines: bool = False) → Document[source]¶
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False, keep_newlines: bool = False) → List[Document][source]¶
Process a list of pages into a list of documents.
process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_xls(link: str) → str[source]¶
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, session: Optional[Session] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]¶
Validates proper combinations of init arguments
Examples using ConfluenceLoader¶
Confluence | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
c578d916a44a-0 | langchain.document_loaders.mongodb.MongodbLoader¶
class langchain.document_loaders.mongodb.MongodbLoader(connection_string: str, db_name: str, collection_name: str, *, filter_criteria: Optional[Dict] = None)[source]¶
Load MongoDB documents.
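A minimal usage sketch (illustrative; the connection string, database, collection and filter below are placeholders, and the motor async driver is assumed to be installed):
from langchain.document_loaders.mongodb import MongodbLoader

loader = MongodbLoader(
    connection_string="mongodb://localhost:27017/",
    db_name="mydb",
    collection_name="articles",
    filter_criteria={"published": True},  # optional MongoDB query filter
)
docs = loader.load()  # see the note on load() below about running in a sync context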
Methods
__init__(connection_string, db_name, ...[, ...])
aload()
Load data into Document objects.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(connection_string: str, db_name: str, collection_name: str, *, filter_criteria: Optional[Dict] = None) → None[source]¶
async aload() → List[Document][source]¶
Load data into Document objects.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
Attention:
This implementation starts an asyncio event loop which
will only work if running in a sync env. In an async env, it should
fail since there is already an event loop running.
This code should be updated to kick off the event loop from a separate
thread if running within an async context.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mongodb.MongodbLoader.html |
b6d4f70eff57-0 | langchain.document_loaders.quip.QuipLoader¶
class langchain.document_loaders.quip.QuipLoader(api_url: str, access_token: str, request_timeout: Optional[int] = 60)[source]¶
Load Quip pages.
Port of https://github.com/quip/quip-api/tree/master/samples/baqup
Parameters
api_url – https://platform.quip.com
access_token – token for accessing the Quip API. Please refer to:
https://quip.com/dev/automation/documentation/current#section/Authentication/Get-Access-to-Quip’s-APIs
request_timeout – request timeout in seconds, defaults to 60.
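A minimal usage sketch (illustrative; the access token and folder ID are placeholders, and the quip-api client package is assumed to be installed):
from langchain.document_loaders.quip import QuipLoader

loader = QuipLoader(
    api_url="https://platform.quip.com",
    access_token="<quip-access-token>",
)
docs = loader.load(folder_ids=["<folder-id>"], include_comments=True)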
Methods
__init__(api_url, access_token[, ...])
param api_url
https://platform.quip.com
get_thread_ids_by_folder_id(folder_id, ...)
Get thread ids by folder id and update in thread_ids
lazy_load()
A lazy loader for Documents.
load([folder_ids, thread_ids, max_docs, ...])
Load documents from specific folder IDs and/or thread IDs; see the load() parameters below.
load_and_split([text_splitter])
Load Documents and split into chunks.
process_thread(thread_id, include_images, ...)
process_thread_images(tree)
process_thread_messages(thread_id)
process_threads(thread_ids, include_images, ...)
Process a list of threads into a list of documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
b6d4f70eff57-1 | Process a list of threads into a list of documents.
__init__(api_url: str, access_token: str, request_timeout: Optional[int] = 60)[source]¶
Parameters
api_url – https://platform.quip.com
access_token – token of access quip API. Please refer:
https – //quip.com/dev/automation/documentation/current#section/Authentication/Get-Access-to-Quip’s-APIs
request_timeout – timeout of request, default 60s.
get_thread_ids_by_folder_id(folder_id: str, depth: int, thread_ids: List[str]) → None[source]¶
Get thread ids by folder id and update in thread_ids
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(folder_ids: Optional[List[str]] = None, thread_ids: Optional[List[str]] = None, max_docs: Optional[int] = 1000, include_all_folders: bool = False, include_comments: bool = False, include_images: bool = False) → List[Document][source]¶
Parameters
folder_ids – List of specific folder IDs to load, defaults to None
thread_ids – List of specific thread IDs to load, defaults to None
max_docs – Maximum number of docs to retrieve in total, defaults to 1000
include_all_folders – Include all folders that your access_token
can access, but doesn’t include your private folder
include_comments – Include comments, defaults to False
include_images – Include images, defaults to False
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
b6d4f70eff57-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
process_thread(thread_id: str, include_images: bool, include_messages: bool) → Optional[Document][source]¶
process_thread_images(tree: ElementTree) → str[source]¶
process_thread_messages(thread_id: str) → str[source]¶
process_threads(thread_ids: Sequence[str], include_images: bool, include_messages: bool) → List[Document][source]¶
Process a list of threads into a list of documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
443d64fdbc76-0 | langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft PowerPoint files using Unstructured.
Works with both .ppt and .pptx files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader("example.pptx", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pptx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
443d64fdbc76-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
06ab36eb9fc6-0 | langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Load from Psychic.dev.
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Psychic account id.
connector_id – The Psychic connector id.
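A minimal usage sketch (illustrative; the key, account id and connector id are placeholders, and the psychicapi package is assumed to be installed):
from langchain.document_loaders import PsychicLoader

loader = PsychicLoader(
    api_key="<psychic-api-key>",
    account_id="<account-id>",
    connector_id="<connector-id>",
)
docs = loader.load()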
Methods
__init__(api_key, account_id[, connector_id])
Initialize with API key, connector id, and account id.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Psychic account id.
connector_id – The Psychic connector id.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PsychicLoader¶
Psychic | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html |
125b4eb4d8fb-0 | langchain.document_loaders.csv_loader.CSVLoader¶
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, metadata_columns: Sequence[str] = (), csv_args: Optional[Dict] = None, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load a CSV file into a list of Documents.
Each document represents one row of the CSV file. Every column is converted into a
key/value pair and written on its own line in the document’s page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
Parameters
file_path – The path to the CSV file.
source_column – The name of the column in the CSV file to use as the source.
Optional. Defaults to None.
metadata_columns – A sequence of column names to use as metadata. Optional.
csv_args – A dictionary of arguments to pass to the csv.DictReader.
Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
autodetect_encoding – Whether to try to autodetect the file encoding.
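A minimal usage sketch (illustrative; the file path and column names are placeholders):
from langchain.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="data/records.csv",
    source_column="url",                            # use this column as each document's source
    csv_args={"delimiter": ",", "quotechar": '"'},  # passed through to csv.DictReader
)
docs = loader.load()  # one Document per row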
Methods
__init__(file_path[, source_column, ...])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
125b4eb4d8fb-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, source_column: Optional[str] = None, metadata_columns: Sequence[str] = (), csv_args: Optional[Dict] = None, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Parameters
file_path – The path to the CSV file.
source_column – The name of the column in the CSV file to use as the source.
Optional. Defaults to None.
metadata_columns – A sequence of column names to use as metadata. Optional.
csv_args – A dictionary of arguments to pass to the csv.DictReader.
Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
autodetect_encoding – Whether to try to autodetect the file encoding.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CSVLoader¶
ChatGPT Plugin
CSV | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
a6ef57175b7f-0 | langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to reuse
a parser independent of how the blob was originally loaded.
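A minimal sketch of a custom parser (illustrative; PlainTextParser is a hypothetical subclass that wraps a blob's text in a single Document):
from typing import Iterator

from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class PlainTextParser(BaseBlobParser):
    """Parse any blob as a single plain-text Document."""

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        # Only lazy_parse must be implemented; parse() is inherited from BaseBlobParser.
        yield Document(page_content=blob.as_string(), metadata={"source": blob.source})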
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html |
eccf581cc030-0 | langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob[source]¶
Bases: BaseModel
Blob represents raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
helps to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
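A minimal usage sketch (illustrative; the file name is a placeholder):
from langchain.document_loaders.blob_loaders import Blob

blob = Blob.from_path("notes.txt")  # mimetype is guessed from the extension by default
text = blob.as_string()             # decode using the blob's encoding (utf-8 by default)
raw = blob.as_bytes()               # raw bytes, useful for binary parsers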
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-1 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_orm(obj: Any) → Model¶
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-2 | Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-3 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
Examples using Blob¶
docai.md
Embaas | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
0d993cedb2e3-0 | langchain.document_loaders.tsv.UnstructuredTSVLoader¶
class langchain.document_loaders.tsv.UnstructuredTSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load TSV files using Unstructured.
Like other
Unstructured loaders, UnstructuredTSVLoader can be used in both
“single” and “elements” mode. If you use the loader in “elements”
mode, the TSV file will be a single Unstructured Table element,
and an HTML representation of the table will be available in the
“text_as_html” key in the document metadata.
Examples
from langchain.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader("stanley-cups.tsv", mode="elements")
docs = loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredTSVLoader¶
TSV | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tsv.UnstructuredTSVLoader.html |
89706734a71e-0 | langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator¶
class langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator(remove_selectors: Optional[List[str]] = None)[source]¶
Evaluates the page HTML content using the unstructured library.
Initialize UnstructuredHtmlEvaluator.
Methods
__init__([remove_selectors])
Initialize UnstructuredHtmlEvaluator.
evaluate(page, browser, response)
Synchronously process the HTML content of the page.
evaluate_async(page, browser, response)
Asynchronously process the HTML content of the page.
__init__(remove_selectors: Optional[List[str]] = None)[source]¶
Initialize UnstructuredHtmlEvaluator.
evaluate(page: Page, browser: Browser, response: Response) → str[source]¶
Synchronously process the HTML content of the page.
async evaluate_async(page: AsyncPage, browser: AsyncBrowser, response: AsyncResponse) → str[source]¶
Asynchronously process the HTML content of the page. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator.html |
b8988843deba-0 | langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader¶
class langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Load from Baidu Cloud BOS file.
Initialize with BOS config, bucket and key name.
Parameters
conf (BceClientConfiguration) – BOS config.
bucket (str) – BOS bucket.
key (str) – BOS file key.
Methods
__init__(conf, bucket, key)
Initialize with BOS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, key: str)[source]¶
Initialize with BOS config, bucket and key name.
Parameters
conf (BceClientConfiguration) – BOS config.
bucket (str) – BOS bucket.
key (str) – BOS file key.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader.html |
e3f2b098eec9-0 | langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Load from Stripe API.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
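A minimal usage sketch (illustrative; the resource name and access token are placeholders):
from langchain.document_loaders import StripeLoader

loader = StripeLoader(resource="charges", access_token="sk_test_...")
docs = loader.load()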
Methods
__init__(resource[, access_token])
Initialize with a resource and an access token.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, access_token: Optional[str] = None) → None[source]¶
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using StripeLoader¶
Stripe | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html |
2d15289e50ef-0 | langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load IMSDb webpages.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
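A minimal usage sketch (illustrative; the script URL is a placeholder):
from langchain.document_loaders import IMSDbLoader

loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
docs = loader.load()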
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
2d15289e50ef-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
2d15289e50ef-2 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using IMSDbLoader¶
IMSDb | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
4fe84d4fb681-0 | langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None, extract_images: bool = False)[source]¶
Load PDF files using pypdf.
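A minimal usage sketch (illustrative; the file name is a placeholder, and the pypdf package is assumed to be installed):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFParser

parser = PyPDFParser(extract_images=False)
docs = parser.parse(Blob.from_path("example.pdf"))  # one Document per page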
Methods
__init__([password, extract_images])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(password: Optional[Union[str, bytes]] = None, extract_images: bool = False)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html |
69ad40d378f5-0 | langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Load WhatsApp messages text file.
Initialize with path.
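A minimal usage sketch (illustrative; the file name is a placeholder for an exported WhatsApp chat):
from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("whatsapp_chat.txt")
docs = loader.load()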
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WhatsAppChatLoader¶
WhatsApp
WhatsApp Chat | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html |
1dcfeab517eb-0 | langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Load from Gutenberg.org.
Initialize with a file path.
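A minimal usage sketch (illustrative; the URL is a placeholder for a plain-text (.txt) book link on gutenberg.org):
from langchain.document_loaders import GutenbergLoader

loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
docs = loader.load()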
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GutenbergLoader¶
Gutenberg | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html |
cec534b0ac17-0 | langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶
class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: Optional[int] = 10, prevent_outside: bool = True, link_regex: Optional[Union[str, Pattern]] = None, headers: Optional[dict] = None, check_response_status: bool = False)[source]¶
Load all child links from a URL page.
Security Note: This loader is a crawler that will start crawling at a given URL and then expand to crawl child links recursively.
Web crawlers should generally NOT be deployed with network access
to any internal servers.
Control access to who can submit crawling requests and what network access
the crawler has.
While crawling, the crawler may encounter malicious URLs that would lead to a
server-side request forgery (SSRF) attack.
To mitigate risks, the crawler by default will only load URLs from the same
domain as the start URL (controlled via prevent_outside named argument).
This will mitigate the risk of SSRF attacks, but will not eliminate it.
For example, if crawling a host which hosts several sites:
https://some_host/alice_site/
https://some_host/bob_site/
A malicious URL on Alice’s site could cause the crawler to make a malicious
GET request to an endpoint on Bob’s site. Both sites are hosted on the
same host, so such a request would not be prevented by default.
See https://python.langchain.com/docs/security
Initialize with URL to crawl and any subdirectories to exclude.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-1 | Initialize with URL to crawl and any subdirectories to exclude.
Parameters
url – The URL to crawl.
max_depth – The max depth of the recursive loading.
use_async – Whether to use asynchronous loading.
If True, this function will not be lazy, but it will still work in the
expected way, just not lazy.
extractor – A function to extract document contents from raw html.
When extract function returns an empty string, the document is
ignored.
metadata_extractor – A function to extract metadata from raw html and the
source url (args in that order). Default extractor will attempt
to use BeautifulSoup4 to extract the title, description and language
of the page.
exclude_dirs – A list of subdirectories to exclude.
timeout – The timeout for the requests, in the unit of seconds. If None then
connection will not timeout.
prevent_outside – If True, prevent loading from urls which are not children
of the root url.
link_regex – Regex for extracting sub-links from the raw html of a web page.
check_response_status – If True, check HTTP response status and skip
URLs with error responses (400-599).
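A minimal usage sketch (illustrative; the start URL and depth are placeholders, and BeautifulSoup is used as a simple text extractor):
from bs4 import BeautifulSoup

from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,
    extractor=lambda html: BeautifulSoup(html, "html.parser").text,
)
docs = loader.load()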
Methods
__init__(url[, max_depth, use_async, ...])
Initialize with URL to crawl and any subdirectories to exclude.
lazy_load()
Lazy load web pages.
load()
Load web pages.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-2 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: Optional[int] = 10, prevent_outside: bool = True, link_regex: Optional[Union[str, Pattern]] = None, headers: Optional[dict] = None, check_response_status: bool = False) → None[source]¶
Initialize with URL to crawl and any subdirectories to exclude.
Parameters
url – The URL to crawl.
max_depth – The max depth of the recursive loading.
use_async – Whether to use asynchronous loading.
If True, this function will not be lazy, but it will still work in the
expected way, just not lazy.
extractor – A function to extract document contents from raw html.
When extract function returns an empty string, the document is
ignored.
metadata_extractor – A function to extract metadata from raw html and the
source url (args in that order). Default extractor will attempt
to use BeautifulSoup4 to extract the title, description and language
of the page.
exclude_dirs – A list of subdirectories to exclude.
timeout – The timeout for the requests, in the unit of seconds. If None then
connection will not timeout.
prevent_outside – If True, prevent loading from urls which are not children
of the root url.
link_regex – Regex for extracting sub-links from the raw html of a web page.
check_response_status – If True, check HTTP response status and skip
URLs with error responses (400-599).
lazy_load() → Iterator[Document][source]¶
Lazy load web pages. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-3 | lazy_load() → Iterator[Document][source]¶
Lazy load web pages.
When use_async is True, this function will not be lazy,
but it will still work in the expected way, just not lazy.
load() → List[Document][source]¶
Load web pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RecursiveUrlLoader¶
Recursive URL Loader | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
5d4802b1044c-0 | langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader¶
class langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Zendesk Support using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
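A rough usage sketch; the config keys below are placeholders, since the exact fields are defined by the Airbyte Zendesk Support connector:
from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoader

# Hypothetical connector config; consult the Airbyte Zendesk Support
# connector documentation for the authoritative schema.
config = {
    "subdomain": "<your-zendesk-subdomain>",
    "start_date": "2023-01-01T00:00:00Z",
    "credentials": {"access_token": "<api-token>"},
}
loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets")
docs = loader.load()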
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
5d4802b1044c-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteZendeskSupportLoader¶
Airbyte Zendesk Support | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
0233306da3e9-0 | langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Parse MHTML files with BeautifulSoup.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – Path to file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting the text
from the soup.
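A minimal usage sketch, assuming a local example.mht file:
from langchain.document_loaders.mhtml import MHTMLLoader

# The file path is an assumption; pass open_encoding or bs_kwargs as needed.
loader = MHTMLLoader(file_path="example.mht", get_text_separator=" ")
docs = loader.load()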
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '') → None[source]¶
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – Path to file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting the text
from the soup.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
0233306da3e9-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MHTMLLoader¶
mhtml | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
cec4b30479b8-0 | langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Load from Wikipedia.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
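A minimal usage sketch; the query string is an arbitrary example:
from langchain.document_loaders.wikipedia import WikipediaLoader

# Fetch up to two pages matching the query, with default metadata only.
loader = WikipediaLoader(query="Alan Turing", load_max_docs=2)
docs = loader.load()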
Methods
__init__(query[, lang, load_max_docs, ...])
Initializes a new instance of the WikipediaLoader class.
lazy_load()
A lazy loader for Documents.
load()
Loads the query result from Wikipedia into a list of Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
cec4b30479b8-1 | Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WikipediaLoader¶
Wikipedia
Diffbot Graph Transformer | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
2f29d672671f-0 | langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Load HTML using 2markdown API.
Initialize with url and api key.
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, api_key: str)[source]¶
Initialize with url and api key.
lazy_load() → Iterator[Document][source]¶
Lazily load the file.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
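A minimal usage sketch; the URL and the API key placeholder are assumptions:
from langchain.document_loaders.tomarkdown import ToMarkdownLoader

# Requires a 2markdown API key; the target URL is just an example.
loader = ToMarkdownLoader(
    url="https://python.langchain.com/",
    api_key="<2markdown-api-key>",
)
docs = loader.load()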
Examples using ToMarkdownLoader¶
2Markdown | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html |
69939151e61e-0 | langchain.document_loaders.url_playwright.PlaywrightEvaluator¶
class langchain.document_loaders.url_playwright.PlaywrightEvaluator[source]¶
Abstract base class for all evaluators.
Each evaluator should take a page, a browser instance, and a response
object, process the page as necessary, and return the resulting text.
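A minimal subclass sketch; the class name and the JavaScript snippet are illustrative, and both the sync and async Playwright page objects expose an evaluate call:
from langchain.document_loaders.url_playwright import PlaywrightEvaluator

class BodyTextEvaluator(PlaywrightEvaluator):
    """Hypothetical evaluator returning the inner text of the page body."""

    def evaluate(self, page, browser, response):
        # Synchronous Playwright API: run JavaScript in the page context.
        return page.evaluate("document.body.innerText")

    async def evaluate_async(self, page, browser, response):
        # Asynchronous Playwright API: same JavaScript, awaited.
        return await page.evaluate("document.body.innerText")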
Methods
__init__()
evaluate(page, browser, response)
Synchronously process the page and return the resulting text.
evaluate_async(page, browser, response)
Asynchronously process the page and return the resulting text.
__init__()¶
abstract evaluate(page: Page, browser: Browser, response: Response) → str[source]¶
Synchronously process the page and return the resulting text.
Parameters
page – The page to process.
browser – The browser instance.
response – The response from page.goto().
Returns
The text content of the page.
Return type
text
abstract async evaluate_async(page: AsyncPage, browser: AsyncBrowser, response: AsyncResponse) → str[source]¶
Asynchronously process the page and return the resulting text.
Parameters
page – The page to process.
browser – The browser instance.
response – The response from page.goto().
Returns
The text content of the page.
Return type
text | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightEvaluator.html |
11125c1caaa1-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files with OpenAI Whisper model.
Audio transcription with the OpenAI Whisper model run locally via transformers.
Parameters:
device - device to use.
NOTE: By default the GPU is used if available; to use the CPU, set device = "cpu".
lang_model - whisper model to use, for example "openai/whisper-medium".
forced_decoder_ids - id states for the decoder in a multilanguage model.
Usage example:
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
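A usage sketch combining this parser with GenericLoader; the audio directory and glob pattern are assumptions:
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal

# Hypothetical audio directory; device="cpu" forces CPU inference.
loader = GenericLoader.from_filesystem(
    "./audio", glob="*.mp3", parser=OpenAIWhisperParserLocal(device="cpu")
)
docs = loader.load()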
Methods
__init__([device, lang_model, ...])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Initialize the parser.
Parameters
device – device to use. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
11125c1caaa1-1 | Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
bc6fb583710a-0 | langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Load news articles from RSS feeds using Unstructured.
Parameters
urls – URLs for RSS feeds to load. Each article in the feed is loaded into its own document.
opml – OPML file to load feed urls from. Only one of urls or opml should be provided.
The value can be a URL string, or OPML markup contents as byte or string.
continue_on_failure – If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, pip install tqdm.
**newsloader_kwargs – Any additional named arguments to pass to
NewsURLLoader.
Example
from langchain.document_loaders import RSSFeedLoader
loader = RSSFeedLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
The loader uses feedparser to parse RSS feeds. The feedparser library is not installed by default so you should
install it if using this loader:
https://pythonhosted.org/feedparser/
If you use OPML, you should also install listparser:
https://pythonhosted.org/listparser/
Finally, newspaper is used to process each article:
https://newspaper.readthedocs.io/en/latest/
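An OPML-based sketch to complement the urls example above; the file name is an assumption, and only one of urls or opml may be given:
from langchain.document_loaders import RSSFeedLoader

# Hypothetical OPML export of feed subscriptions.
with open("feeds.opml", "r") as f:
    loader = RSSFeedLoader(opml=f.read())
docs = loader.load()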
Initialize with urls or OPML.
Methods
__init__([urls, opml, continue_on_failure, ...])
Initialize with urls or OPML.
lazy_load() | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
bc6fb583710a-1 | Initialize with urls or OPML.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any) → None[source]¶
Initialize with urls or OPML.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RSSFeedLoader¶
RSS Feeds | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |