id | text | source |
---|---|---|
306f625d40ef-0 | langchain.utils.strings.comma_list¶
langchain.utils.strings.comma_list(items: List[Any]) → str[source]¶
Convert a list to a comma-separated string. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.strings.comma_list.html |
a5734605922a-0 | langchain.utils.openai_functions.convert_pydantic_to_openai_tool¶
langchain.utils.openai_functions.convert_pydantic_to_openai_tool(model: Type[BaseModel], *, name: Optional[str] = None, description: Optional[str] = None) → ToolDescription[source]¶
Converts a Pydantic model to an OpenAI tool description. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.convert_pydantic_to_openai_tool.html |
dab95cb0d62f-0 | langchain.utils.utils.guard_import¶
langchain.utils.utils.guard_import(module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None) → Any[source]¶
Dynamically imports a module and raises a helpful exception if the module is not
installed. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.guard_import.html |
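A minimal usage sketch for guard_import, assuming numpy is installed; the module and pip names here are just examples:
from langchain.utils.utils import guard_import

# Import numpy, raising a helpful ImportError that names the pip package if it is missing.
np = guard_import("numpy", pip_name="numpy")
print(np.arange(3))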
f0da86dcdacf-0 | langchain.utils.utils.build_extra_kwargs¶
langchain.utils.utils.build_extra_kwargs(extra_kwargs: Dict[str, Any], values: Dict[str, Any], all_required_field_names: Set[str]) → Dict[str, Any][source]¶
Build extra kwargs from values and extra_kwargs.
Parameters
extra_kwargs – Extra kwargs passed in by user.
values – Values passed in by user.
all_required_field_names – All required field names for the pydantic class. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.build_extra_kwargs.html |
5a87f39023f1-0 | langchain.utils.openai_functions.FunctionDescription¶
class langchain.utils.openai_functions.FunctionDescription[source]¶
Representation of a callable function to the OpenAI API.
name: str¶
The name of the function.
description: str¶
A description of the function.
parameters: dict¶
The parameters of the function. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.FunctionDescription.html |
9fde07c69f2f-0 | langchain.utils.input.get_bolded_text¶
langchain.utils.input.get_bolded_text(text: str) → str[source]¶
Get bolded text. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.input.get_bolded_text.html |
3b462b59c626-0 | langchain.utils.strings.stringify_value¶
langchain.utils.strings.stringify_value(val: Any) → str[source]¶
Stringify a value.
Parameters
val – The value to stringify.
Returns
The stringified value.
Return type
str | lang/api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_value.html |
cba182e29ffc-0 | langchain.utils.utils.get_pydantic_field_names¶
langchain.utils.utils.get_pydantic_field_names(pydantic_cls: Any) → Set[str][source]¶
Get field names, including aliases, for a pydantic class.
Parameters
pydantic_cls – Pydantic class. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.get_pydantic_field_names.html |
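A small sketch of get_pydantic_field_names; the Settings model below is hypothetical and uses the langchain.pydantic_v1 compatibility shim so that aliases behave as this v1-style helper expects:
from langchain.pydantic_v1 import BaseModel, Field
from langchain.utils.utils import get_pydantic_field_names


class Settings(BaseModel):  # hypothetical model for illustration
    api_key: str = Field(alias="apiKey")
    timeout: int = 10


# Field names plus their aliases, e.g. {"api_key", "apiKey", "timeout"}.
print(get_pydantic_field_names(Settings))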
d40e8ea5d7e6-0 | langchain.utils.utils.mock_now¶
langchain.utils.utils.mock_now(dt_value)[source]¶
Context manager for mocking out datetime.now() in unit tests.
Example:
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
    assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11) | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.mock_now.html |
ccda1b495f38-0 | langchain.utils.pydantic.get_pydantic_major_version¶
langchain.utils.pydantic.get_pydantic_major_version() → int[source]¶
Get the major version of Pydantic. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.pydantic.get_pydantic_major_version.html |
ce7f7f94e89b-0 | langchain.utils.env.get_from_dict_or_env¶
langchain.utils.env.get_from_dict_or_env(data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from a dictionary or an environment variable. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_dict_or_env.html |
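A brief sketch of the lookup order (dictionary value first, then the environment variable, then the default); the key names are illustrative:
import os

from langchain.utils.env import get_from_dict_or_env

data = {"api_key": "sk-from-dict"}  # hypothetical values
print(get_from_dict_or_env(data, "api_key", "MY_API_KEY", default="missing"))  # "sk-from-dict"

os.environ["MY_API_KEY"] = "sk-from-env"
print(get_from_dict_or_env({}, "api_key", "MY_API_KEY"))  # falls back to the env var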
2be8f6118a20-0 | langchain.utils.iter.Tee¶
class langchain.utils.iter.Tee(iterable: Iterator[T], n: int = 2, *, lock: Optional[ContextManager[Any]] = None)[source]¶
Create n separate iterators over iterable
This splits a single iterable into multiple iterators, each providing
the same items in the same order.
All child iterators may advance separately but share the same items
from iterable – when the most advanced iterator retrieves an item,
it is buffered until the least advanced iterator has yielded it as well.
A tee works lazily and can handle an infinite iterable, provided
that all iterators advance.
async def derivative(sensor_data):
    previous, current = a.tee(sensor_data, n=2)
    await a.anext(previous)  # advance one iterator
    return a.map(operator.sub, previous, current)
Unlike itertools.tee(), tee() returns a custom type instead
of a tuple. Like a tuple, it can be indexed, iterated and unpacked
to get the child iterators. In addition, its aclose() method
immediately closes all children, and it can be used in an async with context
for the same effect.
If iterable is an iterator and read elsewhere, tee will not
provide these items. Also, tee must internally buffer each item until the
last iterator has yielded it; if the most and least advanced iterators differ
by most of the data, using a list is more efficient (but not lazy).
If the underlying iterable is concurrency safe (anext may be awaited
concurrently) the resulting iterators are concurrency safe as well. Otherwise,
the iterators are safe if there is only ever one single “most advanced” iterator.
To enforce sequential use of anext, provide a lock
- e.g. an asyncio.Lock instance in an asyncio application -
and access is automatically synchronised.
Methods | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.Tee.html |
2be8f6118a20-1 | and access is automatically synchronised.
Methods
__init__(iterable[, n, lock])
close()
__init__(iterable: Iterator[T], n: int = 2, *, lock: Optional[ContextManager[Any]] = None)[source]¶
close() → None[source]¶ | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.Tee.html |
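A short sketch of the synchronous Tee, splitting one iterator into two independent children; the data is illustrative:
from langchain.utils.iter import Tee

# Like a tuple, a Tee can be unpacked into its child iterators.
first, second = Tee(iter(range(5)), n=2)
print(list(first))   # [0, 1, 2, 3, 4]
print(list(second))  # [0, 1, 2, 3, 4]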
813eb5c38092-0 | langchain.utils.json_schema.dereference_refs¶
langchain.utils.json_schema.dereference_refs(schema_obj: dict, *, full_schema: Optional[dict] = None, skip_keys: Optional[Sequence[str]] = None) → dict[source]¶
Try to substitute $refs in JSON Schema. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.json_schema.dereference_refs.html |
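A small sketch of dereference_refs on a schema with an internal $ref; the Person definition is made up for illustration:
from langchain.utils.json_schema import dereference_refs

schema = {
    "type": "object",
    "properties": {"owner": {"$ref": "#/definitions/Person"}},
    "definitions": {
        "Person": {"type": "object", "properties": {"name": {"type": "string"}}}
    },
}
# The $ref under "owner" is replaced by the referenced "Person" definition.
resolved = dereference_refs(schema)
print(resolved["properties"]["owner"]["type"])  # "object"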
7ecc30160d32-0 | langchain.utils.utils.convert_to_secret_str¶
langchain.utils.utils.convert_to_secret_str(value: Union[SecretStr, str]) → SecretStr[source]¶
Convert a string to a SecretStr if needed. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.convert_to_secret_str.html |
0ab8db6e2340-0 | langchain.utils.aiter.Tee¶
class langchain.utils.aiter.Tee(iterable: AsyncIterator[T], n: int = 2, *, lock: Optional[AsyncContextManager[Any]] = None)[source]¶
Create n separate asynchronous iterators over iterable
This splits a single iterable into multiple iterators, each providing
the same items in the same order.
All child iterators may advance separately but share the same items
from iterable – when the most advanced iterator retrieves an item,
it is buffered until the least advanced iterator has yielded it as well.
A tee works lazily and can handle an infinite iterable, provided
that all iterators advance.
async def derivative(sensor_data):
    previous, current = a.tee(sensor_data, n=2)
    await a.anext(previous)  # advance one iterator
    return a.map(operator.sub, previous, current)
Unlike itertools.tee(), tee() returns a custom type instead
of a tuple. Like a tuple, it can be indexed, iterated and unpacked
to get the child iterators. In addition, its aclose() method
immediately closes all children, and it can be used in an async with context
for the same effect.
If iterable is an iterator and read elsewhere, tee will not
provide these items. Also, tee must internally buffer each item until the
last iterator has yielded it; if the most and least advanced iterators differ
by most of the data, using a list is more efficient (but not lazy).
If the underlying iterable is concurrency safe (anext may be awaited
concurrently) the resulting iterators are concurrency safe as well. Otherwise,
the iterators are safe if there is only ever one single “most advanced” iterator.
To enforce sequential use of anext, provide a lock
- e.g. an asyncio.Lock instance in an asyncio application -
and access is automatically synchronised.
Methods | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.Tee.html |
0ab8db6e2340-1 | and access is automatically synchronised.
Methods
__init__(iterable[, n, lock])
aclose()
__init__(iterable: AsyncIterator[T], n: int = 2, *, lock: Optional[AsyncContextManager[Any]] = None)[source]¶
async aclose() → None[source]¶ | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.Tee.html |
1bdf2871d948-0 | langchain.utils.utils.xor_args¶
langchain.utils.utils.xor_args(*arg_groups: Tuple[str, ...]) → Callable[source]¶
Validate specified keyword args are mutually exclusive. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.xor_args.html |
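A sketch of xor_args as a decorator; run_search is a hypothetical function, and the arguments must be passed as keywords because the check inspects kwargs:
from langchain.utils.utils import xor_args


@xor_args(("query", "query_path"))
def run_search(query: str = None, query_path: str = None) -> str:
    # Exactly one of `query` or `query_path` may be provided.
    return query or open(query_path).read()


run_search(query="hello")  # OK: exactly one argument of the group is set
try:
    run_search(query="a", query_path="b")  # both set -> ValueError
except ValueError as err:
    print(err)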
ea6aee6442f2-0 | langchain.utils.iter.tee_peer¶
langchain.utils.iter.tee_peer(iterator: Iterator[T], buffer: Deque[T], peers: List[Deque[T]], lock: ContextManager[Any]) → Generator[T, None, None][source]¶
An individual iterator of a tee() | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.tee_peer.html |
d34702f3c4de-0 | langchain.utils.html.extract_sub_links¶
langchain.utils.html.extract_sub_links(raw_html: str, url: str, *, base_url: Optional[str] = None, pattern: Optional[Union[str, Pattern]] = None, prevent_outside: bool = True, exclude_prefixes: Sequence[str] = ()) → List[str][source]¶
Extract all links from a raw html string and convert into absolute paths.
Parameters
raw_html – original html.
url – the url of the html.
base_url – the base url to check for outside links against.
pattern – Regex to use for extracting links from raw html.
prevent_outside – If True, ignore external links which are not children
of the base url.
exclude_prefixes – Exclude any URLs that start with one of these prefixes.
Returns
sub links
Return type
List[str] | lang/api.python.langchain.com/en/latest/utils/langchain.utils.html.extract_sub_links.html |
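A short sketch of extract_sub_links; the HTML snippet and URLs are illustrative, and with prevent_outside left at its default the off-site link is dropped:
from langchain.utils.html import extract_sub_links

html = '<a href="/docs/intro">Intro</a> <a href="https://other.example.com/x">Other</a>'
links = extract_sub_links(html, "https://example.com/docs/", base_url="https://example.com")
print(links)  # e.g. ['https://example.com/docs/intro']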
09400e3a2ad2-0 | langchain.utils.aiter.NoLock¶
class langchain.utils.aiter.NoLock[source]¶
Dummy lock that provides the proper interface but no protection
Methods
__init__()
__init__()¶ | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.NoLock.html |
138433fb33c6-0 | langchain.utils.env.get_from_env¶
langchain.utils.env.get_from_env(key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from an environment variable. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_env.html |
a677e3630830-0 | langchain.utils.strings.stringify_dict¶
langchain.utils.strings.stringify_dict(data: dict) → str[source]¶
Stringify a dictionary.
Parameters
data – The dictionary to stringify.
Returns
The stringified dictionary.
Return type
str | lang/api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_dict.html |
75e498b90996-0 | langchain.utils.html.find_all_links¶
langchain.utils.html.find_all_links(raw_html: str, *, pattern: Optional[Union[str, Pattern]] = None) → List[str][source]¶
Extract all links from a raw html string.
Parameters
raw_html – original html.
pattern – Regex to use for extracting links from raw html.
Returns
all links
Return type
List[str] | lang/api.python.langchain.com/en/latest/utils/langchain.utils.html.find_all_links.html |
443e2b91a537-0 | langchain.utils.openai_functions.convert_pydantic_to_openai_function¶
langchain.utils.openai_functions.convert_pydantic_to_openai_function(model: Type[BaseModel], *, name: Optional[str] = None, description: Optional[str] = None) → FunctionDescription[source]¶
Converts a Pydantic model to a function description for the OpenAI API. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.convert_pydantic_to_openai_function.html |
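A minimal sketch, assuming the langchain.pydantic_v1 shim; the GetWeather model is hypothetical, and its docstring supplies the function description:
from langchain.pydantic_v1 import BaseModel, Field
from langchain.utils.openai_functions import convert_pydantic_to_openai_function


class GetWeather(BaseModel):
    """Get the current weather for a city."""

    city: str = Field(description="City name")


fn = convert_pydantic_to_openai_function(GetWeather)
print(fn["name"])        # "GetWeather"
print(fn["parameters"])  # JSON schema derived from the model fields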
1b16711d5af8-0 | langchain.utils.iter.NoLock¶
class langchain.utils.iter.NoLock[source]¶
Dummy lock that provides the proper interface but no protection
Methods
__init__()
__init__()¶ | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.NoLock.html |
f2ae65cf1718-0 | langchain.utils.loading.try_load_from_hub¶
langchain.utils.loading.try_load_from_hub(path: Union[str, Path], loader: Callable[[str], T], valid_prefix: str, valid_suffixes: Set[str], **kwargs: Any) → Optional[T][source]¶
Load configuration from hub. Returns None if path is not a hub path. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.loading.try_load_from_hub.html |
61ccf7c83c48-0 | langchain.utils.openai.is_openai_v1¶
langchain.utils.openai.is_openai_v1() → bool[source]¶
Return whether the installed openai package is version 1.x or later. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.openai.is_openai_v1.html |
48d7096c5a72-0 | langchain.utils.aiter.tee_peer¶
async langchain.utils.aiter.tee_peer(iterator: AsyncIterator[T], buffer: Deque[T], peers: List[Deque[T]], lock: AsyncContextManager[Any]) → AsyncGenerator[T, None][source]¶
An individual iterator of a tee() | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.tee_peer.html |
abfdb7248744-0 | langchain.utils.aiter.atee¶
langchain.utils.aiter.atee¶
alias of Tee | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.atee.html |
84f57ab63f9a-0 | langchain.utils.math.cosine_similarity¶
langchain.utils.math.cosine_similarity(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray]) → ndarray[source]¶
Row-wise cosine similarity between two equal-width matrices. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity.html |
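A quick numeric sketch; the matrices are arbitrary but equal-width, as required:
import numpy as np

from langchain.utils.math import cosine_similarity

X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.0], [1.0, 1.0]])
# 2x2 matrix of row-wise similarities, roughly [[1.0, 0.71], [0.0, 0.71]].
print(cosine_similarity(X, Y))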
f0067e92cc31-0 | langchain.utils.iter.safetee¶
langchain.utils.iter.safetee¶
alias of Tee | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.safetee.html |
c90d182290c9-0 | langchain.utils.input.get_colored_text¶
langchain.utils.input.get_colored_text(text: str, color: str) → str[source]¶
Get colored text. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.input.get_colored_text.html |
98efc4794f9b-0 | langchain.utils.input.print_text¶
langchain.utils.input.print_text(text: str, color: Optional[str] = None, end: str = '', file: Optional[TextIO] = None) → None[source]¶
Print text with highlighting and no end characters. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.input.print_text.html |
1366b40eeaec-0 | langchain.utils.iter.batch_iterate¶
langchain.utils.iter.batch_iterate(size: int, iterable: Iterable[T]) → Iterator[List[T]][source]¶
Utility batching function. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.iter.batch_iterate.html |
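A one-liner sketch of batch_iterate; note that the batch size comes first in the signature:
from langchain.utils.iter import batch_iterate

# Yields successive chunks of at most `size` items.
print(list(batch_iterate(2, [1, 2, 3, 4, 5])))  # [[1, 2], [3, 4], [5]]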
89d6ab0dddc0-0 | langchain.utils.utils.check_package_version¶
langchain.utils.utils.check_package_version(package: str, lt_version: Optional[str] = None, lte_version: Optional[str] = None, gt_version: Optional[str] = None, gte_version: Optional[str] = None) → None[source]¶
Check the version of a package. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.check_package_version.html |
fa20f8b34455-0 | langchain.utils.formatting.StrictFormatter¶
class langchain.utils.formatting.StrictFormatter[source]¶
A subclass of formatter that checks for extra keys.
Methods
__init__()
check_unused_args(used_args, args, kwargs)
Check to see if extra parameters are passed.
convert_field(value, conversion)
format(format_string, /, *args, **kwargs)
format_field(value, format_spec)
get_field(field_name, args, kwargs)
get_value(key, args, kwargs)
parse(format_string)
validate_input_variables(format_string, ...)
vformat(format_string, args, kwargs)
Check that no arguments are provided.
__init__()¶
check_unused_args(used_args: Sequence[Union[int, str]], args: Sequence, kwargs: Mapping[str, Any]) → None[source]¶
Check to see if extra parameters are passed.
convert_field(value, conversion)¶
format(format_string, /, *args, **kwargs)¶
format_field(value, format_spec)¶
get_field(field_name, args, kwargs)¶
get_value(key, args, kwargs)¶
parse(format_string)¶
validate_input_variables(format_string: str, input_variables: List[str]) → None[source]¶
vformat(format_string: str, args: Sequence, kwargs: Mapping[str, Any]) → str[source]¶
Check that no arguments are provided. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.formatting.StrictFormatter.html |
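A small sketch of the strict behaviour; the exception type for an unused keyword is assumed to be KeyError, per check_unused_args:
from langchain.utils.formatting import StrictFormatter

formatter = StrictFormatter()
print(formatter.format("Hello, {name}!", name="World"))  # Hello, World!

try:
    # An unused keyword argument is rejected rather than silently ignored.
    formatter.format("Hello, {name}!", name="World", extra="oops")
except KeyError as err:
    print("rejected:", err)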
087f4c446b85-0 | langchain.utils.aiter.py_anext¶
langchain.utils.aiter.py_anext(iterator: ~typing.AsyncIterator[~langchain.utils.aiter.T], default: ~typing.Union[~langchain.utils.aiter.T, ~typing.Any] = <object object>) → Awaitable[Union[T, None, Any]][source]¶
Pure-Python implementation of anext() for testing purposes.
Closely matches the builtin anext() C implementation.
Can be used to compare the built-in implementation of the inner
coroutines machinery to C-implementation of __anext__() and send()
or throw() on the returned generator. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.aiter.py_anext.html |
8a99f3e0050f-0 | langchain.utils.utils.raise_for_status_with_text¶
langchain.utils.utils.raise_for_status_with_text(response: Response) → None[source]¶
Raise an error with the response text. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.utils.raise_for_status_with_text.html |
0b8a206d35c0-0 | langchain.utils.input.get_color_mapping¶
langchain.utils.input.get_color_mapping(items: List[str], excluded_colors: Optional[List] = None) → Dict[str, str][source]¶
Get mapping for items to a support color. | lang/api.python.langchain.com/en/latest/utils/langchain.utils.input.get_color_mapping.html |
75531ebdd941-0 | langchain.utils.openai_functions.ToolDescription¶
class langchain.utils.openai_functions.ToolDescription[source]¶
Representation of a callable function to the OpenAI API.
type: Literal['function']¶
function: langchain.utils.openai_functions.FunctionDescription¶ | lang/api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.ToolDescription.html |
af6a19faef1b-0 | langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator¶
class langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator[source]¶
Bases: BaseModel
Generates synthetic data using the given LLM and few-shot template.
Utilizes the provided LLM to produce synthetic data based on the
few-shot prompt template.
template¶
Template for few-shot prompting.
Type
FewShotPromptTemplate
llm¶
Large Language Model to use for generation.
Type
Optional[BaseLanguageModel]
llm_chain¶
LLM chain with the LLM and few-shot template.
Type
Optional[Chain]
example_input_key¶
Key to use for storing example inputs.
Type
str
Usage Example:
>>> template = FewShotPromptTemplate(...)
>>> llm = BaseLanguageModel(...)
>>> generator = SyntheticDataGenerator(template=template, llm=llm)
>>> results = generator.generate(subject="climate change", runs=5)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_input_key: str = 'example'¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param llm_chain: Optional[langchain.chains.base.Chain] = None¶
param results: list = []¶
param template: langchain.prompts.few_shot.FewShotPromptTemplate [Required]¶
async agenerate(subject: str, runs: int, extra: str = '', *args: Any, **kwargs: Any) → List[str][source]¶
Generate synthetic data using the given subject asynchronously.
Note: Since the LLM calls run concurrently,
you may have fewer duplicates by adding specific instructions to
the “extra” keyword argument.
Parameters | lang/api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html |
af6a19faef1b-1 | the “extra” keyword argument.
Parameters
subject (str) – The subject the synthetic data will be about.
runs (int) – Number of times to generate the data asynchronously.
extra (str) – Extra instructions for steerability in data generation.
Returns
List of generated synthetic data for the given subject.
Return type
List[str]
Usage Example:
>>> results = await generator.agenerate(subject="climate change", runs=5,
...     extra="Focus on env impacts.")
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html |
af6a19faef1b-2 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
generate(subject: str, runs: int, *args: Any, **kwargs: Any) → List[str][source]¶
Generate synthetic data using the given subject string.
Parameters
subject (str) – The subject the synthetic data will be about.
runs (int) – Number of times to generate the data.
extra (str) – Extra instructions for steerability in data generation.
Returns
List of generated synthetic data.
Return type
List[str]
Usage Example:
>>> results = generator.generate(subject="climate change", runs=5,
...     extra="Focus on environmental impacts.")
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict(). | lang/api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html |
af6a19faef1b-3 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html |
8f0f6ccff58c-0 | langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator¶
langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: ChatOpenAI, prompt: BasePromptTemplate, output_parser: Optional[BaseLLMOutputParser] = None, **kwargs: Any) → SyntheticDataGenerator[source]¶
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
This function creates an LLM chain designed for structured output based on the
provided schema, language model, and prompt template. The resulting chain is then
used to instantiate and return a SyntheticDataGenerator.
Parameters
output_schema (Union[Dict[str, Any], Type[BaseModel]]) – Schema for the expected
output. This can be either a dictionary representing a valid JsonSchema or a
Pydantic BaseModel class.
llm (ChatOpenAI) – OpenAI language model to use.
prompt (BasePromptTemplate) – Template to be used for generating prompts.
output_parser (Optional[BaseLLMOutputParser], optional) – Parser for processing
model outputs. If none is provided, a default will be inferred from the
function types.
**kwargs – Additional keyword arguments to be passed to
create_structured_output_chain.
Returns
SyntheticDataGenerator: An instance of the data generator set up with
the constructed chain.
Usage:
To generate synthetic data with a structured output, first define your desired
output schema. Then, use this function to create a SyntheticDataGenerator
instance. After obtaining the generator, you can utilize its methods to produce
the desired synthetic data. | lang/api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator.html |
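A rough usage sketch of the flow described above. The Ticket schema, the few-shot example, and the prompt wording are all made up for illustration, an OPENAI_API_KEY is assumed to be set, and the model settings are arbitrary:
from langchain.chat_models import ChatOpenAI
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.pydantic_v1 import BaseModel
from langchain_experimental.tabular_synthetic_data.openai import (
    create_openai_data_generator,
)


class Ticket(BaseModel):  # hypothetical output schema
    subject: str
    priority: int


example_prompt = PromptTemplate(input_variables=["example"], template="{example}")
prompt = FewShotPromptTemplate(
    examples=[{"example": "subject: printer jam, priority: 2"}],
    example_prompt=example_prompt,
    prefix="Generate synthetic support tickets.",
    suffix="Subject area: {subject}\nExtra: {extra}",
    input_variables=["subject", "extra"],
)

generator = create_openai_data_generator(
    output_schema=Ticket,
    llm=ChatOpenAI(temperature=1),
    prompt=prompt,
)
# Each run yields one structured record parsed into the Ticket schema.
results = generator.generate(subject="customer support tickets", extra="vary the wording", runs=3)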
0f45bcde4ab6-0 | langchain.adapters.openai.ChatCompletion¶
class langchain.adapters.openai.ChatCompletion[source]¶
Chat completion.
Methods
__init__()
acreate()
create()
__init__()¶
async static acreate(messages: Sequence[Dict[str, Any]], *, provider: str = 'ChatOpenAI', stream: Literal[False] = False, **kwargs: Any) → dict[source]¶
async static acreate(messages: Sequence[Dict[str, Any]], *, provider: str = 'ChatOpenAI', stream: Literal[True], **kwargs: Any) → AsyncIterator
static create(messages: Sequence[Dict[str, Any]], *, provider: str = 'ChatOpenAI', stream: Literal[False] = False, **kwargs: Any) → dict[source]¶
static create(messages: Sequence[Dict[str, Any]], *, provider: str = 'ChatOpenAI', stream: Literal[True], **kwargs: Any) → Iterable | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.ChatCompletion.html |
36b675d44581-0 | langchain.adapters.openai.convert_openai_messages¶
langchain.adapters.openai.convert_openai_messages(messages: Sequence[Dict[str, Any]]) → List[BaseMessage][source]¶
Convert dictionaries representing OpenAI messages to LangChain format.
Parameters
messages – List of dictionaries representing OpenAI messages
Returns
List of LangChain BaseMessage objects. | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.convert_openai_messages.html |
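A tiny sketch converting OpenAI-style dicts into LangChain messages; the conversation content is illustrative:
from langchain.adapters.openai import convert_openai_messages

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Returns LangChain message objects, e.g. [SystemMessage(...), HumanMessage(...)].
print(convert_openai_messages(messages))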
09e9dc5a00d3-0 | langchain.adapters.openai.aenumerate¶
async langchain.adapters.openai.aenumerate(iterable: AsyncIterator[Any], start: int = 0) → AsyncIterator[tuple[int, Any]][source]¶
Async version of enumerate function. | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.aenumerate.html |
27f3a88ba7f3-0 | langchain.adapters.openai.convert_message_to_dict¶
langchain.adapters.openai.convert_message_to_dict(message: BaseMessage) → dict[source]¶
Convert a LangChain message to a dictionary.
Parameters
message – The LangChain message.
Returns
The dictionary.
Examples using convert_message_to_dict¶
Twitter (via Apify) | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.convert_message_to_dict.html |
c499d7d7dfcd-0 | langchain.adapters.openai.convert_dict_to_message¶
langchain.adapters.openai.convert_dict_to_message(_dict: Mapping[str, Any]) → BaseMessage[source]¶
Convert a dictionary to a LangChain message.
Parameters
_dict – The dictionary.
Returns
The LangChain message. | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.convert_dict_to_message.html |
4a6eb603e9e4-0 | langchain.adapters.openai.convert_messages_for_finetuning¶
langchain.adapters.openai.convert_messages_for_finetuning(sessions: Iterable[ChatSession]) → List[List[dict]][source]¶
Convert messages to a list of lists of dictionaries for fine-tuning.
Parameters
sessions – The chat sessions.
Returns
The list of lists of dictionaries.
Examples using convert_messages_for_finetuning¶
Facebook Messenger
Chat loaders
iMessage | lang/api.python.langchain.com/en/latest/adapters/langchain.adapters.openai.convert_messages_for_finetuning.html |
f2b5823127a0-0 | langchain.document_loaders.modern_treasury.ModernTreasuryLoader¶
class langchain.document_loaders.modern_treasury.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]¶
Load from Modern Treasury.
Parameters
resource – The Modern Treasury resource to load.
organization_id – The Modern Treasury organization ID. It can also be
specified via the environment variable
“MODERN_TREASURY_ORGANIZATION_ID”.
api_key – The Modern Treasury API key. It can also be specified via
the environment variable “MODERN_TREASURY_API_KEY”.
Methods
__init__(resource[, organization_id, api_key])
param resource
The Modern Treasury resource to load.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None) → None[source]¶
Parameters
resource – The Modern Treasury resource to load.
organization_id – The Modern Treasury organization ID. It can also be
specified via the environment variable
“MODERN_TREASURY_ORGANIZATION_ID”.
api_key – The Modern Treasury API key. It can also be specified via
the environment variable “MODERN_TREASURY_API_KEY”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html |
f2b5823127a0-1 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ModernTreasuryLoader¶
Modern Treasury | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html |
f3932bf51abe-0 | langchain.document_loaders.docusaurus.DocusaurusLoader¶
class langchain.document_loaders.docusaurus.DocusaurusLoader(url: str, custom_html_tags: Optional[List[str]] = None, **kwargs: Any)[source]¶
Loader that leverages the SitemapLoader to loop through the generated pages of a
Docusaurus Documentation website and extracts the content by looking for specific
HTML tags. By default, the parser searches for the main content of the Docusaurus
page, which is normally the <article>. You also have the option to define your own
custom HTML tags by providing them as a list, for example: [“div”, “.main”, “a”].
Initialize DocusaurusLoader
:param url: The base URL of the Docusaurus website.
:param custom_html_tags: Optional custom html tags to extract content from pages.
:param kwargs: Additional args to extend the underlying SitemapLoader, for example:
filter_urls, blocksize, meta_function, is_local, continue_on_failure
Attributes
web_path
Methods
__init__(url[, custom_html_tags])
Initialize DocusaurusLoader :param url: The base URL of the Docusaurus website. :param custom_html_tags: Optional custom html tags to extract content from pages. :param kwargs: Additional args to extend the underlying SitemapLoader, for example: filter_urls, blocksize, meta_function, is_local, continue_on_failure.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser]) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docusaurus.DocusaurusLoader.html |
f3932bf51abe-1 | Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(url: str, custom_html_tags: Optional[List[str]] = None, **kwargs: Any)[source]¶
Initialize DocusaurusLoader
:param url: The base URL of the Docusaurus website.
:param custom_html_tags: Optional custom html tags to extract content from pages.
:param kwargs: Additional args to extend the underlying SitemapLoader, for example:
filter_urls, blocksize, meta_function, is_local, continue_on_failure
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_sitemap(soup: Any) → List[dict]¶
Parse sitemap xml and load into a list of dicts.
Parameters
soup – BeautifulSoup object.
Returns
List of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docusaurus.DocusaurusLoader.html |
f3932bf51abe-2 | Fetch all urls, then return soups for all results. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docusaurus.DocusaurusLoader.html |
5bf5ec6ead27-0 | langchain.document_loaders.twitter.TwitterTweetLoader¶
class langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
Load Twitter tweets.
Read tweets of the user’s Twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api
/getting-started/getting-access-to-the-twitter-api
to get your token. And create a v2 version of the app.
Methods
__init__(auth_handler, twitter_users[, ...])
from_bearer_token(oauth2_bearer_token, ...)
Create a TwitterTweetLoader from OAuth2 bearer token.
from_secrets(access_token, ...[, number_tweets])
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load()
A lazy loader for Documents.
load()
Load tweets.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load tweets. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html |
5bf5ec6ead27-1 | load() → List[Document][source]¶
Load tweets.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TwitterTweetLoader¶
Twitter | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html |
8270e2e39ded-0 | langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Load a JSON file using a jq schema.
Example
[{"text": …}, {"text": …}, {"text": …}] -> schema = .[].text
{"key": [{"text": …}, {"text": …}, {"text": …}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results to a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
8270e2e39ded-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results to a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
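A brief sketch of JSONLoader with a jq_schema; it assumes the jq Python package is installed and that ./chat.json (a hypothetical file) looks like {"messages": [{"text": "hi"}, {"text": "bye"}]}:
from langchain.document_loaders.json_loader import JSONLoader

loader = JSONLoader(
    file_path="./chat.json",
    jq_schema=".messages[].text",
)
docs = loader.load()
print(docs[0].page_content)  # "hi"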
5a5379be6863-0 | langchain.document_loaders.evernote.EverNoteLoader¶
class langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]¶
Load from EverNote.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document, any non content metadata (e.g. ‘author’, ‘created’, ‘updated’ etc.
but not ‘content-raw’ or ‘resource’) tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata
on the document will be the 'source', which contains the file name of the export.
Initialize with file path.
Methods
__init__(file_path[, load_single_document])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents from EverNote export file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, load_single_document: bool = True)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from EverNote export file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
5a5379be6863-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EverNoteLoader¶
EverNote | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
4f3dd7d76eb9-0 | langchain.document_loaders.word_document.UnstructuredWordDocumentLoader¶
class langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft Word file using Unstructured.
Works with both .docx and .doc files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader(
    "example.docx", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-docx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html |
4f3dd7d76eb9-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredWordDocumentLoader¶
Microsoft Word | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html |
6a49c6183b64-0 | langchain.document_loaders.chromium.AsyncChromiumLoader¶
class langchain.document_loaders.chromium.AsyncChromiumLoader(urls: List[str])[source]¶
Scrape HTML pages from URLs using a headless instance of Chromium.
Initialize the loader with a list of URL paths.
Parameters
urls (List[str]) – A list of URLs to scrape content from.
Raises
ImportError – If the required ‘playwright’ package is not installed.
Methods
__init__(urls)
Initialize the loader with a list of URL paths.
ascrape_playwright(url)
Asynchronously scrape the content of a given URL using Playwright's async API.
lazy_load()
Lazily load text content from the provided URLs.
load()
Load and return all Documents from the provided URLs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str])[source]¶
Initialize the loader with a list of URL paths.
Parameters
urls (List[str]) – A list of URLs to scrape content from.
Raises
ImportError – If the required ‘playwright’ package is not installed.
async ascrape_playwright(url: str) → str[source]¶
Asynchronously scrape the content of a given URL using Playwright’s async API.
Parameters
url (str) – The URL to scrape.
Returns
The scraped HTML content or an error message if an exception occurs.
Return type
str
lazy_load() → Iterator[Document][source]¶
Lazily load text content from the provided URLs.
This method yields Documents one at a time as they’re scraped,
instead of waiting to scrape all URLs before returning.
Yields
Document – The scraped content encapsulated within a Document object.
load() → List[Document][source]¶
Load and return all Documents from the provided URLs. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chromium.AsyncChromiumLoader.html |
6a49c6183b64-1 | Load and return all Documents from the provided URLs.
Returns
A list of Document objects
containing the scraped content from each URL.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AsyncChromiumLoader¶
Beautiful Soup
Async Chromium
Set env var OPENAI_API_KEY or load from a .env file: | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chromium.AsyncChromiumLoader.html |
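A short sketch of AsyncChromiumLoader; it assumes the playwright package and a Chromium browser are installed (pip install playwright && playwright install chromium), and the URL is illustrative:
from langchain.document_loaders.chromium import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://example.com"])
docs = loader.load()  # runs the async scraping under the hood
print(docs[0].page_content[:200])  # raw HTML of the page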
0de79a96b404-0 | langchain.document_loaders.parsers.language.cobol.CobolSegmenter¶
class langchain.document_loaders.parsers.language.cobol.CobolSegmenter(code: str)[source]¶
Code segmenter for COBOL.
Attributes
DIVISION_PATTERN
PARAGRAPH_PATTERN
SECTION_PATTERN
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.cobol.CobolSegmenter.html |
814670f5fe92-0 | langchain.document_loaders.notion.NotionDirectoryLoader¶
class langchain.document_loaders.notion.NotionDirectoryLoader(path: str, *, encoding: str = 'utf-8')[source]¶
Load Notion directory dump.
Initialize with a file path.
Methods
__init__(path, *[, encoding])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, *, encoding: str = 'utf-8') → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NotionDirectoryLoader¶
Notion DB
Notion DB 1/2
Perform context-aware text splitting | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html |
b29ef58c3d16-0 | langchain.document_loaders.rocksetdb.RocksetLoader¶
class langchain.document_loaders.rocksetdb.RocksetLoader(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Load from a Rockset database.
To use, you should have the rockset python package installed.
Example
# This code will load 3 records from the "langchain_demo"
# collection as Documents, with the `text` column used as
# the content
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models
loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(
        query="select * from langchain_demo limit 3"
    ),
    ["text"],
)
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It is a method that takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
Methods | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
b29ef58c3d16-1 | line. This method is only relevant if there are multiple content_keys.
Methods
__init__(client, query, content_keys[, ...])
Initialize with Rockset client.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It is a method that takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
b29ef58c3d16-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RocksetLoader¶
Rockset | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
7a4cfe196fb7-0 | langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Load acreom vault from a directory.
Initialize the loader.
Attributes
FRONT_MATTER_REGEX
Regex to match front matter metadata in markdown files.
file_path
Path to the directory containing the markdown files.
encoding
Encoding to use when reading the files.
collect_metadata
Whether to collect metadata from the front matter.
Methods
__init__(path[, encoding, collect_metadata])
Initialize the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Initialize the loader.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AcreomLoader¶
acreom | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html |
897c712d1597-0 | langchain.document_loaders.onedrive_file.OneDriveFileLoader¶
class langchain.document_loaders.onedrive_file.OneDriveFileLoader[source]¶
Bases: BaseLoader, BaseModel
Load a file from Microsoft OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param file: File [Required]¶
The file to load.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
897c712d1597-1 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
f1ffabac1ec4-0 | langchain.document_loaders.chatgpt.ChatGPTLoader¶
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]¶
Load conversations from exported ChatGPT data.
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all logs.
Methods
__init__(log_file[, num_logs])
Initialize a class object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(log_file: str, num_logs: int = -1)[source]¶
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all logs.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
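Example (a hedged sketch; the path points to a placeholder conversations.json file from a ChatGPT data export):
from langchain.document_loaders.chatgpt import ChatGPTLoader

# conversations.json comes from the ChatGPT "Export data" feature (placeholder path).
loader = ChatGPTLoader(log_file="./example_data/conversations.json", num_logs=1)
docs = loader.load()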
Examples using ChatGPTLoader¶
OpenAI
ChatGPT Data | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.ChatGPTLoader.html |
23c4aeee6af5-0 | langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader¶
class langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Load from Hugging Face Hub datasets.
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
Note: Currently the function assumes the content is a string.
If it is not, download the dataset using the huggingface library and convert it
using the json or pandas loaders.
https://github.com/langchain-ai/langchain/issues/10674
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
Methods
__init__(path[, page_content_column, name, ...])
Initialize the HuggingFaceDatasetLoader.
lazy_load()
Load documents lazily.
load()
Load documents.
load_and_split([text_splitter]) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
Load Documents and split into chunks.
__init__(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
Note: Currently the function assumes the content is a string.
If it is not, download the dataset using the huggingface library and convert it
using the json or pandas loaders.
https://github.com/langchain-ai/langchain/issues/10674
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
lazy_load() → Iterator[Document][source]¶
Load documents lazily.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
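Example (a hedged sketch; "imdb" is just a public dataset whose "text" column holds plain strings):
from langchain.document_loaders import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()  # or lazy_load() for an iterator of Documents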
Examples using HuggingFaceDatasetLoader¶
HuggingFace dataset | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
6145247f66c1-0 | langchain.document_loaders.embaas.BaseEmbaasLoader¶
class langchain.document_loaders.embaas.BaseEmbaasLoader[source]¶
Bases: BaseModel
Base loader for Embaas document extraction API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API.
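Example (a hedged sketch; this base class is not used on its own — EmbaasLoader is assumed here as one of its concrete subclasses, and the key and file path are placeholders):
from langchain.document_loaders.embaas import EmbaasLoader

loader = EmbaasLoader(
    file_path="./example.pdf",             # placeholder document
    embaas_api_key="YOUR-EMBAAS-API-KEY",  # field inherited from BaseEmbaasLoader
)
docs = loader.load()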
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
ceee09be4ddd-0 | langchain.document_loaders.airbyte.AirbyteShopifyLoader¶
class langchain.document_loaders.airbyte.AirbyteShopifyLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Shopify using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteShopifyLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
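Example (a hedged sketch; the config keys follow the Airbyte source-shopify connector spec and every value is a placeholder):
from langchain.document_loaders.airbyte import AirbyteShopifyLoader

config = {
    # Placeholder values; the exact schema is defined by the Airbyte connector, not LangChain.
    "start_date": "2023-01-01",
    "shop": "YOUR-SHOP-NAME",
    "credentials": {"auth_method": "api_password", "api_password": "YOUR-API-PASSWORD"},
}

loader = AirbyteShopifyLoader(config=config, stream_name="orders")
docs = loader.load()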
Examples using AirbyteShopifyLoader¶
Airbyte Shopify | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteShopifyLoader.html |
e07b962197e9-0 | langchain.document_loaders.geodataframe.GeoDataFrameLoader¶
class langchain.document_loaders.geodataframe.GeoDataFrameLoader(data_frame: Any, page_content_column: str = 'geometry')[source]¶
Load geopandas Dataframe.
Initialize with geopandas Dataframe.
Parameters
data_frame – geopandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “geometry”.
Methods
__init__(data_frame[, page_content_column])
Initialize with geopandas Dataframe.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'geometry')[source]¶
Initialize with geopandas Dataframe.
Parameters
data_frame – geopandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “geometry”.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
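Example (a hedged sketch; any geopandas GeoDataFrame works — the bundled naturalearth_lowres sample is used purely as a placeholder and is no longer shipped with geopandas >= 1.0):
import geopandas as gpd
from langchain.document_loaders import GeoDataFrameLoader

gdf = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))  # placeholder data
loader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")
docs = loader.load()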
Examples using GeoDataFrameLoader¶
Geopandas | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.geodataframe.GeoDataFrameLoader.html |
7dff003674bd-0 | langchain.document_loaders.onedrive.OneDriveLoader¶
class langchain.document_loaders.onedrive.OneDriveLoader[source]¶
Bases: O365BaseLoader
Load from Microsoft OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param chunk_size: Union[int, str] = 5242880¶
Number of bytes to retrieve from each api call to the server. int or ‘auto’.
param drive_id: str [Required]¶
The ID of the OneDrive drive to load data from.
param folder_path: Optional[str] = None¶
The path to the folder to load data from.
param object_ids: Optional[List[str]] = None¶
The IDs of the objects to load data from.
param settings: _O365Settings [Optional]¶
Settings for the Office365 API client.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html |
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
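Example (a hedged sketch; the drive ID and folder path are placeholders, and the client credentials are expected through the Office365 settings, commonly the O365_CLIENT_ID / O365_CLIENT_SECRET environment variables):
from langchain.document_loaders.onedrive import OneDriveLoader

loader = OneDriveLoader(
    drive_id="YOUR-DRIVE-ID",         # placeholder
    folder_path="Documents/clients",  # placeholder folder inside the drive
    auth_with_token=True,
)
docs = loader.load()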
Examples using OneDriveLoader¶
Microsoft OneDrive | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html |
6848344c7eec-0 | langchain.document_loaders.googledrive.GoogleDriveLoader¶
class langchain.document_loaders.googledrive.GoogleDriveLoader[source]¶
Bases: BaseLoader, BaseModel
Load Google Docs from Google Drive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
Path to the credentials file.
param document_ids: Optional[List[str]] = None¶
The document ids to load from.
param file_ids: Optional[List[str]] = None¶
The file ids to load from.
param file_loader_cls: Any = None¶
The file loader class to use.
param file_loader_kwargs: Dict[str, Any] = {}¶
The file loader kwargs to use.
param file_types: Optional[Sequence[str]] = None¶
The file types to load. Only applies when folder_id is given.
param folder_id: Optional[str] = None¶
The folder id to load from.
param load_trashed_files: bool = False¶
Whether to load trashed files. Only applies when folder_id is given.
param recursive: bool = False¶
Whether to load recursively. Only applies when folder_id is given.
param service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')¶
Path to the service account key file.
param token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶
Path to the token file.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
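Example (a hedged sketch; the folder ID is a placeholder, and credentials_path / token_path fall back to the defaults listed above):
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="YOUR-FOLDER-ID",  # placeholder Google Drive folder ID
    recursive=False,
)
docs = loader.load()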
Examples using GoogleDriveLoader¶
Google Drive | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
7afa020b2c42-0 | langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶
class langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]¶
Bases: BaseLoader, BaseModel
Load datasets from the Apify web scraping, crawling, and data extraction platform.
For details, see https://docs.apify.com/platform/integrations/langchain
Example
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document
loader = ApifyDatasetLoader(
dataset_id="YOUR-DATASET-ID",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
documents = loader.load()
Initialize the loader with an Apify dataset ID and a mapping function.
Parameters
dataset_id (str) – The ID of the dataset on the Apify platform.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to an instance
of the Document class.
param apify_client: Any = None¶
An instance of the ApifyClient class from the apify-client Python package.
param dataset_id: str [Required]¶
The ID of the dataset on the Apify platform.
param dataset_mapping_function: Callable[[Dict], langchain.schema.document.Document] [Required]¶
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ApifyDatasetLoader¶
Apify
Apify Dataset | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
a42b73913211-0 | langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader[source]¶
Bases: BaseGitHubLoader
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param assignee: Optional[str] = None¶
Filter on assigned user. Pass ‘none’ for no user and ‘*’ for any user.
param creator: Optional[str] = None¶
Filter on the user that created the issue.
param direction: Optional[Literal['asc', 'desc']] = None¶
The direction to sort the results by. Can be one of: ‘asc’, ‘desc’.
param github_api_url: str = 'https://api.github.com'¶
URL of GitHub API
param include_prs: bool = True¶
If True, include Pull Requests in results; otherwise ignore them.
param labels: Optional[List[str]] = None¶
Label names to filter on. Example: bug,ui,@high.
param mentioned: Optional[str] = None¶
Filter on a user that’s mentioned in the issue.
param milestone: Optional[Union[int, Literal['*', 'none']]] = None¶
If integer is passed, it should be a milestone’s number field.
If the string ‘*’ is passed, issues with any milestone are accepted.
If the string ‘none’ is passed, issues without milestones are returned.
param repo: str [Required]¶
Name of repository
param since: Optional[str] = None¶
Only show notifications updated after the given time. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
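Example (a hedged sketch; the repository name and token are placeholders):
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="OWNER/REPO",                          # placeholder repository
    access_token="YOUR-PERSONAL-ACCESS-TOKEN",  # placeholder token
    include_prs=False,
    state="open",
)
docs = loader.load()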
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load() → List[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |