Dataset schema (each record below lists these fields in this order):

code: string, length 26 to 870k
docstring: string, length 1 to 65.6k
func_name: string, length 1 to 194
language: string, 1 class
repo: string, length 8 to 68
path: string, length 5 to 194
url: string, length 46 to 254
license: string, 4 classes
def _validate_inputs_outputs( inputs: Set[str], outputs: Set[str], pipe: Pipeline ) -> None: """Safeguards to ensure that: - parameters are not specified under inputs - inputs are only free inputs - outputs do not contain free inputs """ inputs = {_strip_transcoding(k) for k in inputs} outputs = {_strip_transcoding(k) for k in outputs} if any(_is_parameter(i) for i in inputs): raise ModularPipelineError( "Parameters should be specified in the 'parameters' argument" ) free_inputs = {_strip_transcoding(i) for i in pipe.inputs()} if not inputs <= free_inputs: raise ModularPipelineError( "Inputs must not be outputs from another node in the same pipeline" ) if outputs & free_inputs: raise ModularPipelineError( "All outputs must be generated by some node within the pipeline" )
Safeguards to ensure that: - parameters are not specified under inputs - inputs are only free inputs - outputs do not contain free inputs
_validate_inputs_outputs
python
kedro-org/kedro
kedro/pipeline/modular_pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/modular_pipeline.py
Apache-2.0
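These checks are normally reached through Kedro's public `pipeline()` factory, which forwards its `inputs` and `outputs` arguments to this validator. A minimal sketch of the three rules, assuming that route and using made-up dataset names:

from kedro.pipeline import node, pipeline

def identity(x):
    return x

base = pipeline([node(identity, "raw", "clean", name="clean_node")])

# Rule 1: parameters must not be passed under `inputs`.
# pipeline(base, inputs={"params:alpha"})   # raises ModularPipelineError
# Rule 2: `inputs` must be free inputs of the wrapped pipeline ("raw", not "clean").
# pipeline(base, inputs={"clean"})          # raises ModularPipelineError
# Rule 3: `outputs` must not contain free inputs.
# pipeline(base, outputs={"raw"})           # raises ModularPipelineError
pipeline(base, inputs={"raw"}, outputs={"clean"}, namespace="prep")  # passes validation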
def _validate_datasets_exist( inputs: Set[str], outputs: Set[str], parameters: Set[str], pipe: Pipeline, ) -> None: """Validate that inputs, parameters and outputs map correctly onto the provided nodes.""" inputs = {_strip_transcoding(k) for k in inputs} outputs = {_strip_transcoding(k) for k in outputs} existing = {_strip_transcoding(ds) for ds in pipe.datasets()} non_existent = (inputs | outputs | parameters) - existing if non_existent: sorted_non_existent = sorted(non_existent) possible_matches = [] for non_existent_input in sorted_non_existent: possible_matches += difflib.get_close_matches(non_existent_input, existing) error_msg = f"Failed to map datasets and/or parameters onto the nodes provided: {', '.join(sorted_non_existent)}" suggestions = ( f" - did you mean one of these instead: {', '.join(possible_matches)}" if possible_matches else "" ) raise ModularPipelineError(error_msg + suggestions)
Validate that inputs, parameters and outputs map correctly onto the provided nodes.
_validate_datasets_exist
python
kedro-org/kedro
kedro/pipeline/modular_pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/modular_pipeline.py
Apache-2.0
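The "did you mean" suggestions come from `difflib.get_close_matches` over the pipeline's existing dataset names. A standalone sketch of that lookup with illustrative names:

import difflib

existing = {"companies", "shuttles", "reviews"}
missing = sorted({"companys", "shutles"})

possible_matches = []
for name in missing:
    possible_matches += difflib.get_close_matches(name, existing)

print(possible_matches)  # ['companies', 'shuttles']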
def _get_dataset_names_mapping( names: str | set[str] | dict[str, str] | None = None, ) -> dict[str, str]: """Take a name or a collection of dataset names and turn it into a mapping from the old dataset names to the provided ones if necessary. Args: names: A dataset name or collection of dataset names. When str or set[str] is provided, the listed names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current names will be mapped to new names in the resultant pipeline. Returns: A dictionary that maps the old dataset names to the provided ones. Examples: >>> _get_dataset_names_mapping("dataset_name") {"dataset_name": "dataset_name"} # a str name will stay the same >>> _get_dataset_names_mapping(set(["ds_1", "ds_2"])) {"ds_1": "ds_1", "ds_2": "ds_2"} # a set[str] of names will stay the same >>> _get_dataset_names_mapping({"ds_1": "new_ds_1_name"}) {"ds_1": "new_ds_1_name"} # a dict[str, str] of names will map key to value """ if names is None: return {} if isinstance(names, str): return {names: names} if isinstance(names, dict): return copy.deepcopy(names) return {item: item for item in names}
Take a name or a collection of dataset names and turn it into a mapping from the old dataset names to the provided ones if necessary. Args: names: A dataset name or collection of dataset names. When str or set[str] is provided, the listed names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current names will be mapped to new names in the resultant pipeline. Returns: A dictionary that maps the old dataset names to the provided ones. Examples: >>> _get_dataset_names_mapping("dataset_name") {"dataset_name": "dataset_name"} # a str name will stay the same >>> _get_dataset_names_mapping(set(["ds_1", "ds_2"])) {"ds_1": "ds_1", "ds_2": "ds_2"} # a set[str] of names will stay the same >>> _get_dataset_names_mapping({"ds_1": "new_ds_1_name"}) {"ds_1": "new_ds_1_name"} # a dict[str, str] of names will map key to value
_get_dataset_names_mapping
python
kedro-org/kedro
kedro/pipeline/modular_pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/modular_pipeline.py
Apache-2.0
def _normalize_param_name(name: str) -> str: """Make sure that a param name has a `params:` prefix before passing to the node""" return name if name.startswith("params:") else f"params:{name}"
Make sure that a param name has a `params:` prefix before passing to the node
_normalize_param_name
python
kedro-org/kedro
kedro/pipeline/modular_pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/modular_pipeline.py
Apache-2.0
def _get_param_names_mapping( names: str | set[str] | dict[str, str] | None = None, ) -> dict[str, str]: """Take a parameter or a collection of parameter names and turn it into a mapping from existing parameter names to new ones if necessary. It follows the same rule as `_get_dataset_names_mapping` and prefixes the keys on the resultant dictionary with `params:` to comply with node's syntax. Args: names: A parameter name or collection of parameter names. When str or set[str] is provided, the listed names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current names will be mapped to new names in the resultant pipeline. Returns: A dictionary that maps the old parameter names to the provided ones. Examples: >>> _get_param_names_mapping("param_name") {"params:param_name": "params:param_name"} # a str name will stay the same >>> _get_param_names_mapping(set(["param_1", "param_2"])) # a set[str] of names will stay the same {"params:param_1": "params:param_1", "params:param_2": "params:param_2"} >>> _get_param_names_mapping({"param_1": "new_name_for_param_1"}) # a dict[str, str] of names will map key to value {"params:param_1": "params:new_name_for_param_1"} """ params = {} for name, new_name in _get_dataset_names_mapping(names).items(): if _is_all_parameters(name): params[name] = name # don't map parameters into params:parameters else: param_name = _normalize_param_name(name) param_new_name = _normalize_param_name(new_name) params[param_name] = param_new_name return params
Take a parameter or a collection of parameter names and turn it into a mapping from existing parameter names to new ones if necessary. It follows the same rule as `_get_dataset_names_mapping` and prefixes the keys on the resultant dictionary with `params:` to comply with node's syntax. Args: names: A parameter name or collection of parameter names. When str or set[str] is provided, the listed names will stay the same as they are named in the provided pipeline. When dict[str, str] is provided, current names will be mapped to new names in the resultant pipeline. Returns: A dictionary that maps the old parameter names to the provided ones. Examples: >>> _get_param_names_mapping("param_name") {"params:param_name": "params:param_name"} # a str name will stay the same >>> _get_param_names_mapping(set(["param_1", "param_2"])) # a set[str] of names will stay the same {"params:param_1": "params:param_1", "params:param_2": "params:param_2"} >>> _get_param_names_mapping({"param_1": "new_name_for_param_1"}) # a dict[str, str] of names will map key to value {"params:param_1": "params:new_name_for_param_1"}
_get_param_names_mapping
python
kedro-org/kedro
kedro/pipeline/modular_pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/modular_pipeline.py
Apache-2.0
def __init__( # noqa: PLR0913 self, func: Callable, inputs: str | list[str] | dict[str, str] | None, outputs: str | list[str] | dict[str, str] | None, *, name: str | None = None, tags: str | Iterable[str] | None = None, confirms: str | list[str] | None = None, namespace: str | None = None, ): """Create a node in the pipeline by providing a function to be called along with variable names for inputs and/or outputs. Args: func: A function that corresponds to the node logic. The function should have at least one input or output. inputs: The name or the list of the names of variables used as inputs to the function. The number of names should match the number of arguments in the definition of the provided function. When dict[str, str] is provided, variable names will be mapped to function argument names. outputs: The name or the list of the names of variables used as outputs of the function. The number of names should match the number of outputs returned by the provided function. When dict[str, str] is provided, variable names will be mapped to the named outputs the function returns. name: Optional node name to be used when displaying the node in logs or any other visualisations. Valid node name must contain only letters, digits, hyphens, underscores and/or fullstops. tags: Optional set of tags to be applied to the node. Valid node tag must contain only letters, digits, hyphens, underscores and/or fullstops. confirms: Optional name or the list of the names of the datasets that should be confirmed. This will result in calling ``confirm()`` method of the corresponding dataset instance. Specified dataset names do not necessarily need to be present in the node ``inputs`` or ``outputs``. namespace: Optional node namespace. Raises: ValueError: Raised in the following cases: a) When the provided arguments do not conform to the format suggested by the type hint of the argument. b) When the node produces multiple outputs with the same name. c) When an input has the same name as an output. d) When the given node name violates the requirements: it must contain only letters, digits, hyphens, underscores and/or fullstops. """ if not callable(func): raise ValueError( _node_error_message( f"first argument must be a function, not '{type(func).__name__}'." ) ) if inputs and not isinstance(inputs, (list, dict, str)): raise ValueError( _node_error_message( f"'inputs' type must be one of [String, List, Dict, None], " f"not '{type(inputs).__name__}'." ) ) for _input in _to_list(inputs): if not isinstance(_input, str): raise ValueError( _node_error_message( f"names of variables used as inputs to the function " f"must be of 'String' type, but {_input} from {inputs} " f"is '{type(_input)}'." ) ) if outputs and not isinstance(outputs, (list, dict, str)): raise ValueError( _node_error_message( f"'outputs' type must be one of [String, List, Dict, None], " f"not '{type(outputs).__name__}'." ) ) for _output in _to_list(outputs): if not isinstance(_output, str): raise ValueError( _node_error_message( f"names of variables used as outputs of the function " f"must be of 'String' type, but {_output} from {outputs} " f"is '{type(_output)}'." ) ) if not inputs and not outputs: raise ValueError( _node_error_message("it must have some 'inputs' or 'outputs'.") ) self._validate_inputs(func, inputs) self._func = func self._inputs = inputs # The type of _outputs is picked up as possibly being None, however the checks above prevent that # ever being the case. Mypy doesn't get that though, so it complains about the assignment of outputs to # _outputs with different types. self._outputs: str | list[str] | dict[str, str] = outputs # type: ignore[assignment] if name and not re.match(r"[\w\.-]+$", name): raise ValueError( f"'{name}' is not a valid node name. It must contain only " f"letters, digits, hyphens, underscores and/or fullstops." ) self._name = name self._namespace = namespace self._tags = set(_to_list(tags)) for tag in self._tags: if not re.match(r"[\w\.-]+$", tag): raise ValueError( f"'{tag}' is not a valid node tag. It must contain only " f"letters, digits, hyphens, underscores and/or fullstops." ) self._validate_unique_outputs() self._validate_inputs_dif_than_outputs() self._confirms = confirms
Create a node in the pipeline by providing a function to be called along with variable names for inputs and/or outputs. Args: func: A function that corresponds to the node logic. The function should have at least one input or output. inputs: The name or the list of the names of variables used as inputs to the function. The number of names should match the number of arguments in the definition of the provided function. When dict[str, str] is provided, variable names will be mapped to function argument names. outputs: The name or the list of the names of variables used as outputs of the function. The number of names should match the number of outputs returned by the provided function. When dict[str, str] is provided, variable names will be mapped to the named outputs the function returns. name: Optional node name to be used when displaying the node in logs or any other visualisations. Valid node name must contain only letters, digits, hyphens, underscores and/or fullstops. tags: Optional set of tags to be applied to the node. Valid node tag must contain only letters, digits, hyphens, underscores and/or fullstops. confirms: Optional name or the list of the names of the datasets that should be confirmed. This will result in calling ``confirm()`` method of the corresponding dataset instance. Specified dataset names do not necessarily need to be present in the node ``inputs`` or ``outputs``. namespace: Optional node namespace. Raises: ValueError: Raised in the following cases: a) When the provided arguments do not conform to the format suggested by the type hint of the argument. b) When the node produces multiple outputs with the same name. c) When an input has the same name as an output. d) When the given node name violates the requirements: it must contain only letters, digits, hyphens, underscores and/or fullstops.
__init__
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
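A short sketch of the constructor's validation rules, exercised through the `node()` factory defined later in this file (dataset names are illustrative):

from kedro.pipeline import node

def split(data):
    return data[:1], data[1:]

ok = node(split, inputs="full", outputs=["head", "tail"], name="split.node", tags=["prep"])

# Each of the following would raise ValueError instead:
# node("not callable", "full", "head")                      # func must be callable
# node(split, None, None)                                    # needs some inputs or outputs
# node(split, "full", ["head", "head"])                      # duplicate output names
# node(split, "full", ["full", "rest"])                      # an input reused as an output
# node(split, "full", ["head", "tail"], name="bad name!")    # invalid characters in the name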
def _copy(self, **overwrite_params: Any) -> Node: """ Helper function to copy the node, replacing some values. """ params = { "func": self._func, "inputs": self._inputs, "outputs": self._outputs, "name": self._name, "namespace": self._namespace, "tags": self._tags, "confirms": self._confirms, } params.update(overwrite_params) return Node(**params) # type: ignore[arg-type]
Helper function to copy the node, replacing some values.
_copy
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def func(self) -> Callable: """Exposes the underlying function of the node. Returns: Return the underlying function of the node. """ return self._func
Exposes the underlying function of the node. Returns: Return the underlying function of the node.
func
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def func(self, func: Callable) -> None: """Sets the underlying function of the node. Useful if user wants to decorate the function in a node's Hook implementation. Args: func: The new function for node's execution. """ self._func = func
Sets the underlying function of the node. Useful if user wants to decorate the function in a node's Hook implementation. Args: func: The new function for node's execution.
func
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def tags(self) -> set[str]: """Return the tags assigned to the node. Returns: Return the set of all assigned tags to the node. """ return set(self._tags)
Return the tags assigned to the node. Returns: Return the set of all assigned tags to the node.
tags
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def tag(self, tags: str | Iterable[str]) -> Node: """Create a new ``Node`` which is an exact copy of the current one, but with more tags added to it. Args: tags: The tags to be added to the new node. Returns: A copy of the current ``Node`` object with the tags added. """ return self._copy(tags=self.tags | set(_to_list(tags)))
Create a new ``Node`` which is an exact copy of the current one, but with more tags added to it. Args: tags: The tags to be added to the new node. Returns: A copy of the current ``Node`` object with the tags added.
tag
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def name(self) -> str: """Node's name. Returns: Node's name if provided or the name of its function. """ node_name = self._name or str(self) if self.namespace: return f"{self.namespace}.{node_name}" return node_name
Node's name. Returns: Node's name if provided or the name of its function.
name
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def short_name(self) -> str: """Node's name. Returns: Returns a short, user-friendly name that is not guaranteed to be unique. The namespace is stripped out of the node name. """ if self._name: return self._name return self._func_name.replace("_", " ").title()
Node's name. Returns: Returns a short, user-friendly name that is not guaranteed to be unique. The namespace is stripped out of the node name.
short_name
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def namespace(self) -> str | None: """Node's namespace. Returns: String representing node's namespace, typically from outer to inner scopes. """ return self._namespace
Node's namespace. Returns: String representing node's namespace, typically from outer to inner scopes.
namespace
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def inputs(self) -> list[str]: """Return node inputs as a list, in the order required to bind them properly to the node's function. Returns: Node input names as a list. """ if isinstance(self._inputs, dict): return _dict_inputs_to_list(self._func, self._inputs) return _to_list(self._inputs)
Return node inputs as a list, in the order required to bind them properly to the node's function. Returns: Node input names as a list.
inputs
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def outputs(self) -> list[str]: """Return node outputs as a list preserving the original order if possible. Returns: Node output names as a list. """ return _to_list(self._outputs)
Return node outputs as a list preserving the original order if possible. Returns: Node output names as a list.
outputs
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def confirms(self) -> list[str]: """Return dataset names to confirm as a list. Returns: Dataset names to confirm as a list. """ return _to_list(self._confirms)
Return dataset names to confirm as a list. Returns: Dataset names to confirm as a list.
confirms
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def run(self, inputs: dict[str, Any] | None = None) -> dict[str, Any]: """Run this node using the provided inputs and return its results in a dictionary. Args: inputs: Dictionary of inputs as specified at the creation of the node. Raises: ValueError: In the following cases: a) The node function inputs are incompatible with the node input definition. Example 1: node definition input is a list of 2 DataFrames, whereas only 1 was provided or 2 different ones were provided. b) The node function outputs are incompatible with the node output definition. Example 1: node function definition is a dictionary, whereas function returns a list. Example 2: node definition output is a list of 5 strings, whereas the function returns a list of 4 objects. Exception: Any exception thrown during execution of the node. Returns: All produced node outputs are returned in a dictionary, where the keys are defined by the node outputs. """ self._logger.info("Running node: %s", str(self)) outputs = None if not (inputs is None or isinstance(inputs, dict)): raise ValueError( f"Node.run() expects a dictionary or None, " f"but got {type(inputs)} instead" ) try: inputs = {} if inputs is None else inputs if not self._inputs: outputs = self._run_with_no_inputs(inputs) elif isinstance(self._inputs, str): outputs = self._run_with_one_input(inputs, self._inputs) elif isinstance(self._inputs, list): outputs = self._run_with_list(inputs, self._inputs) elif isinstance(self._inputs, dict): outputs = self._run_with_dict(inputs, self._inputs) return self._outputs_to_dictionary(outputs) # purposely catch all exceptions except Exception as exc: self._logger.error( "Node %s failed with error: \n%s", str(self), str(exc), extra={"markup": True}, ) raise exc
Run this node using the provided inputs and return its results in a dictionary. Args: inputs: Dictionary of inputs as specified at the creation of the node. Raises: ValueError: In the following cases: a) The node function inputs are incompatible with the node input definition. Example 1: node definition input is a list of 2 DataFrames, whereas only 1 was provided or 2 different ones were provided. b) The node function outputs are incompatible with the node output definition. Example 1: node function definition is a dictionary, whereas function returns a list. Example 2: node definition output is a list of 5 strings, whereas the function returns a list of 4 objects. Exception: Any exception thrown during execution of the node. Returns: All produced node outputs are returned in a dictionary, where the keys are defined by the node outputs.
run
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
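A brief usage sketch of `run()`: inputs are supplied as a dictionary keyed by the declared input names, and outputs come back keyed by the declared output names (names here are illustrative).

from kedro.pipeline import node

def add(a, b):
    return a + b

adder = node(add, inputs=["x", "y"], outputs="total", name="add_node")

print(adder.run({"x": 2, "y": 3}))  # {'total': 5}

# Anything other than a dict (or None) is rejected up front:
# adder.run([2, 3])  # raises ValueError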
def node( # noqa: PLR0913 func: Callable, inputs: str | list[str] | dict[str, str] | None, outputs: str | list[str] | dict[str, str] | None, *, name: str | None = None, tags: str | Iterable[str] | None = None, confirms: str | list[str] | None = None, namespace: str | None = None, ) -> Node: """Create a node in the pipeline by providing a function to be called along with variable names for inputs and/or outputs. Args: func: A function that corresponds to the node logic. The function should have at least one input or output. inputs: The name or the list of the names of variables used as inputs to the function. The number of names should match the number of arguments in the definition of the provided function. When dict[str, str] is provided, variable names will be mapped to function argument names. outputs: The name or the list of the names of variables used as outputs to the function. The number of names should match the number of outputs returned by the provided function. When dict[str, str] is provided, variable names will be mapped to the named outputs the function returns. name: Optional node name to be used when displaying the node in logs or any other visualisations. tags: Optional set of tags to be applied to the node. confirms: Optional name or the list of the names of the datasets that should be confirmed. This will result in calling ``confirm()`` method of the corresponding dataset instance. Specified dataset names do not necessarily need to be present in the node ``inputs`` or ``outputs``. namespace: Optional node namespace. Returns: A Node object with mapped inputs, outputs and function. Example: :: >>> import pandas as pd >>> import numpy as np >>> >>> def clean_data(cars: pd.DataFrame, >>> boats: pd.DataFrame) -> dict[str, pd.DataFrame]: >>> return dict(cars_df=cars.dropna(), boats_df=boats.dropna()) >>> >>> def halve_dataframe(data: pd.DataFrame) -> List[pd.DataFrame]: >>> return np.array_split(data, 2) >>> >>> nodes = [ >>> node(clean_data, >>> inputs=['cars2017', 'boats2017'], >>> outputs=dict(cars_df='clean_cars2017', >>> boats_df='clean_boats2017')), >>> node(halve_dataframe, >>> 'clean_cars2017', >>> ['train_cars2017', 'test_cars2017']), >>> node(halve_dataframe, >>> dict(data='clean_boats2017'), >>> ['train_boats2017', 'test_boats2017']) >>> ] """ return Node( func, inputs, outputs, name=name, tags=tags, confirms=confirms, namespace=namespace, )
Create a node in the pipeline by providing a function to be called along with variable names for inputs and/or outputs. Args: func: A function that corresponds to the node logic. The function should have at least one input or output. inputs: The name or the list of the names of variables used as inputs to the function. The number of names should match the number of arguments in the definition of the provided function. When dict[str, str] is provided, variable names will be mapped to function argument names. outputs: The name or the list of the names of variables used as outputs to the function. The number of names should match the number of outputs returned by the provided function. When dict[str, str] is provided, variable names will be mapped to the named outputs the function returns. name: Optional node name to be used when displaying the node in logs or any other visualisations. tags: Optional set of tags to be applied to the node. confirms: Optional name or the list of the names of the datasets that should be confirmed. This will result in calling ``confirm()`` method of the corresponding dataset instance. Specified dataset names do not necessarily need to be present in the node ``inputs`` or ``outputs``. namespace: Optional node namespace. Returns: A Node object with mapped inputs, outputs and function. Example: :: >>> import pandas as pd >>> import numpy as np >>> >>> def clean_data(cars: pd.DataFrame, >>> boats: pd.DataFrame) -> dict[str, pd.DataFrame]: >>> return dict(cars_df=cars.dropna(), boats_df=boats.dropna()) >>> >>> def halve_dataframe(data: pd.DataFrame) -> List[pd.DataFrame]: >>> return np.array_split(data, 2) >>> >>> nodes = [ >>> node(clean_data, >>> inputs=['cars2017', 'boats2017'], >>> outputs=dict(cars_df='clean_cars2017', >>> boats_df='clean_boats2017')), >>> node(halve_dataframe, >>> 'clean_cars2017', >>> ['train_cars2017', 'test_cars2017']), >>> node(halve_dataframe, >>> dict(data='clean_boats2017'), >>> ['train_boats2017', 'test_boats2017']) >>> ]
node
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def _dict_inputs_to_list( func: Callable[[Any], Any], inputs: dict[str, str] ) -> list[str]: """Convert a dict representation of the node inputs to a list, ensuring the appropriate order for binding them to the node's function. """ sig = inspect.signature(func, follow_wrapped=False).bind(**inputs) return [*sig.args, *sig.kwargs.values()]
Convert a dict representation of the node inputs to a list, ensuring the appropriate order for binding them to the node's function.
_dict_inputs_to_list
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
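The ordering relies on `inspect.signature(...).bind`: positional arguments come back in the function's declared order, and keyword-only arguments follow as the remaining values. A standalone sketch, independent of kedro:

import inspect

def train(features, labels, *, epochs):
    ...

mapping = {"labels": "y_train", "features": "x_train", "epochs": "params:epochs"}
bound = inspect.signature(train, follow_wrapped=False).bind(**mapping)

print([*bound.args, *bound.kwargs.values()])
# ['x_train', 'y_train', 'params:epochs']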
def _to_list(element: str | Iterable[str] | dict[str, str] | None) -> list[str]: """Make a list out of node inputs/outputs. Returns: list[str]: Node input/output names as a list to standardise. """ if element is None: return [] if isinstance(element, str): return [element] if isinstance(element, dict): return list(element.values()) return list(element)
Make a list out of node inputs/outputs. Returns: list[str]: Node input/output names as a list to standardise.
_to_list
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
def _get_readable_func_name(func: Callable) -> str: """Get a user-friendly readable name of the function provided. Returns: str: readable name of the provided callable func. """ if hasattr(func, "__name__"): return func.__name__ name = repr(func) if "functools.partial" in name: name = "<partial>" return name
Get a user-friendly readable name of the function provided. Returns: str: readable name of the provided callable func.
_get_readable_func_name
python
kedro-org/kedro
kedro/pipeline/node.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/node.py
Apache-2.0
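A quick standalone check of both branches: named functions report `__name__`, while `functools.partial` objects have no `__name__` and fall back to the repr-based `<partial>` label.

from functools import partial

def preprocess(df, drop_na=True):
    return df

wrapped = partial(preprocess, drop_na=False)

print(preprocess.__name__)                    # preprocess
print(hasattr(wrapped, "__name__"))           # False
print("functools.partial" in repr(wrapped))   # True, so it is reported as '<partial>'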
def __init__( self, nodes: Iterable[Node | Pipeline], *, tags: str | Iterable[str] | None = None, ): """Initialise ``Pipeline`` with a list of ``Node`` instances. Args: nodes: The iterable of nodes the ``Pipeline`` will be made of. If you provide pipelines among the list of nodes, those pipelines will be expanded and all their nodes will become part of this new pipeline. tags: Optional set of tags to be applied to all the pipeline nodes. Raises: ValueError: When an empty list of nodes is provided, or when not all nodes have unique names. CircularDependencyError: When visiting all the nodes is not possible due to the existence of a circular dependency. OutputNotUniqueError: When multiple ``Node`` instances produce the same output. ConfirmNotUniqueError: When multiple ``Node`` instances attempt to confirm the same dataset. Example: :: >>> from kedro.pipeline import Pipeline >>> from kedro.pipeline import node >>> >>> # In the following scenario first_ds and second_ds >>> # are datasets provided by io. Pipeline will pass these >>> # datasets to first_node function and provides the result >>> # to the second_node as input. >>> >>> def first_node(first_ds, second_ds): >>> return dict(third_ds=first_ds+second_ds) >>> >>> def second_node(third_ds): >>> return third_ds >>> >>> pipeline = Pipeline([ >>> node(first_node, ['first_ds', 'second_ds'], ['third_ds']), >>> node(second_node, dict(third_ds='third_ds'), 'fourth_ds')]) >>> >>> pipeline.describe() >>> """ if nodes is None: raise ValueError( "'nodes' argument of 'Pipeline' is None. It must be an " "iterable of nodes and/or pipelines instead." ) nodes_list = list(nodes) # in case it's a generator _validate_duplicate_nodes(nodes_list) nodes_chain = list( chain.from_iterable( [[n] if isinstance(n, Node) else n.nodes for n in nodes_list] ) ) _validate_transcoded_inputs_outputs(nodes_chain) _tags = set(_to_list(tags)) if _tags: tagged_nodes = [n.tag(_tags) for n in nodes_chain] else: tagged_nodes = nodes_chain self._nodes_by_name = {node.name: node for node in tagged_nodes} _validate_unique_outputs(tagged_nodes) _validate_unique_confirms(tagged_nodes) # input -> nodes with input self._nodes_by_input: dict[str, set[Node]] = defaultdict(set) for node in tagged_nodes: for input_ in node.inputs: self._nodes_by_input[_strip_transcoding(input_)].add(node) # output -> node with output self._nodes_by_output: dict[str, Node] = {} for node in tagged_nodes: for output in node.outputs: self._nodes_by_output[_strip_transcoding(output)] = node self._nodes = tagged_nodes self._toposorter = TopologicalSorter(self.node_dependencies) # test for circular dependencies without executing the toposort for efficiency try: self._toposorter.prepare() except CycleError as exc: loop = list(set(exc.args[1])) message = f"Circular dependencies exist among the following {len(loop)} item(s): {loop}" raise CircularDependencyError(message) from exc self._toposorted_nodes: list[Node] = [] self._toposorted_groups: list[list[Node]] = []
Initialise ``Pipeline`` with a list of ``Node`` instances. Args: nodes: The iterable of nodes the ``Pipeline`` will be made of. If you provide pipelines among the list of nodes, those pipelines will be expanded and all their nodes will become part of this new pipeline. tags: Optional set of tags to be applied to all the pipeline nodes. Raises: ValueError: When an empty list of nodes is provided, or when not all nodes have unique names. CircularDependencyError: When visiting all the nodes is not possible due to the existence of a circular dependency. OutputNotUniqueError: When multiple ``Node`` instances produce the same output. ConfirmNotUniqueError: When multiple ``Node`` instances attempt to confirm the same dataset. Example: :: >>> from kedro.pipeline import Pipeline >>> from kedro.pipeline import node >>> >>> # In the following scenario first_ds and second_ds >>> # are datasets provided by io. Pipeline will pass these >>> # datasets to first_node function and provides the result >>> # to the second_node as input. >>> >>> def first_node(first_ds, second_ds): >>> return dict(third_ds=first_ds+second_ds) >>> >>> def second_node(third_ds): >>> return third_ds >>> >>> pipeline = Pipeline([ >>> node(first_node, ['first_ds', 'second_ds'], ['third_ds']), >>> node(second_node, dict(third_ds='third_ds'), 'fourth_ds')]) >>> >>> pipeline.describe() >>>
__init__
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def __repr__(self) -> str: # pragma: no cover """Pipeline ([node1, ..., node10 ...], name='pipeline_name')""" max_nodes_to_display = 10 nodes_reprs = [repr(node) for node in self.nodes[:max_nodes_to_display]] if len(self.nodes) > max_nodes_to_display: nodes_reprs.append("...") sep = ",\n" nodes_reprs_str = f"[\n{sep.join(nodes_reprs)}\n]" if nodes_reprs else "[]" constructor_repr = f"({nodes_reprs_str})" return f"{self.__class__.__name__}{constructor_repr}"
Pipeline ([node1, ..., node10 ...], name='pipeline_name')
__repr__
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def all_inputs(self) -> set[str]: """All inputs for all nodes in the pipeline. Returns: All node input names as a Set. """ return set.union(set(), *(node.inputs for node in self._nodes))
All inputs for all nodes in the pipeline. Returns: All node input names as a Set.
all_inputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def all_outputs(self) -> set[str]: """All outputs of all nodes in the pipeline. Returns: All node outputs. """ return set.union(set(), *(node.outputs for node in self._nodes))
All outputs of all nodes in the pipeline. Returns: All node outputs.
all_outputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def inputs(self) -> set[str]: """The names of free inputs that must be provided at runtime so that the pipeline is runnable. Does not include intermediate inputs which are produced and consumed by the inner pipeline nodes. Resolves transcoded names where necessary. Returns: The set of free input names needed by the pipeline. """ return self._remove_intermediates(self.all_inputs())
The names of free inputs that must be provided at runtime so that the pipeline is runnable. Does not include intermediate inputs which are produced and consumed by the inner pipeline nodes. Resolves transcoded names where necessary. Returns: The set of free input names needed by the pipeline.
inputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def outputs(self) -> set[str]: """The names of outputs produced when the whole pipeline is run. Does not include intermediate outputs that are consumed by other pipeline nodes. Resolves transcoded names where necessary. Returns: The set of final pipeline outputs. """ return self._remove_intermediates(self.all_outputs())
The names of outputs produced when the whole pipeline is run. Does not include intermediate outputs that are consumed by other pipeline nodes. Resolves transcoded names where necessary. Returns: The set of final pipeline outputs.
outputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def datasets(self) -> set[str]: """The names of all datasets used by the ``Pipeline``, including inputs and outputs. Returns: The set of all pipeline datasets. """ return self.all_outputs() | self.all_inputs()
The names of all datasets used by the ``Pipeline``, including inputs and outputs. Returns: The set of all pipeline datasets.
datasets
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def set_to_string(set_of_strings: set[str]) -> str: """Convert set to a string but return 'None' in case of an empty set. """ return ", ".join(sorted(set_of_strings)) if set_of_strings else "None"
Convert set to a string but return 'None' in case of an empty set.
describe.set_to_string
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def describe(self, names_only: bool = True) -> str: """Obtain the order of execution and expected free input variables in a loggable pre-formatted string. The order of nodes matches the order of execution given by the topological sort. Args: names_only: The flag to describe names_only pipeline with just node names. Example: :: >>> pipeline = Pipeline([ ... ]) >>> >>> logger = logging.getLogger(__name__) >>> >>> logger.info(pipeline.describe()) After invocation the following will be printed as an info level log statement: :: #### Pipeline execution order #### Inputs: C, D func1([C]) -> [A] func2([D]) -> [B] func3([A, D]) -> [E] Outputs: B, E ################################## Returns: The pipeline description as a formatted string. """ def set_to_string(set_of_strings: set[str]) -> str: """Convert set to a string but return 'None' in case of an empty set. """ return ", ".join(sorted(set_of_strings)) if set_of_strings else "None" nodes_as_string = "\n".join( node.name if names_only else str(node) for node in self.nodes ) str_representation = ( "#### Pipeline execution order ####\n" "Inputs: {0}\n\n" "{1}\n\n" "Outputs: {2}\n" "##################################" ) return str_representation.format( set_to_string(self.inputs()), nodes_as_string, set_to_string(self.outputs()) )
Obtain the order of execution and expected free input variables in a loggable pre-formatted string. The order of nodes matches the order of execution given by the topological sort. Args: names_only: The flag to describe names_only pipeline with just node names. Example: :: >>> pipeline = Pipeline([ ... ]) >>> >>> logger = logging.getLogger(__name__) >>> >>> logger.info(pipeline.describe()) After invocation the following will be printed as an info level log statement: :: #### Pipeline execution order #### Inputs: C, D func1([C]) -> [A] func2([D]) -> [B] func3([A, D]) -> [E] Outputs: B, E ################################## Returns: The pipeline description as a formatted string.
describe
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def node_dependencies(self) -> dict[Node, set[Node]]: """All dependencies of nodes where the first Node has a direct dependency on the second Node. Returns: Dictionary where keys are nodes and values are sets made up of their parent nodes. Independent nodes have this as empty sets. """ dependencies: dict[Node, set[Node]] = {node: set() for node in self._nodes} for parent in self._nodes: for output in parent.outputs: for child in self._nodes_by_input[_strip_transcoding(output)]: dependencies[child].add(parent) return dependencies
All dependencies of nodes where the first Node has a direct dependency on the second Node. Returns: Dictionary where keys are nodes and values are sets made up of their parent nodes. Independent nodes have this as empty sets.
node_dependencies
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def nodes(self) -> list[Node]: """Return a list of the pipeline nodes in topological order, i.e. if node A needs to be run before node B, it will appear earlier in the list. Returns: The list of all pipeline nodes in topological order. """ if not self._toposorted_nodes: self._toposorted_nodes = [n for group in self.grouped_nodes for n in group] return list(self._toposorted_nodes)
Return a list of the pipeline nodes in topological order, i.e. if node A needs to be run before node B, it will appear earlier in the list. Returns: The list of all pipeline nodes in topological order.
nodes
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def grouped_nodes(self) -> list[list[Node]]: """Return a list of the pipeline nodes in topologically ordered groups, i.e. if node A needs to be run before node B, it will appear in an earlier group. Returns: The pipeline nodes in topologically ordered groups. """ if not self._toposorted_groups: while self._toposorter: group = sorted(self._toposorter.get_ready()) self._toposorted_groups.append(group) self._toposorter.done(*group) return [list(group) for group in self._toposorted_groups]
Return a list of the pipeline nodes in topologically ordered groups, i.e. if node A needs to be run before node B, it will appear in an earlier group. Returns: The pipeline nodes in topologically ordered groups.
grouped_nodes
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
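A hedged sketch of both ordering properties on a two-node chain plus one independent node; the exact ordering inside a group is an implementation detail, but `n1` always precedes `n2` (dataset names are illustrative):

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

p = Pipeline([
    node(identity, "a", "b", name="n1"),
    node(identity, "b", "c", name="n2"),
    node(identity, "z", "w", name="independent"),
])

print([n.name for n in p.nodes])
# e.g. ['independent', 'n1', 'n2']
print([[n.name for n in group] for group in p.grouped_nodes])
# e.g. [['independent', 'n1'], ['n2']]; nodes within a group do not depend on each other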
def grouped_nodes_by_namespace(self) -> dict[str, dict[str, Any]]: """Return a dictionary of the pipeline nodes grouped by namespace with information about the nodes, their type, and dependencies. The structure of the dictionary is: {'node_name/namespace_name' : {'name': 'node_name/namespace_name','type': 'namespace' or 'node','nodes': [list of nodes],'dependencies': [list of dependencies]}} This property is intended to be used by deployment plugins to group nodes by namespace. """ grouped_nodes: dict[str, dict[str, Any]] = defaultdict(dict) for node in self.nodes: key = node.namespace or node.name if key not in grouped_nodes: grouped_nodes[key] = {} grouped_nodes[key]["name"] = key grouped_nodes[key]["type"] = "namespace" if node.namespace else "node" grouped_nodes[key]["nodes"] = [] grouped_nodes[key]["dependencies"] = set() grouped_nodes[key]["nodes"].append(node) dependencies = set() for parent in self.node_dependencies[node]: if parent.namespace and parent.namespace != key: dependencies.add(parent.namespace) elif parent.namespace and parent.namespace == key: continue else: dependencies.add(parent.name) grouped_nodes[key]["dependencies"].update(dependencies) return grouped_nodes
Return a dictionary of the pipeline nodes grouped by namespace with information about the nodes, their type, and dependencies. The structure of the dictionary is: {'node_name/namespace_name' : {'name': 'node_name/namespace_name','type': 'namespace' or 'node','nodes': [list of nodes],'dependencies': [list of dependencies]}} This property is intended to be used by deployment plugins to group nodes by namespace.
grouped_nodes_by_namespace
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
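A hedged sketch of the grouping for one namespaced node and one plain node; the dictionary shown is an approximation based on the structure described above (names are illustrative):

from kedro.pipeline import Pipeline, node, pipeline

def identity(x):
    return x

prep = pipeline(
    [node(identity, "raw", "clean", name="clean_node")],
    namespace="prep",
    inputs={"raw"},
    outputs={"clean"},
)
p = Pipeline([prep, node(identity, "clean", "features", name="featurise")])

groups = p.grouped_nodes_by_namespace
# Roughly:
# {'prep':      {'name': 'prep', 'type': 'namespace', 'nodes': [...], 'dependencies': set()},
#  'featurise': {'name': 'featurise', 'type': 'node', 'nodes': [...], 'dependencies': {'prep'}}}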
def only_nodes(self, *node_names: str) -> Pipeline: """Create a new ``Pipeline`` which will contain only the specified nodes by name. Args: *node_names: One or more node names. The returned ``Pipeline`` will only contain these nodes. Raises: ValueError: When some invalid node name is given. Returns: A new ``Pipeline``, containing only ``nodes``. """ unregistered_nodes = set(node_names) - set(self._nodes_by_name.keys()) if unregistered_nodes: # check if unregistered nodes are available under namespace namespaces = [] for unregistered_node in unregistered_nodes: namespaces.extend( [ node_name for node_name in self._nodes_by_name.keys() if node_name.endswith(f".{unregistered_node}") ] ) if namespaces: raise ValueError( f"Pipeline does not contain nodes named {list(unregistered_nodes)}. " f"Did you mean: {namespaces}?" ) raise ValueError( f"Pipeline does not contain nodes named {list(unregistered_nodes)}." ) nodes = [self._nodes_by_name[name] for name in node_names] return Pipeline(nodes)
Create a new ``Pipeline`` which will contain only the specified nodes by name. Args: *node_names: One or more node names. The returned ``Pipeline`` will only contain these nodes. Raises: ValueError: When some invalid node name is given. Returns: A new ``Pipeline``, containing only ``nodes``.
only_nodes
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def only_nodes_with_namespace(self, node_namespace: str) -> Pipeline: """Creates a new ``Pipeline`` containing only nodes with the specified namespace. Args: node_namespace: One node namespace. Raises: ValueError: When pipeline contains no nodes with the specified namespace. Returns: A new ``Pipeline`` containing nodes with the specified namespace. """ nodes = [ n for n in self._nodes if n.namespace and n.namespace.startswith(node_namespace) ] if not nodes: raise ValueError( f"Pipeline does not contain nodes with namespace '{node_namespace}'" ) return Pipeline(nodes)
Creates a new ``Pipeline`` containing only nodes with the specified namespace. Args: node_namespace: One node namespace. Raises: ValueError: When pipeline contains no nodes with the specified namespace. Returns: A new ``Pipeline`` containing nodes with the specified namespace.
only_nodes_with_namespace
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def _get_nodes_with_inputs_transcode_compatible( self, datasets: set[str] ) -> set[Node]: """Retrieves nodes that use the given `datasets` as inputs. If provided a name, but no format, for a transcoded dataset, it includes all nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Raises: ValueError: if any of the given datasets do not exist in the ``Pipeline`` object Returns: Set of ``Nodes`` that use the given datasets as inputs. """ missing = sorted( datasets - self.datasets() - self._transcode_compatible_names() ) if missing: raise ValueError(f"Pipeline does not contain datasets named {missing}") relevant_nodes = set() for input_ in datasets: if _strip_transcoding(input_) == input_: relevant_nodes.update(self._nodes_by_input[_strip_transcoding(input_)]) else: for node_ in self._nodes_by_input[_strip_transcoding(input_)]: if input_ in node_.inputs: relevant_nodes.add(node_) return relevant_nodes
Retrieves nodes that use the given `datasets` as inputs. If provided a name, but no format, for a transcoded dataset, it includes all nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Raises: ValueError: if any of the given datasets do not exist in the ``Pipeline`` object Returns: Set of ``Nodes`` that use the given datasets as inputs.
_get_nodes_with_inputs_transcode_compatible
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def _get_nodes_with_outputs_transcode_compatible( self, datasets: set[str] ) -> set[Node]: """Retrieves nodes that output to the given `datasets`. If provided a name, but no format, for a transcoded dataset, it includes the node that outputs to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Raises: ValueError: if any of the given datasets do not exist in the ``Pipeline`` object Returns: Set of ``Nodes`` that output to the given datasets. """ missing = sorted( datasets - self.datasets() - self._transcode_compatible_names() ) if missing: raise ValueError(f"Pipeline does not contain datasets named {missing}") relevant_nodes = set() for output in datasets: if _strip_transcoding(output) in self._nodes_by_output: node_with_output = self._nodes_by_output[_strip_transcoding(output)] if ( _strip_transcoding(output) == output or output in node_with_output.outputs ): relevant_nodes.add(node_with_output) return relevant_nodes
Retrieves nodes that output to the given `datasets`. If provided a name, but no format, for a transcoded dataset, it includes the node that outputs to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Raises: ValueError: if any of the given datasets do not exist in the ``Pipeline`` object Returns: Set of ``Nodes`` that output to the given datasets.
_get_nodes_with_outputs_transcode_compatible
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
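Both lookups hinge on Kedro's transcoding convention, where `name@format` variants refer to one underlying dataset and `_strip_transcoding` reduces them to the bare name. A standalone sketch of that convention (the helper below is a hypothetical stand-in, not kedro's implementation):

def strip_transcoding_sketch(name: str) -> str:
    # Hypothetical stand-in: drop any '@format' suffix.
    return name.split("@", 1)[0]

# 'my_data@pandas' and 'my_data@spark' name the same underlying dataset.
print(strip_transcoding_sketch("my_data@spark"))  # my_data
print(strip_transcoding_sketch("my_data"))        # my_data

# Hence querying by the bare name 'my_data' matches every transcoded variant,
# while querying by 'my_data@spark' matches only that exact form.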
def only_nodes_with_inputs(self, *inputs: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes which depend directly on the provided inputs. If provided a name, but no format, for a transcoded input, it includes all the nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *inputs: A list of inputs which should be used as a starting point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given inputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly on the provided inputs are being copied. """ starting = set(inputs) nodes = self._get_nodes_with_inputs_transcode_compatible(starting) return Pipeline(nodes)
Create a new ``Pipeline`` object with the nodes which depend directly on the provided inputs. If provided a name, but no format, for a transcoded input, it includes all the nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *inputs: A list of inputs which should be used as a starting point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given inputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly on the provided inputs are being copied.
only_nodes_with_inputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def from_inputs(self, *inputs: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes which depend directly or transitively on the provided inputs. If provided a name, but no format, for a transcoded input, it includes all the nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *inputs: A list of inputs which should be used as a starting point of the new ``Pipeline`` Raises: ValueError: Raised when any of the given inputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided inputs are being copied. """ starting = set(inputs) result: set[Node] = set() next_nodes = self._get_nodes_with_inputs_transcode_compatible(starting) while next_nodes: result |= next_nodes outputs = set(chain.from_iterable(node.outputs for node in next_nodes)) starting = outputs next_nodes = set( chain.from_iterable( self._nodes_by_input[_strip_transcoding(input_)] for input_ in starting ) ) return Pipeline(result)
Create a new ``Pipeline`` object with the nodes which depend directly or transitively on the provided inputs. If provided a name, but no format, for a transcoded input, it includes all the nodes that use inputs with that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *inputs: A list of inputs which should be used as a starting point of the new ``Pipeline`` Raises: ValueError: Raised when any of the given inputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided inputs are being copied.
from_inputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def only_nodes_with_outputs(self, *outputs: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes which are directly required to produce the provided outputs. If provided a name, but no format, for a transcoded dataset, it includes all the nodes that output to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *outputs: A list of outputs which should be the final outputs of the new ``Pipeline``. Raises: ValueError: Raised when any of the given outputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes which are directly required to produce the provided outputs are being copied. """ starting = set(outputs) nodes = self._get_nodes_with_outputs_transcode_compatible(starting) return Pipeline(nodes)
Create a new ``Pipeline`` object with the nodes which are directly required to produce the provided outputs. If provided a name, but no format, for a transcoded dataset, it includes all the nodes that output to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *outputs: A list of outputs which should be the final outputs of the new ``Pipeline``. Raises: ValueError: Raised when any of the given outputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes which are directly required to produce the provided outputs are being copied.
only_nodes_with_outputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def to_outputs(self, *outputs: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes which are directly or transitively required to produce the provided outputs. If provided a name, but no format, for a transcoded dataset, it includes all the nodes that output to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *outputs: A list of outputs which should be the final outputs of the new ``Pipeline``. Raises: ValueError: Raised when any of the given outputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes which are directly or transitively required to produce the provided outputs are being copied. """ starting = set(outputs) result: set[Node] = set() next_nodes = self._get_nodes_with_outputs_transcode_compatible(starting) while next_nodes: result |= next_nodes inputs = set(chain.from_iterable(node.inputs for node in next_nodes)) starting = inputs next_nodes = { self._nodes_by_output[_strip_transcoding(output)] for output in starting if _strip_transcoding(output) in self._nodes_by_output } return Pipeline(result)
Create a new ``Pipeline`` object with the nodes which are directly or transitively required to produce the provided outputs. If provided a name, but no format, for a transcoded dataset, it includes all the nodes that output to that name, otherwise it matches to the fully-qualified name only (i.e. name@format). Args: *outputs: A list of outputs which should be the final outputs of the new ``Pipeline``. Raises: ValueError: Raised when any of the given outputs do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes which are directly or transitively required to produce the provided outputs are being copied.
to_outputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
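A hedged sketch contrasting the direct-only and transitive variants of these selectors on a three-node chain (names are illustrative):

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

p = Pipeline([
    node(identity, "a", "b", name="n1"),
    node(identity, "b", "c", name="n2"),
    node(identity, "c", "d", name="n3"),
])

print({n.name for n in p.only_nodes_with_inputs("b").nodes})   # {'n2'}
print({n.name for n in p.from_inputs("b").nodes})              # {'n2', 'n3'}
print({n.name for n in p.only_nodes_with_outputs("c").nodes})  # {'n2'}
print({n.name for n in p.to_outputs("c").nodes})               # {'n1', 'n2'}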
def from_nodes(self, *node_names: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes which depend directly or transitively on the provided nodes. Args: *node_names: A list of node_names which should be used as a starting point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given names do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided nodes are being copied. """ res = self.only_nodes(*node_names) res += self.from_inputs(*map(_strip_transcoding, res.all_outputs())) return res
Create a new ``Pipeline`` object with the nodes which depend directly or transitively on the provided nodes. Args: *node_names: A list of node_names which should be used as a starting point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given names do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes depending directly or transitively on the provided nodes are being copied.
from_nodes
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def to_nodes(self, *node_names: str) -> Pipeline: """Create a new ``Pipeline`` object with the nodes required directly or transitively by the provided nodes. Args: *node_names: A list of node_names which should be used as an end point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given names do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes required directly or transitively by the provided nodes are being copied. """ res = self.only_nodes(*node_names) res += self.to_outputs(*map(_strip_transcoding, res.all_inputs())) return res
Create a new ``Pipeline`` object with the nodes required directly or transitively by the provided nodes. Args: *node_names: A list of node_names which should be used as an end point of the new ``Pipeline``. Raises: ValueError: Raised when any of the given names do not exist in the ``Pipeline`` object. Returns: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes required directly or transitively by the provided nodes are being copied.
to_nodes
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
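A small illustration, with made-up node names, of how ``from_nodes`` and ``to_nodes`` slice in opposite directions around a chosen node.

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

pipeline = Pipeline(
    [
        node(identity, "A", "B", name="first"),
        node(identity, "B", "C", name="second"),
        node(identity, "C", "D", name="third"),
    ]
)

# Everything that depends on "second", including "second" itself.
downstream = pipeline.from_nodes("second")
print(sorted(n.name for n in downstream.nodes))  # ['second', 'third']

# Everything "second" needs, including "second" itself.
upstream = pipeline.to_nodes("second")
print(sorted(n.name for n in upstream.nodes))  # ['first', 'second']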
def only_nodes_with_tags(self, *tags: str) -> Pipeline: """Creates a new ``Pipeline`` object with the nodes which contain *any* of the provided tags. The resulting ``Pipeline`` is empty if no tags are provided. Args: *tags: A list of node tags which should be used to lookup the nodes of the new ``Pipeline``. Returns: Pipeline: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes containing *any* of the tags provided are being copied. """ unique_tags = set(tags) nodes = [node for node in self._nodes if unique_tags & node.tags] return Pipeline(nodes)
Creates a new ``Pipeline`` object with the nodes which contain *any* of the provided tags. The resulting ``Pipeline`` is empty if no tags are provided. Args: *tags: A list of node tags which should be used to lookup the nodes of the new ``Pipeline``. Returns: Pipeline: A new ``Pipeline`` object, containing a subset of the nodes of the current one such that only nodes containing *any* of the tags provided are being copied.
only_nodes_with_tags
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def filter(  # noqa: PLR0913
    self,
    tags: Iterable[str] | None = None,
    from_nodes: Iterable[str] | None = None,
    to_nodes: Iterable[str] | None = None,
    node_names: Iterable[str] | None = None,
    from_inputs: Iterable[str] | None = None,
    to_outputs: Iterable[str] | None = None,
    node_namespace: str | None = None,
) -> Pipeline:
    """Creates a new ``Pipeline`` object with the nodes that meet all of the
    specified filtering conditions.

    The new pipeline object is the intersection of pipelines that meet each
    filtering condition. This is distinct from chaining multiple filters together.

    Args:
        tags: A list of node tags which should be used to lookup the nodes of
            the new ``Pipeline``.
        from_nodes: A list of node names which should be used as a starting
            point of the new ``Pipeline``.
        to_nodes: A list of node names which should be used as an end point
            of the new ``Pipeline``.
        node_names: A list of node names which should be selected for the
            new ``Pipeline``.
        from_inputs: A list of inputs which should be used as a starting point
            of the new ``Pipeline``
        to_outputs: A list of outputs which should be the final outputs of
            the new ``Pipeline``.
        node_namespace: One node namespace which should be used to select
            nodes in the new ``Pipeline``.

    Returns:
        A new ``Pipeline`` object with nodes that meet all of the specified
            filtering conditions.

    Raises:
        ValueError: The filtered ``Pipeline`` has no nodes.

    Example:
    ::

        >>> pipeline = Pipeline(
        >>>     [
        >>>         node(func, "A", "B", name="node1"),
        >>>         node(func, "B", "C", name="node2"),
        >>>         node(func, "C", "D", name="node3"),
        >>>     ]
        >>> )
        >>> pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"])
        >>> # Gives a new pipeline object containing node1 and node3.
    """
    # Use [node_namespace] so only_nodes_with_namespace can follow the same
    # *filter_args pattern as the other filtering methods, which all take iterables.
    node_namespace_iterable = [node_namespace] if node_namespace else None

    filter_methods = {
        self.only_nodes_with_tags: tags,
        self.from_nodes: from_nodes,
        self.to_nodes: to_nodes,
        self.only_nodes: node_names,
        self.from_inputs: from_inputs,
        self.to_outputs: to_outputs,
        self.only_nodes_with_namespace: node_namespace_iterable,
    }

    subset_pipelines = {
        filter_method(*filter_args)  # type: ignore
        for filter_method, filter_args in filter_methods.items()
        if filter_args
    }

    # Intersect all the pipelines subsets. We apply each filter to the original
    # pipeline object (self) rather than incrementally chaining filter methods
    # together. Hence the order of filtering does not affect the outcome, and the
    # resultant pipeline is unambiguously defined.
    # If this were not the case then, for example,
    # pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"])
    # would give different outcomes depending on the order of filter methods:
    # only_nodes and then from_inputs would give node1, while from_inputs and then
    # only_nodes would give node1 and node3.
    filtered_pipeline = Pipeline(self._nodes)
    for subset_pipeline in subset_pipelines:
        filtered_pipeline &= subset_pipeline

    if not filtered_pipeline.nodes:
        raise ValueError(
            "Pipeline contains no nodes after applying all provided filters. "
            "Please ensure that at least one pipeline with nodes has been defined."
        )
    return filtered_pipeline
Creates a new ``Pipeline`` object with the nodes that meet all of the specified filtering conditions. The new pipeline object is the intersection of pipelines that meet each filtering condition. This is distinct from chaining multiple filters together. Args: tags: A list of node tags which should be used to lookup the nodes of the new ``Pipeline``. from_nodes: A list of node names which should be used as a starting point of the new ``Pipeline``. to_nodes: A list of node names which should be used as an end point of the new ``Pipeline``. node_names: A list of node names which should be selected for the new ``Pipeline``. from_inputs: A list of inputs which should be used as a starting point of the new ``Pipeline`` to_outputs: A list of outputs which should be the final outputs of the new ``Pipeline``. node_namespace: One node namespace which should be used to select nodes in the new ``Pipeline``. Returns: A new ``Pipeline`` object with nodes that meet all of the specified filtering conditions. Raises: ValueError: The filtered ``Pipeline`` has no nodes. Example: :: >>> pipeline = Pipeline( >>> [ >>> node(func, "A", "B", name="node1"), >>> node(func, "B", "C", name="node2"), >>> node(func, "C", "D", name="node3"), >>> ] >>> ) >>> pipeline.filter(node_names=["node1", "node3"], from_inputs=["A"]) >>> # Gives a new pipeline object containing node1 and node3.
filter
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
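To make the intersection semantics concrete, here is a sketch (node names, tags and datasets are illustrative) where combining two filters keeps only the nodes that satisfy both conditions.

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

pipeline = Pipeline(
    [
        node(identity, "A", "B", name="node1", tags=["training"]),
        node(identity, "B", "C", name="node2", tags=["training"]),
        node(identity, "C", "D", name="node3", tags=["scoring"]),
    ]
)

# Intersection of the "training"-tagged subset and the subset downstream of "B":
# only node2 satisfies both conditions.
filtered = pipeline.filter(tags=["training"], from_inputs=["B"])
print([n.name for n in filtered.nodes])  # ['node2']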
def tag(self, tags: str | Iterable[str]) -> Pipeline: """Tags all the nodes in the pipeline. Args: tags: The tags to be added to the nodes. Returns: New ``Pipeline`` object with nodes tagged. """ nodes = [n.tag(tags) for n in self._nodes] return Pipeline(nodes)
Tags all the nodes in the pipeline. Args: tags: The tags to be added to the nodes. Returns: New ``Pipeline`` object with nodes tagged.
tag
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
def to_json(self) -> str: """Return a json representation of the pipeline.""" transformed = [ { "name": n.name, "inputs": list(n.inputs), "outputs": list(n.outputs), "tags": list(n.tags), } for n in self._nodes ] pipeline_versioned = { "kedro_version": kedro.__version__, "pipeline": transformed, } return json.dumps(pipeline_versioned)
Return a json representation of the pipeline.
to_json
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
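A sketch of what ``to_json`` produces for a one-node pipeline; the node and dataset names are made up, and the exact ``kedro_version`` value will of course depend on the installed release.

import json

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

pipeline = Pipeline([node(identity, "A", "B", name="only_node")])

payload = json.loads(pipeline.to_json())
print(payload["pipeline"])
# [{'name': 'only_node', 'inputs': ['A'], 'outputs': ['B'], 'tags': []}]
print("kedro_version" in payload)  # True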
def _validate_transcoded_inputs_outputs(nodes: list[Node]) -> None: """Users should not be allowed to refer to a transcoded dataset both with and without the separator. """ all_inputs_outputs = set( chain( chain.from_iterable(node.inputs for node in nodes), chain.from_iterable(node.outputs for node in nodes), ) ) invalid = set() for dataset_name in all_inputs_outputs: name = _strip_transcoding(dataset_name) if name != dataset_name and name in all_inputs_outputs: invalid.add(name) if invalid: raise ValueError( f"The following datasets are used with transcoding, but " f"were referenced without the separator: {', '.join(invalid)}.\n" f"Please specify a transcoding option or " f"rename the datasets." )
Users should not be allowed to refer to a transcoded dataset both with and without the separator.
_validate_transcoded_inputs_outputs
python
kedro-org/kedro
kedro/pipeline/pipeline.py
https://github.com/kedro-org/kedro/blob/master/kedro/pipeline/pipeline.py
Apache-2.0
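For example, a pipeline that refers to the same dataset both as ``data@spark`` and as plain ``data`` would be expected to fail this check when the pipeline is constructed (a sketch with made-up node and dataset names).

from kedro.pipeline import Pipeline, node

def identity(x):
    return x

# "data" is written with a transcoding suffix but read without one,
# so constructing the pipeline should raise a ValueError from this validation.
try:
    Pipeline(
        [
            node(identity, "raw", "data@spark", name="writer"),
            node(identity, "data", "report", name="reader"),
        ]
    )
except ValueError as err:
    print(err)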
def load_ipython_extension(ipython: InteractiveShell) -> None: """ Main entry point when %load_ext kedro.ipython is executed, either manually or automatically through `kedro ipython` or `kedro jupyter lab/notebook`. IPython will look for this function specifically. See https://ipython.readthedocs.io/en/stable/config/extensions/index.html """ ipython.register_magic_function(func=magic_reload_kedro, magic_name="reload_kedro") # type: ignore[call-arg] logger.info("Registered line magic '%reload_kedro'") ipython.register_magic_function(func=magic_load_node, magic_name="load_node") # type: ignore[call-arg] logger.info("Registered line magic '%load_node'") if _find_kedro_project(Path.cwd()) is None: logger.warning( "Kedro extension was registered but couldn't find a Kedro project. " "Make sure you run '%reload_kedro <project_root>'." ) return reload_kedro()
Main entry point when %load_ext kedro.ipython is executed, either manually or automatically through `kedro ipython` or `kedro jupyter lab/notebook`. IPython will look for this function specifically. See https://ipython.readthedocs.io/en/stable/config/extensions/index.html
load_ipython_extension
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def magic_reload_kedro( line: str, local_ns: dict[str, Any] | None = None, conf_source: str | None = None, ) -> None: """ The `%reload_kedro` IPython line magic. See https://docs.kedro.org/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic for more. """ args = parse_argstring(magic_reload_kedro, line) reload_kedro(args.path, args.env, args.params, local_ns, args.conf_source)
The `%reload_kedro` IPython line magic. See https://docs.kedro.org/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic for more.
magic_reload_kedro
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def reload_kedro( path: str | None = None, env: str | None = None, extra_params: dict[str, Any] | None = None, local_namespace: dict[str, Any] | None = None, conf_source: str | None = None, ) -> None: # pragma: no cover """Function that underlies the %reload_kedro Line magic. This should not be imported or run directly but instead invoked through %reload_kedro.""" project_path = _resolve_project_path(path, local_namespace) metadata = bootstrap_project(project_path) _remove_cached_modules(metadata.package_name) configure_project(metadata.package_name) session = KedroSession.create( project_path, env=env, extra_params=extra_params, conf_source=conf_source, ) context = session.load_context() catalog = context.catalog get_ipython().push( # type: ignore[no-untyped-call] variables={ "context": context, "catalog": catalog, "session": session, "pipelines": pipelines, } ) logger.info("Kedro project %s", str(metadata.project_name)) logger.info( "Defined global variable 'context', 'session', 'catalog' and 'pipelines'" ) for line_magic in load_entry_points("line_magic"): register_line_magic(needs_local_scope(line_magic)) # type: ignore[no-untyped-call] logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore[attr-defined]
Function that underlies the %reload_kedro Line magic. This should not be imported or run directly but instead invoked through %reload_kedro.
reload_kedro
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
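In a notebook, the typical flow (sketched here; the project path and options are indicative, not taken from the source) is to load the extension once and then re-run the line magic whenever the project code or configuration changes.

# In [1]: register the %reload_kedro and %load_node magics
%load_ext kedro.ipython

# In [2]: point the extension at a project explicitly, optionally picking an environment
%reload_kedro /path/to/my-kedro-project --env=local

# In [3]: the magic pushes these variables into the interactive namespace
catalog.list()
session.run()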
def _resolve_project_path( path: str | None = None, local_namespace: dict[str, Any] | None = None ) -> Path: """ Resolve the project path to use with reload_kedro, updating or adding it (in-place) to the local ipython Namespace (``local_namespace``) if necessary. Arguments: path: the path to use as a string object local_namespace: Namespace with local variables of the scope where the line magic is invoked in a dict. """ if path: project_path = Path(path).expanduser().resolve() else: if ( local_namespace and local_namespace.get("context") and hasattr(local_namespace["context"], "project_path") ): project_path = local_namespace["context"].project_path else: project_path = _find_kedro_project(Path.cwd()) if project_path: logger.info( "Resolved project path as: %s.\nTo set a different path, run " "'%%reload_kedro <project_root>'", project_path, ) if ( project_path and local_namespace and local_namespace.get("context") and hasattr(local_namespace["context"], "project_path") # Avoid name collision and project_path != local_namespace["context"].project_path ): logger.info("Updating path to Kedro project: %s...", project_path) return project_path
Resolve the project path to use with reload_kedro, updating or adding it (in-place) to the local ipython Namespace (``local_namespace``) if necessary. Arguments: path: the path to use as a string object local_namespace: Namespace with local variables of the scope where the line magic is invoked in a dict.
_resolve_project_path
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _guess_run_environment() -> str: # pragma: no cover """Best effort to guess the IPython/Jupyter environment""" # https://github.com/microsoft/vscode-jupyter/issues/7380 if os.environ.get("VSCODE_PID") or os.environ.get("VSCODE_CWD"): return "vscode" elif _is_databricks(): return "databricks" elif hasattr(get_ipython(), "kernel"): # type: ignore[no-untyped-call] # IPython terminal does not have this attribute return "jupyter" else: return "ipython"
Best effort to guess the IPython/Jupyter environment
_guess_run_environment
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def magic_load_node(args: str) -> None:
    """The line magic %load_node <node_name>.

    Currently, this feature is only available for Jupyter Notebook (>7.0), Jupyter Lab, IPython, and VSCode Notebook.
    This line magic will generate code in multiple cells to load datasets from the `DataCatalog`,
    import relevant functions and modules, define the node function, and call it.
    If generating code is not possible, it will print the code instead.
    """
    parameters = parse_argstring(magic_load_node, args)
    node_name = parameters.node
    cells = _load_node(node_name, pipelines)

    run_environment = _guess_run_environment()
    if run_environment in ("ipython", "vscode", "jupyter"):
        # Combine multiple cells into one for IPython or VSCode or Jupyter
        combined_cell = "\n\n".join(cells)
        _create_cell_with_text(combined_cell)
    else:
        # For other environments or if detection fails, just print the cells
        _print_cells(cells)
The line magic %load_node <node_name>. Currently, this feature is only available for Jupyter Notebook (>7.0), Jupyter Lab, IPython, and VSCode Notebook. This line magic will generate code in multiple cells to load datasets from the `DataCatalog`, import relevant functions and modules, define the node function, and call it. If generating code is not possible, it will print the code instead.
magic_load_node
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def input_params_dict(self) -> dict[str, str] | None: """A mapping of {variable name: dataset_name}""" var_positional_arg_name = self._find_var_positional_arg() inputs_params_dict = {} for param, dataset_name in self.arguments.items(): if param == var_positional_arg_name: # If the argument is *args, use the dataset name instead for arg in dataset_name: inputs_params_dict[arg] = arg else: inputs_params_dict[param] = dataset_name return inputs_params_dict
A mapping of {variable name: dataset_name}
input_params_dict
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _find_var_positional_arg(self) -> str | None:
    """Find the name of the VAR_POSITIONAL argument (*args), if any."""
    for k, v in self.signature.parameters.items():
        if v.kind == inspect.Parameter.VAR_POSITIONAL:
            return k
    return None
Find the name of the VAR_POSITIONAL argument (*args), if any.
_find_var_positional_arg
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _create_cell_with_text(text: str) -> None: """Create a new cell with the provided text content.""" get_ipython().set_next_input(text) # type: ignore[no-untyped-call]
Create a new cell with the provided text content.
_create_cell_with_text
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _load_node(node_name: str, pipelines: _ProjectPipelines) -> list[str]:
    """Prepare the code to load datasets from the catalog, the import statements and the function body.

    Args:
        node_name (str): The name of the node.

    Returns:
        list[str]: A list of strings which make up the generated code; each string represents a notebook cell.
    """
    warnings.warn(
        "This is an experimental feature, only Jupyter Notebook (>7.0), Jupyter Lab, IPython, and VSCode Notebook "
        "are supported. If you encounter unexpected behaviour or would like to suggest "
        "feature enhancements, add it under this github issue https://github.com/kedro-org/kedro/issues/3580"
    )
    node = _find_node(node_name, pipelines)
    node_func = node.func

    imports_cell = _prepare_imports(node_func)
    function_definition_cell = _prepare_function_body(node_func)

    node_bound_arguments = _get_node_bound_arguments(node)
    inputs_params_mapping = _prepare_node_inputs(node_bound_arguments)
    node_inputs_cell = _format_node_inputs_text(inputs_params_mapping)
    function_call_cell = _prepare_function_call(node_func, node_bound_arguments)

    cells: list[str] = []
    if node_inputs_cell:
        cells.append(node_inputs_cell)
    cells.append(imports_cell)
    cells.append(function_definition_cell)
    cells.append(function_call_cell)

    return cells
Prepare the code to load datasets from the catalog, the import statements and the function body. Args: node_name (str): The name of the node. Returns: list[str]: A list of strings which make up the generated code; each string represents a notebook cell.
_load_node
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _prepare_imports(node_func: Callable) -> str:
    """Prepare the import statements for loading a node."""
    python_file = inspect.getsourcefile(node_func)
    logger.info(f"Loading node definition from {python_file}")

    # Confirm source file was found
    if python_file:
        import_statement = []
        with open(python_file) as file:
            # Handle multiline imports, i.e.
            # from lib import (
            #     a,
            #     b,
            #     c
            # )
            # This will not work for all edge cases but is good enough for common cases that
            # are formatted automatically by black, ruff etc.
            inside_bracket = False
            # Parse any line that starts with a from or import statement
            for _ in file.readlines():
                line = _.strip()
                if not inside_bracket:
                    # The common case
                    if line.startswith("from") or line.startswith("import"):
                        import_statement.append(line)
                        if line.endswith("("):
                            inside_bracket = True
                # Inside a multi-line import, append everything.
                else:
                    import_statement.append(line)
                    if line.endswith(")"):
                        inside_bracket = False

        clean_imports = "\n".join(import_statement).strip()
        return clean_imports
    else:
        raise FileNotFoundError(f"Could not find {node_func.__name__}")
Prepare the import statements for loading a node.
_prepare_imports
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def _prepare_function_call( node_func: Callable, node_bound_arguments: _NodeBoundArguments ) -> str: """Prepare the text for the function call.""" func_name = node_func.__name__ args = node_bound_arguments.input_params_dict kwargs = node_bound_arguments.kwargs # Construct the statement of func_name(a=1,b=2,c=3) args_str_literal = [f"{node_input}" for node_input in args] if args else [] kwargs_str_literal = [ f"{node_input}={dataset_name}" for node_input, dataset_name in kwargs.items() ] func_params = ", ".join(args_str_literal + kwargs_str_literal) body = f"""{func_name}({func_params})""" return body
Prepare the text for the function call.
_prepare_function_call
python
kedro-org/kedro
kedro/ipython/__init__.py
https://github.com/kedro-org/kedro/blob/master/kedro/ipython/__init__.py
Apache-2.0
def from_config( cls: type, name: str, config: dict[str, Any], load_version: str | None = None, save_version: str | None = None, ) -> AbstractDataset: """Create a dataset instance using the configuration provided. Args: name: Data set name. config: Data set config dictionary. load_version: Version string to be used for ``load`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. save_version: Version string to be used for ``save`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. Returns: An instance of an ``AbstractDataset`` subclass. Raises: DatasetError: When the function fails to create the dataset from its config. """ try: class_obj, config = parse_dataset_definition( config, load_version, save_version ) except Exception as exc: raise DatasetError( f"An exception occurred when parsing config " f"for dataset '{name}':\n{exc!s}" ) from exc try: dataset = class_obj(**config) except TypeError as err: raise DatasetError( f"\n{err}.\nDataset '{name}' must only contain arguments valid for the " f"constructor of '{class_obj.__module__}.{class_obj.__qualname__}'." ) from err except Exception as err: raise DatasetError( f"\n{err}.\nFailed to instantiate dataset '{name}' " f"of type '{class_obj.__module__}.{class_obj.__qualname__}'." ) from err return dataset
Create a dataset instance using the configuration provided. Args: name: Data set name. config: Data set config dictionary. load_version: Version string to be used for ``load`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. save_version: Version string to be used for ``save`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. Returns: An instance of an ``AbstractDataset`` subclass. Raises: DatasetError: When the function fails to create the dataset from its config.
from_config
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def to_config(self) -> dict[str, Any]: """Converts the dataset instance into a dictionary-based configuration for serialization. Ensures that any subclass-specific details are handled, with additional logic for versioning and caching implemented for `CachedDataset`. Adds a key for the dataset's type using its module and class name and includes the initialization arguments. For `CachedDataset` it extracts the underlying dataset's configuration, handles the `versioned` flag and removes unnecessary metadata. It also ensures the embedded dataset's configuration is appropriately flattened or transformed. If the dataset has a version key, it sets the `versioned` flag in the configuration. Removes the `metadata` key from the configuration if present. Returns: A dictionary containing the dataset's type and initialization arguments. """ return_config: dict[str, Any] = { f"{TYPE_KEY}": f"{type(self).__module__}.{type(self).__name__}" } if self._init_args: # type: ignore[attr-defined] self._init_args.pop("self", None) # type: ignore[attr-defined] return_config.update(self._init_args) # type: ignore[attr-defined] if type(self).__name__ == "CachedDataset": cached_ds = return_config.pop("dataset") cached_ds_return_config: dict[str, Any] = {} if isinstance(cached_ds, dict): cached_ds_return_config = cached_ds elif isinstance(cached_ds, AbstractDataset): cached_ds_return_config = cached_ds.to_config() if VERSIONED_FLAG_KEY in cached_ds_return_config: return_config[VERSIONED_FLAG_KEY] = cached_ds_return_config.pop( VERSIONED_FLAG_KEY ) # Pop metadata from configuration cached_ds_return_config.pop("metadata", None) return_config["dataset"] = cached_ds_return_config # Set `versioned` key if version present in the dataset if return_config.pop(VERSION_KEY, None): return_config[VERSIONED_FLAG_KEY] = True # Pop metadata from configuration return_config.pop("metadata", None) return return_config
Converts the dataset instance into a dictionary-based configuration for serialization. Ensures that any subclass-specific details are handled, with additional logic for versioning and caching implemented for `CachedDataset`. Adds a key for the dataset's type using its module and class name and includes the initialization arguments. For `CachedDataset` it extracts the underlying dataset's configuration, handles the `versioned` flag and removes unnecessary metadata. It also ensures the embedded dataset's configuration is appropriately flattened or transformed. If the dataset has a version key, it sets the `versioned` flag in the configuration. Removes the `metadata` key from the configuration if present. Returns: A dictionary containing the dataset's type and initialization arguments.
to_config
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
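A sketch of instantiating a dataset from a catalog-style config dict; the filepath is made up, and ``pandas.CSVDataset`` assumes the ``kedro-datasets`` package is installed in the environment.

from kedro.io import AbstractDataset

config = {
    "type": "pandas.CSVDataset",
    "filepath": "data/01_raw/companies.csv",
    "load_args": {"sep": ","},
}

dataset = AbstractDataset.from_config(name="companies", config=config)
print(type(dataset).__name__)  # CSVDataset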
def _to_str(obj: Any, is_root: bool = False) -> str: """Returns a string representation where 1. The root level (i.e. the Dataset.__init__ arguments) are formatted like Dataset(key=value). 2. Dictionaries have the keys alphabetically sorted recursively. 3. None values are not shown. """ fmt = "{}={}" if is_root else "'{}': {}" # 1 if isinstance(obj, dict): sorted_dict = sorted(obj.items(), key=lambda pair: str(pair[0])) # 2 text = ", ".join( fmt.format(key, _to_str(value)) # 2 for key, value in sorted_dict if value is not None # 3 ) return text if is_root else "{" + text + "}" # 1 # not a dictionary return str(obj)
Returns a string representation where 1. The root level (i.e. the Dataset.__init__ arguments) are formatted like Dataset(key=value). 2. Dictionaries have the keys alphabetically sorted recursively. 3. None values are not shown.
__str__._to_str
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def __str__(self) -> str: # TODO: Replace with __repr__ implementation in 0.20.0 release. def _to_str(obj: Any, is_root: bool = False) -> str: """Returns a string representation where 1. The root level (i.e. the Dataset.__init__ arguments) are formatted like Dataset(key=value). 2. Dictionaries have the keys alphabetically sorted recursively. 3. None values are not shown. """ fmt = "{}={}" if is_root else "'{}': {}" # 1 if isinstance(obj, dict): sorted_dict = sorted(obj.items(), key=lambda pair: str(pair[0])) # 2 text = ", ".join( fmt.format(key, _to_str(value)) # 2 for key, value in sorted_dict if value is not None # 3 ) return text if is_root else "{" + text + "}" # 1 # not a dictionary return str(obj) return f"{type(self).__name__}({_to_str(self._describe(), True)})"
Returns a string representation where 1. The root level (i.e. the Dataset.__init__ arguments) are formatted like Dataset(key=value). 2. Dictionaries have the keys alphabetically sorted recursively. 3. None values are not shown.
__str__
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _load_wrapper(cls, load_func: Callable[[Self], _DO]) -> Callable[[Self], _DO]: """Decorate `load_func` with logging and error handling code.""" @wraps(load_func) def load(self: Self) -> _DO: self._logger.debug("Loading %s", str(self)) try: return load_func(self) except DatasetError: raise except Exception as exc: # This exception handling is by design as the composed datasets # can throw any type of exception. message = f"Failed while loading data from dataset {self!s}.\n{exc!s}" raise DatasetError(message) from exc load.__annotations__["return"] = load_func.__annotations__.get("return") load.__loadwrapped__ = True # type: ignore[attr-defined] return load
Decorate `load_func` with logging and error handling code.
_load_wrapper
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _save_wrapper( cls, save_func: Callable[[Self, _DI], None] ) -> Callable[[Self, _DI], None]: """Decorate `save_func` with logging and error handling code.""" @wraps(save_func) def save(self: Self, data: _DI) -> None: if data is None: raise DatasetError("Saving 'None' to a 'Dataset' is not allowed") try: self._logger.debug("Saving %s", str(self)) save_func(self, data) except (DatasetError, FileNotFoundError, NotADirectoryError): raise except Exception as exc: message = f"Failed while saving data to dataset {self!s}.\n{exc!s}" raise DatasetError(message) from exc save.__annotations__["data"] = save_func.__annotations__.get("data", Any) save.__annotations__["return"] = save_func.__annotations__.get("return") save.__savewrapped__ = True # type: ignore[attr-defined] return save
Decorate `save_func` with logging and error handling code.
_save_wrapper
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def new_init(self, *args, **kwargs) -> None:  # type: ignore[no-untyped-def]
    """Executes the original __init__, then saves the arguments used
    to initialize the instance.
    """
    # Call the original __init__ method
    init_func(self, *args, **kwargs)
    # Capture and save the arguments passed to the original __init__
    self._init_args = getcallargs(init_func, self, *args, **kwargs)
Executes the original __init__, then saves the arguments used to initialize the instance.
__init_subclass__.new_init
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def __init_subclass__(cls, **kwargs: Any) -> None: """Customizes the behavior of subclasses of AbstractDataset during their creation. This method is automatically invoked when a subclass of AbstractDataset is defined. Decorates the `load` and `save` methods provided by the class. If `_load` or `_save` are defined, alias them as a prerequisite. """ # Save the original __init__ method of the subclass init_func: Callable = cls.__init__ @wraps(init_func) def new_init(self, *args, **kwargs) -> None: # type: ignore[no-untyped-def] """Executes the original __init__, then save the arguments used to initialize the instance. """ # Call the original __init__ method init_func(self, *args, **kwargs) # Capture and save the arguments passed to the original __init__ self._init_args = getcallargs(init_func, self, *args, **kwargs) # Replace the subclass's __init__ with the new_init # A hook for subclasses to capture initialization arguments and save them # in the AbstractDataset._init_args field cls.__init__ = new_init # type: ignore[method-assign] super().__init_subclass__(**kwargs) if hasattr(cls, "_load") and not cls._load.__qualname__.startswith("Abstract"): cls.load = cls._load # type: ignore[method-assign] if hasattr(cls, "_save") and not cls._save.__qualname__.startswith("Abstract"): cls.save = cls._save # type: ignore[method-assign] if hasattr(cls, "load") and not cls.load.__qualname__.startswith("Abstract"): cls.load = cls._load_wrapper( # type: ignore[assignment] cls.load if not getattr(cls.load, "__loadwrapped__", False) else cls.load.__wrapped__ # type: ignore[attr-defined] ) if hasattr(cls, "save") and not cls.save.__qualname__.startswith("Abstract"): cls.save = cls._save_wrapper( # type: ignore[assignment] cls.save if not getattr(cls.save, "__savewrapped__", False) else cls.save.__wrapped__ # type: ignore[attr-defined] )
Customizes the behavior of subclasses of AbstractDataset during their creation. This method is automatically invoked when a subclass of AbstractDataset is defined. Decorates the `load` and `save` methods provided by the class. If `_load` or `_save` are defined, alias them as a prerequisite.
__init_subclass__
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
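Because ``__init_subclass__`` wraps ``load`` and ``save`` with logging and error handling, a minimal custom dataset only needs to implement those two methods plus ``_describe``; exceptions raised inside are re-wrapped as ``DatasetError``. A toy sketch (the class name and file format are made up for illustration):

import json
from pathlib import Path
from typing import Any

from kedro.io import AbstractDataset

class JSONFileDataset(AbstractDataset[dict, dict]):
    """Toy dataset that stores a dict as a local JSON file."""

    def __init__(self, filepath: str):
        self._filepath = Path(filepath)

    def load(self) -> dict:
        return json.loads(self._filepath.read_text())

    def save(self, data: dict) -> None:
        self._filepath.write_text(json.dumps(data))

    def _describe(self) -> dict[str, Any]:
        return {"filepath": self._filepath}

ds = JSONFileDataset("example.json")
ds.save({"answer": 42})
print(ds.load())  # {'answer': 42}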
def load(self) -> _DO: """Loads data by delegation to the provided load method. Returns: Data returned by the provided load method. Raises: DatasetError: When underlying load method raises error. """ raise NotImplementedError( f"'{self.__class__.__name__}' is a subclass of AbstractDataset and " f"it must implement the 'load' method" )
Loads data by delegation to the provided load method. Returns: Data returned by the provided load method. Raises: DatasetError: When underlying load method raises error.
load
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def save(self, data: _DI) -> None: """Saves data by delegation to the provided save method. Args: data: the value to be saved by provided save method. Raises: DatasetError: when underlying save method raises error. FileNotFoundError: when save method got file instead of dir, on Windows. NotADirectoryError: when save method got file instead of dir, on Unix. """ raise NotImplementedError( f"'{self.__class__.__name__}' is a subclass of AbstractDataset and " f"it must implement the 'save' method" )
Saves data by delegation to the provided save method. Args: data: the value to be saved by provided save method. Raises: DatasetError: when underlying save method raises error. FileNotFoundError: when save method got file instead of dir, on Windows. NotADirectoryError: when save method got file instead of dir, on Unix.
save
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def exists(self) -> bool: """Checks whether a dataset's output already exists by calling the provided _exists() method. Returns: Flag indicating whether the output already exists. Raises: DatasetError: when underlying exists method raises error. """ try: self._logger.debug("Checking whether target of %s exists", str(self)) return self._exists() except Exception as exc: message = f"Failed during exists check for dataset {self!s}.\n{exc!s}" raise DatasetError(message) from exc
Checks whether a dataset's output already exists by calling the provided _exists() method. Returns: Flag indicating whether the output already exists. Raises: DatasetError: when underlying exists method raises error.
exists
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def release(self) -> None: """Release any cached data. Raises: DatasetError: when underlying release method raises error. """ try: self._logger.debug("Releasing %s", str(self)) self._release() except Exception as exc: message = f"Failed during release for dataset {self!s}.\n{exc!s}" raise DatasetError(message) from exc
Release any cached data. Raises: DatasetError: when underlying release method raises error.
release
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def generate_timestamp() -> str: """Generate the timestamp to be used by versioning. Returns: String representation of the current timestamp. """ current_ts = datetime.now(tz=timezone.utc).strftime(VERSION_FORMAT) return current_ts[:-4] + current_ts[-1:] # Don't keep microseconds
Generate the timestamp to be used by versioning. Returns: String representation of the current timestamp.
generate_timestamp
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
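The generated version strings sort lexicographically in chronological order, which is what the versioning machinery relies on. For example (the exact value depends on the current time):

from kedro.io.core import generate_timestamp

version = generate_timestamp()
print(version)  # e.g. 2024-05-31T14.02.17.337Z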
def parse_dataset_definition( config: dict[str, Any], load_version: str | None = None, save_version: str | None = None, ) -> tuple[type[AbstractDataset], dict[str, Any]]: """Parse and instantiate a dataset class using the configuration provided. Args: config: Data set config dictionary. It *must* contain the `type` key with fully qualified class name or the class object. load_version: Version string to be used for ``load`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. save_version: Version string to be used for ``save`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. Raises: DatasetError: If the function fails to parse the configuration provided. Returns: 2-tuple: (Dataset class object, configuration dictionary) """ save_version = save_version or generate_timestamp() config = copy.deepcopy(config) # TODO: remove when removing old catalog as moved to KedroDataCatalog if TYPE_KEY not in config: raise DatasetError( "'type' is missing from dataset catalog configuration." "\nHint: If this catalog entry is intended for variable interpolation, " "make sure that the top level key is preceded by an underscore." ) dataset_type = config.pop(TYPE_KEY) class_obj = None if isinstance(dataset_type, str): if len(dataset_type.strip(".")) != len(dataset_type): raise DatasetError( "'type' class path does not support relative " "paths or paths ending with a dot." ) class_paths = (prefix + dataset_type for prefix in _DEFAULT_PACKAGES) for class_path in class_paths: tmp = _load_obj(class_path) if tmp is not None: class_obj = tmp break else: hint = ( "Hint: If you are trying to use a dataset from `kedro-datasets`, " "make sure that the package is installed in your current environment. " "You can do so by running `pip install kedro-datasets` or " "`pip install kedro-datasets[<dataset-group>]` to install `kedro-datasets` along with " "related dependencies for the specific dataset group." ) raise DatasetError( f"Class '{dataset_type}' not found, is this a typo?" f"\n{hint}" ) if not class_obj: class_obj = dataset_type if not issubclass(class_obj, AbstractDataset): raise DatasetError( f"Dataset type '{class_obj.__module__}.{class_obj.__qualname__}' " f"is invalid: all dataset types must extend 'AbstractDataset'." ) if VERSION_KEY in config: # remove "version" key so that it's not passed # to the "unversioned" dataset constructor message = ( "'%s' attribute removed from dataset configuration since it is a " "reserved word and cannot be directly specified" ) logging.getLogger(__name__).warning(message, VERSION_KEY) del config[VERSION_KEY] # dataset is either versioned explicitly by the user or versioned is set to true by default # on the dataset if config.pop(VERSIONED_FLAG_KEY, False) or getattr( class_obj, VERSIONED_FLAG_KEY, False ): config[VERSION_KEY] = Version(load_version, save_version) return class_obj, config
Parse and instantiate a dataset class using the configuration provided. Args: config: Data set config dictionary. It *must* contain the `type` key with fully qualified class name or the class object. load_version: Version string to be used for ``load`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. save_version: Version string to be used for ``save`` operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled. Raises: DatasetError: If the function fails to parse the configuration provided. Returns: 2-tuple: (Dataset class object, configuration dictionary)
parse_dataset_definition
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
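A sketch of how the parsing splits a catalog entry into a dataset class and constructor kwargs; the filepath is illustrative and ``kedro-datasets`` is assumed to be installed so that ``pandas.ParquetDataset`` resolves.

from kedro.io.core import parse_dataset_definition

config = {
    "type": "pandas.ParquetDataset",
    "filepath": "data/02_intermediate/model_input.parquet",
    "versioned": True,
}

class_obj, kwargs = parse_dataset_definition(config)
print(class_obj.__name__)     # ParquetDataset
print("version" in kwargs)    # True: the 'versioned' flag became a Version object
print("versioned" in kwargs)  # False: the flag itself is popped from the kwargs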
def __init__( self, filepath: PurePosixPath, version: Version | None, exists_function: Callable[[str], bool] | None = None, glob_function: Callable[[str], list[str]] | None = None, ): """Creates a new instance of ``AbstractVersionedDataset``. Args: filepath: Filepath in POSIX format to a file. version: If specified, should be an instance of ``kedro.io.core.Version``. If its ``load`` attribute is None, the latest version will be loaded. If its ``save`` attribute is None, save version will be autogenerated. exists_function: Function that is used for determining whether a path exists in a filesystem. glob_function: Function that is used for finding all paths in a filesystem, which match a given pattern. """ self._filepath = filepath self._version = version self._exists_function = exists_function or _local_exists self._glob_function = glob_function or iglob # 1 entry for load version, 1 for save version self._version_cache = Cache(maxsize=2) # type: Cache
Creates a new instance of ``AbstractVersionedDataset``. Args: filepath: Filepath in POSIX format to a file. version: If specified, should be an instance of ``kedro.io.core.Version``. If its ``load`` attribute is None, the latest version will be loaded. If its ``save`` attribute is None, save version will be autogenerated. exists_function: Function that is used for determining whether a path exists in a filesystem. glob_function: Function that is used for finding all paths in a filesystem, which match a given pattern.
__init__
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _fetch_latest_save_version(self) -> str: """Generate and cache the current save version""" return generate_timestamp()
Generate and cache the current save version
_fetch_latest_save_version
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def resolve_load_version(self) -> str | None: """Compute the version the dataset should be loaded with.""" if not self._version: return None if self._version.load: return self._version.load # type: ignore[no-any-return] return self._fetch_latest_load_version()
Compute the version the dataset should be loaded with.
resolve_load_version
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def resolve_save_version(self) -> str | None: """Compute the version the dataset should be saved with.""" if not self._version: return None if self._version.save: return self._version.save # type: ignore[no-any-return] return self._fetch_latest_save_version()
Compute the version the dataset should be saved with.
resolve_save_version
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _save_wrapper( cls, save_func: Callable[[Self, _DI], None] ) -> Callable[[Self, _DI], None]: """Decorate `save_func` with logging and error handling code.""" @wraps(save_func) def save(self: Self, data: _DI) -> None: self._version_cache.clear() save_version = ( self.resolve_save_version() ) # Make sure last save version is set try: super()._save_wrapper(save_func)(self, data) except (FileNotFoundError, NotADirectoryError) as err: # FileNotFoundError raised in Win, NotADirectoryError raised in Unix _default_version = "YYYY-MM-DDThh.mm.ss.sssZ" raise DatasetError( f"Cannot save versioned dataset '{self._filepath.name}' to " f"'{self._filepath.parent.as_posix()}' because a file with the same " f"name already exists in the directory. This is likely because " f"versioning was enabled on a dataset already saved previously. Either " f"remove '{self._filepath.name}' from the directory or manually " f"convert it into a versioned dataset by placing it in a versioned " f"directory (e.g. with default versioning format " f"'{self._filepath.as_posix()}/{_default_version}/{self._filepath.name}" f"')." ) from err load_version = self.resolve_load_version() if load_version != save_version: warnings.warn( _CONSISTENCY_WARNING.format(save_version, load_version, str(self)) ) self._version_cache.clear() return save
Decorate `save_func` with logging and error handling code.
_save_wrapper
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def exists(self) -> bool: """Checks whether a dataset's output already exists by calling the provided _exists() method. Returns: Flag indicating whether the output already exists. Raises: DatasetError: when underlying exists method raises error. """ self._logger.debug("Checking whether target of %s exists", str(self)) try: return self._exists() except VersionNotFoundError: return False except Exception as exc: # SKIP_IF_NO_SPARK message = f"Failed during exists check for dataset {self!s}.\n{exc!s}" raise DatasetError(message) from exc
Checks whether a dataset's output already exists by calling the provided _exists() method. Returns: Flag indicating whether the output already exists. Raises: DatasetError: when underlying exists method raises error.
exists
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _parse_filepath(filepath: str) -> dict[str, str]: """Split filepath on protocol and path. Based on `fsspec.utils.infer_storage_options`. Args: filepath: Either local absolute file path or URL (s3://bucket/file.csv) Returns: Parsed filepath. """ if ( re.match(r"^[a-zA-Z]:[\\/]", filepath) or re.match(r"^[a-zA-Z0-9]+://", filepath) is None ): return {"protocol": "file", "path": filepath} parsed_path = urlsplit(filepath) protocol = parsed_path.scheme or "file" if protocol in HTTP_PROTOCOLS: return {"protocol": protocol, "path": filepath} path = parsed_path.path if protocol == "file": windows_path = re.match(r"^/([a-zA-Z])[:|]([\\/].*)$", path) if windows_path: path = ":".join(windows_path.groups()) if parsed_path.query: path = f"{path}?{parsed_path.query}" if parsed_path.fragment: path = f"{path}#{parsed_path.fragment}" options = {"protocol": protocol, "path": path} if parsed_path.netloc and protocol in CLOUD_PROTOCOLS: host_with_port = parsed_path.netloc.rsplit("@", 1)[-1] host = host_with_port.rsplit(":", 1)[0] options["path"] = host + options["path"] # - Azure Data Lake Storage Gen2 URIs can store the container name in the # 'username' field of a URL (@ syntax), so we need to add it to the path # - Oracle Cloud Infrastructure (OCI) Object Storage filesystem (ocifs) also # uses the @ syntax for I/O operations: "oci://bucket@namespace/path_to_file" if protocol in ["abfss", "oci"] and parsed_path.username: options["path"] = parsed_path.username + "@" + options["path"] return options
Split filepath on protocol and path. Based on `fsspec.utils.infer_storage_options`. Args: filepath: Either local absolute file path or URL (s3://bucket/file.csv) Returns: Parsed filepath.
_parse_filepath
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def get_protocol_and_path( filepath: str | os.PathLike, version: Version | None = None ) -> tuple[str, str]: """Parses filepath on protocol and path. .. warning:: Versioning is not supported for HTTP protocols. Args: filepath: raw filepath e.g.: ``gcs://bucket/test.json``. version: instance of ``kedro.io.core.Version`` or None. Returns: Protocol and path. Raises: DatasetError: when protocol is http(s) and version is not None. """ options_dict = _parse_filepath(str(filepath)) path = options_dict["path"] protocol = options_dict["protocol"] if protocol in HTTP_PROTOCOLS: if version is not None: raise DatasetError( "Versioning is not supported for HTTP protocols. " "Please remove the `versioned` flag from the dataset configuration." ) path = path.split(PROTOCOL_DELIMITER, 1)[-1] return protocol, path
Parses filepath on protocol and path. .. warning:: Versioning is not supported for HTTP protocols. Args: filepath: raw filepath e.g.: ``gcs://bucket/test.json``. version: instance of ``kedro.io.core.Version`` or None. Returns: Protocol and path. Raises: DatasetError: when protocol is http(s) and version is not None.
get_protocol_and_path
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
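A few illustrative inputs and what they split into (the bucket and file names are made up):

from kedro.io.core import get_protocol_and_path

print(get_protocol_and_path("s3://my-bucket/folder/file.csv"))
# ('s3', 'my-bucket/folder/file.csv')

print(get_protocol_and_path("data/01_raw/file.csv"))
# ('file', 'data/01_raw/file.csv')

print(get_protocol_and_path("https://example.com/file.csv"))
# ('https', 'example.com/file.csv')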
def get_filepath_str(raw_path: PurePath, protocol: str) -> str: """Returns filepath. Returns full filepath (with protocol) if protocol is HTTP(s). Args: raw_path: filepath without protocol. protocol: protocol. Returns: Filepath string. """ path = raw_path.as_posix() if protocol in HTTP_PROTOCOLS: path = "".join((protocol, PROTOCOL_DELIMITER, path)) return path
Returns filepath. Returns full filepath (with protocol) if protocol is HTTP(s). Args: raw_path: filepath without protocol. protocol: protocol. Returns: Filepath string.
get_filepath_str
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def validate_on_forbidden_chars(**kwargs: Any) -> None:
    """Validate that string values do not include whitespace or semicolons."""
    for key, value in kwargs.items():
        if " " in value or ";" in value:
            raise DatasetError(
                f"Neither white-space nor semicolon are allowed in '{key}'."
            )
Validate that string values do not include whitespace or semicolons.
validate_on_forbidden_chars
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
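For instance, a filepath containing a space would be rejected (a sketch; the path is made up):

from kedro.io import DatasetError
from kedro.io.core import validate_on_forbidden_chars

try:
    validate_on_forbidden_chars(filepath="data/raw/my file.csv")
except DatasetError as err:
    print(err)  # Neither white-space nor semicolon are allowed in 'filepath'.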
def __contains__(self, ds_name: str) -> bool: """Check if a dataset is in the catalog.""" ...
Check if a dataset is in the catalog.
__contains__
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def config_resolver(self) -> CatalogConfigResolver:
    """Return the catalog's config resolver."""
    ...
Return the catalog's config resolver.
config_resolver
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def from_config(cls, catalog: dict[str, dict[str, Any]] | None) -> _C: """Create a catalog instance from configuration.""" ...
Create a catalog instance from configuration.
from_config
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _get_dataset( self, dataset_name: str, version: Any = None, suggest: bool = True, ) -> AbstractDataset: """Retrieve a dataset by its name.""" ...
Retrieve a dataset by its name.
_get_dataset
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def list(self, regex_search: str | None = None) -> list[str]: """List all dataset names registered in the catalog.""" ...
List all dataset names registered in the catalog.
list
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def save(self, name: str, data: Any) -> None: """Save data to a registered dataset.""" ...
Save data to a registered dataset.
save
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def load(self, name: str, version: str | None = None) -> Any: """Load data from a registered dataset.""" ...
Load data from a registered dataset.
load
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def add(self, ds_name: str, dataset: Any, replace: bool = False) -> None: """Add a new dataset to the catalog.""" ...
Add a new dataset to the catalog.
add
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def add_feed_dict(self, datasets: dict[str, Any], replace: bool = False) -> None: """Add datasets to the catalog using the data provided through the `feed_dict`.""" ...
Add datasets to the catalog using the data provided through the `feed_dict`.
add_feed_dict
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def exists(self, name: str) -> bool: """Checks whether registered dataset exists by calling its `exists()` method.""" ...
Checks whether registered dataset exists by calling its `exists()` method.
exists
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def release(self, name: str) -> None: """Release any cached data associated with a dataset.""" ...
Release any cached data associated with a dataset.
release
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def confirm(self, name: str) -> None: """Confirm a dataset by its name.""" ...
Confirm a dataset by its name.
confirm
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def shallow_copy(self, extra_dataset_patterns: Patterns | None = None) -> _C: """Returns a shallow copy of the current object.""" ...
Returns a shallow copy of the current object.
shallow_copy
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0
def _validate_versions(
    datasets: dict[str, AbstractDataset] | None,
    load_versions: dict[str, str],
    save_version: str | None,
) -> tuple[dict[str, str], str | None]:
    """Validates and synchronises dataset versions for loading and saving.

    Ensures consistency of dataset versions across a catalog, particularly
    for versioned datasets. It updates load versions and validates that all
    save versions are consistent.

    Args:
        datasets: A dictionary mapping dataset names to their instances.
            If None, no validation occurs.
        load_versions: A mapping between dataset names and versions
            to load.
        save_version: Version string to be used for ``save`` operations
            by all datasets with versioning enabled.

    Returns:
        Updated ``load_versions`` with load versions specified in the ``datasets``
            and resolved ``save_version``.

    Raises:
        VersionAlreadyExistsError: If a dataset's save version conflicts with
            the catalog's save version.
    """
    if not datasets:
        return load_versions, save_version

    cur_load_versions = load_versions.copy()
    cur_save_version = save_version

    for ds_name, ds in datasets.items():
        # TODO: Move to kedro/io/kedro_data_catalog.py when removing DataCatalog
        # TODO: Make it a protected static method for KedroDataCatalog
        # TODO: Replace with isinstance(ds, CachedDataset) - current implementation avoids circular import
        cur_ds = ds._dataset if ds.__class__.__name__ == "CachedDataset" else ds  # type: ignore[attr-defined]

        if isinstance(cur_ds, AbstractVersionedDataset) and cur_ds._version:
            if cur_ds._version.load:
                cur_load_versions[ds_name] = cur_ds._version.load
            if cur_ds._version.save:
                cur_save_version = cur_save_version or cur_ds._version.save
                if cur_save_version != cur_ds._version.save:
                    raise VersionAlreadyExistsError(
                        f"Cannot add a dataset `{ds_name}` with `{cur_ds._version.save}` save version. "
                        f"Save version set for the catalog is `{cur_save_version}`. "
                        f"All datasets in the catalog must have the same save version."
                    )

    return cur_load_versions, cur_save_version
Validates and synchronises dataset versions for loading and saving. Ensures consistency of dataset versions across a catalog, particularly for versioned datasets. It updates load versions and validates that all save versions are consistent. Args: datasets: A dictionary mapping dataset names to their instances. If None, no validation occurs. load_versions: A mapping between dataset names and versions to load. save_version: Version string to be used for ``save`` operations by all datasets with versioning enabled. Returns: Updated ``load_versions`` with load versions specified in the ``datasets`` and resolved ``save_version``. Raises: VersionAlreadyExistsError: If a dataset's save version conflicts with the catalog's save version.
_validate_versions
python
kedro-org/kedro
kedro/io/core.py
https://github.com/kedro-org/kedro/blob/master/kedro/io/core.py
Apache-2.0