def resource_path_to_dict(resource_name: str) -> dict[str, str]:
"""Convert a path-like GCP resource name into a dictionary.
For example, the path `projects/my-project/locations/my-location/instances/my-instance` will be converted
to a dict:
`{"projects": "my-project",
"locations": "my-location",
"instances": "my-instance",}`
"""
if not resource_name:
return {}
path_items = resource_name.split("/")
if len(path_items) % 2:
raise ValueError(
"Invalid resource_name. Expected the path-like name consisting of key/value pairs "
"'key1/value1/key2/value2/...', for example 'projects/<project>/locations/<location>'."
)
iterator = iter(path_items)
return dict(zip(iterator, iterator)) |
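# A quick usage sketch for resource_path_to_dict() above; the resource values are
# hypothetical. It shows the key/value pairing and the odd-segment error path.
path = "projects/my-project/locations/europe-west3/instances/my-instance"
assert resource_path_to_dict(path) == {
    "projects": "my-project",
    "locations": "europe-west3",
    "instances": "my-instance",
}
assert resource_path_to_dict("") == {}
try:
    resource_path_to_dict("projects/my-project/locations")  # odd number of segments
except ValueError as err:
    print(err)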
def create_evaluate_ops(
task_prefix: str,
data_format: str,
input_paths: list[str],
prediction_path: str,
metric_fn_and_keys: tuple[T, Iterable[str]],
validate_fn: T,
batch_prediction_job_id: str | None = None,
region: str | None = None,
project_id: str | None = None,
dataflow_options: dict | None = None,
model_uri: str | None = None,
model_name: str | None = None,
version_name: str | None = None,
dag: DAG | None = None,
py_interpreter="python3",
) -> tuple[MLEngineStartBatchPredictionJobOperator, BeamRunPythonPipelineOperator, PythonOperator]:
r"""
Create Operators needed for model evaluation and returns.
This function is deprecated. All the functionality of legacy MLEngine and new features are available
on the Vertex AI platform.
To create and view Model Evaluation, please check the documentation:
https://cloud.google.com/vertex-ai/docs/evaluation/using-model-evaluation#create_an_evaluation.
It gets a prediction over the inputs via the Cloud ML Engine BatchPrediction API by
calling MLEngineBatchPredictionOperator, then summarizes and validates
the result via Cloud Dataflow using DataFlowPythonOperator.
For details and pricing about Batch prediction, please refer to the website
https://cloud.google.com/ml-engine/docs/how-tos/batch-predict
and for Cloud Dataflow, https://cloud.google.com/dataflow/docs/
It returns three chained operators for prediction, summary, and validation,
named as ``<prefix>-prediction``, ``<prefix>-summary``, and ``<prefix>-validation``,
respectively.
(``<prefix>`` should contain only alphanumeric characters or hyphen.)
The upstream and downstream can be set accordingly like:
.. code-block:: python
pred, _, val = create_evaluate_ops(...)
pred.set_upstream(upstream_op)
...
downstream_op.set_upstream(val)
Callers will provide two python callables, metric_fn and validate_fn, in
order to customize the evaluation behavior as they wish.
- metric_fn receives a dictionary per instance derived from json in the
batch prediction result. The keys might vary depending on the model.
It should return a tuple of metrics.
- validate_fn receives a dictionary of the averaged metrics that metric_fn
generated over all instances.
The keys of the dictionary match what's given in the
metric_fn_and_keys arg.
The dictionary contains an additional metric, 'count', representing the
total number of instances received for evaluation.
The function should raise an exception to mark the task as failed when the
validation result is not good enough to proceed (i.e. to set the trained
version as the default).
Typical examples are like this:
.. code-block:: python
def get_metric_fn_and_keys():
import math # imports should be outside of the metric_fn below.
def error_and_squared_error(inst):
label = float(inst["input_label"])
classes = float(inst["classes"]) # 0 or 1
err = abs(classes - label)
squared_err = math.pow(classes - label, 2)
return (err, squared_err) # returns a tuple.
return error_and_squared_error, ["err", "mse"] # key order must match.
def validate_err_and_count(summary):
if summary["err"] > 0.2:
raise ValueError("Too high err>0.2; summary=%s" % summary)
if summary["mse"] > 0.05:
raise ValueError("Too high mse>0.05; summary=%s" % summary)
if summary["count"] < 1000:
raise ValueError("Too few instances<1000; summary=%s" % summary)
return summary
For the details on the other BatchPrediction-related arguments (project_id,
job_id, region, data_format, input_paths, prediction_path, model_uri),
please refer to MLEngineBatchPredictionOperator too.
:param task_prefix: a prefix for the tasks. Only alphanumeric characters and
hyphen are allowed (no underscores), since this will be used as dataflow
job name, which doesn't allow other characters.
:param data_format: either of 'TEXT', 'TF_RECORD', 'TF_RECORD_GZIP'
:param input_paths: a list of input paths to be sent to BatchPrediction.
:param prediction_path: GCS path to put the prediction results in.
:param metric_fn_and_keys: a tuple of metric_fn and metric_keys:
- metric_fn is a function that accepts a dictionary (for an instance),
and returns a tuple of metric(s) that it calculates.
- metric_keys is a list of strings to denote the key of each metric.
:param validate_fn: a function to validate whether the averaged metric(s) is
good enough to push the model.
:param batch_prediction_job_id: the id to use for the Cloud ML Batch
prediction job. Passed directly to the MLEngineBatchPredictionOperator as
the job_id argument.
:param project_id: the Google Cloud project id in which to execute
Cloud ML Batch Prediction and Dataflow jobs. If None, then the `dag`'s
`default_args['project_id']` will be used.
:param region: the Google Cloud region in which to execute Cloud ML
Batch Prediction and Dataflow jobs. If None, then the `dag`'s
`default_args['region']` will be used.
:param dataflow_options: options to run Dataflow jobs. If None, then the
`dag`'s `default_args['dataflow_default_options']` will be used.
:param model_uri: GCS path of the model exported by Tensorflow using
``tensorflow.estimator.export_savedmodel()``. It cannot be used with
model_name or version_name below. See MLEngineBatchPredictionOperator for
more detail.
:param model_name: Used to indicate a model to use for prediction. Can be
used in combination with version_name, but cannot be used together with
model_uri. See MLEngineBatchPredictionOperator for more detail. If None,
then the `dag`'s `default_args['model_name']` will be used.
:param version_name: Used to indicate a model version to use for prediction,
in combination with model_name. Cannot be used together with model_uri.
See MLEngineBatchPredictionOperator for more detail. If None, then the
`dag`'s `default_args['version_name']` will be used.
:param dag: The `DAG` to use for all Operators.
:param py_interpreter: Python version of the beam pipeline.
If None, this defaults to python3.
To track python versions supported by beam and related
issues check: https://issues.apache.org/jira/browse/BEAM-1251
:returns: a tuple of three operators: (prediction, summary, validation).
"""
batch_prediction_job_id = batch_prediction_job_id or ""
dataflow_options = dataflow_options or {}
region = region or ""
# Verify that task_prefix doesn't have any special characters except hyphen
# '-', which is the only allowed non-alphanumeric character by Dataflow.
if not re.fullmatch(r"[a-zA-Z][-A-Za-z0-9]*", task_prefix):
raise AirflowException(
"Malformed task_id for DataFlowPythonOperator (only alphanumeric "
"and hyphens are allowed but got: " + task_prefix
)
metric_fn, metric_keys = metric_fn_and_keys
if not callable(metric_fn):
raise AirflowException("`metric_fn` param must be callable.")
if not callable(validate_fn):
raise AirflowException("`validate_fn` param must be callable.")
if dag is not None and dag.default_args is not None:
default_args = dag.default_args
project_id = project_id or default_args.get("project_id")
region = region or default_args["region"]
model_name = model_name or default_args.get("model_name")
version_name = version_name or default_args.get("version_name")
dataflow_options = dataflow_options or default_args.get("dataflow_default_options")
evaluate_prediction = MLEngineStartBatchPredictionJobOperator(
task_id=(task_prefix + "-prediction"),
project_id=project_id,
job_id=batch_prediction_job_id,
region=region,
data_format=data_format,
input_paths=input_paths,
output_path=prediction_path,
uri=model_uri,
model_name=model_name,
version_name=version_name,
dag=dag,
)
metric_fn_encoded = base64.b64encode(dill.dumps(metric_fn, recurse=True)).decode()
evaluate_summary = BeamRunPythonPipelineOperator(
task_id=(task_prefix + "-summary"),
runner=BeamRunnerType.DataflowRunner,
py_file=os.path.join(os.path.dirname(__file__), "mlengine_prediction_summary.py"),
default_pipeline_options=dataflow_options,
pipeline_options={
"prediction_path": prediction_path,
"metric_fn_encoded": metric_fn_encoded,
"metric_keys": ",".join(metric_keys),
},
py_interpreter=py_interpreter,
py_requirements=["apache-beam[gcp]>=2.46.0"],
dag=dag,
)
evaluate_summary.set_upstream(evaluate_prediction)
def apply_validate_fn(*args, templates_dict, **kwargs):
prediction_path = templates_dict["prediction_path"]
scheme, bucket, obj, _, _ = urlsplit(prediction_path)
if scheme != "gs" or not bucket or not obj:
raise ValueError(f"Wrong format prediction_path: {prediction_path}")
summary = os.path.join(obj.strip("/"), "prediction.summary.json")
gcs_hook = GCSHook()
summary = json.loads(gcs_hook.download(bucket, summary).decode("utf-8"))
return validate_fn(summary)
evaluate_validation = PythonOperator(
task_id=(task_prefix + "-validation"),
python_callable=apply_validate_fn,
templates_dict={"prediction_path": prediction_path},
dag=dag,
)
evaluate_validation.set_upstream(evaluate_summary)
return evaluate_prediction, evaluate_summary, evaluate_validation |
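# A minimal wiring sketch for create_evaluate_ops() above. It assumes the
# get_metric_fn_and_keys() and validate_err_and_count() helpers from the docstring
# example are defined; the project, bucket, model and region values are hypothetical.
import datetime

from airflow import DAG

with DAG(dag_id="example_mlengine_evaluate", start_date=datetime.datetime(2024, 1, 1), schedule=None) as dag:
    metric_fn, metric_keys = get_metric_fn_and_keys()
    pred, summary, validate = create_evaluate_ops(
        task_prefix="eval-taxi",                          # alphanumeric and hyphen only
        data_format="TEXT",
        input_paths=["gs://my-bucket/eval/data*.json"],   # hypothetical input
        prediction_path="gs://my-bucket/eval/output",     # hypothetical output folder
        metric_fn_and_keys=(metric_fn, metric_keys),
        validate_fn=validate_err_and_count,
        batch_prediction_job_id="eval_taxi_20240101",
        project_id="my-gcp-project",                      # hypothetical project
        region="us-central1",
        dataflow_options={"project": "my-gcp-project", "temp_location": "gs://my-bucket/tmp"},
        model_name="taxi_fare",                           # hypothetical model/version
        version_name="v1",
        dag=dag,
    )
    # prediction >> summary >> validation is already chained by create_evaluate_ops();
    # only external upstream/downstream tasks need wiring, e.g. pred.set_upstream(...).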
def MakeSummary(pcoll, metric_fn, metric_keys):
"""Summary PTransform used in Dataflow."""
return (
pcoll
| "ApplyMetricFnPerInstance" >> beam.Map(metric_fn)
| "PairWith1" >> beam.Map(lambda tup: (*tup, 1))
| "SumTuple" >> beam.CombineGlobally(beam.combiners.TupleCombineFn(*([sum] * (len(metric_keys) + 1))))
| "AverageAndMakeDict"
>> beam.Map(
lambda tup: dict(
[(name, tup[i] / tup[-1]) for i, name in enumerate(metric_keys)] + [("count", tup[-1])]
)
)
) |
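# A plain-Python illustration (no Beam) of what MakeSummary above computes: each
# instance's metric tuple gets a trailing 1, the tuples are summed element-wise, and
# every metric sum is divided by the summed count. The instance values are made up.
instances = [(0.0, 0.0), (1.0, 1.0), (1.0, 1.0)]  # e.g. (err, squared_err) per instance
metric_keys = ["err", "mse"]
totals = [sum(vals) for vals in zip(*[(*tup, 1) for tup in instances])]  # [2.0, 2.0, 3]
summary = {name: totals[i] / totals[-1] for i, name in enumerate(metric_keys)}
summary["count"] = totals[-1]
print(summary)  # {'err': 0.666..., 'mse': 0.666..., 'count': 3}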
def run(argv=None):
"""Obtain prediction summary."""
parser = argparse.ArgumentParser()
parser.add_argument(
"--prediction_path",
required=True,
help=(
"The GCS folder that contains BatchPrediction results, containing "
"prediction.results-NNNNN-of-NNNNN files in the json format. "
"Output will be also stored in this folder, as a file"
"'prediction.summary.json'."
),
)
parser.add_argument(
"--metric_fn_encoded",
required=True,
help=(
"An encoded function that calculates and returns a tuple of "
"metric(s) for a given instance (as a dictionary). It should be "
"encoded via base64.b64encode(dill.dumps(fn, recurse=True))."
),
)
parser.add_argument(
"--metric_keys",
required=True,
help=(
"A comma-separated keys of the aggregated metric(s) in the summary "
"output. The order and the size of the keys must match to the "
"output of metric_fn. The summary will have an additional key, "
"'count', to represent the total number of instances, so this flag "
"shouldn't include 'count'."
),
)
known_args, pipeline_args = parser.parse_known_args(argv)
metric_fn = dill.loads(base64.b64decode(known_args.metric_fn_encoded))
if not callable(metric_fn):
raise ValueError("--metric_fn_encoded must be an encoded callable.")
metric_keys = known_args.metric_keys.split(",")
with beam.Pipeline(options=beam.pipeline.PipelineOptions(pipeline_args)) as pipe:
prediction_result_pattern = os.path.join(known_args.prediction_path, "prediction.results-*-of-*")
prediction_summary_path = os.path.join(known_args.prediction_path, "prediction.summary.json")
# This is apache-beam ptransform's convention
_ = (
pipe
| "ReadPredictionResult" >> beam.io.ReadFromText(prediction_result_pattern, coder=JsonCoder())
| "Summary" >> MakeSummary(metric_fn, metric_keys)
| "Write"
>> beam.io.WriteToText(
prediction_summary_path,
shard_name_template="", # without trailing -NNNNN-of-NNNNN.
coder=JsonCoder(),
)
) |
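# A sketch of how the --metric_fn_encoded flag for run() above is typically produced
# before launching the pipeline; the metric function and paths here are stand-ins.
import base64

import dill

def example_metric_fn(inst):
    # one absolute-error metric per instance dictionary
    return (abs(float(inst["classes"]) - float(inst["input_label"])),)

encoded = base64.b64encode(dill.dumps(example_metric_fn, recurse=True)).decode()
argv = [
    "--prediction_path=gs://my-bucket/prediction-output",  # hypothetical path
    f"--metric_fn_encoded={encoded}",
    "--metric_keys=err",
]
# run(argv) would then decode the function with dill and aggregate the metric.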
def get_facets_from_bq_table(table: Table) -> dict[Any, Any]:
"""Get facets from BigQuery table object."""
facets = {
"schema": SchemaDatasetFacet(
fields=[
SchemaField(name=field.name, type=field.field_type, description=field.description)
for field in table.schema
]
),
"documentation": DocumentationDatasetFacet(description=table.description or ""),
}
return facets |
def get_identity_column_lineage_facet(
field_names: list[str],
input_datasets: list[Dataset],
) -> ColumnLineageDatasetFacet:
"""
Get column lineage facet.
Simple lineage will be created, where each source column corresponds to single destination column
in each input dataset and there are no transformations made.
"""
if field_names and not input_datasets:
raise ValueError("When providing `field_names` You must provide at least one `input_dataset`.")
column_lineage_facet = ColumnLineageDatasetFacet(
fields={
field: ColumnLineageDatasetFacetFieldsAdditional(
inputFields=[
ColumnLineageDatasetFacetFieldsAdditionalInputFields(
namespace=dataset.namespace, name=dataset.name, field=field
)
for dataset in input_datasets
],
transformationType="IDENTITY",
transformationDescription="identical",
)
for field in field_names
}
)
return column_lineage_facet |
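# A small usage sketch for get_identity_column_lineage_facet() above. The input dataset
# here is a hypothetical stand-in that only needs ``namespace`` and ``name`` attributes
# (in Airflow these are OpenLineage Dataset objects).
from types import SimpleNamespace

source = SimpleNamespace(namespace="bigquery", name="my-project.staging.orders")  # hypothetical table
facet = get_identity_column_lineage_facet(
    field_names=["order_id", "amount"],
    input_datasets=[source],
)
# Each destination field maps back to the same-named field of every input dataset:
# facet.fields["order_id"].inputFields[0].name == "my-project.staging.orders"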
def create_client_session():
"""Create a HTTP authorized client."""
service_account_path = conf.get("api", "google_key_path")
if service_account_path:
id_token_credentials = service_account.IDTokenCredentials.from_service_account_file(
service_account_path
)
else:
id_token_credentials = get_default_id_token_credentials(target_audience=AUDIENCE)
return AuthorizedSession(credentials=id_token_credentials) |
def init_app(_):
"""Initialize authentication.""" |
def requires_authentication(function: T):
"""Act as a Decorator for function that require authentication."""
@wraps(function)
def decorated(*args, **kwargs):
access_token = _get_id_token_from_request(flask_request)
if not access_token:
log.debug("Missing ID Token")
return Response("Forbidden", 403)
userid = _verify_id_token(access_token)
if not userid:
log.debug("Invalid ID Token")
return Response("Forbidden", 403)
log.debug("Looking for user with e-mail: %s", userid)
user = _lookup_user(userid)
if not user:
return Response("Forbidden", 403)
log.debug("Found user: %s", user)
_set_current_user(user)
return function(*args, **kwargs)
return cast(T, decorated) |
def is_soft_quota_exception(exception: Exception):
"""
Check for quota violation errors.
API for Google services does not have a standardized way to report quota violation errors.
The function has been adapted by trial and error to the following services:
* Google Translate
* Google Vision
* Google Text-to-Speech
* Google Speech-to-Text
* Google Natural Language
* Google Video Intelligence
"""
if isinstance(exception, Forbidden):
return any(reason in error.details() for reason in INVALID_REASONS for error in exception.errors)
if isinstance(exception, (ResourceExhausted, TooManyRequests)):
return any(key in error.details() for key in INVALID_KEYS for error in exception.errors)
return False |
def is_operation_in_progress_exception(exception: Exception) -> bool:
"""
Handle operation in-progress exceptions.
Some calls return 429 (too many requests!) or 409 errors (Conflict) in case of operation in progress.
* Google Cloud SQL
"""
if isinstance(exception, HttpError):
return exception.resp.status == 429 or exception.resp.status == 409
return False |
def is_refresh_credentials_exception(exception: Exception) -> bool:
"""
Handle refresh credentials exceptions.
Some calls return 502 (server error) in case a new token cannot be obtained.
* Google BigQuery
"""
if isinstance(exception, RefreshError):
return "Unable to acquire impersonated credentials" in str(exception)
return False |
def get_field(extras: dict, field_name: str):
"""Get field from extra, first checking short name, then for backcompat we check for prefixed name."""
if field_name.startswith("extra__"):
raise ValueError(
f"Got prefixed name {field_name}; please remove the 'extra__google_cloud_platform__' prefix "
"when using this method."
)
if field_name in extras:
return extras[field_name] or None
prefixed_name = f"extra__google_cloud_platform__{field_name}"
return extras.get(prefixed_name) or None |
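# A usage sketch for get_field() above: the short name is preferred, and the old
# ``extra__google_cloud_platform__`` key is still honoured for backcompat. The extras
# values are hypothetical.
extras = {
    "key_path": "/files/service-account.json",              # hypothetical path
    "extra__google_cloud_platform__num_retries": 5,
}
assert get_field(extras, "key_path") == "/files/service-account.json"
assert get_field(extras, "num_retries") == 5
assert get_field(extras, "missing_field") is None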
def _load_credentials_from_file(
filename: str, target_audience: str | None
) -> google_auth_credentials.Credentials | None:
"""
Load credentials from a file.
The credentials file must be a service account key or a stored authorized user credential.
:param filename: The full path to the credentials file.
:return: Loaded credentials
:raise google.auth.exceptions.DefaultCredentialsError: if the file is in the wrong format or is missing.
"""
if not os.path.exists(filename):
raise exceptions.DefaultCredentialsError(f"File {filename} was not found.")
with open(filename) as file_obj:
try:
info = json.load(file_obj)
except json.JSONDecodeError:
raise exceptions.DefaultCredentialsError(f"File {filename} is not a valid json file.")
# The type key should indicate that the file is either a service account
# credentials file or an authorized user credentials file.
credential_type = info.get("type")
if credential_type == _AUTHORIZED_USER_TYPE:
current_credentials = oauth2_credentials.Credentials.from_authorized_user_info(
info, scopes=["openid", "email"]
)
current_credentials = IDTokenCredentialsAdapter(credentials=current_credentials)
return current_credentials
elif credential_type == _SERVICE_ACCOUNT_TYPE:
try:
return service_account.IDTokenCredentials.from_service_account_info(
info, target_audience=target_audience
)
except ValueError:
raise exceptions.DefaultCredentialsError(
f"Failed to load service account credentials from {filename}"
)
raise exceptions.DefaultCredentialsError(
f"The file {filename} does not have a valid type. Type is {credential_type}, "
f"expected one of {_VALID_TYPES}."
) |
def _get_explicit_environ_credentials(
target_audience: str | None,
) -> google_auth_credentials.Credentials | None:
"""Get credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable."""
explicit_file = os.environ.get(environment_vars.CREDENTIALS)
if explicit_file is None:
return None
current_credentials = _load_credentials_from_file(
os.environ[environment_vars.CREDENTIALS], target_audience=target_audience
)
return current_credentials |
def _get_gcloud_sdk_credentials(
target_audience: str | None,
) -> google_auth_credentials.Credentials | None:
"""Get the credentials and project ID from the Cloud SDK."""
from google.auth import _cloud_sdk # type: ignore[attr-defined]
# Check if application default credentials exist.
credentials_filename = _cloud_sdk.get_application_default_credentials_path()
if not os.path.isfile(credentials_filename):
return None
current_credentials = _load_credentials_from_file(credentials_filename, target_audience)
return current_credentials |
def _get_gce_credentials(
target_audience: str | None, request: google.auth.transport.Request | None = None
) -> google_auth_credentials.Credentials | None:
"""Get credentials and project ID from the GCE Metadata Service."""
# Ping requires a transport, but we want application default credentials
# to require no arguments. So, we'll use the _http_client transport which
# uses http.client. This is only acceptable because the metadata server
# doesn't do SSL and never requires proxies.
# While this library is normally bundled with compute_engine, there are
# some cases where it's not available, so we tolerate ImportError.
try:
from google.auth import compute_engine
from google.auth.compute_engine import _metadata
except ImportError:
return None
from google.auth.transport import _http_client
if request is None:
request = _http_client.Request()
if _metadata.ping(request=request):
return compute_engine.IDTokenCredentials(
request, target_audience, use_metadata_identity_endpoint=True
)
return None |
def get_default_id_token_credentials(
target_audience: str | None, request: google.auth.transport.Request = None
) -> google_auth_credentials.Credentials:
"""Get the default ID Token credentials for the current environment.
`Application Default Credentials`_ provides an easy way to obtain credentials to call Google APIs for
server-to-server or local applications.
.. _Application Default Credentials: https://developers.google.com\
/identity/protocols/application-default-credentials
:param target_audience: The intended audience for these credentials.
:param request: An object used to make HTTP requests. This is used to detect whether the application
is running on Compute Engine. If not specified, then it will use the standard library http client
to make requests.
:return: the current environment's credentials.
:raises ~google.auth.exceptions.DefaultCredentialsError:
If no credentials were found, or if the credentials found were invalid.
"""
checkers = (
lambda: _get_explicit_environ_credentials(target_audience),
lambda: _get_gcloud_sdk_credentials(target_audience),
lambda: _get_gce_credentials(target_audience, request),
)
for checker in checkers:
current_credentials = checker()
if current_credentials is not None:
return current_credentials
raise exceptions.DefaultCredentialsError(
f"""Could not automatically determine credentials. Please set {environment_vars.CREDENTIALS} or
explicitly create credentials and re-run the application. For more information, please see
https://cloud.google.com/docs/authentication/getting-started
""".strip()
) |
def _url_from_endpoint(base_url: str | None, endpoint: str | None) -> str:
"""Combine base url with endpoint."""
if base_url and not base_url.endswith("/") and endpoint and not endpoint.startswith("/"):
return f"{base_url}/{endpoint}"
return (base_url or "") + (endpoint or "") |
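# Tiny usage sketch for _url_from_endpoint() above, with illustrative URLs: a separating
# slash is added only when neither side already provides one.
assert _url_from_endpoint("http://livy:8998", "batches") == "http://livy:8998/batches"
assert _url_from_endpoint("http://livy:8998/", "batches") == "http://livy:8998/batches"
assert _url_from_endpoint("http://livy:8998", "/batches") == "http://livy:8998/batches"
assert _url_from_endpoint(None, "/batches") == "/batches"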
def suppress_and_warn(*exceptions: type[BaseException]):
"""Context manager that suppresses the given exceptions and logs a warning message."""
try:
yield
except exceptions as e:
warnings.warn(
f"Exception suppressed: {e}\n{traceback.format_exc()}",
category=UserWarning,
stacklevel=3,
) |
def jenkins_request_with_headers(jenkins_server: Jenkins, req: Request) -> JenkinsRequest | None:
"""Create a Jenkins request from a raw request.
We need to get the headers in addition to the body answer to get the
location from them. This function uses ``jenkins_request`` from
python-jenkins with just the return call changed.
:param jenkins_server: The server to query
:param req: The request to execute
:return: Dict containing the response body (key body)
and the headers coming along (headers)
"""
try:
response = jenkins_server.jenkins_request(req)
response_body = response.content
response_headers = response.headers
if response_body is None:
raise jenkins.EmptyResponseException(
f"Error communicating with server[{jenkins_server.server}]: empty response"
)
return {"body": response_body.decode("utf-8"), "headers": response_headers}
except HTTPError as e:
# Jenkins's funky authentication means it's nigh impossible to distinguish errors.
if e.code in [401, 403, 500]:
raise JenkinsException(f"Error in request. Possibly authentication failed [{e.code}]: {e.reason}")
elif e.code == 404:
raise jenkins.NotFoundException("Requested item could not be found")
else:
raise
except socket.timeout as e:
raise jenkins.TimeoutException(f"Error in request: {e}")
except URLError as e:
raise JenkinsException(f"Error in request: {e.reason}")
return None |
def get_field(*, conn_id: str, conn_type: str, extras: dict, field_name: str):
"""Get field from extra, first checking short name, then for backcompat we check for prefixed name."""
backcompat_prefix = f"extra__{conn_type}__"
backcompat_key = f"{backcompat_prefix}{field_name}"
ret = None
if field_name.startswith("extra__"):
raise ValueError(
f"Got prefixed name {field_name}; please remove the '{backcompat_prefix}' prefix "
"when using this method."
)
if field_name in extras:
if backcompat_key in extras:
warnings.warn(
f"Conflicting params `{field_name}` and `{backcompat_key}` found in extras for conn "
f"{conn_id}. Using value for `{field_name}`. Please ensure this is the correct "
f"value and remove the backcompat key `{backcompat_key}`.",
UserWarning,
stacklevel=2,
)
ret = extras[field_name]
elif backcompat_key in extras:
ret = extras.get(backcompat_key)
if ret == "":
return None
return ret |
def _get_default_azure_credential(
*,
managed_identity_client_id: str | None = None,
workload_identity_tenant_id: str | None = None,
use_async: bool = False,
) -> DefaultAzureCredential | AsyncDefaultAzureCredential:
"""Get DefaultAzureCredential based on provided arguments.
If managed_identity_client_id and workload_identity_tenant_id are provided, this function returns
DefaultAzureCredential with managed identity.
"""
credential_cls: type[AsyncDefaultAzureCredential] | type[DefaultAzureCredential] = (
AsyncDefaultAzureCredential if use_async else DefaultAzureCredential
)
if managed_identity_client_id and workload_identity_tenant_id:
return credential_cls(
managed_identity_client_id=managed_identity_client_id,
workload_identity_tenant_id=workload_identity_tenant_id,
additionally_allowed_tenants=[workload_identity_tenant_id],
)
else:
return credential_cls() |
def get_database_link(database_id: str) -> str:
"""Get Azure CosmosDB database link."""
return "dbs/" + database_id |
def get_collection_link(database_id: str, collection_id: str) -> str:
"""Get Azure CosmosDB collection link."""
return get_database_link(database_id) + "/colls/" + collection_id |
def get_document_link(database_id: str, collection_id: str, document_id: str) -> str:
"""Get Azure CosmosDB document link."""
return get_collection_link(database_id, collection_id) + "/docs/" + document_id |
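# Usage sketch for the three CosmosDB link helpers above (IDs are hypothetical).
assert get_document_link("airflow-db", "events", "evt-1") == "dbs/airflow-db/colls/events/docs/evt-1"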
def provide_targeted_factory(func: Callable) -> Callable:
"""
Provide the targeted factory to the decorated function in case it isn't specified.
If ``resource_group_name`` or ``factory_name`` is not provided it defaults to the value specified in
the connection extras.
"""
signature = inspect.signature(func)
@wraps(func)
def wrapper(*args, **kwargs) -> Callable:
bound_args = signature.bind(*args, **kwargs)
def bind_argument(arg, default_key):
# Check if arg was not included in the function signature or, if it is, the value is not provided.
if arg not in bound_args.arguments or bound_args.arguments[arg] is None:
self = args[0]
conn = self.get_connection(self.conn_id)
extras = conn.extra_dejson
default_value = extras.get(default_key) or extras.get(
f"extra__azure_data_factory__{default_key}"
)
if not default_value:
raise AirflowException("Could not determine the targeted data factory.")
bound_args.arguments[arg] = default_value
bind_argument("resource_group_name", "resource_group_name")
bind_argument("factory_name", "factory_name")
return func(*bound_args.args, **bound_args.kwargs)
return wrapper |
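# A sketch of how provide_targeted_factory above is typically applied inside a hook.
# The hook class below is illustrative only (not the real AzureDataFactoryHook), and its
# get_connection() returns a stand-in object instead of reading an Airflow connection.
from types import SimpleNamespace

class MyDataFactoryHook:
    conn_id = "azure_data_factory_default"  # hypothetical connection id

    def get_connection(self, conn_id):
        # Stand-in for BaseHook.get_connection(); only extra_dejson is needed here.
        return SimpleNamespace(extra_dejson={"resource_group_name": "my-rg", "factory_name": "my-adf"})

    @provide_targeted_factory
    def get_pipeline(self, pipeline_name, resource_group_name=None, factory_name=None):
        # The decorator fills in the missing arguments from the connection extras.
        return pipeline_name, resource_group_name, factory_name

print(MyDataFactoryHook().get_pipeline("copy_pipeline"))
# -> ('copy_pipeline', 'my-rg', 'my-adf')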
def get_field(extras: dict, field_name: str, strict: bool = False):
"""Get field from extra, first checking short name, then for backcompat we check for prefixed name."""
backcompat_prefix = "extra__azure_data_factory__"
if field_name.startswith("extra__"):
raise ValueError(
f"Got prefixed name {field_name}; please remove the '{backcompat_prefix}' prefix "
"when using this method."
)
if field_name in extras:
return extras[field_name] or None
prefixed_name = f"{backcompat_prefix}{field_name}"
if prefixed_name in extras:
return extras[prefixed_name] or None
if strict:
raise KeyError(f"Field {field_name} not found in extras") |
def provide_targeted_factory_async(func: T) -> T:
"""
Provide the targeted factory to the async decorated function in case it isn't specified.
If ``resource_group_name`` or ``factory_name`` is not provided it defaults to the value specified in
the connection extras.
"""
signature = inspect.signature(func)
@wraps(func)
async def wrapper(*args: Any, **kwargs: Any) -> Any:
bound_args = signature.bind(*args, **kwargs)
async def bind_argument(arg: Any, default_key: str) -> None:
# Check if arg was not included in the function signature or, if it is, the value is not provided.
if arg not in bound_args.arguments or bound_args.arguments[arg] is None:
self = args[0]
conn = await sync_to_async(self.get_connection)(self.conn_id)
extras = conn.extra_dejson
default_value = extras.get(default_key) or extras.get(
f"extra__azure_data_factory__{default_key}"
)
if not default_value and extras.get(f"extra__azure_data_factory__{default_key}"):
warnings.warn(
f"`extra__azure_data_factory__{default_key}` is deprecated in azure connection extra,"
f" please use `{default_key}` instead",
AirflowProviderDeprecationWarning,
stacklevel=2,
)
default_value = extras.get(f"extra__azure_data_factory__{default_key}")
if not default_value:
raise AirflowException("Could not determine the targeted data factory.")
bound_args.arguments[arg] = default_value
await bind_argument("resource_group_name", "resource_group_name")
await bind_argument("factory_name", "factory_name")
return await func(*bound_args.args, **bound_args.kwargs)
return cast(T, wrapper) |
def config_path(check_legacy_env_var: bool = True) -> str:
"""[openlineage] config_path."""
option = conf.get(_CONFIG_SECTION, "config_path", fallback="")
if check_legacy_env_var and not option:
option = os.getenv("OPENLINEAGE_CONFIG", "")
return option |
def is_source_enabled() -> bool:
"""[openlineage] disable_source_code."""
option = conf.get(_CONFIG_SECTION, "disable_source_code", fallback="")
if not option:
option = os.getenv("OPENLINEAGE_AIRFLOW_DISABLE_SOURCE_CODE", "")
return option.lower() not in ("true", "1", "t") |
def disabled_operators() -> set[str]:
"""[openlineage] disabled_for_operators."""
option = conf.get(_CONFIG_SECTION, "disabled_for_operators", fallback="")
return set(operator.strip() for operator in option.split(";") if operator.strip()) |
def custom_extractors() -> set[str]:
"""[openlineage] extractors."""
option = conf.get(_CONFIG_SECTION, "extractors", fallback="")
if not option:
option = os.getenv("OPENLINEAGE_EXTRACTORS", "")
return set(extractor.strip() for extractor in option.split(";") if extractor.strip()) |
def namespace() -> str:
"""[openlineage] namespace."""
option = conf.get(_CONFIG_SECTION, "namespace", fallback="")
if not option:
option = os.getenv("OPENLINEAGE_NAMESPACE", "default")
return option |
def transport() -> dict[str, Any]:
"""[openlineage] transport."""
option = conf.getjson(_CONFIG_SECTION, "transport", fallback={})
if not isinstance(option, dict):
raise ValueError(f"OpenLineage transport `{option}` is not a dict")
return option |
def is_disabled() -> bool:
"""[openlineage] disabled + some extra checks."""
def _is_true(val):
return str(val).lower().strip() in ("true", "1", "t")
option = conf.get(_CONFIG_SECTION, "disabled", fallback="")
if _is_true(option):
return True
option = os.getenv("OPENLINEAGE_DISABLED", "")
if _is_true(option):
return True
# Check if both 'transport' and 'config_path' are not present and also
# if legacy 'OPENLINEAGE_URL' environment variables is not set
return transport() == {} and config_path(True) == "" and os.getenv("OPENLINEAGE_URL", "") == "" |
def get_openlineage_listener() -> OpenLineageListener:
"""Get singleton listener manager."""
global _openlineage_listener
if not _openlineage_listener:
_openlineage_listener = OpenLineageListener()
return _openlineage_listener |
def lineage_job_namespace():
"""
Macro function which returns Airflow OpenLineage namespace.
.. seealso::
For more information take a look at the guide:
:ref:`howto/macros:openlineage`
"""
return conf.namespace() |
def lineage_job_name(task_instance: TaskInstance):
"""
Macro function which returns Airflow task name in OpenLineage format (`<dag_id>.<task_id>`).
.. seealso::
For more information take a look at the guide:
:ref:`howto/macros:openlineage`
"""
return get_job_name(task_instance) |
def lineage_run_id(task_instance: TaskInstance):
"""
Macro function which returns the generated run id (UUID) for a given task.
This can be used to forward the run id from a task to a child run so the job hierarchy is preserved.
.. seealso::
For more information take a look at the guide:
:ref:`howto/macros:openlineage`
"""
return OpenLineageAdapter.build_task_instance_run_id(
dag_id=task_instance.dag_id,
task_id=task_instance.task_id,
execution_date=task_instance.execution_date,
try_number=task_instance.try_number,
) |
def lineage_parent_id(task_instance: TaskInstance):
"""
Macro function which returns a unique identifier of given task that can be used to create ParentRunFacet.
This identifier is composed of the namespace, job name, and generated run id for given task, structured
as '{namespace}/{job_name}/{run_id}'. This can be used to forward task information from a task to a child
run so the job hierarchy is preserved. Child run can easily create ParentRunFacet from these information.
.. seealso::
For more information take a look at the guide:
:ref:`howto/macros:openlineage`
"""
return "/".join(
(
lineage_job_namespace(),
lineage_job_name(task_instance),
lineage_run_id(task_instance),
)
) |
def enable_lineage(obj: T) -> T:
"""Set selective enable OpenLineage parameter to True.
The method also propagates param to tasks if the object is DAG.
"""
if isinstance(obj, XComArg):
enable_lineage(obj.operator)
return obj
# propagate param to tasks
if isinstance(obj, DAG):
for task in obj.task_dict.values():
enable_lineage(task)
obj.params[ENABLE_OL_PARAM_NAME] = ENABLE_OL_PARAM
return obj |
def disable_lineage(obj: T) -> T:
"""Set selective enable OpenLineage parameter to False.
The method also propagates param to tasks if the object is DAG.
"""
if isinstance(obj, XComArg):
disable_lineage(obj.operator)
return obj
# propagate param to tasks
if isinstance(obj, DAG):
for task in obj.task_dict.values():
disable_lineage(task)
obj.params[ENABLE_OL_PARAM_NAME] = DISABLE_OL_PARAM
return obj |
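# A short usage sketch for enable_lineage()/disable_lineage() above, assuming the
# provider's selective-enable mode is turned on; the DAG and task names are hypothetical.
import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(dag_id="selective_lineage_demo", start_date=datetime.datetime(2024, 1, 1), schedule=None) as demo_dag:
    extract = EmptyOperator(task_id="extract")
    load = EmptyOperator(task_id="load")

enable_lineage(demo_dag)   # marks the DAG and both tasks to emit OpenLineage events
disable_lineage(load)      # then opts a single task back out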
def is_task_lineage_enabled(task: Operator) -> bool:
"""Check if selective enable OpenLineage parameter is set to True on task level."""
if task.params.get(ENABLE_OL_PARAM_NAME) is False:
log.debug(
"OpenLineage event emission suppressed. Task for this functionality is selectively disabled."
)
return task.params.get(ENABLE_OL_PARAM_NAME) is True |
def is_dag_lineage_enabled(dag: DAG) -> bool:
"""Check if DAG is selectively enabled to emit OpenLineage events.
The method also checks if selective enable parameter is set to True
or if any of the tasks in DAG is selectively enabled.
"""
if dag.params.get(ENABLE_OL_PARAM_NAME) is False:
log.debug(
"OpenLineage event emission suppressed. DAG for this functionality is selectively disabled."
)
return dag.params.get(ENABLE_OL_PARAM_NAME) is True or any(
is_task_lineage_enabled(task) for task in dag.tasks
) |
def get_table_schemas(
hook: BaseHook,
namespace: str,
schema: str | None,
database: str | None,
in_query: str | None,
out_query: str | None,
) -> tuple[list[Dataset], list[Dataset]]:
"""Query database for table schemas.
Uses provided hook. Responsibility to provide queries for this function is on particular extractors.
If query for input or output table isn't provided, the query is skipped.
"""
# Do not query if neither query was provided
if not in_query and not out_query:
return [], []
with closing(hook.get_conn()) as conn, closing(conn.cursor()) as cursor:
if in_query:
cursor.execute(in_query)
in_datasets = [x.to_dataset(namespace, database, schema) for x in parse_query_result(cursor)]
else:
in_datasets = []
if out_query:
cursor.execute(out_query)
out_datasets = [x.to_dataset(namespace, database, schema) for x in parse_query_result(cursor)]
else:
out_datasets = []
return in_datasets, out_datasets |
def parse_query_result(cursor) -> list[TableSchema]:
"""Fetch results from DB-API 2.0 cursor and creates list of table schemas.
For each row it creates :class:`TableSchema`.
"""
schemas: dict = {}
columns: dict = defaultdict(list)
for row in cursor.fetchall():
table_schema_name: str = row[ColumnIndex.SCHEMA]
table_name: str = row[ColumnIndex.TABLE_NAME]
table_column: SchemaField = SchemaField(
name=row[ColumnIndex.COLUMN_NAME],
type=row[ColumnIndex.UDT_NAME],
description=None,
)
ordinal_position = row[ColumnIndex.ORDINAL_POSITION]
try:
table_database = row[ColumnIndex.DATABASE]
except IndexError:
table_database = None
# Attempt to get table schema
table_key = ".".join(filter(None, [table_database, table_schema_name, table_name]))
schemas[table_key] = TableSchema(
table=table_name, schema=table_schema_name, database=table_database, fields=[]
)
columns[table_key].append((ordinal_position, table_column))
for schema in schemas.values():
table_key = ".".join(filter(None, [schema.database, schema.schema, schema.table]))
schema.fields = [x for _, x in sorted(columns[table_key])]
return list(schemas.values()) |
def create_information_schema_query(
columns: list[str],
information_schema_table_name: str,
tables_hierarchy: TablesHierarchy,
uppercase_names: bool = False,
use_flat_cross_db_query: bool = False,
sqlalchemy_engine: Engine | None = None,
) -> str:
"""Create query for getting table schemas from information schema."""
metadata = MetaData(sqlalchemy_engine)
select_statements = []
# Don't iterate over tables hierarchy, just pass it to query single information schema table
if use_flat_cross_db_query:
information_schema_table = Table(
information_schema_table_name,
metadata,
*[Column(column) for column in columns],
quote=False,
)
filter_clauses = create_filter_clauses(
tables_hierarchy,
information_schema_table,
uppercase_names=uppercase_names,
)
select_statements.append(information_schema_table.select().filter(filter_clauses))
else:
for db, schema_mapping in tables_hierarchy.items():
# The information schema table name is expected to be "<information_schema schema>.<view/table name>",
# usually "information_schema.columns". To build a correct table identifier for the various tables,
# we need to pass the first part of the dot-separated identifier as the `schema` argument to `sqlalchemy.Table`.
if db:
# Use database as first part of table identifier.
schema = db
table_name = information_schema_table_name
else:
# When no database passed, use schema as first part of table identifier.
schema, table_name = information_schema_table_name.split(".")
information_schema_table = Table(
table_name,
metadata,
*[Column(column) for column in columns],
schema=schema,
quote=False,
)
filter_clauses = create_filter_clauses(
{None: schema_mapping},
information_schema_table,
uppercase_names=uppercase_names,
)
select_statements.append(information_schema_table.select().filter(filter_clauses))
return str(
union_all(*select_statements).compile(sqlalchemy_engine, compile_kwargs={"literal_binds": True})
) |
def create_filter_clauses(
mapping: dict,
information_schema_table: Table,
uppercase_names: bool = False,
) -> ClauseElement:
"""
Create comprehensive filter clauses for all tables in one database.
:param mapping: a nested dictionary of database, schema names and list of tables in each
:param information_schema_table: `sqlalchemy.Table` instance used to construct clauses
For most SQL dbs it contains `table_name` and `table_schema` columns,
therefore it is expected the table has them defined.
:param uppercase_names: if True use schema and table names uppercase
"""
table_schema_column_name = information_schema_table.columns[ColumnIndex.SCHEMA].name
table_name_column_name = information_schema_table.columns[ColumnIndex.TABLE_NAME].name
try:
table_database_column_name = information_schema_table.columns[ColumnIndex.DATABASE].name
except IndexError:
table_database_column_name = ""
filter_clauses = []
for db, schema_mapping in mapping.items():
schema_level_clauses = []
for schema, tables in schema_mapping.items():
filter_clause = information_schema_table.c[table_name_column_name].in_(
name.upper() if uppercase_names else name for name in tables
)
if schema:
schema = schema.upper() if uppercase_names else schema
filter_clause = and_(
information_schema_table.c[table_schema_column_name] == schema, filter_clause
)
schema_level_clauses.append(filter_clause)
if db and table_database_column_name:
db = db.upper() if uppercase_names else db
filter_clause = and_(
information_schema_table.c[table_database_column_name] == db, or_(*schema_level_clauses)
)
filter_clauses.append(filter_clause)
else:
filter_clauses.extend(schema_level_clauses)
return or_(*filter_clauses) |
def is_selective_lineage_enabled(obj: DAG | BaseOperator | MappedOperator) -> bool:
"""If selective enable is active check if DAG or Task is enabled to emit events."""
if not conf.selective_enable():
return True
if isinstance(obj, DAG):
return is_dag_lineage_enabled(obj)
elif isinstance(obj, (BaseOperator, MappedOperator)):
return is_task_lineage_enabled(obj)
else:
raise TypeError("is_selective_lineage_enabled can only be used on DAG or Operator objects") |
def register_remote_kernel_engine():
"""Register ``RemoteKernelEngine`` papermill engine."""
from papermill.engines import papermill_engines
papermill_engines.register(REMOTE_KERNEL_ENGINE, RemoteKernelEngine) |
def generate_presto_client_info() -> str:
"""Return json string with dag_id, task_id, execution_date and try_number."""
context_var = {
format_map["default"].replace(DEFAULT_FORMAT_PREFIX, ""): os.environ.get(
format_map["env_var_format"], ""
)
for format_map in AIRFLOW_VAR_NAME_FORMAT_MAPPING.values()
}
task_info = {
"dag_id": context_var["dag_id"],
"task_id": context_var["task_id"],
"execution_date": context_var["execution_date"],
"try_number": context_var["try_number"],
"dag_run_id": context_var["dag_run_id"],
"dag_owner": context_var["dag_owner"],
}
return json.dumps(task_info, sort_keys=True) |
def send_email(
to: AddressesType,
subject: str,
html_content: str,
files: AddressesType | None = None,
cc: AddressesType | None = None,
bcc: AddressesType | None = None,
sandbox_mode: bool = False,
conn_id: str = "sendgrid_default",
**kwargs,
) -> None:
"""
Send an email with html content using `Sendgrid <https://sendgrid.com/>`__.
.. note::
For more information, see :ref:`email-configuration-sendgrid`
"""
if files is None:
files = []
mail = Mail()
from_email = kwargs.get("from_email") or os.environ.get("SENDGRID_MAIL_FROM")
from_name = kwargs.get("from_name") or os.environ.get("SENDGRID_MAIL_SENDER")
mail.from_email = Email(from_email, from_name)
mail.subject = subject
mail.mail_settings = MailSettings()
if sandbox_mode:
mail.mail_settings.sandbox_mode = SandBoxMode(enable=True)
# Add the recipient list of to emails.
personalization = Personalization()
to = get_email_address_list(to)
for to_address in to:
personalization.add_to(Email(to_address))
if cc:
cc = get_email_address_list(cc)
for cc_address in cc:
personalization.add_cc(Email(cc_address))
if bcc:
bcc = get_email_address_list(bcc)
for bcc_address in bcc:
personalization.add_bcc(Email(bcc_address))
# Add custom_args to personalization if present
pers_custom_args = kwargs.get("personalization_custom_args")
if isinstance(pers_custom_args, dict):
for key, val in pers_custom_args.items():
personalization.add_custom_arg(CustomArg(key, val))
mail.add_personalization(personalization)
mail.add_content(Content("text/html", html_content))
categories = kwargs.get("categories", [])
for cat in categories:
mail.add_category(Category(cat))
# Add email attachment.
for fname in files:
basename = os.path.basename(fname)
with open(fname, "rb") as file:
content = base64.b64encode(file.read()).decode("utf-8")
attachment = Attachment(
file_content=content,
file_type=mimetypes.guess_type(basename)[0],
file_name=basename,
disposition="attachment",
content_id=f"<{basename}>",
)
mail.add_attachment(attachment)
_post_sendgrid_mail(mail.get(), conn_id) |
def sftp_sensor_task(python_callable: Callable | None = None, **kwargs) -> TaskDecorator:
"""
Wrap a function into an Airflow operator.
Accepts kwargs for operator kwarg. Can be reused in a single DAG.
:param python_callable: Function to decorate
"""
return task_decorator_factory(
python_callable=python_callable,
multiple_outputs=False,
decorated_operator_class=_DecoratedSFTPSensor,
**kwargs,
) |
def check_webhook_response(func: Callable) -> Callable:
"""Check WebhookResponse and raise an error if status code != 200."""
@wraps(func)
def wrapper(*args, **kwargs) -> Callable:
resp = func(*args, **kwargs)
if resp.status_code != 200:
raise AirflowException(
f"Response body: {resp.body!r}, Status Code: {resp.status_code}. "
"See: https://api.slack.com/messaging/webhooks#handling_errors"
)
return resp
return wrapper |
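As a usage sketch (assuming `check_webhook_response` above is in scope), the decorator can wrap any callable that returns an object exposing `status_code` and `body`; the stand-in response class and send function below are hypothetical:
from dataclasses import dataclass

@dataclass
class _StubWebhookResponse:
    # hypothetical stand-in for the webhook response object used by the real hook
    status_code: int
    body: str

@check_webhook_response
def send_stub_webhook(ok: bool) -> _StubWebhookResponse:
    # simulates a webhook call; a non-200 status makes the decorator raise AirflowException
    return _StubWebhookResponse(status_code=200 if ok else 400, body="ok" if ok else "invalid_payload")

send_stub_webhook(ok=True)      # returns the response untouched
# send_stub_webhook(ok=False)   # would raise AirflowException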
Parse filetype and compression from given filename.
:param filename: filename to parse.
:param supported_file_formats: list of supported file extensions.
:param fallback: fallback to given file format.
:returns: filetype and compression (if specified) | def parse_filename(
filename: str, supported_file_formats: Sequence[str], fallback: str | None = None
) -> tuple[str, str | None]:
"""
Parse filetype and compression from given filename.
:param filename: filename to parse.
:param supported_file_formats: list of supported file extensions.
:param fallback: fallback to given file format.
:returns: filetype and compression (if specified)
"""
if not filename:
raise ValueError("Expected 'filename' parameter is missing.")
if fallback and fallback not in supported_file_formats:
raise ValueError(f"Invalid fallback value {fallback!r}, expected one of {supported_file_formats}.")
parts = filename.rsplit(".", 2)
try:
if len(parts) == 1:
raise ValueError(f"No file extension specified in filename {filename!r}.")
if parts[-1] in supported_file_formats:
return parts[-1], None
elif len(parts) == 2:
raise ValueError(
f"Unsupported file format {parts[-1]!r}, expected one of {supported_file_formats}."
)
else:
if parts[-2] not in supported_file_formats:
raise ValueError(
f"Unsupported file format '{parts[-2]}.{parts[-1]}', "
f"expected one of {supported_file_formats} with compression extension."
)
return parts[-2], parts[-1]
except ValueError as ex:
if fallback:
return fallback, None
raise ex from None |
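Illustrative calls, assuming `parse_filename` above is in scope; the supported-format list is arbitrary:
formats = ["csv", "json", "parquet"]
parse_filename("sales.csv", formats)                  # -> ("csv", None)
parse_filename("sales.csv.gz", formats)               # -> ("csv", "gz")
parse_filename("sales.dat", formats, fallback="csv")  # -> ("csv", None), fallback applied
# parse_filename("sales.dat", formats)                # would raise ValueError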
Replace all single quotes in the parameter with two single quotes and enclose the param in single quotes.
.. seealso::
https://docs.snowflake.com/en/sql-reference/data-types-text.html#single-quoted-string-constants
Examples:
.. code-block:: python
enclose_param("without quotes") # Returns: 'without quotes'
enclose_param("'with quotes'") # Returns: '''with quotes'''
enclose_param("Today's sales projections") # Returns: 'Today''s sales projections'
enclose_param("sample/john's.csv") # Returns: 'sample/john''s.csv'
enclose_param(".*'awesome'.*[.]csv") # Returns: '.*''awesome''.*[.]csv'
:param param: parameter which requires single-quote enclosure. | def enclose_param(param: str) -> str:
"""
    Replace all single quotes in the parameter with two single quotes and enclose the param in single quotes.
.. seealso::
https://docs.snowflake.com/en/sql-reference/data-types-text.html#single-quoted-string-constants
Examples:
.. code-block:: python
enclose_param("without quotes") # Returns: 'without quotes'
enclose_param("'with quotes'") # Returns: '''with quotes'''
enclose_param("Today's sales projections") # Returns: 'Today''s sales projections'
enclose_param("sample/john's.csv") # Returns: 'sample/john''s.csv'
enclose_param(".*'awesome'.*[.]csv") # Returns: '.*''awesome''.*[.]csv'
    :param param: parameter which requires single-quote enclosure.
"""
return f"""'{param.replace("'", "''")}'""" |
Try to parse a string into boolean.
The string is returned as-is if it does not look like a boolean value. | def parse_boolean(val: str) -> str | bool:
"""Try to parse a string into boolean.
The string is returned as-is if it does not look like a boolean value.
"""
val = val.lower()
if val in ("y", "yes", "t", "true", "on", "1"):
return True
if val in ("n", "no", "f", "false", "off", "0"):
return False
return val |
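A few illustrative inputs, assuming `parse_boolean` above is in scope:
parse_boolean("Yes")     # -> True
parse_boolean("off")     # -> False
parse_boolean("maybe")   # -> "maybe" (lowercased and returned as-is)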
Return json string with dag_id, task_id, execution_date and try_number. | def generate_trino_client_info() -> str:
"""Return json string with dag_id, task_id, execution_date and try_number."""
context_var = {
format_map["default"].replace(DEFAULT_FORMAT_PREFIX, ""): os.environ.get(
format_map["env_var_format"], ""
)
for format_map in AIRFLOW_VAR_NAME_FORMAT_MAPPING.values()
}
task_info = {
"dag_id": context_var["dag_id"],
"task_id": context_var["task_id"],
"execution_date": context_var["execution_date"],
"try_number": context_var["try_number"],
"dag_run_id": context_var["dag_run_id"],
"dag_owner": context_var["dag_owner"],
}
return json.dumps(task_info, sort_keys=True) |
Replace the default DbApiHook fetch_all_handler in order to fix this issue https://github.com/apache/airflow/issues/32993.
The returned value will not change after the initial call of fetch_all_handler; all the remaining code is here
only to make the Vertica client throw an error.
With Vertica, if you run the following sql (with split_statements set to false):
INSERT INTO MyTable (Key, Label) values (1, 'test 1');
INSERT INTO MyTable (Key, Label) values (1, 'test 2');
INSERT INTO MyTable (Key, Label) values (3, 'test 3');
each insert gets its own result set, and if you do not fetch the data of those result sets
you will not detect the error on the second insert. | def vertica_fetch_all_handler(cursor) -> list[tuple] | None:
"""
Replace the default DbApiHook fetch_all_handler in order to fix this issue https://github.com/apache/airflow/issues/32993.
    The returned value will not change after the initial call of fetch_all_handler; all the remaining code is here
    only to make the Vertica client throw an error.
    With Vertica, if you run the following sql (with split_statements set to false):
    INSERT INTO MyTable (Key, Label) values (1, 'test 1');
    INSERT INTO MyTable (Key, Label) values (1, 'test 2');
    INSERT INTO MyTable (Key, Label) values (3, 'test 3');
    each insert gets its own result set, and if you do not fetch the data of those result sets
    you will not detect the error on the second insert.
"""
result = fetch_all_handler(cursor)
# loop on all statement result sets to get errors
if cursor.description is not None:
while cursor.nextset():
if cursor.description is not None:
row = cursor.fetchone()
while row:
row = cursor.fetchone()
return result |
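A sketch of how the handler might be wired into a Vertica hook call; it assumes the standard `DbApiHook.run` keyword arguments and that `vertica_fetch_all_handler` above is in scope, and the connection id and SQL are illustrative:
from airflow.providers.vertica.hooks.vertica import VerticaHook

hook = VerticaHook(vertica_conn_id="vertica_default")
hook.run(
    sql="INSERT INTO MyTable (Key, Label) VALUES (1, 'test 1'); INSERT INTO MyTable (Key, Label) VALUES (1, 'test 2');",
    split_statements=False,
    handler=vertica_fetch_all_handler,  # walks every statement's result set so errors surface
)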
Return credentials JSON for Yandex Cloud SDK based on credentials.
Credentials will be used with this priority:
* OAuth Token
* Service Account JSON file
* Service Account JSON
* Metadata Service
:param oauth_token: OAuth Token
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Credentials JSON | def get_credentials(
oauth_token: str | None = None,
service_account_json: dict | str | None = None,
service_account_json_path: str | None = None,
) -> dict[str, Any]:
"""
Return credentials JSON for Yandex Cloud SDK based on credentials.
Credentials will be used with this priority:
* OAuth Token
* Service Account JSON file
* Service Account JSON
* Metadata Service
:param oauth_token: OAuth Token
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Credentials JSON
"""
if oauth_token:
return {"token": oauth_token}
service_account_key = get_service_account_key(
service_account_json=service_account_json,
service_account_json_path=service_account_json_path,
)
if service_account_key:
return {"service_account_key": service_account_key}
log.info("using metadata service as credentials")
return {} |
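Illustrative calls, assuming the helpers above are in scope; tokens and keys are placeholders:
get_credentials(oauth_token="<oauth-token>")
# -> {"token": "<oauth-token>"}
get_credentials(service_account_json='{"id": "key-id", "service_account_id": "sa-id", "private_key": "<pem>"}')
# -> {"service_account_key": {"id": "key-id", "service_account_id": "sa-id", "private_key": "<pem>"}}
get_credentials()
# -> {} (falls back to the metadata service)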
Return Yandex Cloud Service Account key loaded from JSON string or file.
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Yandex Cloud Service Account key | def get_service_account_key(
service_account_json: dict | str | None = None,
service_account_json_path: str | None = None,
) -> dict[str, str] | None:
"""
Return Yandex Cloud Service Account key loaded from JSON string or file.
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Yandex Cloud Service Account key
"""
if service_account_json_path:
with open(service_account_json_path) as infile:
service_account_json = infile.read()
if isinstance(service_account_json, dict):
return service_account_json
if service_account_json:
return json.loads(service_account_json)
return None |
Return Yandex Cloud Service Account ID loaded from JSON string or file.
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Yandex Cloud Service Account ID | def get_service_account_id(
service_account_json: dict | str | None = None,
service_account_json_path: str | None = None,
) -> str | None:
"""
Return Yandex Cloud Service Account ID loaded from JSON string or file.
:param service_account_json: Service Account JSON key or dict
:param service_account_json_path: Service Account JSON key file path
:return: Yandex Cloud Service Account ID
"""
sa_key = get_service_account_key(
service_account_json=service_account_json,
service_account_json_path=service_account_json_path,
)
if sa_key:
return sa_key.get("service_account_id")
return None |
Get field from extras, first checking short name, then for backcompat checking for prefixed name.
:param extras: Dictionary with extras keys
:param field_name: Field name to get from extras
:param default: Default value if field not found
:return: Field value or default if not found | def get_field_from_extras(extras: dict[str, Any], field_name: str, default: Any = None) -> Any:
"""
Get field from extras, first checking short name, then for backcompat checking for prefixed name.
:param extras: Dictionary with extras keys
:param field_name: Field name to get from extras
:param default: Default value if field not found
:return: Field value or default if not found
"""
backcompat_prefix = "extra__yandexcloud__"
if field_name.startswith("extra__"):
raise ValueError(
f"Got prefixed name {field_name}; please remove the '{backcompat_prefix}' prefix "
"when using this function."
)
if field_name in extras:
return extras[field_name]
prefixed_name = f"{backcompat_prefix}{field_name}"
if prefixed_name in extras:
return extras[prefixed_name]
return default |
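A quick illustration, assuming the function above is in scope; the extras dict is made up:
extras = {"folder_id": "b1g-example", "extra__yandexcloud__endpoint": "api.cloud.yandex.net"}
get_field_from_extras(extras, "folder_id")          # -> "b1g-example" (short name found first)
get_field_from_extras(extras, "endpoint")           # -> "api.cloud.yandex.net" (backcompat prefix)
get_field_from_extras(extras, "oauth", default="")  # -> "" (not present, default returned)
# get_field_from_extras(extras, "extra__yandexcloud__endpoint")  # would raise ValueError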
Construct User-Agent from Airflow core & provider package versions. | def provider_user_agent() -> str | None:
"""Construct User-Agent from Airflow core & provider package versions."""
from airflow import __version__ as airflow_version
from airflow.configuration import conf
from airflow.providers_manager import ProvidersManager
try:
manager = ProvidersManager()
provider_name = manager.hooks[conn_type].package_name # type: ignore[union-attr]
provider = manager.providers[provider_name]
return " ".join(
(
conf.get("yandex", "sdk_user_agent_prefix", fallback=""),
f"apache-airflow/{airflow_version}",
f"{provider_name}/{provider.version}",
)
).strip()
except KeyError:
warnings.warn(
f"Hook '{hook_name}' info is not initialized in airflow.ProviderManager",
UserWarning,
stacklevel=2,
)
return None |
Return :class:`airflow.models.connection.Connection` constructor parameters. | def get_connection_parameter_names() -> set[str]:
"""Return :class:`airflow.models.connection.Connection` constructor parameters."""
from airflow.models.connection import Connection
return {k for k in signature(Connection.__init__).parameters.keys() if k != "self"} |
Parse a file in the ``.env`` format.
.. code-block:: text
MY_CONN_ID=my-conn-type://my-login:my-pa%2Fssword@my-host:5432/my-schema?param1=val1¶m2=val2
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors | def _parse_env_file(file_path: str) -> tuple[dict[str, list[str]], list[FileSyntaxError]]:
"""
Parse a file in the ``.env`` format.
.. code-block:: text
MY_CONN_ID=my-conn-type://my-login:my-pa%2Fssword@my-host:5432/my-schema?param1=val1¶m2=val2
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors
"""
with open(file_path) as f:
content = f.read()
secrets: dict[str, list[str]] = defaultdict(list)
errors: list[FileSyntaxError] = []
for line_no, line in enumerate(content.splitlines(), 1):
if not line:
# Ignore empty line
continue
if COMMENT_PATTERN.match(line):
# Ignore comments
continue
key, sep, value = line.partition("=")
if not sep:
errors.append(
FileSyntaxError(
line_no=line_no,
message='Invalid line format. The line should contain at least one equal sign ("=").',
)
)
continue
        if not key:
errors.append(
FileSyntaxError(
line_no=line_no,
message="Invalid line format. Key is empty.",
)
)
secrets[key].append(value)
return secrets, errors |
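A small demonstration of the expected output, assuming `_parse_env_file` and its module-level helpers (``COMMENT_PATTERN``, ``FileSyntaxError``) are in scope; the file content is illustrative:
import tempfile

content = "\n".join(
    [
        "# comment lines are skipped",
        "MY_CONN=postgres://user:pass@host:5432/db",
        "broken line without an equal sign",
    ]
)
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as tmp:
    tmp.write(content)
secrets, errors = _parse_env_file(tmp.name)
# secrets -> {"MY_CONN": ["postgres://user:pass@host:5432/db"]}
# errors  -> [FileSyntaxError(line_no=3, message='Invalid line format. The line should contain at least one equal sign ("=").')]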
Parse a file in the YAML format.
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors | def _parse_yaml_file(file_path: str) -> tuple[dict[str, list[str]], list[FileSyntaxError]]:
"""
Parse a file in the YAML format.
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors
"""
with open(file_path) as f:
content = f.read()
if not content:
return {}, [FileSyntaxError(line_no=1, message="The file is empty.")]
try:
secrets = yaml.safe_load(content)
except yaml.MarkedYAMLError as e:
err_line_no = e.problem_mark.line if e.problem_mark else -1
return {}, [FileSyntaxError(line_no=err_line_no, message=str(e))]
if not isinstance(secrets, dict):
return {}, [FileSyntaxError(line_no=1, message="The file should contain the object.")]
return secrets, [] |
Parse a file in the JSON format.
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors | def _parse_json_file(file_path: str) -> tuple[dict[str, Any], list[FileSyntaxError]]:
"""
Parse a file in the JSON format.
:param file_path: The location of the file that will be processed.
:return: Tuple with mapping of key and list of values and list of syntax errors
"""
with open(file_path) as f:
content = f.read()
if not content:
return {}, [FileSyntaxError(line_no=1, message="The file is empty.")]
try:
secrets = json.loads(content)
except JSONDecodeError as e:
return {}, [FileSyntaxError(line_no=int(e.lineno), message=e.msg)]
if not isinstance(secrets, dict):
return {}, [FileSyntaxError(line_no=1, message="The file should contain the object.")]
return secrets, [] |
Based on the file extension format, selects a parser, and parses the file.
:param file_path: The location of the file that will be processed.
:return: Map of secret key (e.g. connection ID) and value. | def _parse_secret_file(file_path: str) -> dict[str, Any]:
"""
Based on the file extension format, selects a parser, and parses the file.
:param file_path: The location of the file that will be processed.
:return: Map of secret key (e.g. connection ID) and value.
"""
if not os.path.exists(file_path):
raise AirflowException(
f"File {file_path} was not found. Check the configuration of your Secrets backend."
)
log.debug("Parsing file: %s", file_path)
ext = file_path.rsplit(".", 2)[-1].lower()
if ext not in FILE_PARSERS:
raise AirflowException(
"Unsupported file format. The file must have one of the following extensions: "
".env .json .yaml .yml"
)
secrets, parse_errors = FILE_PARSERS[ext](file_path)
log.debug("Parsed file: len(parse_errors)=%d, len(secrets)=%d", len(parse_errors), len(secrets))
if parse_errors:
raise AirflowFileParseException(
"Failed to load the secret file.", file_path=file_path, parse_errors=parse_errors
)
return secrets |
Create a connection based on a URL or JSON object. | def _create_connection(conn_id: str, value: Any):
"""Create a connection based on a URL or JSON object."""
from airflow.models.connection import Connection
if isinstance(value, str):
return Connection(conn_id=conn_id, uri=value)
if isinstance(value, dict):
connection_parameter_names = get_connection_parameter_names() | {"extra_dejson"}
current_keys = set(value.keys())
if not current_keys.issubset(connection_parameter_names):
illegal_keys = current_keys - connection_parameter_names
illegal_keys_list = ", ".join(illegal_keys)
raise AirflowException(
f"The object have illegal keys: {illegal_keys_list}. "
f"The dictionary can only contain the following keys: {connection_parameter_names}"
)
if "extra" in value and "extra_dejson" in value:
raise AirflowException(
"The extra and extra_dejson parameters are mutually exclusive. "
"Please provide only one parameter."
)
if "extra_dejson" in value:
value["extra"] = json.dumps(value["extra_dejson"])
del value["extra_dejson"]
if "conn_id" in current_keys and conn_id != value["conn_id"]:
raise AirflowException(
f"Mismatch conn_id. "
f"The dictionary key has the value: {value['conn_id']}. "
f"The item has the value: {conn_id}."
)
value["conn_id"] = conn_id
return Connection(**value)
raise AirflowException(
f"Unexpected value type: {type(value)}. The connection can only be defined using a string or object."
) |
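Both supported value shapes, as a sketch assuming `_create_connection` above is in scope; connection details are made up:
# URI form
conn = _create_connection("pg_report", "postgresql://user:pass@dbhost:5432/reports")
# Object form; keys must be Connection constructor parameters (or extra_dejson)
conn = _create_connection(
    "pg_report",
    {
        "conn_type": "postgres",
        "host": "dbhost",
        "login": "user",
        "password": "pass",
        "schema": "reports",
        "extra_dejson": {"sslmode": "require"},
    },
)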
Load variables from a text file.
``JSON``, `YAML` and ``.env`` files are supported.
:param file_path: The location of the file that will be processed. | def load_variables(file_path: str) -> dict[str, str]:
"""
Load variables from a text file.
``JSON``, `YAML` and ``.env`` files are supported.
:param file_path: The location of the file that will be processed.
"""
log.debug("Loading variables from a text file")
secrets = _parse_secret_file(file_path)
invalid_keys = [key for key, values in secrets.items() if isinstance(values, list) and len(values) != 1]
if invalid_keys:
raise AirflowException(f'The "{file_path}" file contains multiple values for keys: {invalid_keys}')
variables = {key: values[0] if isinstance(values, list) else values for key, values in secrets.items()}
log.debug("Loaded %d variables: ", len(variables))
return variables |
Use `airflow.secrets.local_filesystem.load_connections_dict`; this function is deprecated. | def load_connections(file_path) -> dict[str, list[Any]]:
"""Use `airflow.secrets.local_filesystem.load_connections_dict`, this is deprecated."""
warnings.warn(
"This function is deprecated. Please use `airflow.secrets.local_filesystem.load_connections_dict`.",
RemovedInAirflow3Warning,
stacklevel=2,
)
    return {k: [v] for k, v in load_connections_dict(file_path).items()}
Load connection from text file.
``JSON``, `YAML` and ``.env`` files are supported.
:return: A dictionary where the key contains a connection ID and the value contains the connection. | def load_connections_dict(file_path: str) -> dict[str, Any]:
"""
Load connection from text file.
``JSON``, `YAML` and ``.env`` files are supported.
:return: A dictionary where the key contains a connection ID and the value contains the connection.
"""
log.debug("Loading connection")
secrets: dict[str, Any] = _parse_secret_file(file_path)
connection_by_conn_id = {}
for key, secret_values in list(secrets.items()):
if isinstance(secret_values, list):
if len(secret_values) > 1:
raise ConnectionNotUnique(f"Found multiple values for {key} in {file_path}.")
for secret_value in secret_values:
connection_by_conn_id[key] = _create_connection(key, secret_value)
else:
connection_by_conn_id[key] = _create_connection(key, secret_values)
num_conn = len(connection_by_conn_id)
log.debug("Loaded %d connections", num_conn)
return connection_by_conn_id |
Retrieve Kerberos principal. Fallback to principal from Airflow configuration if not provided. | def get_kerberos_principle(principal: str | None) -> str:
"""Retrieve Kerberos principal. Fallback to principal from Airflow configuration if not provided."""
return principal or conf.get_mandatory_value("kerberos", "principal").replace("_HOST", get_hostname()) |
Renew kerberos token from keytab.
:param principal: principal
:param keytab: keytab file
:return: None | def renew_from_kt(principal: str | None, keytab: str, exit_on_fail: bool = True):
"""
Renew kerberos token from keytab.
:param principal: principal
:param keytab: keytab file
:return: None
"""
# The config is specified in seconds. But we ask for that same amount in
# minutes to give ourselves a large renewal buffer.
renewal_lifetime = f"{conf.getint('kerberos', 'reinit_frequency')}m"
cmd_principal = get_kerberos_principle(principal)
if conf.getboolean("kerberos", "forwardable"):
forwardable = "-f"
else:
forwardable = "-F"
if conf.getboolean("kerberos", "include_ip"):
include_ip = "-a"
else:
include_ip = "-A"
cmdv: list[str] = [
conf.get_mandatory_value("kerberos", "kinit_path"),
forwardable,
include_ip,
"-r",
renewal_lifetime,
"-k", # host ticket
"-t",
keytab, # specify keytab
"-c",
conf.get_mandatory_value("kerberos", "ccache"), # specify credentials cache
cmd_principal,
]
log.info("Re-initialising kerberos from keytab: %s", " ".join(shlex.quote(f) for f in cmdv))
with subprocess.Popen(
cmdv,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
close_fds=True,
bufsize=-1,
universal_newlines=True,
) as subp:
subp.wait()
if subp.returncode != 0:
log.error(
"Couldn't reinit from keytab! `kinit` exited with %s.\n%s\n%s",
subp.returncode,
"\n".join(subp.stdout.readlines() if subp.stdout else []),
"\n".join(subp.stderr.readlines() if subp.stderr else []),
)
if exit_on_fail:
sys.exit(subp.returncode)
else:
return subp.returncode
global NEED_KRB181_WORKAROUND
if NEED_KRB181_WORKAROUND is None:
NEED_KRB181_WORKAROUND = detect_conf_var()
if NEED_KRB181_WORKAROUND:
# (From: HUE-640). Kerberos clock have seconds level granularity. Make sure we
# renew the ticket after the initial valid time.
time.sleep(1.5)
ret = perform_krb181_workaround(cmd_principal)
if exit_on_fail and ret != 0:
sys.exit(ret)
else:
return ret
return 0 |
Workaround for Kerberos 1.8.1.
:param principal: principal name
:return: None | def perform_krb181_workaround(principal: str):
"""
Workaround for Kerberos 1.8.1.
:param principal: principal name
:return: None
"""
cmdv: list[str] = [
conf.get_mandatory_value("kerberos", "kinit_path"),
"-c",
conf.get_mandatory_value("kerberos", "ccache"),
"-R",
] # Renew ticket_cache
log.info("Renewing kerberos ticket to work around kerberos 1.8.1: %s", " ".join(cmdv))
ret = subprocess.call(cmdv, close_fds=True)
if ret != 0:
principal = f"{principal or conf.get('kerberos', 'principal')}/{get_hostname()}"
ccache = conf.get("kerberos", "ccache")
log.error(
"Couldn't renew kerberos ticket in order to work around Kerberos 1.8.1 issue. Please check that "
"the ticket for '%s' is still renewable:\n $ kinit -f -c %s\nIf the 'renew until' date is the "
"same as the 'valid starting' date, the ticket cannot be renewed. Please check your KDC "
"configuration, and the ticket renewal policy (maxrenewlife) for the '%s' and `krbtgt' "
"principals.",
principal,
ccache,
principal,
)
return ret |
Autodetect the Kerberos ticket configuration.
Return true if the ticket cache contains "conf" information as is found
in ticket caches of Kerberos 1.8.1 or later. This is incompatible with the
Sun Java Krb5LoginModule in Java6, so we need to take an action to work
around it. | def detect_conf_var() -> bool:
"""
Autodetect the Kerberos ticket configuration.
Return true if the ticket cache contains "conf" information as is found
in ticket caches of Kerberos 1.8.1 or later. This is incompatible with the
Sun Java Krb5LoginModule in Java6, so we need to take an action to work
around it.
"""
ticket_cache = conf.get_mandatory_value("kerberos", "ccache")
with open(ticket_cache, "rb") as file:
# Note: this file is binary, so we check against a bytearray.
return b"X-CACHECONF:" in file.read() |
Run the kerberos renewer.
:param principal: principal name
:param keytab: keytab file
:param mode: mode to run the airflow kerberos in
:return: None | def run(principal: str | None, keytab: str, mode: KerberosMode = KerberosMode.STANDARD):
"""
Run the kerberos renewer.
:param principal: principal name
:param keytab: keytab file
:param mode: mode to run the airflow kerberos in
:return: None
"""
if not keytab:
log.warning("Keytab renewer not starting, no keytab configured")
sys.exit(0)
log.info("Using airflow kerberos with mode: %s", mode.value)
if mode == KerberosMode.STANDARD:
while True:
renew_from_kt(principal, keytab)
time.sleep(conf.getint("kerberos", "reinit_frequency"))
elif mode == KerberosMode.ONE_TIME:
renew_from_kt(principal, keytab) |
Return the resource name for a DAG id.
Note that since a sub-DAG should follow the permission of its
parent DAG, you should pass ``DagModel.root_dag_id`` to this function,
for a subdag. A normal dag should pass the ``DagModel.dag_id``. | def resource_name_for_dag(root_dag_id: str) -> str:
"""Return the resource name for a DAG id.
Note that since a sub-DAG should follow the permission of its
parent DAG, you should pass ``DagModel.root_dag_id`` to this function,
for a subdag. A normal dag should pass the ``DagModel.dag_id``.
"""
if root_dag_id == RESOURCE_DAG:
return root_dag_id
if root_dag_id.startswith(RESOURCE_DAG_PREFIX):
return root_dag_id
return f"{RESOURCE_DAG_PREFIX}{root_dag_id}" |
Split the kerberos principal string into parts.
:return: *None* if the principal is empty. Otherwise split the value into
parts. Assuming the principal string is valid, the return value should
contain three components: short name, instance (FQDN), and realm. | def get_components(principal) -> list[str] | None:
"""Split the kerberos principal string into parts.
:return: *None* if the principal is empty. Otherwise split the value into
parts. Assuming the principal string is valid, the return value should
contain three components: short name, instance (FQDN), and realm.
"""
if not principal:
return None
return re2.split(r"[/@]", str(principal)) |
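For example, assuming the function above is in scope:
get_components("airflow/worker1.example.com@EXAMPLE.COM")
# -> ["airflow", "worker1.example.com", "EXAMPLE.COM"]
get_components("")  # -> None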
Replace hostname with the right pattern including lowercase of the name. | def replace_hostname_pattern(components, host=None):
"""Replace hostname with the right pattern including lowercase of the name."""
fqdn = host
if not fqdn or fqdn == "0.0.0.0":
fqdn = get_hostname()
return f"{components[0]}/{fqdn.lower()}@{components[2]}" |
Retrieve FQDN - hostname for the IP or hostname. | def get_fqdn(hostname_or_ip=None):
"""Retrieve FQDN - hostname for the IP or hostname."""
try:
if hostname_or_ip:
fqdn = socket.gethostbyaddr(hostname_or_ip)[0]
if fqdn == "localhost":
fqdn = get_hostname()
else:
fqdn = get_hostname()
except OSError:
fqdn = hostname_or_ip
return fqdn |
Retrieve principal from the username and realm. | def principal_from_username(username, realm):
"""Retrieve principal from the username and realm."""
if ("@" not in username) and realm:
username = f"{username}@{realm}"
return username |
Get the original start_date for a rescheduled task.
:meta private: | def _orig_start_date(
dag_id: str, task_id: str, run_id: str, map_index: int, try_number: int, session: Session = NEW_SESSION
):
"""
Get the original start_date for a rescheduled task.
:meta private:
"""
return session.scalar(
select(TaskReschedule)
.where(
TaskReschedule.dag_id == dag_id,
TaskReschedule.task_id == task_id,
TaskReschedule.run_id == run_id,
TaskReschedule.map_index == map_index,
TaskReschedule.try_number == try_number,
)
.order_by(TaskReschedule.id.asc())
.with_only_columns(TaskReschedule.start_date)
.limit(1)
) |
Decorate a subclass of BaseSensorOperator with poke.
Indicate that instances of this class are only safe to use poke mode.
Will decorate all methods in the class to assert they did not change
the mode from 'poke'.
:param cls: BaseSensor class to enforce methods only use 'poke' mode. | def poke_mode_only(cls):
"""
Decorate a subclass of BaseSensorOperator with poke.
Indicate that instances of this class are only safe to use poke mode.
Will decorate all methods in the class to assert they did not change
the mode from 'poke'.
:param cls: BaseSensor class to enforce methods only use 'poke' mode.
"""
def decorate(cls_type):
def mode_getter(_):
return "poke"
def mode_setter(_, value):
if value != "poke":
raise ValueError(f"Cannot set mode to '{value}'. Only 'poke' is acceptable")
if not issubclass(cls_type, BaseSensorOperator):
raise ValueError(
f"poke_mode_only decorator should only be "
f"applied to subclasses of BaseSensorOperator,"
f" got:{cls_type}."
)
cls_type.mode = property(mode_getter, mode_setter)
return cls_type
return decorate(cls) |
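A sketch of applying the decorator to a sensor subclass; the sensor itself is hypothetical and the import assumes the usual location of ``BaseSensorOperator``:
from airflow.sensors.base import BaseSensorOperator

@poke_mode_only
class AlwaysTrueSensor(BaseSensorOperator):
    """Hypothetical sensor that is only safe to run in poke mode."""

    def poke(self, context) -> bool:
        return True

sensor = AlwaysTrueSensor(task_id="wait_for_nothing")
sensor.mode                   # always "poke"
# sensor.mode = "reschedule"  # would raise ValueError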
Return a serializable representation of the templated field.
If ``template_field`` contains a class or instance that requires recursive
templating, store them as strings. Otherwise simply return the field as-is. | def serialize_template_field(template_field: Any, name: str) -> str | dict | list | int | float:
"""Return a serializable representation of the templated field.
    If ``template_field`` contains a class or instance that requires recursive
templating, store them as strings. Otherwise simply return the field as-is.
"""
def is_jsonable(x):
try:
json.dumps(x)
except (TypeError, OverflowError):
return False
else:
return True
max_length = conf.getint("core", "max_templated_field_length")
if not is_jsonable(template_field):
serialized = str(template_field)
if len(serialized) > max_length:
rendered = redact(serialized, name)
return (
"Truncated. You can change this behaviour in [core]max_templated_field_length. "
f"{rendered[:max_length - 79]!r}... "
)
return str(template_field)
else:
if not template_field:
return template_field
serialized = str(template_field)
if len(serialized) > max_length:
rendered = redact(serialized, name)
return (
"Truncated. You can change this behaviour in [core]max_templated_field_length. "
f"{rendered[:max_length - 79]!r}... "
)
return template_field |
Load & return Json Schema for DAG as Python dict. | def load_dag_schema_dict() -> dict:
"""Load & return Json Schema for DAG as Python dict."""
schema_file_name = "schema.json"
schema_file = pkgutil.get_data(__name__, schema_file_name)
if schema_file is None:
raise AirflowException(f"Schema file {schema_file_name} does not exists")
schema = json.loads(schema_file.decode())
return schema |
Load & Validate Json Schema for DAG. | def load_dag_schema() -> Validator:
"""Load & Validate Json Schema for DAG."""
import jsonschema
schema = load_dag_schema_dict()
return jsonschema.Draft7Validator(schema) |
Encode an object so it can be understood by the deserializer. | def encode(cls: str, version: int, data: T) -> dict[str, str | int | T]:
"""Encode an object so it can be understood by the deserializer."""
return {CLASSNAME: cls, VERSION: version, DATA: data} |
Serialize an object into a representation consisting only of built-in types.
Primitives (int, float, bool, str) are returned as-is. Built-in collections
are iterated over, where it is assumed that keys in a dict can be represented
as str.
Values that are not of a built-in type are serialized if a serializer is
found for them. The order in which serializers are used is
1. A ``serialize`` function provided by the object.
2. A registered serializer in the namespace of ``airflow.serialization.serializers``
3. Annotations from attr or dataclass.
Limitations: attr and dataclass objects can lose type information for nested objects
as they do not store this when calling ``asdict``. This means that at deserialization values
will be deserialized as a dict as opposed to reinstating the object. Provide
your own serializer to work around this.
:param o: The object to serialize.
:param depth: Private tracker for nested serialization.
:raise TypeError: A serializer cannot be found.
:raise RecursionError: The object is too nested for the function to handle.
:return: A representation of ``o`` that consists of only built-in types. | def serialize(o: object, depth: int = 0) -> U | None:
"""Serialize an object into a representation consisting only built-in types.
Primitives (int, float, bool, str) are returned as-is. Built-in collections
are iterated over, where it is assumed that keys in a dict can be represented
as str.
Values that are not of a built-in type are serialized if a serializer is
found for them. The order in which serializers are used is
1. A ``serialize`` function provided by the object.
2. A registered serializer in the namespace of ``airflow.serialization.serializers``
3. Annotations from attr or dataclass.
Limitations: attr and dataclass objects can lose type information for nested objects
as they do not store this when calling ``asdict``. This means that at deserialization values
will be deserialized as a dict as opposed to reinstating the object. Provide
your own serializer to work around this.
:param o: The object to serialize.
:param depth: Private tracker for nested serialization.
:raise TypeError: A serializer cannot be found.
:raise RecursionError: The object is too nested for the function to handle.
:return: A representation of ``o`` that consists of only built-in types.
"""
if depth == MAX_RECURSION_DEPTH:
raise RecursionError("maximum recursion depth reached for serialization")
# None remains None
if o is None:
return o
# primitive types are returned as is
if isinstance(o, _primitives):
if isinstance(o, enum.Enum):
return o.value
return o
if isinstance(o, list):
return [serialize(d, depth + 1) for d in o]
if isinstance(o, dict):
if CLASSNAME in o or SCHEMA_ID in o:
raise AttributeError(f"reserved key {CLASSNAME} or {SCHEMA_ID} found in dict to serialize")
return {str(k): serialize(v, depth + 1) for k, v in o.items()}
cls = type(o)
qn = qualname(o)
classname = None
# Serialize namedtuple like tuples
# We also override the classname returned by the builtin.py serializer. The classname
# has to be "builtins.tuple", so that the deserializer can deserialize the object into tuple.
if _is_namedtuple(o):
qn = "builtins.tuple"
classname = qn
# if there is a builtin serializer available use that
if qn in _serializers:
data, serialized_classname, version, is_serialized = _serializers[qn].serialize(o)
if is_serialized:
return encode(classname or serialized_classname, version, serialize(data, depth + 1))
# custom serializers
dct = {
CLASSNAME: qn,
VERSION: getattr(cls, "__version__", DEFAULT_VERSION),
}
# object / class brings their own
if hasattr(o, "serialize"):
data = getattr(o, "serialize")()
# if we end up with a structure, ensure its values are serialized
if isinstance(data, dict):
data = serialize(data, depth + 1)
dct[DATA] = data
return dct
# pydantic models are recursive
if _is_pydantic(cls):
data = o.model_dump() # type: ignore[attr-defined]
dct[DATA] = serialize(data, depth + 1)
return dct
# dataclasses
if dataclasses.is_dataclass(cls):
        # fixme: unfortunately, asdict loses type information for nested dataclasses
data = dataclasses.asdict(o) # type: ignore[call-overload]
dct[DATA] = serialize(data, depth + 1)
return dct
# attr annotated
if attr.has(cls):
# Only include attributes which we can pass back to the classes constructor
data = attr.asdict(cast(attr.AttrsInstance, o), recurse=False, filter=lambda a, v: a.init)
dct[DATA] = serialize(data, depth + 1)
return dct
raise TypeError(f"cannot serialize object of type {cls}") |
Deserialize an object of primitive type and use an allow list to determine if a class can be loaded.
:param o: primitive to deserialize into an arbitrary object.
:param full: if False it will return a stringified representation
of an object and will not load any classes
:param type_hint: if set it will be used to help determine what
object to deserialize in. It does not override if another
specification is found
:return: object | def deserialize(o: T | None, full=True, type_hint: Any = None) -> object:
"""
    Deserialize an object of primitive type and use an allow list to determine if a class can be loaded.
:param o: primitive to deserialize into an arbitrary object.
:param full: if False it will return a stringified representation
of an object and will not load any classes
:param type_hint: if set it will be used to help determine what
object to deserialize in. It does not override if another
specification is found
:return: object
"""
if o is None:
return o
if isinstance(o, _primitives):
return o
# tuples, sets are included here for backwards compatibility
if isinstance(o, _builtin_collections):
col = [deserialize(d) for d in o]
if isinstance(o, tuple):
return tuple(col)
if isinstance(o, set):
return set(col)
return col
if not isinstance(o, dict):
# if o is not a dict, then it's already deserialized
# in this case we should return it as is
return o
o = _convert(o)
# plain dict and no type hint
if CLASSNAME not in o and not type_hint or VERSION not in o:
return {str(k): deserialize(v, full) for k, v in o.items()}
# custom deserialization starts here
cls: Any
version = 0
value: Any = None
classname = ""
if type_hint:
cls = type_hint
classname = qualname(cls)
version = 0 # type hinting always sets version to 0
value = o
if CLASSNAME in o and VERSION in o:
classname, version, value = decode(o)
if not classname:
raise TypeError("classname cannot be empty")
# only return string representation
if not full:
return _stringify(classname, version, value)
if not _match(classname) and classname not in _extra_allowed:
raise ImportError(
f"{classname} was not found in allow list for deserialization imports. "
f"To allow it, add it to allowed_deserialization_classes in the configuration"
)
cls = import_string(classname)
# registered deserializer
if classname in _deserializers:
return _deserializers[classname].deserialize(classname, version, deserialize(value))
# class has deserialization function
if hasattr(cls, "deserialize"):
return getattr(cls, "deserialize")(deserialize(value), version)
# attr or dataclass or pydantic
if attr.has(cls) or dataclasses.is_dataclass(cls) or _is_pydantic(cls):
class_version = getattr(cls, "__version__", 0)
if int(version) > class_version:
            raise TypeError(
                f"serialized version of {classname} is newer than module version "
                f"({version} > {class_version})"
            )
return cls(**deserialize(value))
# no deserializer available
raise TypeError(f"No deserializer found for {classname}") |
Convert an old style serialization to new style. | def _convert(old: dict) -> dict:
"""Convert an old style serialization to new style."""
if OLD_TYPE in old and OLD_DATA in old:
# Return old style dicts directly as they do not need wrapping
if old[OLD_TYPE] == OLD_DICT:
return old[OLD_DATA]
else:
return {CLASSNAME: old[OLD_TYPE], VERSION: DEFAULT_VERSION, DATA: old[OLD_DATA]}
return old |
Check if the given classname matches a path pattern either using glob format or regexp format. | def _match(classname: str) -> bool:
"""Check if the given classname matches a path pattern either using glob format or regexp format."""
return _match_glob(classname) or _match_regexp(classname) |
Check if the given classname matches a pattern from allowed_deserialization_classes using glob syntax. | def _match_glob(classname: str):
"""Check if the given classname matches a pattern from allowed_deserialization_classes using glob syntax."""
patterns = _get_patterns()
return any(fnmatch(classname, p.pattern) for p in patterns) |
Check if the given classname matches a pattern from allowed_deserialization_classes_regexp using regexp. | def _match_regexp(classname: str):
"""Check if the given classname matches a pattern from allowed_deserialization_classes_regexp using regexp."""
patterns = _get_regexp_patterns()
return any(p.match(classname) is not None for p in patterns) |
Convert a previously serialized object into a somewhat human-readable format.
This function is not designed to be exact, and will not extensively traverse
the whole tree of an object. | def _stringify(classname: str, version: int, value: T | None) -> str:
"""Convert a previously serialized object in a somewhat human-readable format.
This function is not designed to be exact, and will not extensively traverse
the whole tree of an object.
"""
if classname in _stringifiers:
return _stringifiers[classname].stringify(classname, version, value)
s = f"{classname}@version={version}("
if isinstance(value, _primitives):
s += f"{value}"
elif isinstance(value, _builtin_collections):
# deserialized values can be != str
s += ",".join(str(deserialize(value, full=False)))
elif isinstance(value, dict):
s += ",".join(f"{k}={deserialize(v, full=False)}" for k, v in value.items())
s += ")"
return s |
Return True if the class is a pydantic model.
Checking is done by attributes as it is significantly faster than
using isinstance. | def _is_pydantic(cls: Any) -> bool:
"""Return True if the class is a pydantic model.
Checking is done by attributes as it is significantly faster than
using isinstance.
"""
return hasattr(cls, "model_config") and hasattr(cls, "model_fields") and hasattr(cls, "model_fields_set") |
Return True if the class is a namedtuple.
Checking is done by attributes as it is significantly faster than
using isinstance. | def _is_namedtuple(cls: Any) -> bool:
"""Return True if the class is a namedtuple.
Checking is done by attributes as it is significantly faster than
using isinstance.
"""
return hasattr(cls, "_asdict") and hasattr(cls, "_fields") and hasattr(cls, "_field_defaults") |
Register builtin serializers and deserializers for types that don't have any themselves. | def _register():
"""Register builtin serializers and deserializers for types that don't have any themselves."""
_serializers.clear()
_deserializers.clear()
_stringifiers.clear()
with Stats.timer("serde.load_serializers") as timer:
for _, name, _ in iter_namespace(airflow.serialization.serializers):
name = import_module(name)
for s in getattr(name, "serializers", ()):
if not isinstance(s, str):
s = qualname(s)
if s in _serializers and _serializers[s] != name:
raise AttributeError(f"duplicate {s} for serialization in {name} and {_serializers[s]}")
log.debug("registering %s for serialization", s)
_serializers[s] = name
for d in getattr(name, "deserializers", ()):
if not isinstance(d, str):
d = qualname(d)
if d in _deserializers and _deserializers[d] != name:
raise AttributeError(f"duplicate {d} for deserialization in {name} and {_serializers[d]}")
log.debug("registering %s for deserialization", d)
_deserializers[d] = name
_extra_allowed.add(d)
for c in getattr(name, "stringifiers", ()):
if not isinstance(c, str):
c = qualname(c)
                if c in _stringifiers and _stringifiers[c] != name:
raise AttributeError(f"duplicate {c} for stringifiers in {name} and {_stringifiers[c]}")
log.debug("registering %s for stringifying", c)
_stringifiers[c] = name
log.debug("loading serializers took %.3f seconds", timer.duration) |