airflow-jdbc-xcom-return
Failed to fetch description. HTTP Status Code: 404
airflow-kaldea
airflow-kaldea

This is a collection of Airflow operators to provide integration with Kaldea.

```python
from airflow.models import DAG
from airflow_kaldea.operators.kaldea_job_operator import KaldeaJobOperator

default_args = {}

dag = DAG(
    dag_id='data_dag',
    default_args=default_args,
    schedule_interval='0 * * * *',
)

kaldea_job = KaldeaJobOperator(
    task_id='kaldea_job',
    kaldea_job_id='kaldea_job_id',
    kaldea_task_id='kaldea_task_id',
    dag=dag,
)
```

Installation

Install from PyPI:

```shell
pip install airflow-kaldea
```
airflow-kdb-provider
Airflow KDB Provider

A lightweight KDB provider for Apache Airflow, featuring the KDBAirflowOperator. This provider allows for seamless integration between Airflow and KDB+/q, making it easier to automate data pipelines that involve KDB+/q.

Installation

You can install the Airflow KDB Provider package from PyPI using the following command:

```shell
pip install airflow-kdb-provider
```

Usage

To use the KDBAirflowOperator in your Airflow DAG, you must first import it and create an instance of the operator. Here is an example:

```python
from airflow_kdb_provider.operators.kdb_operator import KDBOperator

kdb_operator = KDBOperator(
    task_id='run_kdb_script',
    command='/path/to/kdb_script.q',
    params={'param1': 'value1', 'param2': 'value2'},
    conn_id='kdb_conn',
    dag=dag,
)
```

In this example, we create an instance of the KDBOperator and specify the following parameters:

- task_id: the task ID for this operator
- command: the path to the KDB+/q script that we want to execute
- params: a dictionary of parameters that will be passed to the KDB+/q script as command-line arguments
- conn_id: the connection ID for the KDB+/q server that we want to use (this should be defined in Airflow's Connections interface)
- dag: the DAG that this operator belongs to

Once you have created an instance of the KDBOperator, you can add it to your DAG like any other Airflow operator:

```python
some_other_operator >> kdb_operator >> some_other_operator2
```

In this example, we have added the kdb_operator to our DAG and specified that it should be executed after some_other_operator and before some_other_operator2.
airflowkit
Airflowkit: operators, sensors, triggers for Airflow

Library of Airflow Operators, Sensors, Triggers...

Disclaimer: NA

Features
airflow-kube-base-operator
Airflow Kubernetes Base Operator

What is this?

This should be a generic operator for other airflow k**o to use as their core dependency. Maybe you'll find some use in the python kube api wrapper; otherwise the specific package for your needs will probably be better.

Note: Not to be used yet.
airflow-kube-job-operator
Airflow Kubernetes Job Operator

What is this?

An Airflow Operator that manages creation, watching, and deletion of a Kubernetes Job. It assumes the client passes in a path to a yaml file that may have Jinja templated fields.

Who is it for?

This package makes the assumption that you're using Kubernetes somehow. Airflow itself may be deployed in Kubernetes (in_cluster mode) or you may just want it to manage Jobs running remotely on a cluster (give Airflow a kube config).

Why would I use this?

In our use of Airflow we struggled a lot with binding our business logic via many different custom Operators and Plugins directly to Airflow. Instead, we found Airflow to be a great manager of execution of code but not the best tool for writing the ETL/ML code itself. Ideally this should be one of the only Airflow Operators you need.

How do I use it?

Here are the parameters.

| Parameter | Description | Type |
| --- | --- | --- |
| yaml_file_name | The name of the yaml file, could be a full path | str |
| yaml_write_path | If you want the rendered yaml file written, where should it be? | str |
| yaml_write_filename | If you want the rendered yaml file written, what is the filename? | str |
| yaml_template_fields | If you have variables in your yaml file you want filled out | dict |
| tail_logs | Whether to output a log tail of the pods to airflow, will only do it at an end state | bool (F) |
| tail_logs_every | every x seconds to wait to begin a new log dump (nearest 5 sec) | int |
| tail_logs_line_count | num of lines from end to output | int |
| log_yaml | Whether to log the rendered yaml | bool (T) |
| in_cluster | Whether or not Airflow has cluster permissions to create and manage Jobs | bool (F) |
| config_file | The path to the kube config file | str |
| cluster_context | If you are using a kube config file include the cluster context | str |
| delete_completed_job | Autodelete Jobs that completed without errors | bool (F) |

Step 1. Install the package

```shell
pip install airflow-kube-job-operator
```

Step 1.5 (Optional) Add Role to your Airflow deployment

If you want the Jobs to get created without having to bundle your kubeconfig file somehow into your Airflow pods, you'll need to deploy Airflow in kubernetes and give Airflow some extra RBAC permissions to handle Jobs within your cluster.

** This is needed if you want to use the option in_cluster=True **

Here's an example of what you may need:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: airflow
rules:
  - verbs:
      - create
      - list
      - get
      - watch
      - delete
      - update
      - patch
    apiGroups:
      - ''
      - batch
    resources:
      - pods
      - jobs
      - jobs/status
  - verbs:
      - get
    apiGroups:
      - ''
    resources:
      - pods/log
  - verbs:
      - create
      - get
    apiGroups:
      - ''
    resources:
      - pods/exec
```

If you want to give Airflow power to run Jobs cluster-wide, modify the ClusterRole instead. Alternatively, just give Airflow your kube cluster config (A.ii.).

Step 2. Create a template folder for your yaml files

This template folder can be anywhere. It's up to you. But here's a suggestion: if you have ~/airflow/dags, then ~/airflow/kubernetes/job could be a valid choice.

Let's create a very simple job and put it there.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: countdown
  namespace: <WRITE YOUR AIRFLOW NAMESPACE HERE>
spec:
  template:
    metadata:
      name: countdown
    spec:
      containers:
        - name: counter
          image: centos:7
          command:
            - "bin/bash"
            - "-c"
            - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
      restartPolicy: Never
```

Save the above at ~/airflow/kubernetes/job/countdown.yaml

Step 3. Create your Dag

First some questions to ask yourself...

A. How do I want my Dag to have access to kubernetes?
   i. My Airflow has the above RBAC permissions to make Jobs.
   ii. I'd rather just use my kube config file. It's accessible somewhere in Airflow already (web, worker, and scheduler).

B. What does my yaml look like?
   i. I have a simple yaml file. Just create my Job. (The yaml 'countdown.yaml' above is like this.)
   ii. I have a single yaml file for my Job but I want some templated fields filled out.
   iii. I'm hardcore. I have multiple yaml files templated in the Jinja style so I can reuse my templates across tasks and dags.

A.i. Using in_cluster=True

```python
from airflow import DAG
from datetime import datetime, timedelta
from airflow_kjo import KubernetesJobOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,  # the number of times the pod will retry, can pass in per-task
    'retry_delay': timedelta(minutes=5),
    'start_date': datetime(2021, 2, 24, 12, 0),
}

with DAG('kubernetes_job_operator',
         default_args=default_args,
         description='KJO example DAG',
         schedule_interval=None,
         catchup=False) as dag:
    task_1 = KubernetesJobOperator(
        task_id='example_kubernetes_job_operator',
        yaml_file_name='/path/to/airflow/kubernetes/job/countdown.yaml',
        in_cluster=True,
    )
```

A.ii. Using config_file=/path/to/.kube/config

```python
task_1 = KubernetesJobOperator(
    task_id='example_kubernetes_job_operator',
    yaml_file_name='/path/to/airflow/kubernetes/job/countdown.yaml',
    config_file='/path/to/.kube/config',
    cluster_context='my_kube_config_context',
)
```

What is this "my_kube_config_context" business? Read about it in the kubernetes config documentation here.

B.i. Simple yaml file execution

In addition to the above Dag styles you could also make use of Airflow's native template_searchpath field to clean up the Dag a bit.

```python
from airflow import DAG
from datetime import datetime, timedelta
from airflow_kjo import KubernetesJobOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
    'start_date': datetime(2021, 2, 24, 12, 0),
}

with DAG('kubernetes_job_operator',
         default_args=default_args,
         description='KJO example DAG',
         schedule_interval=None,
         template_searchpath='/path/to/airflow/kubernetes/job',
         catchup=False) as dag:
    task_1 = KubernetesJobOperator(
        task_id='example_kubernetes_job_operator',
        yaml_file_name='countdown.yaml',
        in_cluster=True,
    )
```

B.ii. Simple yaml templating

Let's make the yaml a little more interesting.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: countdown-templated-{{task_num}}
  namespace: <WRITE YOUR AIRFLOW NAMESPACE HERE>
spec:
  template:
    metadata:
      name: countdown
    spec:
      containers:
        - name: counter
          image: centos:7
          command:
            - "bin/bash"
            - "-c"
            - "{{command}}"
      restartPolicy: Never
```

And save this as ~/airflow/kubernetes/job/countdown.yaml.tmpl

We now have the fields command and task_num as variables in our yaml file. Here's how our Dag looks now...

```python
with DAG('kubernetes_job_operator',
         default_args=default_args,
         description='KJO example DAG',
         schedule_interval=None,
         template_searchpath='/path/to/airflow/kubernetes/job',
         catchup=False) as dag:
    command = 'sleep 60; for i in 5 4 3 2 1 ; do echo $i ; done'
    task_num = 1
    task_1 = KubernetesJobOperator(
        task_id='example_kubernetes_job_operator',
        yaml_file_name='countdown.yaml.tmpl',
        yaml_template_fields={'command': command, 'task_num': task_num},
        in_cluster=True,
    )
```

B.iii. Multiple yaml templates

This is very much up to you how you want your Jinja templates separated; if it's valid yaml and valid Jinja, it will render and apply just fine... Here's an example use case.

Create a 'header' template at ~/airflow/kubernetes/job/countdown_header.yaml.tmpl

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: countdown-templated-separated
  namespace: <WRITE YOUR AIRFLOW NAMESPACE HERE>
{% block spec %}
{% endblock %}
```

Create a 'body' template at ~/airflow/kubernetes/job/countdown_body.yaml.tmpl

```yaml
{% extends 'countdown_header.yaml.tmpl' %}
{% block spec %}
spec:
  template:
    metadata:
      name: countdown
    spec:
      containers:
        - name: counter
          image: centos:7
          command:
            - "bin/bash"
            - "-c"
            - "{{command}}"
      restartPolicy: Never
{% endblock %}
```

Here's the Dag changes now:

```python
task_1 = KubernetesJobOperator(
    task_id='example_kubernetes_job_operator',
    yaml_file_name='countdown_body.yaml.tmpl',
    yaml_template_fields={'command': command},
    in_cluster=True,
)
```

In this situation it may be useful to have Airflow write out the rendered yaml file somewhere.

```python
task_1 = KubernetesJobOperator(
    task_id='example_kubernetes_job_operator',
    yaml_file_name='countdown_body.yaml.tmpl',
    yaml_template_fields={'command': command},
    yaml_write_path='/tmp',
    yaml_write_filename='rendered.yaml',  # will be on the worker pod
    in_cluster=True,
)
```

It could be very useful to have an NFS to share the same filestore across pods for writing these rendered yaml files out.

Logging

If you're using Kubernetes you should have a logging solution of some sort to aggregate and provide searchability of all your logs. However, here are some use cases for forwarding the logs using the KJO:

1. I just want a simple tail of the logs, I don't care about extra behavior configuration.
2. I only want logs tailed out when the pods are in an end state; Completed, Errored.
3. I want to specify how many lines are tailed out and/or how frequently it's tailed out.

Add 'tail_logs' to our task from above:

```python
task_1 = KubernetesJobOperator(
    task_id='example_kubernetes_job_operator',
    yaml_file_name='countdown_body.yaml.tmpl',
    yaml_template_fields={'command': command},
    in_cluster=True,
    tail_logs=True,
)
```

If any tail_logs* parameter is set, 'tail_logs' does not need to be set.

Configure the behavior of the log tail:

```python
task_1 = KubernetesJobOperator(
    task_id='example_kubernetes_job_operator',
    yaml_file_name='countdown_body.yaml.tmpl',
    yaml_template_fields={'command': command},
    in_cluster=True,
    tail_logs_every=60,  # seconds
    tail_logs_line_count=100,
)
```

This could get to be quite noisy so be mindful of your particular use case.

Notes

We need to think about how to add PVC support. If a client's Task relies on a PVC being created, they need a way to add it to their DAG and have it created and deleted as a part of the Job flow. Maybe a KubernetesPVCOperator is better than a parameter solution.

Contributing

This is a young project and not yet battle tested. Contributions, suggestions, etc. appreciated.
airflow-kube-pvc-operator
Airflow Kubernetes PVC Operator

Airflow operator for the kubernetes PVC type.

Aspirations:
- make an operator to take in a yaml file describing a pvc object and have airflow manage its life cycle
- should be able to plug in with the kube job operator
- would be cool to modify the UI to show the pvc is bound to the Job
- throw a warning if resource quota for storage limit in namespace is not set; Airflow should help onboard people to kube 'best practices'

Notes: Not to be used yet.
airflow-kubernetes-job-operator
Please see readme.md at https://github.com/LamaAni/KubernetesJobOperator
airflow-kubernetes-job-operator-customize
Please see readme.md at https://github.com/LamaAni/KubernetesJobOperator
airflow-kubernetes-job-operator-eks-auth
This is a fork from https://github.com/LamaAni/KubernetesJobOperator
airflow-kubernetes-job-operator-latest
Please see readme.md at https://github.com/Fahadsaadullahkhan/KubernetesJobOperator
airflow-kubernetes-job-operator-master
Please see readme.md at https://github.com/LamaAni/KubernetesJobOperator
airflow-kubernetes-job-operator-test27
Please see readme.md at https://github.com/LamaAni/KubernetesJobOperator
airflow-livy-operators
Airflow Livy Operators

Lets Airflow DAGs run Spark jobs via Livy: Sessions, Batches. This mode supports additional verification via the Spark/YARN REST API. See this blog post for more information and a detailed comparison of ways to run Spark jobs from Airflow.

Directories and files of interest
- airflow_home/plugins: Airflow Livy operators' code.
- airflow_home/dags: example DAGs for Airflow.
- batches: Spark jobs code, to be used in Livy batches.
- sessions: Spark code for Livy sessions. You can add templates to files' contents in order to pass parameters into it.
- helper.sh: helper shell script. Can be used to run sample DAGs, prep the development environment and more. Run it to find out what other commands are available.

How do I...

...run the examples?

Prerequisites:
- Python 3. Make sure it's installed and in $PATH.
- Spark cluster with Livy. I heavily recommend you "mock" one on your machine with my Spark cluster on Docker Compose.

Now,
1. Optional - this step can be skipped if you're mocking a cluster on your machine. Open helper.sh. Inside the init_airflow() function you'll see Airflow Connections for Livy, Spark and YARN. Redefine as appropriate.
2. Define the way the sample batch files from this repo are delivered to a cluster:
   - if you're using a docker-compose cluster: redefine the BATCH_DIR variable as appropriate.
   - if you're using your own cluster: modify the copy_batches() function so that it delivers the files to a place accessible by your cluster (could be aws s3 cp etc.)
3. Run ./helper.sh up to bring up the whole infrastructure. Airflow UI will be available at localhost:8888. The credentials are admin/admin.
4. Ctrl+C to stop Airflow. Then ./helper.sh down to dispose of remaining Airflow processes (shouldn't be required if everything goes well; run this if you can't start Airflow again due to some non-informative errors).

...use it in my project?

```shell
pip install airflow-livy-operators
```

This is how you import them (a minimal usage sketch is included at the end of this entry):

```python
from airflow_livy.session import LivySessionOperator
from airflow_livy.batch import LivyBatchOperator
```

See sample DAGs under airflow_home/dags to learn how to use the operators.

...set up the development environment?

Alright, you want to contribute and need to be able to run the stuff on your machine, as well as the usual niceness that comes with IDEs (debugging, syntax highlighting).

- ./helper.sh updev runs Airflow with local operators' code (as opposed to pulling them from PyPI). Useful for development.
- ./helper.sh full - run tests (pytest) with coverage report (will be saved to htmlcov/), highlight code style errors (flake8), reformat all code (black + isort).
- ./helper.sh ci - same as above, but only check the code formatting. This same command is run by CI.
- (PyCharm-specific) point PyCharm to your newly-created virtual environment: go to "Preferences" -> "Project: airflow-livy-operators" -> "Project interpreter", select "Existing environment" and pick the python3 executable from the venv folder (venv/bin/python3).

...debug?

(PyCharm-specific) Step-by-step debugging with airflow test and running PySpark batch jobs locally (with debugging as well) is supported via run configurations under .idea/runConfigurations.
You shouldn't have to do anything to use them - just open the folder in PyCharm as a project.

An example of how a batch can be run on local Spark:

```shell
python ./batches/join_2_files.py \
  "file:////Users/vpanov/data/vpanov/bigdata-docker-compose/data/grades.csv" \
  "file:///Users/vpanov/data/vpanov/bigdata-docker-compose/data/ssn-address.tsv" \
  -file1_sep=, -file1_header=true \
  -file1_schema="\`Last name\` STRING, \`First name\` STRING, SSN STRING, Test1 INT, Test2 INT, Test3 INT, Test4 INT, Final INT, Grade STRING" \
  -file1_join_column=SSN -file2_header=false \
  -file2_schema="\`Last name\` STRING, \`First name\` STRING, SSN STRING, Address1 STRING, Address2 STRING" \
  -file2_join_column=SSN -output_header=true \
  -output_columns="file1.\`Last name\` AS LastName, file1.\`First name\` AS FirstName, file1.SSN, file2.Address1, file2.Address2"

# Optionally append to save result to file
# -output_path="file:///Users/vpanov/livy_batch_example"
```

TODO
- helper.sh - replace with modern tools (e.g. pipenv + Docker image)
- Disable some of flake8 flags for cleaner code
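For illustration only, a minimal DAG wiring one of these operators in might look like the sketch below. The operator arguments shown (file, arguments) are assumptions made for the example, not taken from the package docs - check the sample DAGs under airflow_home/dags for the real parameter names.

```python
# Illustrative sketch only: LivyBatchOperator's parameter names below are
# assumptions - see the package's sample DAGs for authoritative usage.
from datetime import datetime

from airflow import DAG
from airflow_livy.batch import LivyBatchOperator

with DAG(
    "livy_batch_example",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    join_files = LivyBatchOperator(
        task_id="join_2_files",
        file="file:///path/to/batches/join_2_files.py",    # assumed parameter
        arguments=["-file1_sep=,", "-file1_header=true"],  # assumed parameter
    )
```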
airflow-livy-operators-sexy
No description available on PyPI.
airflow-livy-plugins
Airflow Livy Plugins

Plugins for Airflow to run Spark jobs via Livy: Sessions, Batches. This mode supports additional verification via the Spark/YARN REST API. See this blog post for more information and a detailed comparison of ways to run Spark jobs from Airflow.

Directories and files of interest
- airflow_home: example DAGs and plugins for Airflow. Can be used as Airflow home path.
- batches: Spark jobs code, to be used in Livy batches.
- sessions: (Optionally) templated Spark code for Livy sessions.
- airflow.sh: helper shell script. Can be used to run sample DAGs, prep the development environment and more. Run it to find out what other commands are available.

How do I...

...run the examples?

Prerequisites:
- Python 3. Make sure it's installed and in $PATH.

Now,
1. Do you have a Spark cluster with Livy running somewhere?
   - No. Either get one, or "mock" it with my Spark cluster on Docker Compose.
   - Yes. You're golden!
2. Optional - this step can be skipped if you're mocking a cluster on your machine. Open airflow.sh. Inside the init_airflow() function you'll see Airflow Connections for Livy, Spark and YARN. Redefine as appropriate.
3. Run ./airflow.sh up to bring up the whole infrastructure. Airflow UI will be available at localhost:8080.
4. Ctrl+C to stop Airflow. Then ./airflow.sh down to dispose of remaining Airflow processes (shouldn't be needed there if everything goes well).

...use it in my project?

```shell
pip install airflow-livy-plugins
```

Then link or copy the plugin files into $AIRFLOW_HOME/plugins (see how I do that in ./airflow.sh). They'll get loaded into Airflow via the Plugin Manager automatically. This is how you import the plugins:

```python
from airflow.operators import LivySessionOperator
from airflow.operators import LivyBatchOperator
```

Plugins are loaded at run-time so the imports above will look broken in your IDE, but will work fine in Airflow. Take a look at the sample DAGs to see my workaround :)

...set up the development environment?

Alright, you want to contribute and need to be able to run the stuff on your machine, as well as the usual niceness that comes with IDEs (debugging, syntax highlighting). How do I:

- run ./airflow.sh dev to install all dev dependencies.
- ./airflow.sh updev runs local Airflow with local plugins (as opposed to pulling them from PyPI).
- (PyCharm-specific) point PyCharm to your newly-created virtual environment: go to "Preferences" -> "Project: airflow-livy-plugins" -> "Project interpreter", select "Existing environment" and pick the python3 executable from the venv folder (venv/bin/python3).
- ./airflow.sh cov - run tests with coverage report (will be saved to htmlcov/).
- ./airflow.sh lint - highlight code style errors.
- ./airflow.sh format to reformat all code (Black + isort).

...debug?

(PyCharm-specific) Step-by-step debugging with airflow test and running PySpark batch jobs locally (with debugging as well) is supported via run configurations under .idea/runConfigurations.
You shouldn't have to do anything to use them - just open the folder in PyCharm as a project.

An example of how a batch can be run on local Spark:

```shell
python ./batches/join_2_files.py \
  "file:////Users/vpanov/data/vpanov/bigdata-docker-compose/data/grades.csv" \
  "file:///Users/vpanov/data/vpanov/bigdata-docker-compose/data/ssn-address.tsv" \
  -file1_sep=, -file1_header=true \
  -file1_schema="\`Last name\` STRING, \`First name\` STRING, SSN STRING, Test1 INT, Test2 INT, Test3 INT, Test4 INT, Final INT, Grade STRING" \
  -file1_join_column=SSN -file2_header=false \
  -file2_schema="\`Last name\` STRING, \`First name\` STRING, SSN STRING, Address1 STRING, Address2 STRING" \
  -file2_join_column=SSN -output_header=true \
  -output_columns="file1.\`Last name\` AS LastName, file1.\`First name\` AS FirstName, file1.SSN, file2.Address1, file2.Address2"

# Optionally append to save result to file
# -output_path="file:///Users/vpanov/livy_batch_example"
```

TODO
- airflow.sh - replace with modern tools (e.g. pipenv + Docker image)
- Disable some of flake8 flags for cleaner code
airflow-mailgun-email
airflow-mailgun-email

Airflow Email Backend to send email via Mailgun API.

How to configure

In airflow.cfg:

```
[email]
email_backend = airflow_mailgun_email.email_mailgun.send_email_mailgun

[mailgun]
domain_name = <your mailgun email domain name>
api_password = <api key>
```

How to build in Dev?

```shell
pip install --editable .
```
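Once the backend is configured, anything in Airflow that sends mail (task failure emails, EmailOperator tasks, etc.) should be delivered through Mailgun. As a rough illustration, assuming a standard Airflow 2.x install (the import path below is the stock Airflow one, not part of this package), a task like the following would go out via the configured backend:

```python
# Generic Airflow example (not specific to this package): with the Mailgun
# backend set in airflow.cfg, this email is sent through the Mailgun API.
from datetime import datetime

from airflow import DAG
from airflow.operators.email import EmailOperator  # Airflow 2.x import path

with DAG(
    "mailgun_email_demo",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    notify = EmailOperator(
        task_id="notify",
        to="ops@example.com",  # placeholder recipient
        subject="Hello from Airflow",
        html_content="Sent via the Mailgun email backend.",
    )
```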
airflow-massivedh-plugin
No description available on PyPI.
airflow-mcd
airflow-mcd

Monte Carlo's Airflow provider.

Installation

Requires Python 3.7 or greater and is compatible with Airflow 1.10.14 or greater.

You can install and update using pip. For instance:

```shell
pip install -U airflow-mcd
```

This package can be added like any other python dependency to Airflow (e.g. via requirements.txt).

Basic usage

Callbacks

Sends a webhook back to Monte Carlo upon an event in Airflow. Detailed examples and documentation: https://docs.getmontecarlo.com/docs/airflow-incidents-dags-and-tasks. Callbacks are at the DAG or Task level.

To import:

```python
from airflow_mcd.callbacks import mcd_callbacks
```

Broad Callbacks

If you don't have existing callbacks, these provide all-in-one callbacks: dag_callbacks, task_callbacks.

Examples:

```python
dag = DAG(
    'dag_name',
    **mcd_callbacks.dag_callbacks,
)

task = BashOperator(
    task_id='task_name',
    bash_command='command',
    dag=dag,
    **mcd_callbacks.task_callbacks,
)
```

Explicit Callbacks

| Callback Type | Description | DAG | Task |
| --- | --- | --- | --- |
| on_success_callback | Invoked when the DAG/task succeeds | mcd_dag_success_callback | mcd_task_success_callback |
| on_failure_callback | Invoked when the DAG/task fails | mcd_dag_failure_callback | mcd_task_failure_callback |
| sla_miss_callback | Invoked when task(s) in a DAG misses its defined SLA | mcd_sla_miss_callback | N/A |
| on_retry_callback | Invoked when the task is up for retry | N/A | mcd_task_retry_callback |
| on_execute_callback | Invoked right before the task begins executing | N/A | mcd_task_execute_callback |

Examples:

```python
dag = DAG(
    'dag_name',
    on_success_callback=mcd_callbacks.mcd_dag_success_callback,
    on_failure_callback=mcd_callbacks.mcd_dag_failure_callback,
    sla_miss_callback=mcd_callbacks.mcd_sla_miss_callback,
)

task = BashOperator(
    task_id='task_name',
    bash_command='command',
    dag=dag,
    on_success_callback=mcd_callbacks.mcd_task_success_callback,
    on_failure_callback=mcd_callbacks.mcd_task_failure_callback,
    on_execute_callback=mcd_callbacks.mcd_task_execute_callback,
    on_retry_callback=mcd_callbacks.mcd_task_retry_callback,
)
```

Hooks:

SessionHook

Creates a pycarlo compatible session. This is useful for creating your own operator built on top of our Python SDK.

This hook expects an Airflow HTTP connection with the Monte Carlo API id as the "login" and the API token as the "password".

Alternatively, you could define both the Monte Carlo API id and token in "extra" with the following format:

```json
{
  "mcd_id": "<ID>",
  "mcd_token": "<TOKEN>"
}
```

See here for details on how to generate a token.

Operators:

BaseMcdOperator

This operator can be extended to build your own operator using our SDK or any other dependencies. This is useful if you want to implement your own custom logic (e.g. creating custom lineage after a task completes).

SimpleCircuitBreakerOperator

This operator can be used to execute a circuit breaker compatible rule (custom SQL monitor) to run integrity tests before allowing any downstream tasks to execute. Raises an AirflowFailException if the rule condition is in breach when using an Airflow version newer than 1.10.11, as that is preferred for tasks that can be failed without retrying. Older Airflow versions raise an AirflowException. For instance:

```python
from datetime import datetime, timedelta

from airflow import DAG

try:
    from airflow.operators.bash import BashOperator
except ImportError:
    # For airflow versions <= 2.0.0. This module was deprecated in 2.0.0.
    from airflow.operators.bash_operator import BashOperator

from airflow_mcd.operators import SimpleCircuitBreakerOperator

mcd_connection_id = 'mcd_default_session'

with DAG('sample-dag', start_date=datetime(2022, 2, 8), catchup=False, schedule_interval=timedelta(1)) as dag:
    task1 = BashOperator(
        task_id='example_elt_job_1',
        bash_command='echo I am transforming a very important table!',
    )
    breaker = SimpleCircuitBreakerOperator(
        task_id='example_circuit_breaker',
        mcd_session_conn_id=mcd_connection_id,
        rule_uuid='<RULE_UUID>',
    )
    task2 = BashOperator(
        task_id='example_elt_job_2',
        bash_command='echo I am building a very important dashboard from the table created in task1!',
        trigger_rule='none_failed',
    )

    task1 >> breaker >> task2
```

This operator expects the following parameters:
- mcd_session_conn_id: A SessionHook compatible connection.
- rule_uuid: UUID of the rule (custom SQL monitor) to execute.

The following parameters can also be passed:
- timeout_in_minutes [default=5]: Polling timeout in minutes. Note that the Data Collector Lambda has a max timeout of 15 minutes when executing a query. Queries that take longer to execute are not supported, so we recommend filtering down the query output to improve performance (e.g. limit WHERE clause). If you expect a query to take the full 15 minutes we recommend padding the timeout to 20 minutes.
- fail_open [default=True]: Prevent any errors or timeouts when executing a rule from stopping your pipeline. Raises AirflowSkipException if set to True and any issues are encountered. Recommended to set the trigger_rule param for any downstream tasks to none_failed in this case.

dbt Operators

The following suite of Airflow operators can be used to execute dbt commands. They include our dbt Core integration (via our Python SDK), to automatically send dbt artifacts to Monte Carlo.

- DbtBuildOperator
- DbtRunOperator
- DbtSeedOperator
- DbtSnapshotOperator
- DbtTestOperator

Example of usage:

```python
from airflow_mcd.operators.dbt import DbtRunOperator

dbt_run = DbtRunOperator(
    task_id='run-model',          # Airflow task id
    project_name='some_project',  # name of project to associate dbt results
    job_name='some_job',          # name of job to associate dbt results
    models='some_model',          # dbt model selector
    mc_conn_id='monte_carlo',     # id of Monte Carlo API connection configured in Airflow
)
```

Many more operator options are available. See the base DbtOperator for a comprehensive list.

Advanced Configuration

To reduce repetitive configuration of the dbt operators, you can define a DefaultConfigProvider that would apply configuration to every Monte Carlo dbt operator.

Example of usage:

```python
from airflow_mcd.operators.dbt import DbtConfig, DefaultConfigProvider


class DefaultConfig(DefaultConfigProvider):
    """
    This default configuration will be applied to all Monte Carlo dbt operators.
    Any property defined here can be overridden with arguments provided to an operator.
    """

    def config(self) -> DbtConfig:
        return DbtConfig(
            mc_conn_id='monte_carlo',
            env={
                'foo': 'bar',
            },
        )
```

The location of this class should be provided in an environment variable:

```
AIRFLOW_MCD_DBT_CONFIG_PROVIDER=configs.dbt.DefaultConfig
```

If you are using AWS Managed Apache Airflow (MWAA), the location of this class should be defined in a configuration option in your Airflow environment:

```
mc.airflow_mcd_dbt_config_provider=configs.dbt.DefaultConfig
```

Tests and releases

Locally, make test will run all tests. See README-dev.md for additional details on development. When ready for a review, create a PR against main.

When ready to release, create a new GitHub release with a tag using semantic versioning (e.g. v0.42.0) and CircleCI will test and publish to PyPI. Note that an existing version will not be deployed.

License

Apache 2.0 - See the LICENSE for more information.
airflow-metaplane
Metaplane Airflow Provider

Set up instructions in Metaplane docs: https://docs.metaplane.dev/docs/airflow
airflow-metrics
airflow-metrics

airflow-metrics is an Airflow plugin for automatically sending metrics from Airflow to Datadog.

Tested for: apache-airflow>=1.10.2, <=1.10.3

Installation

```shell
pip install airflow-metrics
```

Optional

If you want the metrics from BigQueryOperator and GoogleCloudStorageToBigQueryOperator, then make sure the necessary dependencies are installed:

```shell
pip install apache-airflow[gcp_api]
```

Setup

airflow-metrics will report all metrics to Datadog, so create an airflow connection with your Datadog api key:

```shell
airflow connections --add --conn_id datadog_default --conn_type HTTP --conn_extra '{"api_key": "<your api key>"}'
```

Note: If you skip this step, your airflow installation should still work but no metrics will be reported.

Usage

That's it! airflow-metrics will now begin sending metrics from Airflow to Datadog automatically.

Metrics

airflow-metrics will automatically begin reporting the following metrics:
- airflow.task.state: The total number of tasks in a state where the state is stored as a tag.
- airflow.task.state.bq: The current number of big query tasks in a state where the state is stored as a tag.
- airflow.dag.duration: The duration of a DAG in ms.
- airflow.task.duration: The duration of a task in ms.
- airflow.request.duration: The duration of a HTTP request in ms.
- airflow.request.status.success: The current number of HTTP requests with successful status codes (<400).
- airflow.request.status.failure: The current number of HTTP requests with unsuccessful status codes (>=400).
- airflow.task.upserted.bq: The number of rows upserted by a BigQueryOperator.
- airflow.task.delay.bq: The time taken for the big query job from a BigQueryOperator to start in ms.
- airflow.task.duration.bq: The time taken for the big query job from a BigQueryOperator to finish in ms.
- airflow.task.upserted.gcs_to_bq: The number of rows upserted by a GoogleCloudStorageToBigQueryOperator.
- airflow.task.delay.gcs_to_bq: The time taken for the big query from a GoogleCloudStorageToBigQueryOperator to start in ms.
- airflow.task.duration.gcs_to_bq: The time taken for the big query from a GoogleCloudStorageToBigQueryOperator to finish in ms.

Configuration

By default, airflow-metrics will begin extracting metrics from Airflow as you run your DAGs and send them to Datadog. You can opt out of it entirely or opt out of a subset of the metrics by setting these configurations in your airflow.cfg:

```
[airflow_metrics]
airflow_metrics_enabled = True
airflow_metrics_tasks_enabled = True
airflow_metrics_bq_enabled = True
airflow_metrics_gcs_to_bq_enabled = True
airflow_metrics_requests_enabled = True
airflow_metrics_thread_enabled = True
```

Limitations

airflow-metrics starts a thread to report some metrics, and is not supported when using sqlite as your database.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Getting Started

Set up your virtual environment for python3 however you like.

```shell
pip install -e .
airflow initdb
airflow connections --add --conn_id datadog_default --conn_type HTTP --conn_extra '{"api_key": ""}'
```

Note: The last step is necessary, otherwise the plugin will not initialize correctly and will not collect metrics. But you are free to add a dummy key for development purposes.

Running Tests

```shell
pip install -r requirements-dev.txt
pytest
```
airflow-metrics-gbq
Airflow Metrics to BigQuery

Sends airflow metrics to BigQuery.

Installation

```shell
pip install airflow-metrics-gbq
```

Usage

1. Activate statsd metrics in airflow.cfg:

```
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
```

2. Restart the webserver and the scheduler:

```shell
systemctl restart airflow-webserver.service
systemctl restart airflow-scheduler.service
```

3. Check that airflow is sending out metrics:

```shell
nc -l -u localhost 8125
```

4. Install this package.
5. Create the required tables (counters, gauges and timers); an example is shared here.
6. Create materialized views which refresh when the base table changes, as described here.
7. Create a simple python script monitor.py to provide configuration:

```python
from airflow_metrics_gbq.metrics import AirflowMonitor

if __name__ == '__main__':
    monitor = AirflowMonitor(
        host="localhost",                                # Statsd host (airflow.cfg)
        port=8125,                                       # Statsd port (airflow.cfg)
        gcp_credentials="path/to/service/account.json",
        dataset_id="monitoring",                         # dataset where the monitoring tables are
        counts_table="counts",                           # counters table
        last_table="last",                               # gauges table
        timers_table="timers",                           # timers table
    )
    monitor.run()
```

8. Run the program, ideally in the background, to start sending metrics to BigQuery:

```shell
python monitor.py &
```

The logs can be viewed in the GCP console under the airflow_monitoring app_name in Google Cloud Logging.

Future releases
- Increase test coverage (unit and integration tests)
- Add proper typing and mypy support and checks
- Provide more configurable options
- Provide better documentation
airflow-multi-dagrun
Multi dag run

This plugin contains operators for triggering a DAG run multiple times, and you can dynamically specify how many DAG run instances to create. It can be useful when you have to handle big data and you want to split it into chunks and run multiple instances of the same task in parallel. When you see a lot of launched target DAGs you can set up more workers, so this makes it pretty easy to scale.

Install

```shell
pip install airflow_multi_dagrun
```

Example

Code for scheduling dags:

```python
import datetime as dt

from airflow import DAG
from airflow_multi_dagrun.operators import TriggerMultiDagRunOperator


def generate_dag_run():
    for i in range(100):
        yield {'index': i}


default_args = {
    'owner': 'airflow',
    'start_date': dt.datetime(2015, 6, 1),
}

dag = DAG('reindex_scheduler', schedule_interval=None, default_args=default_args)

ran_dags = TriggerMultiDagRunOperator(
    task_id='gen_target_dag_run',
    dag=dag,
    trigger_dag_id='example_target_dag',
    python_callable=generate_dag_run,
)
```

This code will schedule the dag with id example_target_dag 100 times and pass a payload to it.

Example of the triggered dag:

```python
dag = DAG(
    dag_id='example_target_dag',
    schedule_interval=None,
    default_args={'start_date': datetime.utcnow(), 'owner': 'airflow'},
)


def run_this_func(dag_run, **kwargs):
    print("Chunk received: {}".format(dag_run.conf['index']))


chunk_handler = PythonOperator(
    task_id='chunk_handler',
    provide_context=True,
    python_callable=run_this_func,
    dag=dag,
)
```

Run example

There is a docker-compose config, so it requires docker to be installed: docker, docker-compose.
- make init - create db
- make add-admin - create admin user (it asks for a password)
- make web - start docker containers, run airflow webserver
- make scheduler - start docker containers, run airflow scheduler
- make down - stop and remove docker containers

Contributions

If you have found a bug or have some idea for improvement, feel free to create an issue or pull request.

License

Apache 2.0
airflownetwork
AirflowNetwork

Table of Contents: About, Installation, Checks, License

About

This is a small library of functions and classes to examine EnergyPlus AirflowNetwork models. A driver program is provided to analyze models in the epJSON format. To summarize the model contents:

```shell
airflownetwork summarize my_model.epJSON
```

To create a graph of the model in the DOT format:

```shell
airflownetwork graph my_model.epJSON
```

To generate an audit of the model:

```shell
airflownetwork audit my_model.epJSON
```

Further help is available on the command line:

```shell
airflownetwork --help
```

Installation

```shell
pip install airflownetwork
```

Checks

The script checks for a number of issues that may cause an AirflowNetwork model to function poorly. These include:

Link Counts

Models with a large number of links (particularly those with too many links between adjacent zones) may model the building correctly and the simulation results may be correct, but the model performance may be quite poor. A situation that has been observed in user models is the use of individual window elements to model each and every window in a building. This may be correct, but as the number of windows increases the performance of the model will suffer, and the performance hit may be avoidable if windows that behave the same are lumped together. For example, if 10 windows connect a zone to the ambient, but all 10 windows experience the same wind pressure and temperature difference, a single window that represents all 10 will be sufficient and eliminates 9 of the 10 calculations required.

The audit command counts the links between zones and flags those that are considered excessive.

Connectedness

Models in which there are zones that are "isolated" (i.e., are not connected to the rest of the model via linkages) have been known to be sensitive to convergence issues. For the most part, the solution procedure can handle multiple isolated subnetworks in a single matrix solution, but issues that are encountered with these models can be hard to diagnose. The easiest fix to connect together subnetworks is to add one or more linkages with very high flow resistance (e.g., a crack element with a very small flow coefficient).

The audit command checks that the multizone network (just with surfaces) and the full network (multizone + distribution) are fully connected. Models with intrazone features are not currently supported.

License

airflownetwork is distributed under the terms of the BSD-3-Clause license.
airflow-no-cache
Failed to fetch description. HTTP Status Code: 404
airflow-notebook
{{description}}
airflow-notify-sns
Publish Airflow Notification to a SNS Topic

This package adds a callback function to use on failures of DAGs and Tasks in an Airflow project.

Installation

```shell
pip install airflow-notify-sns
```

Usage

```python
from datetime import timedelta

# Airflow native imports to create a DAG
from airflow import DAG, utils
from airflow.operators.bash_operator import BashOperator

# Here is the function import
from airflow_notify_sns import airflow_notify_sns

# Dag Definition
dag = DAG(
    dag_id='test_dag',
    default_args={
        'owner': 'airflow',
        'depends_on_past': False,
        'start_date': utils.dates.days_ago(1),
        'retries': 3,
        'retry_delay': timedelta(minutes=5),
    },
    schedule_interval="@daily",
    dagrun_timeout=timedelta(minutes=60),
    sla_miss_callback=airflow_notify_sns,
    on_failure_callback=airflow_notify_sns,
)

# Add your tasks here
t = BashOperator(
    dag=dag,
    task_id='test_env',
    bash_command='/tmp/test.sh',
    env={'EXECUTION_DATE': '{{ ds }}'},
    on_failure_callback=airflow_notify_sns,
)
```

When a DAG or task ends in error, a notification will be sent to a SNS Topic using the AWS default connection (aws_default).

Required Variable

This module will try to find a variable named airflow_notify_sns_arn in your Airflow environment, containing the SNS Topic ARN where the message will be published to. If the variable is not found, the function will abort execution with no error.
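As a small illustration, the variable can be registered from Python with Airflow's standard Variable API (or equivalently via the UI or the airflow variables CLI); the ARN value below is a placeholder, not a real topic:

```python
# Illustrative only: register the SNS topic ARN that the callback looks up.
# The ARN below is a placeholder value.
from airflow.models import Variable

Variable.set(
    "airflow_notify_sns_arn",
    "arn:aws:sns:us-east-1:123456789012:airflow-alerts",
)
```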
airflownz
Airflow Hook for Netezza

Install this package in the python environment with Airflow to use the Netezza Hook.

Installation:

```shell
pip install airflownz
```

More about creating your own provider packages - https://airflow.apache.org/docs/apache-airflow-providers/index.html#how-to-create-your-own-provider
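The description does not show the hook's import path, so the following is purely a hypothetical sketch of how a provider hook like this is typically used from a task; the module and class names (airflownz.hooks, NetezzaHook), the argument name conn_id and the get_records method are assumptions, not documented by the package:

```python
# Hypothetical usage sketch: the import path, class name, argument name and
# method below are assumptions, not taken from the package's documentation.
from airflow.decorators import task  # Airflow 2.x


@task
def fetch_rows():
    from airflownz.hooks import NetezzaHook  # assumed module/class name

    hook = NetezzaHook(conn_id="netezza_default")  # assumed argument name
    return hook.get_records("SELECT 1")  # DbApiHook-style method, assumed here
```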
airflow-oracle-snowflake-plugin
airflow-oracle-snowflake-plugin

Steps to use the OracleToSnowflake from the plugin:

1. Install the plugin by pip install airflow-oracle-snowflake-plugin. You can put airflow-oracle-snowflake-plugin in the requirements.txt file for CI/CD operations. This plugin will also install the following dependencies if not already satisfied:
   - oracledb
   - apache-airflow-providers-oracle
   - apache-airflow-providers-snowflake

2. Create config.py inside the dags/table_config directory. This file will include the necessary information about the source and destination database table specifications. It will have the structure as follows:

```python
CONFIG = [
    {
        'source_schema': 'ADMIN',
        'source_table': 'CUSTOMERS',
        'destination_schema': 'PUBLIC',
        'destination_table': 'CUSTOMERS',
        'columns': [
            ('ID', 'varchar'),
            ('FULL_NAME', 'varchar'),
            ('ADDRESS', 'varchar'),
            ('EMAIL', 'varchar'),
            ('PHONE_NUMBER', 'varchar'),
        ],
    },
]
```

3. Import the operator, sql_utils and the config in your DAG python file by including the following statements:

```python
from airflow_oracle_snowflake_plugin.oracle_to_snowflake_operator import OracleToSnowflake
import airflow_oracle_snowflake_plugin.utils.sql_utils as sql_utils
from table_config.config import CONFIG
```

4. Implement a for loop to iterate over all the table configurations and create DAG tasks using the operator as follows:

```python
for config in CONFIG:
    create_table_statement = sql_utils.get_create_statement(
        table_name=config.get('destination_table'),
        columns_definition=config.get('columns'),
    )
    create_table_if_not_exists = SnowflakeOperator(
        task_id='create_{}'.format(config.get('destination_table')),
        snowflake_conn_id='SNOWFLAKE',
        sql=create_table_statement,
        warehouse='LANDING',
        database='LANDING_DEV',
        role='ACCOUNTADMIN',
        schema=config.get('destination_schema'),
        dag=dag,
    )

    fill_table_statement = sql_utils.get_select_statement(
        table_name=config.get('source_table'),
        schema_name=config.get('source_schema'),
        columns_definition=config.get('columns'),
        sql_server_syntax=False,
    )
    oracle_to_snowflake_operator = OracleToSnowflake(
        task_id='recreate_{}'.format(config.get('destination_table')),
        dag=dag,
        warehouse='LANDING',
        database='LANDING_DEV',
        role='ACCOUNTADMIN',
        schema='PUBLIC',
        source_schema=config.get('source_schema'),
        source_table=config.get('source_table'),
        destination_schema=config.get('destination_schema'),
        destination_table=config.get('destination_table'),
        fill_table_statement=fill_table_statement,
        snowflake_conn_id='SNOWFLAKE',
        oracle_conn_id='ORACLE',
        recreate_table=True,
    )

    create_table_if_not_exists >> oracle_to_snowflake_operator
```

This script will create two tasks for each table in the Oracle database that you want to migrate. This will be determined by the CONFIG array in config.py.

First Task

The first task creates the table in the Snowflake database if it doesn't exist already, using the SnowflakeOperator. It requires:
- An existing airflow connection to your Snowflake account
- Name of the warehouse to use ('LANDING' in the example above)
- Name of the database to use ('LANDING_DEV' in the example above)
- Name of the role to use ('ACCOUNTADMIN' in the example above)

It takes an SQL statement which we have provided as the create_table_statement generated by the sql_utils.get_create_statement method. The method uses CONFIG and extracts the table name, columns, and their data types.

Second Task

The second task uses the OracleToSnowflake operator from the plugin. It creates a temporary csv file after selecting the rows from the source table, uploads it to a Snowflake stage, and finally uploads it to the destination table in Snowflake. It requires:
- An existing airflow connection id to your Snowflake account as well as your Oracle database instance. The connection IDs will default to SNOWFLAKE and ORACLE if not provided.

Inside the operator, a custom Snowflake hook is used which will upload the csv file to a Snowflake table. This hook requires:
- Name of the warehouse to use (defaults to 'LANDING' if not provided)
- Name of the database to use (defaults to 'LANDING_DEV' if not provided)
- Name of the role to use (defaults to 'ACCOUNTADMIN' if not provided)

It takes an SQL statement which we have provided as the fill_table_statement generated by the sql_utils.get_select_statement method. The method uses CONFIG and extracts the table name, schema, and the columns.

Note: Added tags to facilitate version releasing and CI/CD operations.
airflow-pentaho-plugin
Pentaho Airflow plugin

This plugin runs Jobs and Transformations through Carte servers. It allows you to orchestrate a massive number of trans/jobs taking care of the dependencies between them, even between different instances. This is done by using CarteJobOperator and CarteTransOperator.

It also runs Pan (transformations) and Kitchen (Jobs) in local mode, both from repository and local XML files. For this approach, use KitchenOperator and PanOperator.

Requirements
- An Apache Airflow system deployed.
- One or many working PDI CE installations.
- A Carte server for Carte Operators.

Setup

The same setup process must be performed on the webserver, scheduler and workers (that run these tasks) to get it working. If you want to deploy specific workers to run this kind of tasks, see Queues, in the Airflow Concepts section.

Pip package

First of all, the package should be installed via the pip install command:

```shell
pip install airflow-pentaho-plugin
```

Airflow connection

Then, a new connection needs to be added to Airflow Connections. To do this, go to the Airflow web UI, and click on Admin -> Connections on the top menu. Now, click on the Create tab.

Use the HTTP connection type. Enter the Conn Id (this plugin uses pdi_default by default), and the username and the password for your Pentaho Repository.

At the bottom of the form, fill the Extra field with pentaho_home, the path where your pdi-ce is placed, and rep, the repository name for this connection, using a json formatted string like it follows:

```json
{
    "pentaho_home": "/opt/pentaho",
    "rep": "Default"
}
```

Carte

In order to use CarteJobOperator, the connection should be set up differently. Fill host (including http:// or https://) and port for the Carte hostname and port, username and password for the PDI repository, and extra as it follows:

```json
{
    "rep": "Default",
    "carte_username": "cluster",
    "carte_password": "cluster"
}
```

Usage

CarteJobOperator

CarteJobOperator is responsible for running jobs in remote slave servers. Here is an example of CarteJobOperator usage:

```python
# For versions before 2.0
# from airflow.operators.airflow_pentaho import CarteJobOperator

from airflow_pentaho.operators.carte import CarteJobOperator

# ... #

# Define the task using the CarteJobOperator
avg_spent = CarteJobOperator(
    conn_id='pdi_default',
    task_id="average_spent",
    job="/home/bi/average_spent",
    params={"date": "{{ ds }}"},  # Date in yyyy-mm-dd format
    dag=dag,
)

# ... #

some_task >> avg_spent >> another_task
```

KitchenOperator

Kitchen operator is responsible for running Jobs. Let's suppose that we have a defined Job saved on /home/bi/average_spent in our repository with the argument date as an input parameter. Let's define the task using the KitchenOperator:

```python
# For versions before 2.0
# from airflow.operators.airflow_pentaho import KitchenOperator

from airflow_pentaho.operators.kettle import KitchenOperator

# ... #

# Define the task using the KitchenOperator
avg_spent = KitchenOperator(
    conn_id='pdi_default',
    queue="pdi",
    task_id="average_spent",
    directory="/home/bi",
    job="average_spent",
    params={"date": "{{ ds }}"},  # Date in yyyy-mm-dd format
    dag=dag,
)

# ... #

some_task >> avg_spent >> another_task
```

CarteTransOperator

CarteTransOperator is responsible for running transformations in remote slave servers. Here is an example of CarteTransOperator usage:

```python
# For versions before 2.0
# from airflow.operators.airflow_pentaho import CarteTransOperator

from airflow_pentaho.operators.carte import CarteTransOperator

# ... #

# Define the task using the CarteTransOperator
enriche_customers = CarteTransOperator(
    conn_id='pdi_default',
    task_id="enrich_customer_data",
    job="/home/bi/enrich_customer_data",
    params={"date": "{{ ds }}"},  # Date in yyyy-mm-dd format
    dag=dag,
)

# ... #

some_task >> enriche_customers >> another_task
```

PanOperator

Pan operator is responsible for running transformations. Let's suppose that we have one saved on /home/bi/clean_somedata. Let's define the task using the PanOperator. In this case, the transformation receives a parameter that determines the file to be cleaned:

```python
# For versions before 2.0
# from airflow.operators.airflow_pentaho import PanOperator

from airflow_pentaho.operators.kettle import PanOperator

# ... #

# Define the task using the PanOperator
clean_input = PanOperator(
    conn_id='pdi_default',
    queue="pdi",
    task_id="cleanup",
    directory="/home/bi",
    trans="clean_somedata",
    params={"file": "/tmp/input_data/{{ ds }}/sells.csv"},
    dag=dag,
)

# ... #

some_task >> clean_input >> another_task
```

For more information, please see sample_dags/pdi_flow.py
airflowPlugin
Failed to fetch description. HTTP Status Code: 404
airflow-plugin-config-storage
airflow-plugin-config-storage

Inject connections into the airflow database from configuration.

Quickstart

Basic

```shell
$ pip install airflow-plugin-config-storage
$ export AIRFLOW_CONN_POSTGRES_MASTER=postgres://username:[email protected]/my-schema
$ load-airflow-conf-env-var
$ airflow webserver
```

Common CLI Commands

- delete-all-airflow-connnections: Removes all the connections from Airflow. Used to clean out the default connections.

Environment Variables

Structure

The Environment Variables to read from by default are the same as those defined in the Airflow documentation.

CLI Commands

- load-airflow-conf-env-var: Takes a single optional argument --env-var-prefix ENV_VAR_PREFIX to override the Environment Variable prefix. Default is AIRFLOW_CONN_.

AWS

NOTE: Not yet implemented, these are proposals.

SSM

Hierarchy:

```
/${user_prefix}/airflow/connections/
[
  {
    "conn_type": "s3",
  },
  {}
]
/${user_prefix}/airflow/
```
airflow-plugin-glue-presto-apas
airflow-plugin-glue_presto_apas

An Airflow Plugin to Add a Partition As Select (APAS) on Presto that uses Glue Data Catalog as a Hive metastore.

Usage

```python
from datetime import timedelta

import airflow
from airflow.models import DAG
from airflow.operators.glue_add_partition import GlueAddPartitionOperator
from airflow.operators.glue_presto_apas import GluePrestoApasOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': airflow.utils.dates.days_ago(2),
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG(
    dag_id='example-dag',
    schedule_interval='0 0 * * *',
    default_args=args,
)

GluePrestoApasOperator(
    task_id='example-task-1',
    db='example_db',
    table='example_table',
    sql='example.sql',
    partition_kv={
        'table_schema': 'example_db',
        'table_name': 'example_table',
    },
    catalog_region_name='ap-northeast-1',
    dag=dag,
)

GlueAddPartitionOperator(
    task_id='example-task-2',
    db='example_db',
    table='example_table',
    partition_kv={
        'table_schema': 'example_db',
        'table_name': 'example_table',
    },
    catalog_region_name='ap-northeast-1',
    dag=dag,
)

if __name__ == "__main__":
    dag.cli()
```

Configuration

glue_presto_apas.GluePrestoApasOperator
- db: database name for partitioning (string, required)
- table: table name for partitioning (string, required)
- sql: sql file name for selecting data (string, required)
- fmt: data format when storing data (string, default = parquet)
- additional_properties: additional properties for creating table (dict[string, string], optional)
- location: location for the data (string, default = auto generated by hive repairable way)
- partition_kv: key values for partitioning (dict[string, string], required)
- save_mode: mode when storing data (string, default = overwrite, available values are skip_if_exists, error_if_exists, ignore, overwrite)
- catalog_id: glue data catalog id if you use a catalog different from account/region default catalog (string, optional)
- catalog_region_name: glue data catalog region if you use a catalog different from account/region default catalog (string, default = us-east-1)
- presto_conn_id: connection id for presto (string, default = 'presto_default')
- aws_conn_id: connection id for aws (string, default = 'aws_default')

Templates can be used in the options [db, table, sql, location, partition_kv].

glue_add_partition.GlueAddPartitionOperator
- db: database name for partitioning (string, required)
- table: table name for partitioning (string, required)
- location: location for the data (string, default = auto generated by hive repairable way)
- partition_kv: key values for partitioning (dict[string, string], required)
- mode: mode when storing data (string, default = overwrite, available values are skip_if_exists, error_if_exists, overwrite)
- follow_location: Skip to add a partition and drop the partition if the location does not exist (boolean, default = True)
- catalog_id: glue data catalog id if you use a catalog different from account/region default catalog (string, optional)
- catalog_region_name: glue data catalog region if you use a catalog different from account/region default catalog (string, default = us-east-1)
- aws_conn_id: connection id for aws (string, default = 'aws_default')

Templates can be used in the options [db, table, location, partition_kv].

Development

Run Example

```shell
PRESTO_HOST=${YOUR PRESTO HOST} PRESTO_PORT=${YOUR PRESTO PORT} ./run-example.sh
```

Release

```shell
poetry publish --build
```
airflow_plugin_honeypot
UNKNOWN
airflow-plugins
Airflow Plugins

Airflow plugins.

- Free software: MIT license
- Documentation: https://airflow-plugins.readthedocs.io

Features
- Database operations
- Slack operations
- ZIP operations
- Git operations
- File operations
- File sensors
- Cookiecutter operations
- Airflow variables utils

History

0.1.3 (2018-01-18)
- First release on PyPI.
airflow-poetry-test
No description available on PyPI.
airflow-portainer
Airflow provider for portainer

Operator

```python
PortainerOperator(
    task_id="task",
    portainer_conn_id="portainer",
    endpoint_id=16,
    timeout=30,
    container_name="container",
    command="...",
    user="www-data",
)
```
airflow-postmark
No description available on PyPI.
airflow-prometheus-exporter
Airflow Prometheus Exporter

The Airflow Prometheus Exporter exposes various metrics about the Scheduler, DAGs and Tasks which helps improve the observability of an Airflow cluster.

The exporter is based on this prometheus exporter for Airflow.

Requirements

The plugin has been tested with:
- Airflow >= 1.10.4
- Python 3.6+

The scheduler metrics assume that there is a DAG named canary_dag. In our setup, the canary_dag is a DAG which has tasks that perform very simple actions such as establishing database connections. This DAG is used to test the uptime of the Airflow scheduler itself.

Installation

The exporter can be installed as an Airflow Plugin using:

```shell
pip install airflow-prometheus-exporter
```

This should ideally be installed in your Airflow virtualenv.

Metrics

Metrics will be available at http://<your_airflow_host_and_port>/admin/metrics/

Task Specific Metrics
- airflow_task_status: Number of tasks with a specific status. All the possible states are listed here.
- airflow_task_duration: Duration of successful tasks in seconds.
- airflow_task_fail_count: Number of times a particular task has failed.
- airflow_xcom_param: value of a configurable parameter in the xcom table. The xcom field is deserialized as a dictionary and, if the key is found for a particular task-id, the value is reported as a gauge (a small example DAG appears at the end of this entry).

Add task / key combinations in config.yaml:

```yaml
xcom_params:
  - task_id: abc
    key: count
  - task_id: def
    key: errors
```

A task_id of 'all' will match against all airflow tasks:

```yaml
xcom_params:
  - task_id: all
    key: count
```

Dag Specific Metrics
- airflow_dag_status: Number of DAGs with a specific status. All the possible states are listed here.
- airflow_dag_run_duration: Duration of successful DagRun in seconds.

Scheduler Metrics
- airflow_dag_scheduler_delay: Scheduling delay for a DAG Run in seconds. This metric assumes there is a canary_dag. The scheduling delay is measured as the delay between when a DAG is marked as SCHEDULED and when it actually starts RUNNING.
- airflow_task_scheduler_delay: Scheduling delay for a Task in seconds. This metric assumes there is a canary_dag.
- airflow_num_queued_tasks: Number of tasks in the QUEUED state at any given instance.
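To illustrate the airflow_xcom_param metric above, a task only needs to push a value under the configured key. The sketch below uses plain Airflow 1.10-style APIs; the task_id and key match the xcom_params example, while the DAG name and value are placeholders:

```python
# Illustrative sketch: pushes an XCom value under key "count" for task_id "abc",
# matching the xcom_params example above, so the exporter can report it as a gauge.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def compute_count(**context):
    # Push the value the exporter will surface as airflow_xcom_param.
    context["ti"].xcom_push(key="count", value=42)


dag = DAG("xcom_metric_example", start_date=datetime(2020, 1, 1), schedule_interval=None)

abc = PythonOperator(
    task_id="abc",
    python_callable=compute_count,
    provide_context=True,  # needed on Airflow 1.10.x
    dag=dag,
)
```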
airflow-provider-aerospike
Aerospike Provider for Apache Airflow

Installation

Requirements:
- python 3.8.0+
- aerospike 14.0.0+
- apache-airflow 2.2.0+

You can install this package as:

```shell
pip install airflow-provider-aerospike
```

Configuration

In the Airflow interface, configure a Connection for Aerospike. Configure the following fields:
- Conn Id: aerospike_conn_id
- Conn Type: Aerospike
- Port: Aerospike cluster port (usually at 3000)
- Host: Cluster node address (the client will learn about the other nodes in the cluster from the seed node)

Operators

Currently, the provider supports simple operations such as fetching single or multiple keys and creating/updating keys.

Sensors

Currently, the provider supports simple methods such as checking if single or multiple keys exist.
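If you prefer to create the connection described above in code rather than through the UI, something like the following should work on a stock Airflow 2.x install. This uses only standard Airflow APIs; the lowercase conn_type string and the host value are assumptions based on the "Aerospike" connection type and fields listed above:

```python
# Register the Aerospike connection programmatically (equivalent to filling
# in the Connection form in the Airflow UI). Host is a placeholder value.
from airflow import settings
from airflow.models import Connection

conn = Connection(
    conn_id="aerospike_conn_id",  # Conn Id expected by the provider
    conn_type="aerospike",        # assumed string form of the "Aerospike" Conn Type
    host="10.0.0.5",              # seed node address (placeholder)
    port=3000,                    # Aerospike cluster port
)

session = settings.Session()
session.add(conn)
session.commit()
```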
airflow-provider-alembic
Alembic Airflow Provider

An Airflow Provider to use Alembic to manage database migrations. Read more here.

Setup

Locally

Install the Alembic CLI with pip install alembic.

In Airflow

Add airflow-provider-alembic to your requirements.txt or equivalent.

Usage

Create the required files for Alembic in either your dags folder or the include folder:

```shell
mkdir dags/migrations
cd dags/migrations
alembic init .
```

Create a revision:

```shell
alembic revision -m "My Database Revision"
```

Edit the revision - adding, modifying, or removing objects as needed:

```python
from alembic import op
import sqlalchemy as sa


def upgrade():
    # Use ORM to create objects
    op.create_table(
        'foo',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(50), nullable=False),
        sa.Column('description', sa.Unicode(200)),
    )

    # Or run raw SQL
    op.execute("SELECT 1;")


def downgrade():
    # Specify the opposite of your upgrade, to rollback
    op.drop_table('account')
```

Add a Connection to Airflow. For demo purposes, we will add an in-memory SQLite3 Connection named sqlite via our .env file:

```
AIRFLOW_CONN_SQLITE="sqlite:///:memory:"
```

Restart (or start) your project with astro dev restart.

Add a DAG to run your revision. Because this has @once, it will run as soon as the DAG is turned on. Future runs for future revisions will need to be triggered.

```python
import os
from datetime import datetime

from airflow.models import DAG
from airflow.models.param import Param
from airflow_provider_alembic.operators.alembic import AlembicOperator

with DAG(
    "example_alembic",
    schedule="@once",  # also consider "None"
    start_date=datetime(1970, 1, 1),
    params={
        "command": Param("upgrade"),
        "revision": Param("head"),
    },
) as dag:
    AlembicOperator(
        task_id="alembic_op",
        conn_id="sqlite",
        command="{{ params.command }}",
        revision="{{ params.revision }}",
        script_location="/usr/local/airflow/dags/migrations",
    )
```

Extra Capabilities
- You can utilize any of the Alembic commands in the AlembicOperator - such as downgrade.
- The AlembicHook has methods to run any alembic commands.
airflow-provider-anomaly-detection
Anomaly Detection with Apache AirflowPainless anomaly detection (usingPyOD) withApache Airflow.HowExample AlertAlert Text (ascii art yay!)Alert ChartGetting StartedPrerequisitesInstallationConfigurationDockerAnomaly GalleryHowHow it works in a 🌰:Create and express your metrics via SQL queries (examplehere).Some YAML configuration fun (examplehere, defaultshere).Receive useful alerts when metrics look anomalous (examplehere).Theexample dagwill create 4 dags for each "metric batch" (a metric batch is just the resulting table of 1 or more metrics created in step 1 above):<dag_name_prefix><metric_batch_name>_ingestion<dag_name_suffix>: Ingests the metric data into a table in BigQuery.<dag_name_prefix><metric_batch_name>_training<dag_name_suffix>: Uses recent metrics andpreprocess.sqlto train an anomaly detection model for each metric and save it to GCS.<dag_name_prefix><metric_batch_name>_scoring<dag_name_suffix>: Uses latest metrics andpreprocess.sqlto score recent data using latest trained model.<dag_name_prefix><metric_batch_name>_alerting<dag_name_suffix>: Uses recent scores andalert_status.sqlto trigger an alert email if alert conditions are met.Example AlertExample output of an alert. Horizontal bar chart used to show metric values over time. Smoothed anomaly score is shown as a%and any flagged anomalies are marked with*.In the example below you can see that the anomaly score is elevated when the metric dips and also when it spikes.Alert Text (ascii art yay!)🔥 [some_metric_last1h] looks anomalous (2023-01-25 16:00:00) 🔥some_metric_last1h (2023-01-24 15:30:00 to 2023-01-25 16:00:00) t=0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,742.00 72% 2023-01-25 16:00:00 t=-1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 3,165.00 * 81% 2023-01-25 15:30:00 t=-2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 3,448.00 * 95% 2023-01-25 15:15:00 t=-3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 3,441.00 76% 2023-01-25 15:00:00 t=-4 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,475.00 72% 2023-01-25 14:30:00 t=-5 ~~~~~~~~~~~~~~~~~~~~~~~~~~ 1,833.00 72% 2023-01-25 14:15:00 t=-6 ~~~~~~~~~~~~~~~~~~~~ 1,406.00 72% 2023-01-25 14:00:00 t=-7 ~~~~~~~~~~~~~~~~~~~ 1,327.00 * 89% 2023-01-25 13:30:00 t=-8 ~~~~~~~~~~~~~~~~~~~ 1,363.00 78% 2023-01-25 13:15:00 t=-9 ~~~~~~~~~~~~~~~~~~~~~~~~ 1,656.00 66% 2023-01-25 13:00:00 t=-10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,133.00 51% 2023-01-25 12:30:00 t=-11 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,392.00 40% 2023-01-25 12:15:00 t=-12 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,509.00 41% 2023-01-25 12:00:00 t=-13 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,729.00 42% 2023-01-25 11:30:00 t=-14 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,696.00 44% 2023-01-25 11:15:00 t=-15 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,618.00 41% 2023-01-25 11:00:00 t=-16 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,390.00 39% 2023-01-25 10:30:00 t=-17 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,601.00 27% 2023-01-24 20:00:00 t=-18 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,833.00 25% 2023-01-24 17:30:00 t=-19 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,910.00 28% 2023-01-24 17:15:00 t=-20 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,757.00 22% 2023-01-24 17:00:00 t=-21 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,696.00 34% 2023-01-24 16:30:00 t=-22 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,651.00 37% 2023-01-24 16:15:00 t=-23 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,797.00 39% 2023-01-24 16:00:00 t=-24 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2,739.00 40% 2023-01-24 15:30:00Below is the sql to pull the metric in question for 
investigation (this is included in the alert for convenience).select*from`metrics.metrics`mjoin`metrics.metrics_scored`sonm.metric_name=s.metric_nameandm.metric_timestamp=s.metric_timestampwherem.metric_name='some_metric_last1h'orderbym.metric_timestampdescAlert ChartA slightly fancier chart is also attached to alert emails. The top line graph shows the metric values over time. The bottom line graph shows the smoothed anomaly score over time along with the alert status for any flagged anomalies where the smoothed anomaly score passes the threshold.Getting StartedCheck out theexample dagto get started.PrerequisitesCurrently only Google BigQuery is supported as a data source. The plan is to add Snowflake next and then probably Redshift. PRs to add other data sources are very welcome (some refactoring probably needed).Requirements are listed inrequirements.txt.You will need to have a sendgrid_default connection set up in Airflow to send emails. You can also use thesendgrid_api_keyvia environment variable if you prefer. See.example.envfor more details.You will need to have agoogle_cloud_defaultconnection set up in Airflow to pull data from BigQuery. See.example.envfor more details.InstallationInstall fromPyPIas usual.pipinstallairflow-provider-anomaly-detectionConfigurationSee the example configuration files in theexample dagfolder. You can use adefaults.yamlor a specific<metric-batch>.yamlfor each metric batch if needed.DockerYou can use the docker compose file to spin up an Airflow instance with the provider installed and the example dag available. This is useful for quickly trying it out locally. It will mount the local folders (you can see this indocker-compose.yaml) into the container so you can make changes to the code or configs and see them reflected in the running Airflow instance.dockercomposeup-dAnomaly GalleryLook at some of these beautiful anomalies! (More at/anomaly_gallery/README.md)(these are all real anomalies from various business metrics, as I have been dogfooding this at work for a little while now)Sharp drop in a metric followed by an elevated anomaly score.A subtle change and some "saw tooth" behaviour leading to an anomaly.A bump and spike example - two anomalies for one!An example of a regular ETL timing delay.
airflow-provider-appops
AppOpsA toolkit for app advertising delivery and operations.AdmobHooksfromappops.hooks.admobimportAdmobHookh=AdmobHook()h.accountsOut[3]:['pub-xxxx']h.mediationreport_json(query_body={"report_spec":{"dateRange":{"startDate":{"year":2021,"month":8,"day":30},"endDate":{"year":2021,"month":8,"day":30}},"dimensions":["DATE","FORMAT"],#"metrics":["ESTIMATED_EARNINGS","AD_REQUESTS","MATCH_RATE","MATCHED_REQUESTS","IMPRESSIONS","CLICKS"],"sortConditions":[{"metric":"ESTIMATED_EARNINGS","order":"ASCENDING"}],"localizationSettings":{"currencyCode":"USD","languageCode":"en-US"}}},accounts=None,# using accounts from connection)OperatorignoreAds
airflow-provider-azure-machinelearning
Airflow Provider for Azure Machine LearningSource Code|Package_PyPI|Example DAGs|Example Docker ContainersThis package enables you to submit workflows to Azure Machine Learning from Apache Airflow.Pre-requisitesAzure AccountandAzure Machine LearningworkspaceTo verfiy your workspace is set up successfully, you can try to access your workspace atAzure Machine Learning Studio, and try to perform basic actions like allocating compute clusters and submittnig a training job, etc.A runningApache Airflowinstance.InstallationIn you Apache Airflow instance, run:pip install airflow-provider-azure-machinelearningOr, try it out by following examples in thedev folder, or Airflow'sHow-to-Guideto set up Airflow in Docker containers.Configure Azure Machine Learning Connections in AirflowTo send workload to your Azure Machine Learning workspace from Airflow, you need to set up an "Azure Machine Learning" Connection in your Airflow instance:Make sure this package is installed to your Airflow instance. Without this, you will not see "Azure Machine Learning" in the drop down in step 3 and will not be able to add this type of connections.On Airflow web portal, navigate toAdmin-->Connections, and click on+to add a new connection.From the "Connection Type" dropdown, select "Azure Machine Learning". You should see a form like belowConnection Idis a unique identifier for your connection. You will also need to pass this string into AzureML Airflow operators. Check out thoseexample dags.Descriptionis optional. All other fields are required.Tenant ID. You can followthis instructionto retrieve it.Subscription ID,Resource Group Name, andWorkspace Namecan uniquely identify your workspace in Azure Machine Learning. After openingAzure Machine Learning Studio, select the desired workspace, then click the "Change workspace" on the upper-right corner of the website (to the left of the profile icon). Here you can find theWorkspace Name. Now, click "View All Properties in Azure Portal'. This is Azure resource page of your workspace. From there you can retrieveSubscription ID, andResource Group Name.Client IDandSecretare a pair. They are basically 'username' and 'password' to the service principle based authentification process. You need to generate them in Azure Portal, and give it 'Contributor' permissions to the resource group of your workspace. That ensures your Airflow connection can read/write your Azure ML resources to facilitate workloads. Please follow the 3 simple steps below to set them up.To create a service principal, you need to follow 3 simple steps:Create aClient ID. Follow instruction from the "Register an application with Azure AD and create a service principal" section of Azure guidehowto-create-service-principal-portal.Application ID, akaClient ID, is the unique identifier of this service principal.Create aSecret. You can create aSecretunder this application in the Azure Portal following the instructions in the "Option 2: Create a new application secret" section ofthis instruction. Once asecretis successfully created, you will not be able to see the value. So we recommend you store your secret into Azure Key Vault, followingthis instruction.Give this Service PrincipalContribtoraccess to your Azure Machine LearningResource Group. Repeat the instruction form the item 7 above and land on your workspaces' resource page and click on theResource Group. From the left hand panel, selectAccess Control (IAM)and assignContributorrole to the the Application from above. This step is important. 
Without it, your Airflow will not have the necessary write access to necessary resources to create compute clusters, to execute training workloads, or to upload data, etc. Here isan instruction to assign roles.NoteIf "Azure Machine Learning" is missing from the dropdown in step 3 above, it meansairflow-providers-azure-machinelearningpackage is not successfully installed. You can follow instructions in theInstallation sectionto install it, and use commands like ``pip show airflow-provider-azure-machinelearning``` in the Airflow webserver container/machine to verify the package is installed correctly.You can have many connections in one Airflow instance for different Azure Machine Learning workspaces. You can do this to:Orchestrate workloads across multiple workspace/subscription from 1 single DAG.Achieve isolation between different engineers' workload.Achieve isolation between experimental and production environments.The instructions above are for adding a connection via the Airflow UI. You can also do so via the Airflow Cli. You can find more examples of how to do this via Cli atAirflow Documentation. Below is an example Airflow command:airflowconnectionsadd\--conn-type"azure_machine_learning"\--conn-description"[Description]"\--conn-host"schema"\--conn-login"[Client-ID]"\--conn-password"[Secret]"\--conn-extra'{"extra__azure_machine_learning__tenantId": "[Tenant-ID]", "extra__azure_machine_learning__subscriptionId": "[Subscription-ID]", "extra__azure_machine_learning__resource_group_name": "[Resource-Group-Name]", "extra__azure_machine_learning__workspace_name": "[Workspace-Name]"}'\"[Connection-ID]"ExamplesCheck outexample_dagson how to make use of this provider package. If you do not have a running Airflow instance, please refer toexample docker containers, or [Apache Airflow documentations)https://airflow.apache.org/).Dev EnvironmentTo build this package, run its tests, run its linting tools, etc, you will need following:Via pip:pip install -r dev/requirements.txtVia conda:conda env create -f dev/environment.ymlRunning the tests and lintersAll tests are intestsfolder. To run them, from this folder, runpytestThis repo usesblack,flake8, andisortto keep coding format consistent. From this folder, runblack .,isort ., andflake8.IssuesPlease submit issues and pull requests in our official repo:https://github.com/azure/airflow-provider-azure-machinelearning.ContributingThis project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visithttps://cla.opensource.microsoft.com.When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.This project has adopted theMicrosoft Open Source Code of Conduct. For more information see theCode of Conduct FAQor [email protected] any additional questions or comments.TrademarksThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must followMicrosoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. 
Any use of third-party trademarks or logos is subject to those third parties' policies.Release History0.0.1Features AddedFirst preview.
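To sanity-check the service principal and workspace values used in the Airflow connection above before wiring them into a DAG, a standalone script using the `azure-identity` and `azure-ai-ml` SDKs directly can be handy. This is an assumption-level sketch (those SDKs are not part of this provider's documented surface), and every ID below is a placeholder.

```python
# Sketch: verify the service principal can reach the workspace.
# Uses azure-identity and azure-ai-ml directly; all IDs are placeholders.
from azure.ai.ml import MLClient
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<Tenant-ID>",
    client_id="<Client-ID>",
    client_secret="<Secret>",
)
ml_client = MLClient(
    credential=credential,
    subscription_id="<Subscription-ID>",
    resource_group_name="<Resource-Group-Name>",
    workspace_name="<Workspace-Name>",
)

# Listing compute targets exercises read access on the workspace; if the
# Contributor role assignment described above is missing, this call fails
# with an authorization error.
for compute in ml_client.compute.list():
    print(compute.name, compute.type)
```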
airflow-provider-bigquery-reservation
Airflow BigQuery Reservation ProviderWarningThis package is a pre-release of the official apache-airflow-providers-google package. All of these operators will be integrated into the official package soon.This repository provides an Apache Airflow provider based on theBigQuery Reservation API.Airflow OperatorsBigQueryReservationCreateOperator: Buy BigQuery slots (commitments) and assign them to a GCP project (reserve and assign).BigQueryReservationDeleteOperator: Delete BigQuery commitments and remove associated resources (reservation and assignment).BigQueryBiEngineReservationCreateOperator: Create or update a BI Engine reservation.BigQueryBiEngineReservationDeleteOperator: Delete or update a BI Engine reservation.You can find DAG sampleshere.RequirementsAGoogle Cloud connectionhas to be defined. By default, all hooks and operators usegoogle_cloud_default.This connection requires the following roles on the Google Cloud project(s) used in these operators:BigQuery Resource AdminBigQuery Job User-Required forBigQueryReservationCreateOperatorbecause of the reservation attachment check.Defining a new dedicated connection and a custom GCP role are good practices to respect the principle of least privilege.How to installpipinstall--userairflow-provider-bigquery-reservation
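A common use of these operators is a "flex slots" pattern: buy slots before a heavy workload and release them afterwards. The sketch below is purely illustrative: the operator class names come from the list above, but the import path and every keyword argument are assumptions, so check the project's DAG samples for the real signatures.

```python
# Illustrative flex-slots pattern only; import path and keyword arguments
# (project_id, location, slots_provisioning) are assumptions.
from datetime import datetime

from airflow.models import DAG
from airflow_provider_bigquery_reservation.operators.bigquery_reservation import (  # assumed path
    BigQueryReservationCreateOperator,
    BigQueryReservationDeleteOperator,
)

with DAG(
    "example_bigquery_flex_slots",
    schedule=None,
    start_date=datetime(2023, 1, 1),
) as dag:
    buy_slots = BigQueryReservationCreateOperator(
        task_id="buy_slots",
        project_id="my-gcp-project",   # hypothetical argument
        location="US",                 # hypothetical argument
        slots_provisioning=100,        # hypothetical argument
    )
    # ... slot-hungry BigQuery tasks would run between these two steps ...
    release_slots = BigQueryReservationDeleteOperator(
        task_id="release_slots",
        project_id="my-gcp-project",   # hypothetical argument
        location="US",                 # hypothetical argument
    )
    buy_slots >> release_slots
```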
airflow-provider-census
Census Provider for Apache AirflowThis package allows you to trigger syncs forCensus.InstallationInstall theairflow-provider-censuspackage from PyPI using your preferred way of installing python packages.ConfigurationThere are 2 ways to configure a Census connection depending on whether you are using Airflow 1.10 or Airflow 2.TheCensusHookandCensusOperatoruse thecensus_defaultconnection id as a default, although this is configurable if you would like to use your own connection id.Finding the secret-tokenGo to any sync athttps://app.getcensus.com/syncsClick on the Sync Configuration tab.Next to API TRIGGER, click "Click to show"The url will be of the formathttps://bearer:secret-token:[email protected]/api/v1/syncs/0/triggerthe secret token will be of the format "secret-token:arandomstring" in the url above, including the "secret-token:" part. Do not include the "@".Configuration in Airflow 1.10In the Airflow Connections UI, create a new connection:Conn ID: census_defaultConn Type: HTTPPassword: secret-tokenConfiguration in Airflow 2In the Airflow Connections UI, create a new connection:Conn Id: census_defaultConn Type: CensusCensus Secret Token: secret-tokenHooksCensusHookCensusHookis a class that inherits fromHttpHookand can be used to run http requests for Census. You will most likely interact with the operator rather than the hook.The hook can be imported by the following code:fromairflow_provider_census.hooks.censusimportCensusHookOperatorsCensusOperatorCensusOperatortriggers a sync job in Census. The operator takes the following parameters:sync_id : Navigate to the sync and check the url for the sync id. For examplehttps://app.getcensus.com/syncs/0/overviewhere, the sync_id would be 0.census_conn_id : The connection id to use. This is optional and defaults to 'census_default'.The operator can be imported by the following code:fromairflow_provider_census.operators.censusimportCensusOperatorSensorsCensusSensorCensusSensorpolls a sync run in Census. The sensor takes the following parameters:sync_run_id : The sync run id you get back from the CensusOperator which triggers a new sync.census_conn_id : The connection id to use. This is optional and defaults to 'census_default'.The sensor can be imported by the following code:fromairflow_provider_census.sensors.censusimportCensusSensorExampleThe following example will run a Census sync once a day:fromairflow_provider_census.operators.censusimportCensusOperatorfromairflowimportDAGfromairflow.utils.datesimportdays_agofromdatetimeimporttimedeltadefault_args={"owner":"airflow","start_date":days_ago(1)}dag=DAG('census',default_args=default_args)sync=CensusOperator(sync_id=27,dag=dag,task_id='sync')sensor=CensusSensor(sync_run_id="{{ ti.xcom_pull(task_ids = 'sync') }}",dag=dag,task_id='sensor')sync>>sensorFeedbackSource code available on Github. Feedback and pull requests are greatly appreciated. Let us know if we can improve this.From:wave: The folks atCensusoriginally put this together. Have data? We'll sync your data warehouse with your CRM and the customer success apps critical to your team.Need help setting this up?You can always contact us [email protected] the live chat in the bottom right corner.
airflow-provider-chatgpt
Chat Gpt
airflow-provider-clickhouse
airflow-providers-clickhouseThis provider allows Apache Airflow to connect to a Yandex ClickHouse database and run queries using theClickhouseOperator.
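The description above only names the `ClickhouseOperator`, so the DAG sketch below is an assumption-level illustration: the import path and the keyword arguments (`sql`, `clickhouse_conn_id`) are not documented here and may differ in the actual package.

```python
# Illustrative sketch only; import path and keyword arguments are assumptions.
from datetime import datetime

from airflow.models import DAG
from airflow_provider_clickhouse.operators.clickhouse import ClickhouseOperator  # assumed path

with DAG(
    "example_clickhouse_query",
    schedule="@daily",
    start_date=datetime(2023, 1, 1),
) as dag:
    ClickhouseOperator(
        task_id="aggregate_events",
        clickhouse_conn_id="clickhouse_default",  # hypothetical connection id
        sql="SELECT count() FROM events WHERE event_date = today()",  # hypothetical argument name
    )
```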
airflow-provider-couchbase
Packageapache-airflow-providers-couchbaseRelease:1.0.0couchbaseProvider packageThis is a provider package forcouchbaseprovider. All classes for this provider package are inairflow.providers.couchbasepython package.InstallationYou can install this package on top of an existing Airflow 2 installation (seeRequirementsbelow for the minimum Airflow version supported) viapip install apache-airflow-providers-couchbaseThe package supports the following python versions:>=3.7RequirementsPIP packageVersion requiredapache-airflow>=2.3.0couchbase>=4.0.0Changelog1.0.0Initial version of the provider.
airflow-provider-cube
Airflow Cube ProviderCube is the semantic layer for building data applications. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application.Connection requirementsCubeHookuses aGenericconnection. So, in order to create a connection to a Cube you will need to create an AirflowGenericconnection and specify the following required properties:host- your Cube's host (must contain schema, address and port);password- your Cube'sCUBEJS_API_SECRET;extra.security_context- your usersecurityContext(in Cube's terms).The rest of the connection fields will be ignored.Note: theCubeBuildOperatoruses Cube's/cubejs-api/v1/pre-aggregations/jobsendpoint, which is forbidden by default. Make sure that the specifiedsecurityContextand Cube'scontextToPermissionsfunction are configured in a way that allows you to run this query. See Cube's documentation for more context.Package descriptionThis package provides the commonCubeHookclass, the abstractCubeBaseOperator, theCubeQueryOperatorto run analytical queries over a Cube, and theCubeBuildOperatorto run the pre-aggregations build process.DependenciesPython 3.10PackageVersiontyping2.6.0jwt2.6.0json2.0.9requests2.28.2airflow2.5.1
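As a concrete illustration of the connection requirements listed above, the sketch below creates the Generic connection from Python with the documented fields (`host`, `password`, `extra.security_context`). The host, secret, security context contents, and the lowercase `conn_type` string are placeholders or assumptions.

```python
# Sketch: register the Generic connection described above from Python.
# Host, secret and securityContext values are placeholders.
import json

from airflow import settings
from airflow.models import Connection

cube_conn = Connection(
    conn_id="cube_default",  # any id you then pass to the Cube operators
    conn_type="generic",     # assumed string for the Generic connection type
    host="https://my-cube-instance.example.com:4000",  # schema + address + port
    password="<CUBEJS_API_SECRET>",
    extra=json.dumps({"security_context": {"user_id": 42}}),  # placeholder securityContext
)

session = settings.Session()
session.add(cube_conn)
session.commit()
session.close()
```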
airflow-provider-datarobot
DataRobot Provider for Apache AirflowThis package provides operators, sensors, and a hook to integrateDataRobotinto Apache Airflow. Using these components, you should be able to build the essential DataRobot pipeline - create a project, train models, deploy a model, and score predictions against the model deployment.Install the Airflow providerThe DataRobot provider for Apache Airflow requires an environment with the following dependencies installed:Apache Airflow>= 2.3DataRobot Python API Client>= 3.2.0To install the DataRobot provider, run the following command:pipinstallairflow-provider-datarobotCreate a connection from Airflow to DataRobotThe next step is to create a connection from Airflow to DataRobot:In the Airflow user interface, clickAdmin > Connectionstoadd an Airflow connection.On theList Connectionpage, click+ Add a new record.In theAdd Connectiondialog box, configure the following fields:FieldDescriptionConnection Iddatarobot_default(this name is used by default in all operators)Connection TypeDataRobotAPI KeyA DataRobot API key, created in theDataRobot Developer Tools, from theAPI Keyssection.DataRobot endpoint URLhttps://app.datarobot.com/api/v2by defaultClickTestto establish a test connection between Airflow and DataRobot.When the connection test is successful, clickSave.Create preconfigured connections to DataRobotYou can create preconfigured connections to store and manage credentials to use with Airflow Operators, replicating theconnection on the DataRobot side.Currently, the supported credential types are:CredentialsDescriptionDataRobot Basic CredentialsLogin/password pairsDataRobot GCP CredentialsGoogle Cloud Service account keyDataRobot AWS CredentialsAWS access keysDataRobot Azure Storage CredentialsAzure Storage secretDataRobot OAuth CredentialsOAuth tokensDataRobot JDBC DataSourceJDBC connection attributesAftercreating a preconfigured connection through the Airflow UI or API, you can access your stored credentials withGetOrCreateCredentialOperatororGetOrCreateDataStoreOperatorto replicate them in DataRobot and retrieve the correspondingcredentials_idordatastore_id.JSON configuration for the DAG runOperators and sensors use parameters from theconfigJSON submitted when triggering the DAG; for example:{"training_data":"s3-presigned-url-or-local-path-to-training-data","project_name":"ProjectcreatedfromAirflow","autopilot_settings":{"target":"readmitted"},"deployment_label":"DeploymentcreatedfromAirflow","score_settings":{"intake_settings":{"type":"s3","url":"s3://path/to/scoring-data/Diabetes10k.csv","credential_id":"<credential_id>"},"output_settings":{"type":"s3","url":"s3://path/to/results-dir/Diabetes10k_predictions.csv","credential_id":"<credential_id>"}}}These config values are accessible in theexecute()method of any operator in the DAG through thecontext["params"]variable; for example, to get training data, you could use the following:defexecute(self,context:Dict[str,Any])->str:...training_data=context["params"]["training_data"]...ModulesOperatorsGetOrCreateCredentialOperatorFetches a credential by name. This operator attempts to find a DataRobot credential with the provided name. If the credential doesn't exist, the operator creates it using the Airflow preconfigured connection with the same connection name.Returns a credential ID.Required config parameters:ParameterTypeDescriptioncredentials_param_namestrThe name of parameter in the config file for the credential name.GetOrCreateDataStoreOperatorFetches a DataStore by Connection name. 
If the DataStore does not exist, the operator attempts to create it using Airflow preconfigured connection with the same connection name.Returns a credential ID.Required config params:ParameterTypeDescriptionconnection_param_namestrThe name of the parameter in the config file for the connection name.CreateDatasetFromDataStoreOperatorLoads a dataset from a JDBC Connection to the DataRobot AI Catalog.Returns a dataset ID.Required config params:ParameterTypeDescriptiondatarobot_jdbc_connectionstrThe existing preconfigured DataRobot JDBC connection name.dataset_namestrThe name of the loaded dataset.table_schemastrThe database table schema.table_namestrThe source table name.do_snapshotboolIfTrue, creates a snapshot dataset. IfFalse, creates a remote dataset. If unset, uses the server default (True). Creating snapshots from non-file sources may be disabled by theDisable AI Catalog Snapshotspermission.persist_data_after_ingestionboolIfTrue, enforce saving all data (for download and sampling) and allow a user to view the extended data profile (which includes data statistics like min, max, median, mean, histogram, etc.). IfFalse, don't enforce saving data. If unset, uses the server default (True). The data schema (feature names and types) will still be available. Specifying this parameter toFalseanddoSnapshottoTrueresults in an error.UploadDatasetOperatorUploads a local file to the DataRobot AI Catalog.Returns a dataset ID.Required config params:ParameterTypeDescriptiondataset_file_pathstrThe local path to the training dataset.UpdateDatasetFromFileOperatorCreates a new dataset version from a file.Returns a dataset version ID when the new version uploads successfully.Required config params:ParameterTypeDescriptiondataset_idstrThe DataRobot AI Catalog dataset ID.dataset_file_pathstrThe local path to the training dataset.CreateDatasetVersionOperatorCreates a new version of the existing dataset in the AI Catalog.Returns a dataset version ID.Required config params:ParameterTypeDescriptiondataset_idstrThe DataRobot AI Catalog dataset ID.datasource_idstrThe existing DataRobot datasource ID.credential_idstrThe existing DataRobot credential ID.CreateOrUpdateDataSourceOperatorCreates a data source or updates it if it already exists.Returns a DataRobot DataSource ID.Required config params:ParameterTypeDescriptiondata_store_idstrTHe DataRobot datastore ID.CreateProjectOperatorCreates a DataRobot project.Returns a project ID.Several options of source dataset supported:Local file or pre-signed S3 URLCreate a project directly from a local file or a pre-signed S3 URL.Required config params:ParameterTypeDescriptiontraining_datastrThe pre-signed S3 URL or the local path to the training dataset.project_namestrThe project name.Note:In case of an S3 input, thetraining_datavalue must be apre-signed AWS S3 URL.AI Catalog dataset from config fileCreate a project from an existing dataset in the DataRobot AI Catalog using a dataset ID defined in the config file.Required config params:ParameterTypeDescriptiontraining_dataset_idstrThe dataset ID corresponding to existing dataset in the DataRobot AI Catalog.project_namestrThe project name.AI Catalog dataset from previous operatorCreate a project from an existing dataset in the DataRobot AI Catalog using a dataset ID from the previous operator. 
In this case, your previous operator must return a valid dataset ID (for exampleUploadDatasetOperator) and you should use this output value as adataset_idargument in theCreateProjectOperatorobject creation step.Required config params:ParameterTypeDescriptionproject_namestrThe project name.For moreproject settings, see the DataRobot documentation.TrainModelsOperatorRuns DataRobot Autopilot to train models.ReturnsNone.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.Required config params:ParameterTypeDescriptiontargetstrThe name of the column defining the modeling target."autopilot_settings":{"target":"readmitted"}For moreautopilot settings, see the DataRobot documentation.DeployModelOperatorDeploy a specified model.Returns a deployment ID.Parameters:ParameterTypeDescriptionmodel_idstrThe DataRobot model ID.Required config params:ParameterTypeDescriptiondeployment_labelstrThe deployment label name.For moredeployment settings, see the DataRobot documentation.DeployRecommendedModelOperatorDeploys a recommended model.Returns a deployment ID.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.Required config params:ParameterTypeDescriptiondeployment_labelstrThe deployment label name.For moredeployment settings, see the DataRobot documentation.ScorePredictionsOperatorScores batch predictions against the deployment.Returns a batch prediction job ID.Prerequisites:UseGetOrCreateCredentialOperatorto pass acredential_idfrom the preconfigured DataRobot Credentials (Airflow Connections) or manually set thecredential_idparameter in the config.Note:You canadd S3 credentials to DataRobot via the Python API client.Oruse a Dataset ID from the DataRobot AI Catalog.Oruse a DataStore ID for a JDBC source connection; you can useGetOrCreateDataStoreOperatorto passdatastore_idfrom a preconfigured Airflow Connection.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.intake_datastore_idstrThe DataRobot datastore ID for the JDBC source connection.output_datastore_idstrThe DataRobot datastore ID for the JDBC destination connection.intake_credential_idstrThe DataRobot credentials ID for the source connection.output_credential_idstrThe DataRobot credentials ID for the destination connection.Sample config: Pre-signed S3 URL"score_settings":{"intake_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k.csv",},"output_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k_predictions.csv",}}Sample config: Pre-signed S3 URL with a manually set credential ID"score_settings":{"intake_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k.csv","credential_id":"<credential_id>"},"output_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k_predictions.csv","credential_id":"<credential_id>"}}Sample config: Scoring dataset in the AI Catalog"score_settings":{"intake_settings":{"type":"dataset","dataset_id":"<datasetId>",},"output_settings":{}}For morebatch prediction settings, see the DataRobot documentation.GetTargetDriftOperatorGets the target drift from a deployment.Returns a dict with the target drift data.Parameters:ParameterTypeDescriptiondeployment_idstrTHe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"target_drift":{}GetFeatureDriftOperatorGets the feature drift from a deployment.Returns a dict with the feature drift data.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be 
passed in the config as follows:"feature_drift":{}GetServiceStatsOperatorGets service stats measurements from a deployment.Returns a dict with the service stats measurements data.Parameters:ParameterTypeDescriptiondeployment_idstrTHe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"service_stats":{}GetAccuracyOperatorGets the accuracy of a deployment’s predictions.Returns a dict with the accuracy for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"accuracy":{}GetBiasAndFairnessSettingsOperatorGets the Bias And Fairness settings for deployment.Returns a dict with theBias And Fairness settings for a Deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.UpdateBiasAndFairnessSettingsOperatorUpdates the Bias And Fairness settings for deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"protected_features": ["attribute1"], "preferable_target_value": "True", "fairness_metrics_set": "equalParity", "fairness_threshold": 0.1,GetSegmentAnalysisSettingsOperatorGets the segment analysis settings for a deployment.Returns a dict with thesegment analysis settings for a deploymentParameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.UpdateSegmentAnalysisSettingsOperatorUpdates the segment analysis settings for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"segment_analysis_enabled": True, "segment_analysis_attributes": ["attribute1", "attribute2"],GetMonitoringSettingsOperatorGets the monitoring settings for deployment.Returns a dict with the config params for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.Sample monitoring settings:{ "drift_tracking_settings": { } "association_id_settings": { } "predictions_data_collection_settings": { } }DictionaryDescriptiondrift_tracking_settingsThedrift tracking settings for this deployment.association_id_settingsTheassociation ID settings for this deployment.predictions_data_collection_settingsThepredictions data collection settings of this deployment.UpdateMonitoringSettingsOperatorUpdates monitoring settings for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"target_drift_enabled": True, "feature_drift_enabled": True, "association_id_column": ["id"], "required_association_id": False, "predictions_data_collection_enabled": False,BatchMonitoringOperatorCreates a batch monitoring job for the deployment.Returns a batch monitoring job ID.Prerequisites:UseGetOrCreateCredentialOperatorto pass acredential_idfrom the preconfigured DataRobot Credentials (Airflow Connections) or manually set thecredential_idparameter in the config.Note:You canadd S3 credentials to DataRobot via the Python API client.Oruse a Dataset ID from the DataRobot AI Catalog.Oruse a DataStore ID for a JDBC source connection; you can useGetOrCreateDataStoreOperatorto passdatastore_idfrom a preconfigured Airflow Connection.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.datastore_idstrThe DataRobot datastore ID.credential_idstrThe DataRobot credentials ID.Sample config params:Sample 
config"deployment_id":"61150a2fadb5586af4118980","monitoring_settings":{"intake_settings":{"type":"bigquery","dataset":"integration_example_demo","table":"actuals_demo","bucket":"datarobot_demo_airflow",},"monitoring_columns":{"predictions_columns":[{"class_name":"True","column_name":"target_True_PREDICTION"},{"class_name":"False","column_name":"target_False_PREDICTION"},],"association_id_column":"id","actuals_value_column":"ACTUAL",},}Sample config: Manually set credential ID"deployment_id":"61150a2fadb5586af4118980","monitoring_settings":{"intake_settings":{"type":"bigquery","dataset":"integration_example_demo","table":"actuals_demo","bucket":"datarobot_demo_airflow","credential_id":"<credential_id>"},"monitoring_columns":{"predictions_columns":[{"class_name":"True","column_name":"target_True_PREDICTION"},{"class_name":"False","column_name":"target_False_PREDICTION"},],"association_id_column":"id","actuals_value_column":"ACTUAL",},}For morebatch monitoring settings, see the DataRobot documentation.DownloadModelScoringCodeOperatorDownloads scoring code artifact from a model.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.model_idstrThe DataRobot model ID.base_pathstrThe base path for storing a downloaded model artifact.Sample config params:"source_code": False,For morescoring code download parameters, see the DataRobot documentation.DownloadDeploymentScoringCodeOperatorDownloads scoring code artifact from a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.base_pathstrThe base path for storing a downloaded model artifact.Sample config params:"source_code": False, "include_agent": False, "include_prediction_explanations": False, "include_prediction_intervals": False,For morescoring code download parameters, see the DataRobot documentation.SubmitActualsFromCatalogOperatorDownloads scoring code artifact from a deployment.Returns an actuals upload job ID.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.dataset_idstrThe DataRobot AI Catalog dataset ID.dataset_version_idstrThe DataRobot AI Catalog dataset version ID.Sample config params:"association_id_column": "id", "actual_value_column": "ACTUAL", "timestamp_column": "timestamp",StartAutopilotOperatorTriggers DataRobot Autopilot to train a set of models.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.featurelist_idstrSpecifies which feature list to use.relationships_configuration_idstrID of the relationships configuration to use.segmentation_task_idstrID of the segementation task to use.Sample config params:"autopilot_settings": { "target": "column_name", "mode": AUTOPILOT_MODE.QUICK, }For moreanalyze_and_model parameters, see the DataRobot documentation.CreateExecutionEnvironmentOperatorCreate an execution environment.Returns an execution environment ID.Parameters:ParameterTypeDescriptionnamestrThe execution environment name.descriptionstrThe execution environment description.programming_languagestrThe programming language of the environment to be created.Sample config params:"execution_environment_name": "My Demo Env", "custom_model_description": "This is a custom model created by Airflow", "programming_language": "python",For moreexecution environment creation parameters, see the DataRobot documentation.CreateExecutionEnvironmentVersionOperatorCreate an execution environment version.Returns a version ID for the newly created execution environment .Parameters:ParameterTypeDescriptionexecution_environment_idstrThe ID of the 
execution environment.docker_context_pathstrThe file path to a Docker context archive or folder.environment_version_labelstrA short, human-readable string to label the environment version.environment_version_descriptionstrThe execution environment version description.For moreexecution environment version creation parameters, see the DataRobot documentation.CreateCustomInferenceModelOperatorCreate a custom inference model.Returns the ID for the created custom model.Parameters:ParameterTypeDescriptionnamestrName of the custom model.descriptionstrDescription of the custom model.Sample DAG config params:"target_type": - Target type of the custom inference model. Values: [`datarobot.TARGET_TYPE.BINARY`, `datarobot.TARGET_TYPE.REGRESSION`, `datarobot.TARGET_TYPE.MULTICLASS`, `datarobot.TARGET_TYPE.UNSTRUCTURED`] "target_name": - Target feature name. It is optional (ignored if provided) for `datarobot.TARGET_TYPE.UNSTRUCTURED` target type. "programming_language": - Programming language of the custom learning model. "positive_class_label": - Custom inference model positive class label for binary classification. "negative_class_label": - Custom inference model negative class label for binary classification. "prediction_threshold": - Custom inference model prediction threshold. "class_labels": - Custom inference model class labels for multiclass classification. "network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom model. "replicas": - A fixed number of replicas that will be deployed in the cluster.For morecustom inference model creation parameters, see the DataRobot documentation.CreateCustomModelVersionOperatorCreate a custom model version.Returns the version ID for the created custom model.Parameters:ParameterTypeDescriptioncustom_model_idstrThe ID of the custom model.base_environment_idstrThe ID of the base environment to use with the custom model version.training_dataset_idstrThe ID of the training dataset to assign to the custom model.holdout_dataset_idstrThe ID of the holdout dataset to assign to the custom model.custom_model_folderstrThe path to a folder containing files to be uploaded. Each file in the folder is uploaded under path relative to a folder path.create_from_previousboolIf set to True, this parameter creates a custom model version containing files from a previous version.Sample DAG config params:"is_major_update" - The flag defining if a custom model version will be a minor or a major version. "files" - The list of tuples, where values in each tuple are the local filesystem path and the path the file should be placed in the model. "files_to_delete" - The list of a file items IDs to be deleted. "network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom model. "replicas": - A fixed number of replicas that will be deployed in the cluster. "required_metadata_values" - Additional parameters required by the execution environment. 
"keep_training_holdout_data" - If the version should inherit training and holdout data from the previous version.For morecustom inference model creation parameters, see the DataRobot documentation.CustomModelTestOperatorCreate and start a custom model test.Returns an ID for the custom model test.Parameters:ParameterTypeDescriptioncustom_model_idstrThe ID of the custom model.custom_model_version_idstrThe ID of the custom model version.dataset_idstrThe ID of the testing dataset forstructured custom models. Ignored and not required forunstructured models.Sample DAG config params:"network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom model. "replicas": - A fixed number of replicas that will be deployed in the cluster.For morecustom model test creation parameters, see the DataRobot documentation.GetCustomModelTestOverallStatusOperatorGet the overall status for custom model tests.Returns the custom model test status.Parameters:ParameterTypeDescriptioncustom_model_test_idstrThe ID of the custom model test.For morecustom model test get status parameters, see the DataRobot documentation.CreateCustomModelDeploymentOperatorCreate a deployment from a DataRobot custom model image.Returns the deployment ID.Parameters:ParameterTypeDescriptioncustom_model_version_idstrThe ID of the deployed custom model.deployment_namestrA human-readable label for the deployment.default_prediction_server_idstrAn identifier for the default prediction server.descriptionstrA human-readable description of the deployment.importancestrThe deployment importance level.For morecreate_from_custom_model_version parameters, see the DataRobot documentation.GetDeploymentModelOperatorGets information about the deployment's current model.Returns a model information from a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrAn identifier for the deployed model.For moreget deployment parameters, see the DataRobot documentation.ReplaceModelOperatorReplaces the current model for a deployment.Returns model information for the mode replacing the deployed model.Parameters:ParameterTypeDescriptiondeployment_idstrAn identifier for the deployed model.new_model_idstrThe ID of the replacement model. If you are replacing the deployment's model with a custom inference model, you must use a specific custom model version ID.reasonstrThe reason for the model replacement. Must be one of 'ACCURACY', 'DATA_DRIFT', 'ERRORS', 'SCHEDULED_REFRESH', 'SCORING_SPEED', or 'OTHER'. This value will be stored in the model history to keep track of why a model was replaced.For morereplace_model parameters, see the DataRobot documentation.ActivateDeploymentOperatorActivate or deactivate a Deployment.Returns the Deployment status (active or inactive).Parameters:ParameterTypeDescriptiondeployment_idstrAn identifier for the deployed model.activatestrIf set to True, this parameter activates the deployment. 
Set to False to deactivate the deployment.For moreactivate deployment parameters, see the DataRobot documentation.GetDeploymentStatusOperatorGet the deployment status (active or inactive).Returns the deployment status.Parameters:ParameterTypeDescriptiondeployment_idstrAn identifier for the deployed model.For moredeployment parameters, see the DataRobot documentation.RelationshipsConfigurationOperatorCreates a relationship configuration.Returns the relationships configuration ID.Parameters:ParameterTypeDescriptiondataset_definitionsstrA list of dataset definitions. Each element is a dict retrieved from theDatasetDefinitionOperatoroperator.relationshipsstrA list of relationships. Each element is a dict retrieved from DatasetRelationshipOperator operator.feature_discovery_settingsstrOptional. A list of Feature Discovery settings. If not provided, it will be retrieved from the DAG configuration parameters. Otherwise, default settings are used.For moreFeature Discovery parameters, see the DataRobot documentation.DatasetDefinitionOperatorCreates a dataset definition for Feature Discovery.Returns a dataset definition dict.Parameters:ParameterTypeDescriptiondataset_identifierstrThe alias of the dataset, used directly as part of the generated feature names.dataset_idstrThe identifier of the dataset in the AI Catalog.dataset_version_idstrThe identifier of the dataset version in the AI Catalog.primary_temporal_keystrThe name of the column indicating the time of record creation.feature_list_idstrSpecifies the feature list to use.snapshot_policystrThe policy to use when creating a project or making predictions. If omitted, the endpoint will use 'latest' by default.For morecreate-dataset-definitions-and-relationships-using-helper-functions, see the DataRobot documentation.DatasetRelationshipOperatorCreate a relationship between datasets defined in DatasetDefinition.Returns a dataset definition dict.Parameters:ParameterTypeDescriptiondataset1_identifierList[str]Identifier of the first dataset in this relationship. This is specified in the identifier field of thedataset_definitionstructure. If set to None, then the relationship is with the primary dataset.dataset2_identifierList[str]Identifier of the second dataset in this relationship. This is specified in the identifier field of thedataset_definitionschema.dataset1_keysList[str]A list of strings (max length: 10 min length: 1). The column(s) from the first dataset which are used to join to the second dataset.dataset2_keysList[str]A list of strings (max length: 10 min length: 1). The column(s) from the second dataset that are used to join to the first dataset.feature_derivation_window_startintHow many time units of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, If present, the feature engineering graph performs time-aware joins.feature_derivation_window_endintDetermines how many units of time of each dataset's record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. It is a non-positive integer if present. If present, the feature engineering graph performs time-aware joins.feature_derivation_window_time_unitstrThe unit of time the feature derivation window. One ofdatarobot.enums.AllowedTimeUnitsSAFERIf present, time-aware joins will be used. Only applicable when dataset1_identifier is not provided.feature_derivation_windowsListA list of feature derivation windows settings. 
If present, time-aware joins will be used. Only allowed whenfeature_derivation_window_start,feature_derivation_window_end, andfeature_derivation_window_time_unitare not provided.prediction_point_roundingList[dict]Closest value ofprediction_point_rounding_time_unitto round the prediction point into the past when applying the feature derivation if present. Only applicable whendataset1_identifieris not provided.prediction_point_rounding_time_unitstrTime unit of the prediction point rounding. One ofdatarobot.enums.AllowedTimeUnitsSAFER. Only applicable whendataset1_identifieris not provided.For morecreate-dataset-definitions-and-relationships-using-helper-functions, see the DataRobot documentation.ComputeFeatureImpactOperatorCreates a Feature Impact job in DataRobot.Returns a Feature Impact job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.For morerequest_feature_impact, see the DataRobot documentation.ComputeFeatureEffectsOperatorSubmit a request to compute Feature Effects for the model.Returns the Feature Effects job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.For morerequest_feature_impact parameters, see the DataRobot documentation.ComputeShapOperatorSubmit a request to compute a SHAP impact job for the model.Returns a SHAP impact job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.For moreshap-impact parameters, see the DataRobot documentation.CreateExternalModelPackageOperatorCreate an external model package in DataRobot MLOps from JSON configuration.Returns a model package ID of the newly created model package.Parameters:ParameterTypeDescriptionmodel_infostrA JSON object of external model parameters.Example of JSON configuration for a regression model:.. code-block:: python{ "name": "Lending club regression", "modelDescription": { "description": "Regression on lending club dataset" } "target": { "type": "Regression", "name": "loan_amnt" } }Example JSON for a binary classification model:.. code-block:: python{ "name": "Surgical Model", "modelDescription": { "description": "Binary classification on surgical dataset", "location": "/tmp/myModel" }, "target": { "type": "Binary", "name": "complication", "classNames": ["Yes","No"], # minority/positive class should be listed first "predictionThreshold": 0.5 } } }Example JSON for a multiclass classification model:.. code-block:: python{ "name": "Iris classifier", "modelDescription": { "description": "Classification on iris dataset", "location": "/tmp/myModel" }, "target": { "type": "Multiclass", "name": "Species", "classNames": [ "Iris-versicolor", "Iris-virginica", "Iris-setosa" ] } }DeployModelPackageOperatorCreate a deployment from a DataRobot model package.Returns the created deployment ID.Parameters:ParameterTypeDescriptiondeployment_namestrA human readable label of the deployment.model_package_idstrThe ID of the DataRobot model package to deploy.default_prediction_server_idstrAn identifier of a prediction server to be used as the default prediction server. 
When working with prediction environments, the default prediction server ID should not be provided.prediction_environment_idstrAn identifier of a prediction environment to be used for model deployment.descriptionstrA human readable description of the deployment.importancestrDeployment importance level.user_provided_idstrA user-provided unique ID associated with a deployment definition in a remote git repository.additional_metadataDict[str, str]A Key/Value pair dict, with additional metadata.AddExternalDatasetOperatorUpload a new dataset from the AI Catalog to make predictions for a model.Returns an external dataset ID for the model,Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.dataset_idstrDataRobot AI Catalog dataset ID.credential_idstrDataRobot credentials ID.dataset_version_idstrDataRobot AI Catalog dataset version ID.For moreupload_dataset_from_catalog parameters, see the DataRobot documentation.RequestModelPredictionsOperatorRequests predictions against a previously uploaded dataset.Returns a model predictions job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.external_dataset_idstrDataRobot external dataset ID.For morerequest_predictions, see the DataRobot documentation.TrainModelOperatorSubmit a job to the queue to train a model from a specific blueprint.Returns a model training job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.blueprint_idstrDataRobot blueprint ID.featurelist_idstrThe identifier of the feature list to use. If not defined, the default feature list for this project is used.source_project_idstrThe source project that created theblueprint_id. IfNone, it defaults to looking in this project. Note that you must have read permissions in this project.Example of DAG config params: { "sample_pct": "scoring_type": "training_row_count": "n_clusters": }For morestart-training-a-model, see the DataRobot documentation.RetrainModelOperatorSubmit a job to the queue to retrain a model on a specific sample size and/or custom feature list.Returns a model retraining job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.featurelist_idstrThe identifier of the feature list to use. 
If not defined, the default for this project is used.Example of DAG config params: { "sample_pct": "scoring_type": "training_row_count": }For moretrain-a-model-on-a-different-sample-size, see the DataRobot documentation.PredictionExplanationsInitializationOperatorInitialize prediction explanations for a model.Returns a prediction explanations initialization job ID.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.For moreprediction-explanations, see the DataRobot documentation.ComputePredictionExplanationsOperatorCreate prediction explanations for the specified dataset.Returns a job ID for the prediction explanations for the specified dataset.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.model_idstrDataRobot model ID.external_dataset_idstrDataRobot external dataset ID.Example of DAG config params:{ "max_explanations" "threshold_low" "threshold_high" }For moreprediction-explanations, see the DataRobot documentation.SensorsAutopilotCompleteSensorChecks if Autopilot is complete.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.ScoringCompleteSensorChecks if batch scoring is complete.Parameters:ParameterTypeDescriptionjob_idstrThe batch prediction job ID.MonitoringJobCompleteSensorChecks if a monitoring job is complete.Parameters:ParameterTypeDescriptionjob_idstrThe batch monitoring job ID.BaseAsyncResolutionSensorChecks if the DataRobot Async API call is complete.Parameters:ParameterTypeDescriptionjob_idstrThe DataRobot async API call status check ID.DataRobotJobSensorChecks whether a DataRobot job is complete.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.job_idstrDataRobot job ID.ModelTrainingJobSensorChecks whether a DataRobot model training job is complete.Returns False if the job is not yet completed, and returns PokeReturnValue(True, trained_model.id) if model training has completed.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.job_idstrDataRobot job ID.HooksDataRobotHookA hook to initialize the DataRobot Public API client.PipelineThe modules described above allow you to construct a standard DataRobot pipeline in an Airflow DAG:create_project_op >> train_models_op >> autopilot_complete_sensor >> deploy_model_op >> score_predictions_op >> scoring_complete_sensorExample DAGSSee the thedatarobot_provider/example_dagsdirectory for the example DAGs.You can find the following examples using a preconfigured connection in thedatarobot_provider/example_dagsdirectory:Example DAGDescriptiondatarobot_pipeline_dag.pyRun the basic end-to-end workflow in DataRobot.datarobot_score_dag.pyPerform DataRobot batch scoring.datarobot_jdbc_batch_scoring_dag.pyPerform DataRobot batch scoring with a JDBC data source.datarobot_aws_s3_batch_scoring_dag.pyUse DataRobot AWS Credentials withScorePredictionsOperator.datarobot_gcp_storage_batch_scoring_dag.pyUse DataRobot GCP Credentials withScorePredictionsOperator.datarobot_bigquery_batch_scoring_dag.pyUse DataRobot GCP Credentials withScorePredictionsOperator.datarobot_azure_storage_batch_scoring_dag.pyUse DataRobot Azure Storage Credentials withScorePredictionsOperator.datarobot_jdbc_dataset_dag.pyUpload a dataset to the AI Catalog through a JDBC connection.datarobot_batch_monitoring_job_dag.pyRun a batch monitoring job.datarobot_create_project_from_ai_catalog_dag.pyCreate a DataRobot project from a DataRobot AI Catalog dataset.datarobot_create_project_from_dataset_version_dag.pyCreate a DataRobot project from a specific 
dataset version in the DataRobot AI Catalog.datarobot_dataset_new_version_dag.pyCreate a new version of an existing dataset in the AI Catalog.datarobot_dataset_upload_dag.pyUpload a local file to the AI Catalog.datarobot_get_datastore_dag.pyCreate a DataRobot data store withGetOrCreateDataStoreOperator.datarobot_jdbc_dataset_dag.pyCreate a DataRobot project from a JDBC data source.datarobot_jdbc_dynamic_dataset_dag.pyCreate a DataRobot project from a JDBC dynamic data source.datarobot_upload_actuals_catalog_dag.pyUpload actuals from the DataRobot AI Catalog.deployment_service_stats_dag.pyGet a deployment's service statistics withGetServiceStatsOperator.deployment_stat_and_accuracy_dag.pyGet a deployment's service statistics and accuracy.deployment_update_monitoring_settings_dag.pyUpdate a deployment's monitoring settings.deployment_update_segment_analysis_settings_dag.pyUpdate a deployment's segment analysis settings.download_scoring_code_from_deployment_dag.pyDownload a Scoring Code JAR file from a DataRobot deployment.advanced_datarobot_pipeline_jdbc_dag.pyRun the advanced end-to-end workflow in DataRobot.datarobot_autopilot_options_pipeline_dag.pyCreates a DataRobot project and starts Autopilot with advanced options.datarobot_custom_model_pipeline_dag.pyCreate an end-to-end workflow with custom models in DataRobot.datarobot_custom_partitioning_pipeline_dag.pyCreate a custom partitioned project and train models.datarobot_datetime_partitioning_pipeline_dag.pyCreate a datetime partitioned project.datarobot_external_model_pipeline_dag.pyAn end-to-end workflow with external models in DataRobot.datarobot_feature_discovery_pipeline_dag.pyCreate a Feature Discovery project and train models.datarobot_timeseries_pipeline_dag.pyCreate a time series DataRobot project.deployment_activate_deactivate_dag.pyAn example of deployment activation/deactivation and getting deployment status.deployment_replace_model_dag.pyAn example of model replacement for deployments.model_compute_insights_dag.pyAn example of computing Feature Impact and Feature Effects.model_compute_prediction_explanations_dag.pyAn example of a compute prediction explanations job.model_compute_predictions_dag.pyAn example of computing predictions for model.model_compute_shap_dag.pyAn example of computing SHAP.model_retrain_dag.pyExample of model retraining job on a specific sample size/featurelist.model_train_dag.pyExample of model training job based on specific blueprint.The advanced end-to-end workflow in DataRobot (advanced_datarobot_pipeline_jdbc_dag.py) contains the following steps:Ingest a dataset to the AI Catalog from JDBC datasourceCreate a DataRobot projectTrain models using AutopilotDeploy the recommended modelChange deployment settings (enable monitoring settings, segment analysis, and bias and fairness)Run batch scoring using a JDBC datasourceUpload actuals from a JDBC datasourceCollect deployment metrics: service statistics, features drift, target drift, accuracy and process it with custom python operator.IssuesPlease submitissuesandpull requestsin our official repo:https://github.com/datarobot/airflow-provider-datarobotWe are happy to hear from you. Please email any feedback to the authors [email protected] NoticeCopyright 2023 DataRobot, Inc. and its affiliates.All rights reserved.This is proprietary source code of DataRobot, Inc. and its affiliates.Released under the terms of DataRobot Tool and Utility Agreement.
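To make the pipeline order stated above (create project >> train models >> autopilot sensor >> deploy >> score >> scoring sensor) concrete, here is a compact DAG sketch. The operator and sensor names plus their parameters (`project_id`, `deployment_id`, `job_id`) come from the tables in this README, but the module paths and the XCom templating between steps are assumptions; the bundled example DAGs (e.g. `datarobot_pipeline_dag.py`) remain the canonical reference.

```python
# Sketch of the documented pipeline order; module paths and XCom wiring are
# assumptions. Run-time inputs (training_data, target, score_settings, ...)
# come from the DAG run config JSON described above.
from datetime import datetime

from airflow.models import DAG
from datarobot_provider.operators.datarobot import (  # assumed module path
    CreateProjectOperator,
    DeployRecommendedModelOperator,
    ScorePredictionsOperator,
    TrainModelsOperator,
)
from datarobot_provider.sensors.datarobot import (  # assumed module path
    AutopilotCompleteSensor,
    ScoringCompleteSensor,
)

with DAG(
    "datarobot_minimal_pipeline",
    schedule=None,
    start_date=datetime(2023, 1, 1),
) as dag:
    create_project = CreateProjectOperator(task_id="create_project")
    train_models = TrainModelsOperator(
        task_id="train_models",
        project_id="{{ ti.xcom_pull(task_ids='create_project') }}",
    )
    autopilot_done = AutopilotCompleteSensor(
        task_id="autopilot_complete",
        project_id="{{ ti.xcom_pull(task_ids='create_project') }}",
    )
    deploy_model = DeployRecommendedModelOperator(
        task_id="deploy_recommended_model",
        project_id="{{ ti.xcom_pull(task_ids='create_project') }}",
    )
    score_predictions = ScorePredictionsOperator(
        task_id="score_predictions",
        deployment_id="{{ ti.xcom_pull(task_ids='deploy_recommended_model') }}",
    )
    scoring_done = ScoringCompleteSensor(
        task_id="scoring_complete",
        job_id="{{ ti.xcom_pull(task_ids='score_predictions') }}",
    )
    (
        create_project
        >> train_models
        >> autopilot_done
        >> deploy_model
        >> score_predictions
        >> scoring_done
    )
```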
airflow-provider-datarobot-early-access
DataRobot Provider for Apache AirflowThis package provides operators, sensors, and a hook to integrateDataRobotinto Apache Airflow. Using these components, you should be able to build the essential DataRobot pipeline - create a project, train models, deploy a model, and score predictions against the model deployment.Install the Airflow providerThe DataRobot provider for Apache Airflow requires an environment with the following dependencies installed:Apache Airflow>= 2.3DataRobot Python API Client>= 3.2.0To install the DataRobot provider, run the following command:pipinstallairflow-provider-datarobotCreate a connection from Airflow to DataRobotThe next step is to create a connection from Airflow to DataRobot:In the Airflow user interface, clickAdmin > Connectionstoadd an Airflow connection.On theList Connectionpage, click+ Add a new record.In theAdd Connectiondialog box, configure the following fields:FieldDescriptionConnection Iddatarobot_default(this name is used by default in all operators)Connection TypeDataRobotAPI KeyA DataRobot API key, created in theDataRobot Developer Tools, from theAPI Keyssection.DataRobot endpoint URLhttps://app.datarobot.com/api/v2by defaultClickTestto establish a test connection between Airflow and DataRobot.When the connection test is successful, clickSave.Create preconfigured connections to DataRobotYou can create preconfigured connections to store and manage credentials to use with Airflow Operators, replicating theconnection on the DataRobot side.Currently, the supported credential types are:CredentialsDescriptionDataRobot Basic CredentialsLogin/password pairsDataRobot GCP CredentialsGoogle Cloud Service account keyDataRobot AWS CredentialsAWS access keysDataRobot Azure Storage CredentialsAzure Storage secretDataRobot OAuth CredentialsOAuth tokensDataRobot JDBC DataSourceJDBC connection attributesAftercreating a preconfigured connection through the Airflow UI or API, you can access your stored credentials withGetOrCreateCredentialOperatororGetOrCreateDataStoreOperatorto replicate them in DataRobot and retrieve the correspondingcredentials_idordatastore_id.JSON configuration for the DAG runOperators and sensors use parameters from theconfigJSON submitted when triggering the DAG; for example:{"training_data":"s3-presigned-url-or-local-path-to-training-data","project_name":"ProjectcreatedfromAirflow","autopilot_settings":{"target":"readmitted"},"deployment_label":"DeploymentcreatedfromAirflow","score_settings":{"intake_settings":{"type":"s3","url":"s3://path/to/scoring-data/Diabetes10k.csv","credential_id":"<credential_id>"},"output_settings":{"type":"s3","url":"s3://path/to/results-dir/Diabetes10k_predictions.csv","credential_id":"<credential_id>"}}}These config values are accessible in theexecute()method of any operator in the DAG through thecontext["params"]variable; for example, to get training data, you could use the following:defexecute(self,context:Dict[str,Any])->str:...training_data=context["params"]["training_data"]...ModulesOperatorsGetOrCreateCredentialOperatorFetches a credential by name. This operator attempts to find a DataRobot credential with the provided name. If the credential doesn't exist, the operator creates it using the Airflow preconfigured connection with the same connection name.Returns a credential ID.Required config parameters:ParameterTypeDescriptioncredentials_param_namestrThe name of parameter in the config file for the credential name.GetOrCreateDataStoreOperatorFetches a DataStore by Connection name. 
If the DataStore does not exist, the operator attempts to create it using Airflow preconfigured connection with the same connection name.Returns a credential ID.Required config params:ParameterTypeDescriptionconnection_param_namestrThe name of the parameter in the config file for the connection name.CreateDatasetFromDataStoreOperatorLoads a dataset from a JDBC Connection to the DataRobot AI Catalog.Returns a dataset ID.Required config params:ParameterTypeDescriptiondatarobot_jdbc_connectionstrThe existing preconfigured DataRobot JDBC connection name.dataset_namestrThe name of the loaded dataset.table_schemastrThe database table schema.table_namestrThe source table name.do_snapshotboolIfTrue, creates a snapshot dataset. IfFalse, creates a remote dataset. If unset, uses the server default (True). Creating snapshots from non-file sources may be disabled by theDisable AI Catalog Snapshotspermission.persist_data_after_ingestionboolIfTrue, enforce saving all data (for download and sampling) and allow a user to view the extended data profile (which includes data statistics like min, max, median, mean, histogram, etc.). IfFalse, don't enforce saving data. If unset, uses the server default (True). The data schema (feature names and types) will still be available. Specifying this parameter toFalseanddoSnapshottoTrueresults in an error.UploadDatasetOperatorUploads a local file to the DataRobot AI Catalog.Returns a dataset ID.Required config params:ParameterTypeDescriptiondataset_file_pathstrThe local path to the training dataset.UpdateDatasetFromFileOperatorCreates a new dataset version from a file.Returns a dataset version ID when the new version uploads successfully.Required config params:ParameterTypeDescriptiondataset_idstrThe DataRobot AI Catalog dataset ID.dataset_file_pathstrThe local path to the training dataset.CreateDatasetVersionOperatorCreates a new version of the existing dataset in the AI Catalog.Returns a dataset version ID.Required config params:ParameterTypeDescriptiondataset_idstrThe DataRobot AI Catalog dataset ID.datasource_idstrThe existing DataRobot datasource ID.credential_idstrThe existing DataRobot credential ID.CreateOrUpdateDataSourceOperatorCreates a data source or updates it if it already exists.Returns a DataRobot DataSource ID.Required config params:ParameterTypeDescriptiondata_store_idstrTHe DataRobot datastore ID.CreateProjectOperatorCreates a DataRobot project.Returns a project ID.Several options of source dataset supported:Local file or pre-signed S3 URLCreate a project directly from a local file or a pre-signed S3 URL.Required config params:ParameterTypeDescriptiontraining_datastrThe pre-signed S3 URL or the local path to the training dataset.project_namestrThe project name.Note:In case of an S3 input, thetraining_datavalue must be apre-signed AWS S3 URL.AI Catalog dataset from config fileCreate a project from an existing dataset in the DataRobot AI Catalog using a dataset ID defined in the config file.Required config params:ParameterTypeDescriptiontraining_dataset_idstrThe dataset ID corresponding to existing dataset in the DataRobot AI Catalog.project_namestrThe project name.AI Catalog dataset from previous operatorCreate a project from an existing dataset in the DataRobot AI Catalog using a dataset ID from the previous operator. 
In this case, your previous operator must return a valid dataset ID (for exampleUploadDatasetOperator) and you should use this output value as adataset_idargument in theCreateProjectOperatorobject creation step.Required config params:ParameterTypeDescriptionproject_namestrThe project name.For moreproject settings, see the DataRobot documentation.TrainModelsOperatorRuns DataRobot Autopilot to train models.ReturnsNone.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.Required config params:ParameterTypeDescriptiontargetstrThe name of the column defining the modeling target."autopilot_settings":{"target":"readmitted"}For moreautopilot settings, see the DataRobot documentation.DeployModelOperatorDeploy a specified model.Returns a deployment ID.Parameters:ParameterTypeDescriptionmodel_idstrThe DataRobot model ID.Required config params:ParameterTypeDescriptiondeployment_labelstrThe deployment label name.For moredeployment settings, see the DataRobot documentation.DeployRecommendedModelOperatorDeploys a recommended model.Returns a deployment ID.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.Required config params:ParameterTypeDescriptiondeployment_labelstrThe deployment label name.For moredeployment settings, see the DataRobot documentation.ScorePredictionsOperatorScores batch predictions against the deployment.Returns a batch prediction job ID.Prerequisites:UseGetOrCreateCredentialOperatorto pass acredential_idfrom the preconfigured DataRobot Credentials (Airflow Connections) or manually set thecredential_idparameter in the config.Note:You canadd S3 credentials to DataRobot via the Python API client.Oruse a Dataset ID from the DataRobot AI Catalog.Oruse a DataStore ID for a JDBC source connection; you can useGetOrCreateDataStoreOperatorto passdatastore_idfrom a preconfigured Airflow Connection.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.intake_datastore_idstrThe DataRobot datastore ID for the JDBC source connection.output_datastore_idstrThe DataRobot datastore ID for the JDBC destination connection.intake_credential_idstrThe DataRobot credentials ID for the source connection.output_credential_idstrThe DataRobot credentials ID for the destination connection.Sample config: Pre-signed S3 URL"score_settings":{"intake_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k.csv",},"output_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k_predictions.csv",}}Sample config: Pre-signed S3 URL with a manually set credential ID"score_settings":{"intake_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k.csv","credential_id":"<credential_id>"},"output_settings":{"type":"s3","url":"s3://my-bucket/Diabetes10k_predictions.csv","credential_id":"<credential_id>"}}Sample config: Scoring dataset in the AI Catalog"score_settings":{"intake_settings":{"type":"dataset","dataset_id":"<datasetId>",},"output_settings":{}}For morebatch prediction settings, see the DataRobot documentation.GetTargetDriftOperatorGets the target drift from a deployment.Returns a dict with the target drift data.Parameters:ParameterTypeDescriptiondeployment_idstrTHe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"target_drift":{}GetFeatureDriftOperatorGets the feature drift from a deployment.Returns a dict with the feature drift data.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be 
passed in the config as follows:"feature_drift":{}GetServiceStatsOperatorGets service stats measurements from a deployment.Returns a dict with the service stats measurements data.Parameters:ParameterTypeDescriptiondeployment_idstrTHe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"service_stats":{}GetAccuracyOperatorGets the accuracy of a deployment’s predictions.Returns a dict with the accuracy for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required; however, theoptional paramsmay be passed in the config as follows:"accuracy":{}GetBiasAndFairnessSettingsOperatorGets the Bias And Fairness settings for deployment.Returns a dict with theBias And Fairness settings for a Deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.UpdateBiasAndFairnessSettingsOperatorUpdates the Bias And Fairness settings for deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"protected_features": ["attribute1"], "preferable_target_value": "True", "fairness_metrics_set": "equalParity", "fairness_threshold": 0.1,GetSegmentAnalysisSettingsOperatorGets the segment analysis settings for a deployment.Returns a dict with thesegment analysis settings for a deploymentParameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.UpdateSegmentAnalysisSettingsOperatorUpdates the segment analysis settings for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"segment_analysis_enabled": True, "segment_analysis_attributes": ["attribute1", "attribute2"],GetMonitoringSettingsOperatorGets the monitoring settings for deployment.Returns a dict with the config params for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.No config params are required.Sample monitoring settings:{ "drift_tracking_settings": { } "association_id_settings": { } "predictions_data_collection_settings": { } }DictionaryDescriptiondrift_tracking_settingsThedrift tracking settings for this deployment.association_id_settingsTheassociation ID settings for this deployment.predictions_data_collection_settingsThepredictions data collection settings of this deployment.UpdateMonitoringSettingsOperatorUpdates monitoring settings for a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.Sample config params:"target_drift_enabled": True, "feature_drift_enabled": True, "association_id_column": ["id"], "required_association_id": False, "predictions_data_collection_enabled": False,BatchMonitoringOperatorCreates a batch monitoring job for the deployment.Returns a batch monitoring job ID.Prerequisites:UseGetOrCreateCredentialOperatorto pass acredential_idfrom the preconfigured DataRobot Credentials (Airflow Connections) or manually set thecredential_idparameter in the config.Note:You canadd S3 credentials to DataRobot via the Python API client.Oruse a Dataset ID from the DataRobot AI Catalog.Oruse a DataStore ID for a JDBC source connection; you can useGetOrCreateDataStoreOperatorto passdatastore_idfrom a preconfigured Airflow Connection.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.datastore_idstrThe DataRobot datastore ID.credential_idstrThe DataRobot credentials ID.Sample config params:Sample 
config"deployment_id":"61150a2fadb5586af4118980","monitoring_settings":{"intake_settings":{"type":"bigquery","dataset":"integration_example_demo","table":"actuals_demo","bucket":"datarobot_demo_airflow",},"monitoring_columns":{"predictions_columns":[{"class_name":"True","column_name":"target_True_PREDICTION"},{"class_name":"False","column_name":"target_False_PREDICTION"},],"association_id_column":"id","actuals_value_column":"ACTUAL",},}Sample config: Manually set credential ID"deployment_id":"61150a2fadb5586af4118980","monitoring_settings":{"intake_settings":{"type":"bigquery","dataset":"integration_example_demo","table":"actuals_demo","bucket":"datarobot_demo_airflow","credential_id":"<credential_id>"},"monitoring_columns":{"predictions_columns":[{"class_name":"True","column_name":"target_True_PREDICTION"},{"class_name":"False","column_name":"target_False_PREDICTION"},],"association_id_column":"id","actuals_value_column":"ACTUAL",},}For morebatch monitoring settings, see the DataRobot documentation.DownloadModelScoringCodeOperatorDownloads scoring code artifact from a model.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.model_idstrThe DataRobot model ID.base_pathstrThe base path for storing a downloaded model artifact.Sample config params:"source_code": False,For morescoring code download parameters, see the DataRobot documentation.DownloadDeploymentScoringCodeOperatorDownloads scoring code artifact from a deployment.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.base_pathstrThe base path for storing a downloaded model artifact.Sample config params:"source_code": False, "include_agent": False, "include_prediction_explanations": False, "include_prediction_intervals": False,For morescoring code download parameters, see the DataRobot documentation.SubmitActualsFromCatalogOperatorDownloads scoring code artifact from a deployment.Returns an actuals upload job ID.Parameters:ParameterTypeDescriptiondeployment_idstrThe DataRobot deployment ID.dataset_idstrThe DataRobot AI Catalog dataset ID.dataset_version_idstrThe DataRobot AI Catalog dataset version ID.Sample config params:"association_id_column": "id", "actual_value_column": "ACTUAL", "timestamp_column": "timestamp",StartAutopilotOperatorTriggers DataRobot Autopilot to train set of models.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.featurelist_idstrSpecifies which feature list to use.relationships_configuration_idstrID of the relationships configuration to use.segmentation_task_idstrID of the relationships configuration to use.Sample config params:"autopilot_settings": { "target": "column_name", "mode": AUTOPILOT_MODE.QUICK, }For moreanalyze_and_model parameters, see the DataRobot documentation.CreateExecutionEnvironmentOperatorCreate an execution environment.Returns an execution environment ID.Parameters:ParameterTypeDescriptionnamestrThe execution environment name.descriptionstrexecution environment description.programming_languagestrprogramming language of the environment to be created.Sample config params:"execution_environment_name": "My Demo Env", "custom_model_description": "This is a custom model created by Airflow", "programming_language": "python",For moreexecution environment creation parameters, see the DataRobot documentation.CreateExecutionEnvironmentVersionOperatorCreate an execution environment version.Returns a created execution environment version IDParameters:ParameterTypeDescriptionexecution_environment_idstrThe id of the execution 
environment.docker_context_pathstrThe path to a docker context archive or folder.environment_version_labelstrshort human readable string to label the version.environment_version_descriptionstrexecution environment version description.For moreexecution environment version creation parameters, see the DataRobot documentation.CreateCustomInferenceModelOperatorCreate a custom inference model.Returns a created custom model IDParameters:ParameterTypeDescriptionnamestrName of the custom model.descriptionstrDescription of the custom model.Sample DAG config params:"target_type": - Target type of the custom inference model. Values: [`datarobot.TARGET_TYPE.BINARY`, `datarobot.TARGET_TYPE.REGRESSION`, `datarobot.TARGET_TYPE.MULTICLASS`, `datarobot.TARGET_TYPE.UNSTRUCTURED`] "target_name": - Target feature name. It is optional(ignored if provided) for `datarobot.TARGET_TYPE.UNSTRUCTURED` target type. "programming_language": - Programming language of the custom learning model. "positive_class_label": - Custom inference model positive class label for binary classification. "negative_class_label": - Custom inference model negative class label for binary classification. "prediction_threshold": - Custom inference model prediction threshold. "class_labels": - Custom inference model class labels for multiclass classification. "network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom-model. "replicas": - A fixed number of replicas that will be deployed in the cluster.For morecustom inference model creation parameters, see the DataRobot documentation.CreateCustomModelVersionOperatorCreate a custom model version.Returns a created custom model version IDParameters:ParameterTypeDescriptioncustom_model_idstrThe ID of the custom model.base_environment_idstrThe ID of the base environment to use with the custom model version.training_dataset_idstrThe ID of the training dataset to assign to the custom model.holdout_dataset_idstrThe ID of the holdout dataset to assign to the custom model.custom_model_folderstrThe ID of the holdout dataset to assign to the custom model.create_from_previousboolif set to True - creates a custom model version containing files from a previous version.Sample DAG config params:"is_major_update" - The flag defining if a custom model version will be a minor or a major version. "files" - The list of tuples, where values in each tuple are the local filesystem path and the path the file should be placed in the model. "files_to_delete" - The list of a file items ids to be deleted. "network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom-model. "replicas": - A fixed number of replicas that will be deployed in the cluster. "required_metadata_values" - Additional parameters required by the execution environment. "keep_training_holdout_data" - If the version should inherit training and holdout data from the previous version.For morecustom inference model creation parameters, see the DataRobot documentation.CustomModelTestOperatorCreate and start a custom model test.Returns a created custom model test IDParameters:ParameterTypeDescriptioncustom_model_idstrThe ID of the custom model.custom_model_version_idstrThe ID of the custom model version.dataset_idstrThe id of the testing dataset for non-unstructured custom models. 
Ignored and not required for unstructured models.Sample DAG config params:"network_egress_policy": - Determines whether the given custom model is isolated, or can access the public network. "maximum_memory": - The maximum memory that might be allocated by the custom-model. "replicas": - A fixed number of replicas that will be deployed in the cluster.For morecustom model test creation parameters, see the DataRobot documentation.GetCustomModelTestOverallStatusOperatorGet a custom model testing overall status.Returns a custom model test overall statusParameters:ParameterTypeDescriptioncustom_model_test_idstrThe ID of the custom model test.For morecustom model test get status parameters, see the DataRobot documentation.CreateCustomModelDeploymentOperatorCreate a deployment from a DataRobot custom model image.Returns the created deployment idParameters:ParameterTypeDescriptioncustom_model_version_idstrThe id of the DataRobot custom model version to deploydeployment_namestra human readable label (name) of the deploymentdefault_prediction_server_idstran identifier of a prediction server to be used as the default prediction serverdescriptionstra human readable description of the deploymentimportancestrdeployment importanceFor morecreate_from_custom_model_version parameters, see the DataRobot documentation.GetDeploymentModelOperatorGets current model info from a deployment.Returns a model info from a DeploymentParameters:ParameterTypeDescriptiondeployment_idstrDataRobot deployment IDFor moreget deployment parameters, see the DataRobot documentation.ReplaceModelOperatorReplaces the current model for a deployment.Returns a model info from a DeploymentParameters:ParameterTypeDescriptiondeployment_idstrDataRobot deployment IDnew_model_idstrThe id of the new model to use. If replacing the deployment's model with a CustomInferenceModel, a specific CustomModelVersion ID must be used.reasonstrThe reason for the model replacement. Must be one of 'ACCURACY', 'DATA_DRIFT', 'ERRORS', 'SCHEDULED_REFRESH', 'SCORING_SPEED', or 'OTHER'. This value will be stored in the model history to keep track of why a model was replacedFor morereplace_model parameters, see the DataRobot documentation.ActivateDeploymentOperatorActivate or deactivate a Deployment.Returns the Deployment status (active/inactive)Parameters:ParameterTypeDescriptiondeployment_idstrDataRobot deployment IDactivatestrif set to True - activate deployment, if set to False - deactivate deploymentFor moreactivate deployment, see the DataRobot documentation.GetDeploymentStatusOperatorGet a Deployment status (active/inactive).Returns the Deployment status (active/inactive)Parameters:ParameterTypeDescriptiondeployment_idstrDataRobot deployment IDFor moredeployment, see the DataRobot documentation.RelationshipsConfigurationOperatorCreate a Relationships Configuration.Returns Relationships Configuration IDParameters:ParameterTypeDescriptiondataset_definitionsstrlist of dataset definitions. Each element is a dict retrieved from DatasetDefinitionOperator operatorrelationshipsstrlist of relationships. Each element is a dict retrieved from DatasetRelationshipOperator operatorfeature_discovery_settingsstrlist of feature discovery settings, optional. 
If not provided, it will be retrieved from DAG configuration params otherwise default settings will be used.For morefeature-discovery, see the DataRobot documentation.DatasetDefinitionOperatorCreate a Dataset definition for the Feature Discovery.Returns Dataset definition dictParameters:ParameterTypeDescriptiondataset_identifierstrAlias of the dataset (used directly as part of the generated feature names)dataset_idstrIdentifier of the dataset in DataRobot AI Catalogdataset_version_idstrIdentifier of the dataset version in DataRobot AI Catalogprimary_temporal_keystrName of the column indicating time of record creationfeature_list_idstrSpecifies which feature list to use.snapshot_policystrPolicy to use when creating a project or making predictions. If omitted, by default endpoint will use 'latest'.For morecreate-dataset-definitions-and-relationships-using-helper-functions, see the DataRobot documentation.DatasetRelationshipOperatorCreate a Relationship between dataset defined in DatasetDefinition.Returns Dataset definition dictParameters:ParameterTypeDescriptiondataset1_identifierList[str]Identifier of the first dataset in this relationship. This is specified in the identifier field of dataset_definition structure. If None, then the relationship is with the primary dataset.dataset2_identifierList[str]Identifier of the second dataset in this relationship. This is specified in the identifier field of dataset_definition schema.dataset1_keysList[str]list of string (max length: 10 min length: 1). Column(s) from the first dataset which are used to join to the second datasetdataset2_keysList[str]list of string (max length: 10 min length: 1). Column(s) from the second dataset that are used to join to the first datasetfeature_derivation_window_startintHow many time units of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, If present, the feature engineering Graph will perform time-aware joins.feature_derivation_window_endintHow many time units of each dataset's record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. Will be a non-positive integer, if present. If present, the feature engineering Graph will perform time-aware joins.feature_derivation_window_time_unitstrTime unit of the feature derivation window. One ofdatarobot.enums.AllowedTimeUnitsSAFERIf present, time-aware joins will be used. Only applicable when dataset1_identifier is not provided.feature_derivation_windowsListList of feature derivation windows settings. If present, time-aware joins will be used. Only allowed when feature_derivation_window_start, feature_derivation_window_end and feature_derivation_window_time_unit are not provided.prediction_point_roundingList[dict]Closest value of prediction_point_rounding_time_unit to round the prediction point into the past when applying the feature deri if present.Only applicable when dataset1_identifier is not provided.prediction_point_rounding_time_unitstrTime unit of the prediction point rounding. 
One ofdatarobot.enums.AllowedTimeUnitsSAFEROnly applicable when dataset1_identifier is not provided.For morecreate-dataset-definitions-and-relationships-using-helper-functions, see the DataRobot documentation.ComputeFeatureImpactOperatorCreates Feature Impact job in DataRobot.Returns Feature Impact job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDFor morerequest_feature_impact, see the DataRobot documentation.ComputeFeatureEffectsOperatorSubmit request to compute Feature Effects for the model.Returns Feature Effects job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDFor morerequest_feature_impact, see the DataRobot documentation.ComputeShapOperatorSubmit request to compute SHAP impact job for the model.Returns SHAP impact job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDFor moreshap-impact, see the DataRobot documentation.CreateExternalModelPackageOperatorCreate an external model package in DataRobot MLOps from JSON configuration.Returns a model package ID of newly created ModelPackage.Parameters:ParameterTypeDescriptionmodel_infostrA JSON object of external model parameters.Example of JSON configuration for a regression model:.. code-block:: python{ "name": "Lending club regression", "modelDescription": { "description": "Regression on lending club dataset" } "target": { "type": "Regression", "name": "loan_amnt" } }Example JSON for a binary classification model:.. code-block:: python{ "name": "Surgical Model", "modelDescription": { "description": "Binary classification on surgical dataset", "location": "/tmp/myModel" }, "target": { "type": "Binary", "name": "complication", "classNames": ["Yes","No"], # minority/positive class should be listed first "predictionThreshold": 0.5 } } }Example JSON for a multiclass classification model:.. code-block:: python{ "name": "Iris classifier", "modelDescription": { "description": "Classification on iris dataset", "location": "/tmp/myModel" }, "target": { "type": "Multiclass", "name": "Species", "classNames": [ "Iris-versicolor", "Iris-virginica", "Iris-setosa" ] } }DeployModelPackageOperatorCreate a deployment from a DataRobot model package.Returns The created deployment IDParameters:ParameterTypeDescriptiondeployment_namestrA human readable label of the deployment.model_package_idstrThe ID of the DataRobot model package to deploy.default_prediction_server_idstrAn identifier of a prediction server to be used as the default prediction server. 
When working with prediction environments, default prediction server Id should not be providedprediction_environment_idstrAn identifier of a prediction environment to be used for model deployment.descriptionstrA human readable description of the deployment.importancestrDeployment importance level.user_provided_idstrA user-provided unique ID associated with a deployment definition in a remote git repository.additional_metadataDict[str, str]A Key/Value pair dict, with additional metadata.AddExternalDatasetOperatorUpload a new dataset from a catalog dataset to make predictions for a modelReturns external dataset ID for the modelParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDdataset_idstrDataRobot AI Catalog dataset IDcredential_idstrDataRobot Credentials IDdataset_version_idstrDataRobot AI Catalog dataset version IDFor moreupload_dataset_from_catalog, see the DataRobot documentation.RequestModelPredictionsOperatorRequests predictions against a previously uploaded dataset.Returns model predictions job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDexternal_dataset_idstrDataRobot external dataset IDFor morerequest_predictions, see the DataRobot documentation.TrainModelOperatorSubmit a job to the queue to train a model from specific blueprint.Returns model training job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDblueprint_idstrDataRobot blueprint IDfeaturelist_idstrThe identifier of the featurelist to use. If not defined, the default for this project is used.source_project_idstrWhich project created this blueprint_id. IfNone, it defaults to looking in this project. Note that you must have read permissions in this project.Example of DAG config params: { "sample_pct": "scoring_type": "training_row_count": "n_clusters": }For morestart-training-a-model, see the DataRobot documentation.RetrainModelOperatorSubmit a job to the queue to retrain a model on a specific sample size and/or custom featurelist.Returns a model retraining job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDfeaturelist_idstrThe identifier of the featurelist to use. 
If not defined, the default for this project is used.Example of DAG config params: { "sample_pct": "scoring_type": "training_row_count": }For moretrain-a-model-on-a-different-sample-size, see the DataRobot documentation.PredictionExplanationsInitializationOperatorTriggering a prediction explanations initialization of a model.Returns a Prediction Explanations Initialization job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDFor moreprediction-explanations, see the DataRobot documentation.ComputePredictionExplanationsOperatorCreate prediction explanations for the specified dataset.Returns a Triggered prediction explanations for the specified dataset job IDParameters:ParameterTypeDescriptionproject_idstrDataRobot project IDmodel_idstrDataRobot model IDexternal_dataset_idstrDataRobot external dataset IDExample of DAG config params:{ "max_explanations" "threshold_low" "threshold_high" }For moreprediction-explanations, see the DataRobot documentation.SensorsAutopilotCompleteSensorChecks if Autopilot is complete.Parameters:ParameterTypeDescriptionproject_idstrThe DataRobot project ID.ScoringCompleteSensorChecks if batch scoring is complete.Parameters:ParameterTypeDescriptionjob_idstrThe batch prediction job ID.MonitoringJobCompleteSensorChecks if a monitoring job is complete.Parameters:ParameterTypeDescriptionjob_idstrThe batch monitoring job ID.BaseAsyncResolutionSensorChecks if the DataRobot Async API call is complete.Parameters:ParameterTypeDescriptionjob_idstrThe DataRobot async API call status check ID.DataRobotJobSensorChecks whether DataRobot Job is complete.Parameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.job_idstrDataRobot Job ID.ModelTrainingJobSensorChecks whether DataRobot Model Training Job is complete.Returns False if job not yet completed, PokeReturnValue(True, trained_model.id) if model training completedParameters:ParameterTypeDescriptionproject_idstrDataRobot project ID.job_idstrDataRobot Job ID.HooksDataRobotHookA hook to initialize the DataRobot Public API client.PipelineThe modules described above allow you to construct a standard DataRobot pipeline in an Airflow DAG:create_project_op >> train_models_op >> autopilot_complete_sensor >> deploy_model_op >> score_predictions_op >> scoring_complete_sensorExample DAGSSee the thedatarobot_provider/example_dagsdirectory for the example DAGs.You can find the following examples using a preconfigured connection in thedatarobot_provider/example_dagsdirectory:Example DAGDescriptiondatarobot_pipeline_dag.pyRun the basic end-to-end workflow in DataRobot.datarobot_score_dag.pyPerform DataRobot batch scoring.datarobot_jdbc_batch_scoring_dag.pyPerform DataRobot batch scoring with a JDBC data source.datarobot_aws_s3_batch_scoring_dag.pyUse DataRobot AWS Credentials withScorePredictionsOperator.datarobot_gcp_storage_batch_scoring_dag.pyUse DataRobot GCP Credentials withScorePredictionsOperator.datarobot_bigquery_batch_scoring_dag.pyUse DataRobot GCP Credentials withScorePredictionsOperator.datarobot_azure_storage_batch_scoring_dag.pyUse DataRobot Azure Storage Credentials withScorePredictionsOperator.datarobot_jdbc_dataset_dag.pyUpload a dataset to the AI Catalog through a JDBC connection.datarobot_batch_monitoring_job_dag.pyRun a batch monitoring job.datarobot_create_project_from_ai_catalog_dag.pyCreate a DataRobot project from a DataRobot AI Catalog dataset.datarobot_create_project_from_dataset_version_dag.pyCreate a DataRobot project from a specific dataset version in the 
DataRobot AI Catalog.datarobot_dataset_new_version_dag.pyCreate a new version of an existing dataset in the DataRobot AI Catalog.datarobot_dataset_upload_dag.pyUpload a local file to the DataRobot AI Catalog.datarobot_get_datastore_dag.pyCreate a DataRobot DataStore withGetOrCreateDataStoreOperator.datarobot_jdbc_dataset_dag.pyCreate a DataRobot project from a JDBC data source.datarobot_jdbc_dynamic_dataset_dag.pyCreate a DataRobot project from a JDBC dynamic data source.datarobot_upload_actuals_catalog_dag.pyUpload actuals from the DataRobot AI Catalog.deployment_service_stats_dag.pyGet a deployment's service statistics withGetServiceStatsOperatordeployment_stat_and_accuracy_dag.pyGet a deployment's service statistics and accuracy.deployment_update_monitoring_settings_dag.pyUpdate a deployment's monitoring settings.deployment_update_segment_analysis_settings_dag.pyUpdate a deployment's segment analysis settings.download_scoring_code_from_deployment_dag.pyDownload scoring code (JAR file) from a DataRobot deployment.advanced_datarobot_pipeline_jdbc_dag.pyRun the advanced end-to-end workflow in DataRobot.datarobot_autopilot_options_pipeline_dag.pyCreates datarobot project and starts autopilot with advanced options.datarobot_custom_model_pipeline_dag.pyCreating end-to-end workflow with custom models in DataRobot.datarobot_custom_partitioning_pipeline_dag.pyCreating custom partitioned project, train models piplinedatarobot_datetime_partitioning_pipeline_dag.pyCreating datetime partitioned project, train models pipline.datarobot_external_model_pipeline_dag.pyCreating end-to-end workflow with external models in DataRobot.datarobot_feature_discovery_pipeline_dag.pyCreating feature-discovery DataRobot project, train models pipline.datarobot_timeseries_pipeline_dag.pyCreating timeseries DataRobot project, train models pipline.deployment_activate_deactivate_dag.pyExample of Deployment activation/deactivaion and get Deployment status.deployment_replace_model_dag.pyExample of Deployment model replacement.model_compute_insights_dag.pyExample of compute FeatureImpact and FeatureEffects job.model_compute_prediction_explanations_dag.pyExample of compute prediction explanations job.model_compute_predictions_dag.pyExample of compute predictions for model.model_compute_shap_dag.pyExample of compute SHAP job.model_retrain_dag.pyExample of model retraining job on specific sample size/featurelist.model_train_dag.pyExample of model training job based on specific blueprint.The advanced end-to-end workflow in DataRobot (advanced_datarobot_pipeline_jdbc_dag.py) contains the following steps:Ingest a dataset to the AI Catalog from JDBC datasourceCreate a DataRobot projectTrain models using AutopilotDeploy the recommended modelChange deployment settings (enable monitoring settings, segment analysis, and bias and fairness)Run batch scoring using a JDBC datasourceUpload actuals from a JDBC datasourceCollect deployment metrics: service statistics, features drift, target drift, accuracy and process it with custom python operator.IssuesPlease submitissuesandpull requestsin our official repo:https://github.com/datarobot/airflow-provider-datarobotWe are happy to hear from you. Please email any feedback to the authors [email protected] NoticeCopyright 2023 DataRobot, Inc. and its affiliates.All rights reserved.This is proprietary source code of DataRobot, Inc. and its affiliates.Released under the terms of DataRobot Tool and Utility Agreement.
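To make the pipeline chaining described above concrete, here is a minimal sketch of a DAG wiring those operators and sensors together. It is illustrative only: the import paths (datarobot_provider.operators.datarobot and datarobot_provider.sensors.datarobot) and the use of each task's .output XCom to pass the project, deployment, and job IDs downstream are assumptions based on the parameter tables above, not verified signatures; the DAGs shipped in datarobot_provider/example_dags are the authoritative reference.

from datetime import datetime

from airflow import DAG

# Assumed module layout; the provider may organize these classes differently.
from datarobot_provider.operators.datarobot import (
    CreateProjectOperator,
    DeployRecommendedModelOperator,
    ScorePredictionsOperator,
    TrainModelsOperator,
)
from datarobot_provider.sensors.datarobot import (
    AutopilotCompleteSensor,
    ScoringCompleteSensor,
)

with DAG(
    dag_id="datarobot_pipeline_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,  # trigger manually with a config JSON like the example above
    catchup=False,
) as dag:
    # Reads training_data / project_name / autopilot_settings from the DAG run config.
    create_project_op = CreateProjectOperator(task_id="create_project")

    # Downstream IDs are assumed to flow through XCom via each task's .output.
    train_models_op = TrainModelsOperator(
        task_id="train_models",
        project_id=create_project_op.output,
    )
    autopilot_complete_sensor = AutopilotCompleteSensor(
        task_id="autopilot_complete",
        project_id=create_project_op.output,
    )
    deploy_model_op = DeployRecommendedModelOperator(
        task_id="deploy_recommended_model",
        project_id=create_project_op.output,
    )
    score_predictions_op = ScorePredictionsOperator(
        task_id="score_predictions",
        deployment_id=deploy_model_op.output,
    )
    scoring_complete_sensor = ScoringCompleteSensor(
        task_id="scoring_complete",
        job_id=score_predictions_op.output,
    )

    (
        create_project_op
        >> train_models_op
        >> autopilot_complete_sensor
        >> deploy_model_op
        >> score_predictions_op
        >> scoring_complete_sensor
    )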
airflow-provider-db2
airflow-provider-db2
Creates a custom connection type that uses the ibm_db library and allows a security mechanism to be specified for the connection.
airflow-provider-dolphindb
Airflow-provider-dolphindbInstalling airflow-provider-dolphindbpipinstallairflow-provider-dolphindbExample DAGsTo create a database and a table in it, and then execute an external .dos script file to insert data, follow these steps:Copy theexample_dolphindb.pyfile to your DAGs folder. If you use the default airflow configurationairflow.cfg, you may need to create the DAGs folder yourself, which is located inAIRFLOW_HOME/dags.Copy theinsert_data.dosfile to the same directory asexample_dolphindb.py.Start your DolphinDB server on port 8848.Start airflow in the development environment:cd/your/project/dir/# Only absolute paths are acceptedexportAIRFLOW_HOME=/your/project/dir/exportAIRFLOW_CONN_DOLPHINDB_DEFAULT="dolphindb://admin:[email protected]:8848"python-mairflowstandalonePlease refer to theOfficial documentation for the production environment.Now, you can find the example_dolphindb DAG on your airflow web page. You can try to trigger it.Developer DocumentationInstalling Apache AirflowRefer tohttps://airflow.apache.org/docs/apache-airflow/stable/start.htmlfor further details on this topic.# It is recommended to use the current project directory as the airflow working directorycd/your/source/dir/airflow-provider-dolphindb# Only absolute paths are acceptedexportAIRFLOW_HOME=/your/source/dir/airflow-provider-dolphindb# Install apache-airflow 2.6.3AIRFLOW_VERSION=2.6.3PYTHON_VERSION="$(python--version|cut-d" "-f2|cut-d"."-f1-2)"CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"pipinstall"apache-airflow==${AIRFLOW_VERSION}"--constraint"${CONSTRAINT_URL}"Additionally, you may need to install Kubernetes to eliminate errors in airflow routines:pipinstallkubernetesInstalling airflow-provider-dolphindb for testingRefer tohttps://pip.pypa.io/en/stable/cli/pip_install/#install-editablefor further details on this topic.python-mpipinstall-e.TestingRun the following commands to validate the installation procedure above.cd/your/source/dir/airflow-provider-dolphindb# Only absolute paths are acceptedexportAIRFLOW_HOME=/your/source/dir/airflow-provider-dolphindbexportAIRFLOW_CONN_DOLPHINDB_DEFAULT="dolphindb://admin:[email protected]:8848"pytestPackaging airflow-provider-dolphindbRun the following command.python-mbuild
airflow-provider-duckdb
airflow-provider-duckdbA DuckDB provider for Airflow. This provider exposes a hook/connection that returns a DuckDB connection.This works for either local or MotherDuck connections.Installationpipinstallairflow-provider-duckdbConnectionThe connection type isduckdb. It supports setting the following parameters:Airflow field nameAirflow UI labelDescriptionhostPath to local database filePath to local file. Leave blank (with no password) for in-memory database.schemaMotherDuck database nameName of the MotherDuck database. Leave blank for default.passwordMotherDuck Service tokenMotherDuck Service token. Leave blank for local database.These have been relabeled in the Airflow UI for clarity.For example, if you want to connect to a local file:Airflow field nameAirflow UI labelValuehostPath to local database file/path/to/file.dbschemaMotherDuck database name(leave blank)passwordMotherDuck Service token(leave blank)If you want to connect to a MotherDuck database:Airflow field nameAirflow UI labelValuehostPath to local database file(leave blank)schemaMotherDuck database name<YOUR_DB_NAME>, or leave blank for defaultpasswordMotherDuck Service token<YOUR_SERVICE_TOKEN>Usageimportpandasaspdimportpendulumfromairflow.decoratorsimportdag,taskfromduckdb_provider.hooks.duckdb_hookimportDuckDBHook@dag(schedule=None,start_date=pendulum.datetime(2022,1,1,tz="UTC"),catchup=False,)defduckdb_transform():@taskdefcreate_df()->pd.DataFrame:"""Create a dataframe with some sample data"""df=pd.DataFrame({"a":[1,2,3],"b":[4,5,6],"c":[7,8,9],})returndf@taskdefsimple_select(df:pd.DataFrame)->pd.DataFrame:"""Use DuckDB to select a subset of the data"""hook=DuckDBHook.get_hook('duckdb_default')conn=hook.get_conn()# execute a simple queryres=conn.execute("SELECT a, b, c FROM df WHERE a >= 2").df()returnres@taskdefadd_col(df:pd.DataFrame)->pd.DataFrame:"""Use DuckDB to add a column to the data"""hook=DuckDBHook.get_hook('duckdb_default')conn=hook.get_conn()# add a columnconn.execute("CREATE TABLE tb AS SELECT *, a + b AS d FROM df")# get the tablereturnconn.execute("SELECT * FROM tb").df()@taskdefaggregate(df:pd.DataFrame)->pd.DataFrame:"""Use DuckDB to aggregate the data"""hook=DuckDBHook.get_hook('duckdb_default')conn=hook.get_conn()# aggregatereturnconn.execute("SELECT SUM(a), COUNT(b) FROM df").df()create_df_res=create_df()simple_select_res=simple_select(create_df_res)add_col_res=add_col(simple_select_res)aggregate_res=aggregate(add_col_res)duckdb_transform()
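Because the hook simply hands back a DuckDB connection, the same task code runs unchanged against MotherDuck once the connection is configured with a database name and service token as in the table above. A minimal sketch, where motherduck_default is a hypothetical connection ID:

from duckdb_provider.hooks.duckdb_hook import DuckDBHook


def query_motherduck() -> list:
    # "motherduck_default" is a hypothetical connection id configured with a
    # MotherDuck database name and service token instead of a local file path.
    hook = DuckDBHook.get_hook("motherduck_default")
    conn = hook.get_conn()
    # From here on it is plain DuckDB: nothing in the task code is MotherDuck-specific.
    return conn.execute("SELECT 42 AS answer").fetchall()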
airflow-provider-evidently
evidently
airflow-provider-firebolt
Firebolt Provider for Apache AirflowThis is the provider package for thefireboltprovider. All classes for this provider package are in thefirebolt_providerPython package.ContentsInstallationConfigurationModulesOperatorsHooksInstallationYou can install this package viapipinstallairflow-provider-fireboltairflow-provider-fireboltrequiresapache-airflow2.0+ andfirebolt-sdk1.1+.ConfigurationIn the Airflow user interface, configure a Connection for Firebolt. Configure the following fields:Conn Id:firebolt_conn_id.Conn Type:Firebolt.Client ID: Service account ID.Client Secret: Service account secret.Engine_Name: Firebolt Engine Name.Account: Name of the account you're connecting to.Client id and secret credentials can be obtained by registering aService account.NoteIf you're accessing Firebolt UI viaapp.firebolt.iothen use Username and Password instead of Client ID and Client Secret to connect.ModulesOperatorsoperators.firebolt.FireboltOperatorruns a provided SQL script against Firebolt and returns results.operators.firebolt.FireboltStartEngineOperatoroperators.firebolt.FireboltStopEngineOperatorstarts/stops the specified engine, and waits until it is actually started/stopped. If theengine_nameis not specified, it will use theengine_namefrom the connection, if it also not specified it will start the default engine of the connection database. Note: start/stop operator requires actual engine name, if engine URL is specified instead, start/stop engine operators will not be able to handle it correctly.Hookshooks.firebolt.FireboltHookestablishes a connection to Firebolt.ContributingSee:CONTRIBUTING.MD
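As a rough illustration of how these pieces fit together in a DAG, the sketch below starts an engine, runs a query, and stops the engine again. The import path and the sql / engine_name / firebolt_conn_id keyword arguments are assumptions inferred from the module and configuration notes above, not verified signatures.

from datetime import datetime

from airflow import DAG
from firebolt_provider.operators.firebolt import (
    FireboltOperator,
    FireboltStartEngineOperator,
    FireboltStopEngineOperator,
)

with DAG(
    dag_id="firebolt_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Starts the named engine and waits until it is actually running.
    start_engine = FireboltStartEngineOperator(
        task_id="start_engine",
        firebolt_conn_id="firebolt_conn_id",
        engine_name="my_engine",  # placeholder engine name
    )

    # Runs a SQL statement; the "sql" keyword name is an assumption here.
    run_query = FireboltOperator(
        task_id="run_query",
        firebolt_conn_id="firebolt_conn_id",
        sql="SELECT 1",
    )

    # Stops the engine once the query task has finished.
    stop_engine = FireboltStopEngineOperator(
        task_id="stop_engine",
        firebolt_conn_id="firebolt_conn_id",
        engine_name="my_engine",
    )

    start_engine >> run_query >> stop_engine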
airflow-provider-fivetran
Fivetran Provider for Apache AirflowThis package provides an operator, sensor, and hook that integratesFivetraninto Apache Airflow.FivetranOperatorallows you to start Fivetran jobs from Airflow andFivetranSensorallows you to monitor a Fivetran sync job for completion before running downstream processes.Fivetran automates your data pipeline, and Airflow automates your data processing.InstallationPrerequisites: An environment runningapache-airflow.pip install airflow-provider-fivetranConfigurationIn the Airflow user interface, configure a Connection for Fivetran. Most of the Connection config fields will be left blank. Configure the following fields:Conn Id:fivetran_defaultConn Type:FivetranFivetran API Key: Your Fivetran API KeyFivetran API Secret: Your Fivetran API SecretFind the Fivetran API Key and Secret in theFivetran Account Settings, under theAPI Configsection. See our documentation for more information onFivetran API Authentication.The sensor and operator assume theConn Idis set tofivetran_default, however if you are managing multipe Fivetran accounts, you can set this to anything you like. See the DAG in examples to see how to specify a customConn Id.ModulesFivetran OperatorFivetranOperatorstarts a Fivetran sync job. Note that when a Fivetran sync job is controlled via an Operator, it is no longer run on the schedule as managed by Fivetran. In other words, it is now scheduled only from Airflow.FivetranOperatorrequires that you specify theconnector_idof the sync job to start. You can findconnector_idin the Settings page of the connector you configured in theFivetran dashboard.Import into your DAG via:from fivetran_provider.operators.fivetran import FivetranOperatorFivetran SensorFivetranSensormonitors a Fivetran sync job for completion. Monitoring withFivetranSensorallows you to trigger downstream processes only when the Fivetran sync jobs have completed, ensuring data consistency. You can use multiple instances ofFivetranSensorto monitor multiple Fivetran connectors.Note, it is possible to monitor a sync that is scheduled and managed from Fivetran; in other words, you can useFivetranSensorwithout usingFivetranOperator. If used in this way, your DAG will wait until the sync job starts on its Fivetran-controlled schedule and then completes.FivetranSensorrequires that you specify theconnector_idof the sync job to start. You can findconnector_idin the Settings page of the connector you configured in theFivetran dashboard.Import into your DAG via:from fivetran_provider.sensors.fivetran import FivetranSensorExamplesSee theexamplesdirectory for an example DAG.IssuesPlease submitissuesandpull requestsin our official repo:https://github.com/fivetran/airflow-provider-fivetranWe are happy to hear from you. Please email any feedback to the authors [email protected] thanks toPete DeJoy,Plinio Guzman, andDavid KoenitzerofAstronomer.iofor their contributions and support in getting this provider off the ground.
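Putting the two modules together, a minimal DAG might trigger a sync and then block downstream work until it finishes. The imports below are the ones given above; my_connector_id is a placeholder, and the connection falls back to the default fivetran_default Conn Id:

from datetime import datetime

from airflow import DAG
from fivetran_provider.operators.fivetran import FivetranOperator
from fivetran_provider.sensors.fivetran import FivetranSensor

with DAG(
    dag_id="fivetran_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Start the sync; the connector_id comes from the connector's Settings page
    # in the Fivetran dashboard ("my_connector_id" is a placeholder).
    trigger_sync = FivetranOperator(
        task_id="trigger_fivetran_sync",
        connector_id="my_connector_id",
    )

    # Block downstream tasks until that sync has completed.
    wait_for_sync = FivetranSensor(
        task_id="wait_for_fivetran_sync",
        connector_id="my_connector_id",
        poke_interval=60,
    )

    trigger_sync >> wait_for_sync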
airflow-provider-fivetran-async
Fivetran Async Provider for Apache AirflowThis package provides an async operator, sensor and hook that integratesFivetraninto Apache Airflow.FivetranSensorallows you to monitor a Fivetran sync job for completion before running downstream processes.FivetranOperatorsubmits a Fivetran sync job and polls for its status on the triggerer. Since an async sensor or operator frees up worker slot while polling is happening on the triggerer, they consume less resources when compared to traditional "sync" sensors and operators.Fivetran automates your data pipeline, and Airflow automates your data processing.InstallationPrerequisites: An environment runningapache-airflow.pip install airflow-provider-fivetran-asyncConfigurationIn the Airflow user interface, configure a Connection for Fivetran. Most of the Connection config fields will be left blank. Configure the following fields:Conn Id:fivetranConn Type:FivetranLogin: Fivetran API KeyPassword: Fivetran API SecretFind the Fivetran API Key and Secret in theFivetran Account Settings, under theAPI Configsection. See our documentation for more information onFivetran API Authentication.The sensor assumes theConn Idis set tofivetran, however if you are managing multiple Fivetran accounts, you can set this to anything you like. See the DAG in examples to see how to specify a customConn Id.ModulesFivetran Operator Asyncfromfivetran_provider_async.operatorsimportFivetranOperatorFivetranOperatorsubmits a Fivetran sync job and monitors it on trigger for completion.FivetranOperatorrequires that you specify theconnector_idof the Fivetran connector you wish to trigger. You can findconnector_idin the Settings page of the connector you configured in theFivetran dashboard.TheFivetranOperatorwill wait for the sync to complete so long aswait_for_completion=True(this is the default). It is recommended that you run in deferrable mode (this is also the default). Ifwait_for_completion=False, the operator will return the timestamp for the last sync.Import into your DAG via:Fivetran Sensor Asyncfromfivetran_provider_async.sensorsimportFivetranSensorFivetranSensormonitors a Fivetran sync job for completion. Monitoring withFivetranSensorallows you to trigger downstream processes only when the Fivetran sync jobs have completed, ensuring data consistency.FivetranSensorrequires that you specify theconnector_idof the Fivetran connector you want to wait for. You can findconnector_idin the Settings page of the connector you configured in theFivetran dashboard.You can use multiple instances ofFivetranSensorto monitor multiple Fivetran connectors.FivetranSensoris most commonly useful in two scenarios:Fivetran is using a separate scheduler than the Airflow scheduler.You setwait_for_completion=Falsein theFivetranOperator, and you need to await theFivetranOperatortask later. 
(You may want to do this if you want to arrange your DAG such that some tasks are dependent onstartinga sync and other tasks are dependent oncompletinga sync).If you are doing the 1st pattern, you may find it useful to set thecompleted_after_timetodata_interval_end, ordata_interval_endwith some buffer:fivetran_sensor=FivetranSensor(task_id="wait_for_fivetran_externally_scheduled_sync",connector_id="bronzing_largely",poke_interval=5,completed_after_time="{{ data_interval_end + macros.timedelta(minutes=1) }}",)If you are doing the 2nd pattern, you can use XComs to pass the target completed time to the sensor:fivetran_op=FivetranOperator(task_id="fivetran_sync_my_db",connector_id="bronzing_largely",wait_for_completion=False,)fivetran_sensor=FivetranSensor(task_id="wait_for_fivetran_db_sync",connector_id="bronzing_largely",poke_interval=5,completed_after_time="{{ task_instance.xcom_pull('fivetran_sync_op', key='return_value') }}",)fivetran_op>>fivetran_sensorYou may also specify theFivetranSensorwithout acompleted_after_time. In this case, the sensor will make note of when the last completed time was, and will wait for a new completed time.ExamplesSee theexamplesdirectory for an example DAG.IssuesPlease submitissuesandpull requestsin our official repo:https://github.com/astronomer/airflow-provider-fivetran-asyncWe are happy to hear from you. Please email any feedback to the authors [email protected].
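To round out the async provider's examples above: for the simplest case, where a single task should start the sync and wait for it while deferring to the triggerer, the defaults described earlier (wait_for_completion=True, deferrable mode) are enough. A minimal sketch:

from datetime import datetime

from airflow import DAG
from fivetran_provider_async.operators import FivetranOperator

with DAG(
    dag_id="fivetran_async_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # wait_for_completion=True and deferrable mode are the defaults, so this single
    # task submits the sync and then waits on the triggerer until it finishes.
    sync_my_db = FivetranOperator(
        task_id="fivetran_sync_my_db",
        connector_id="bronzing_largely",  # connector id from the Fivetran dashboard
    )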
airflow-provider-fivetran-atlassian
airflow-provider-fivetran-2
airflow-provider-flyte
Flyte Provider for Apache AirflowThis package provides an operator, a sensor, and a hook that integratesFlyteinto Apache Airflow.FlyteOperatoris helpful to trigger a task/workflow in Flyte andFlyteSensorenables monitoring a Flyte execution status for completion.InstallationPrerequisites: An environment runningapache-airflow.pip install airflow-provider-flyteConfigurationIn the Airflow UI, configure aConnectionfor Flyte.Host (required): The FlyteAdmin host.Port (optional): The FlyteAdmin port.Login (optional):client_idPassword (optional):client_credentials_secretExtra (optional): Specify theextraparameter as JSON dictionary to provide additional parameters.project: The default project to connect to.domain: The default domain to connect to.insecure: Whether to use SSL or not.command: The command to execute to return a token using an external process.scopes: List of scopes to request.auth_mode: The OAuth mode to use. Defaults to pkce flow.env_prefix: Prefix that will be used to lookup for injected secrets at runtime.default_dir: Default directory that will be used to find secrets as individual files.file_prefix: Prefix for the file in thedefault_dir.statsd_host: The statsd host.statsd_port: The statsd port.statsd_disabled: Whether to send statsd or not.statsd_disabled_tags: Turn on to reduce cardinality.local_sandbox_pathS3 Config:s3_enable_debugs3_endpoints3_retriess3_backoffs3_access_key_ids3_secret_access_keyGCS Config:gsutil_parallelismModulesFlyte OperatorTheFlyteOperatorrequires aflyte_conn_idto fetch all the connection-related parameters that are useful to instantiateFlyteRemote. Also, you must give alaunchplan_nameto trigger a workflow, ortask_nameto trigger a task; you can give a handful of other values that are optional, such asproject,domain,max_parallelism,raw_data_prefix,kubernetes_service_account,labels,annotations,secrets,notifications,disable_notifications,oauth2_client,version, andinputs.Import into your DAG via:from flyte_provider.operators.flyte import FlyteOperatorFlyte SensorIf you need to wait for an execution to complete, useFlyteSensor. Monitoring withFlyteSensorallows you to trigger downstream processes only when the Flyte executions are complete.Import into your DAG via:from flyte_provider.sensors.flyte import FlyteSensorExamplesSee theexamplesdirectory for an example DAG.IssuesPlease file issues and open pull requestshere. If you hit any roadblock, hit us up onSlack.
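As a rough sketch of the trigger-then-wait pattern, the DAG below uses the imports given above; the project, domain, launch plan name, and connection ID are placeholders, and the assumption that the operator pushes its execution name to XCom for the sensor's execution_name argument is exactly that, an assumption rather than a documented signature.

from datetime import datetime

from airflow import DAG
from flyte_provider.operators.flyte import FlyteOperator
from flyte_provider.sensors.flyte import FlyteSensor

with DAG(
    dag_id="flyte_sketch",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Trigger a registered launch plan; all names below are placeholders.
    trigger_workflow = FlyteOperator(
        task_id="trigger_flyte_workflow",
        flyte_conn_id="flyte_conn",
        project="flytesnacks",
        domain="development",
        launchplan_name="core.basic.hello_world.my_wf",
        inputs={"name": "airflow"},
    )

    # Wait for that execution to finish. Passing the operator's XCom output as
    # "execution_name" is an assumption, not a documented signature.
    wait_for_execution = FlyteSensor(
        task_id="wait_for_flyte_execution",
        flyte_conn_id="flyte_conn",
        project="flytesnacks",
        domain="development",
        execution_name=trigger_workflow.output,
        poke_interval=30,
    )

    trigger_workflow >> wait_for_execution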
airflow-provider-fxiaoke
Apache-airflow-providers-fxiaoke
An Airflow plugin for the fxiaoke API.
Install
pip install airflow-providers-fxiaoke
Usage
Create a new connection named fxiaoke_default with the following extra JSON:
{"app_id": "", "app_secret": "", "permanent_code": "", "open_user_id": ""}
Hooks
FxiaokeHooks
Operators
FxiaokeToGCSOperator
airflow-provider-grafana-loki
Airflow Grafana Loki Provider
Log handler for pushing Airflow task logs to Grafana Loki.
This package provides a hook and a log handler that integrate with Grafana Loki. LokiTaskLogHandler is a Python log handler that handles and reads task instance logs. It extends the Airflow FileTaskHandler and uploads logs to and reads them from Grafana Loki.
Installation
Install using pip:
pip install airflow-provider-grafana-loki
Configuring Airflow to write logs to Grafana Loki
Airflow can be configured to read and write task logs in Grafana Loki. It uses an existing Airflow connection to read or write logs. If you don't have a connection properly set up, this process will fail.
Follow the steps below to enable Grafana Loki logging:
1. Airflow's logging system requires a custom .py file to be located in the PYTHONPATH, so that it's importable from Airflow. Start by creating a directory to store the config file; $AIRFLOW_HOME/config is recommended.
2. Create empty files called $AIRFLOW_HOME/config/log_config.py and $AIRFLOW_HOME/config/__init__.py.
3. Copy the contents of airflow/config_templates/airflow_local_settings.py into the log_config.py file created in Step 2.
4. Customize the following portions of the template:

elif REMOTE_BASE_LOG_FOLDER.startswith('loki'):
    LOKI_HANDLER: Dict[str, Dict[str, Union[str, bool]]] = {
        'task': {
            'class': 'grafana_loki_provider.log.loki_task_handler.LokiTaskHandler',
            'formatter': 'airflow',
            'name': "airflow_task",
            'base_log_folder': str(os.path.expanduser(BASE_LOG_FOLDER)),
            'filename_template': FILENAME_TEMPLATE
        },
    }
    DEFAULT_LOGGING_CONFIG['handlers'].update(LOKI_HANDLER)
else:
    raise AirflowException(
        "Incorrect remote log configuration. Please check the configuration of option 'host' in "
        "section 'elasticsearch' if you are using Elasticsearch. In the other case, "
        "'remote_base_log_folder' option in the 'logging' section."
    )

5. Make sure a Grafana Loki connection has been defined in Airflow. The connection should have read and write access to the Grafana Loki API.
6. Update $AIRFLOW_HOME/airflow.cfg to contain:

[logging]
remote_logging = True
remote_base_log_folder = loki
logging_config_class = log_config.DEFAULT_LOGGING_CONFIG
remote_log_conn_id = <name of the Grafana Loki connection>

7. Restart the Airflow webserver and scheduler, and trigger (or wait for) a new task execution.
8. Verify that logs for newly executed tasks show up in the Airflow UI.

In case you are using the gevent worker class, you might face a "RecursionError: maximum recursion depth exceeded" error while reading logs from Loki. Please refer to the following issues for more information: gevent/gevent#1016 and apache/airflow#9118. The current workaround is to add gevent monkey patching at the top of the Airflow log settings file, in the case above $AIRFLOW_HOME/config/log_config.py, e.g.:

"""Airflow logging settings."""
from __future__ import annotations
import gevent.monkey
gevent.monkey.patch_all()
import os

Note: The provider is in an active development stage. All sorts of feedback and bug reports are welcome. I will try to address and resolve all issues to the best of my ability. In case of any issue, or if you need any help, please feel free to open an issue. Your contribution to the project is highly appreciated and welcome.
airflow-provider-graphgrid
The airflow-provider-graphgrid package includes Operators and functionality in order to better streamline Airflow workflows within GraphGrid CDP.Airflow DAGs can leverage theGraphGridDockerOperatorandGraphGridMountviafromgraphgrid_provider.operators.graphgrid_dockerimport\GraphGridDockerOperator,GraphGridMountand use them as if they were a normalDockerOperatort_0=GraphGridDockerOperator(task_id='task_0',dag=dag,mounts=[GraphGridMount(target="/some_path",source="/some_other_path",type="bind")],image="some-image",auto_remove=True,)
airflow-provider-great-expectations
Apache Airflow Provider for Great ExpectationsA set of Airflow operators forGreat Expectations, a Python library for testing and validating data.Version Warning:Due to apply_default decorator removal, this version of the provider requires Airflow 2.1.0+. If your Airflow version is < 2.1.0, and you want to install this provider version, first upgrade Airflow to at least version 2.1.0. Otherwise, your Airflow package version will be upgraded automatically, and you will have to manually run airflow upgrade db to complete the migration.Notes on compatibilityThis operator currently works with the Great Expectations V3 Batch Request API only. If you would like to use the operator in conjunction with the V2 Batch Kwargs API, you must use a version below 0.1.0This operator uses Great Expectations Checkpoints instead of the former ValidationOperators.Because of the above, this operator requires Great Expectations >=v0.13.9, which is pinned in the requirements.txt starting with release 0.0.5.Great Expectations version 0.13.8 contained a bug that would make this operator not work.Great Expectations version 0.13.7 and below will work with version 0.0.4 of this operator and below.This package has been most recently unit tested withapache-airflow=2.4.3andgreat-expectation=0.15.34.Formerly, there was a separate operator for BigQuery, to facilitate the use of GCP stores. This functionality is now baked into the core Great Expectations library, so the generic Operator will work with any back-end and SQL dialect for which you have a working Data Context and Datasources.InstallationPre-requisites: An environment runninggreat-expectationsandapache-airflow- these are requirements of this package that will be installed as dependencies.pip install airflow-provider-great-expectationsDepending on your use-case, you might need to addENV AIRFLOW__CORE__ENABLE_XCOM_PICKLING=trueto your Dockerfile to enable XCOM to pass data between tasks.UsageThe operator requires a DataContext to run which can be specified either as:A path to a directory in which a yaml-based DataContext configuration is locatedA Great Expectations DataContextConfig objectAdditonally, a Checkpoint may be supplied, which can be specified either as:The name of a Checkpoint already located in the Checkpoint Store of the specified DataContextA Great Expectations CheckpointConfig objectAlthough if no Checkpoint is supplied, a default one will be built.The operator also enables you to pass in a Python dictionary containing kwargs which will be added/substituted to the Checkpoint at runtime.ModulesGreat Expectations Base Operator: A base operator for Great Expectations. Import into your DAG via:from great_expectations_provider.operators.great_expectations import GreatExpectationsOperatorPreviously Available Email Alert FunctionalityThe email alert functionality available in version0.0.7has been removed, in order to keep the purpose of the operator more narrow and related to running the Great Expectations validations, etc. 
There is now avalidation_failure_callbackparameter to the base operator's constructor, which can be used for any kind of notification upon failure, given that the notification mechanisms provided by the Great Expectations framework itself doesn't suffice.ExamplesSee theexample_dagsdirectory for an example DAG with some sample tasks that demonstrate operator functionality.The example DAG can be exercised in one of two ways:With the open-source Astro CLI (recommended):Initialize a project with theAstro CLICopy the example DAG into thedags/folder of your astro projectCopy the directories in theincludefolder of this repository into theincludedirectory of your Astro projectCopy your GCPcredentials.jsonfile into the base directory of your Astro projectAdd the following to yourDockerfileto install theairflow-provider-great-expectationspackage, enable xcom pickling, and add the required Airflow variables and connection to run the example DAG:RUN pip install --user airflow_provider_great_expectations ENV AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True ENV GOOGLE_APPLICATION_CREDENTIALS=/usr/local/airflow/credentials.json ENV AIRFLOW_VAR_MY_PROJECT=<YOUR_GCP_PROJECT_ID> ENV AIRFLOW_VAR_MY_BUCKET=<YOUR_GCS_BUCKET> ENV AIRFLOW_VAR_MY_DATASET=<YOUR_BQ_DATASET> ENV AIRFLOW_VAR_MY_TABLE=<YOUR_BQ_TABLE> ENV AIRFLOW_CONN_MY_BIGQUERY_CONN_ID='google-cloud-platform://?extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&extra__google_cloud_platform__project=bombora-dev&extra__google_cloud_platform__key_path=%2Fusr%2Flocal%2Fairflow%2Fairflow-gcp.bombora-dev.iam.gserviceaccount.com.json'Runastro dev startto view the DAG on a local Airflow instance (you will need Docker running)With a vanilla Airflow installation:Add the example DAG to yourdags/folderMake thegreat_expectationsanddatadirectories ininclude/available in your environment.Change thedata_fileandge_root_dirpaths in your DAG file to point to the appropriate places.Change the paths ingreat-expectations/checkpoints/*.ymlto point to the absolute path of your data files.Change the value ofenable_xcom_picklingtotruein your airflow.cfgSet the appropriate Airflow variables and connection as detailed in the above instructions for using theastroCLIDevelopmentSetting Up the Virtual EnvironmentAny virtual environment tool can be used, but the simplest approach is likely using thevenvtool included in the Python standard library.For example, creating a virtual environment for development against this package can be done with the following (assumingbash):# Create the virtual environment using venv: $ python -m venv --prompt my-af-ge-venv .venv # Activate the virtual environment: $ . .venv/bin/activate # Install the package and testing dependencies: (my-af-ge-venv) $ pip install -e '.[tests]'Running Unit, Integration, and Functional TestsOnce the above is done, running the unit and integration tests can be done with either of the following approaches.UsingpytestThepytestlibrary and CLI is preferred by this project, and many Python developers, because of its rich API, and the additional control it gives you over things like test output, test markers, etc. 
It is included as a dependency inrequirements.txt.The simple commandpytest -p no:warnings, when run in the virtual environment created with the above process, provides a concise output when all tests pass, filtering out deprecation warnings that may be issued by Airflow, and a only as detailed as necessary output when they dont:(my-af-ge-venv) $ pytest -p no:warnings =========================================================================================== test session starts ============================================================================================ platform darwin -- Python 3.7.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 rootdir: /Users/jpayne/repos-bombora/bombora-airflow-provider-great-expectations, configfile: pytest.ini, testpaths: tests plugins: anyio-3.3.0 collected 7 items tests/operators/test_great_expectations.py ....... [100%] ============================================================================================ 7 passed in 11.99s ============================================================================================Functional TestingFunctional testing entails simply running the example DAG using, for instance, one of the approaches outlined above, only with the adjustment that the local development package be installed in the target Airflow environment.Again, the recommended approach is to use theAstro CLI**This operator is in early stages of development! Feel free to submit issues, PRs, or join the #integration-airflow channel in theGreat Expectations Slackfor feedback. Thanks toPete DeJoyand theAstronomer.ioteam for the support.
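To make the usage section above concrete, here is a minimal sketch of the base operator pointed at a yaml-based Data Context and a named Checkpoint. The paths, checkpoint name, and runtime kwargs are placeholders, and the exact keyword argument names should be verified against the provider version you install.

from great_expectations_provider.operators.great_expectations import GreatExpectationsOperator

ge_validate = GreatExpectationsOperator(
    task_id="ge_validate_data",
    # Path to a directory containing a yaml-based Data Context (placeholder).
    data_context_root_dir="/usr/local/airflow/include/great_expectations",
    # Name of a Checkpoint already present in the Data Context's Checkpoint Store (placeholder).
    checkpoint_name="my_checkpoint",
    # Optional kwargs added/substituted into the Checkpoint at runtime, as described above.
    checkpoint_kwargs={"run_name_template": "airflow-%Y%m%d"},
)

The same pattern applies when passing a DataContextConfig or CheckpointConfig object instead of directory paths and names.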
airflow-provider-great-expectations-cta
Apache Airflow Provider for Great ExpectationsA set of Airflow operators forGreat Expectations, a Python library for testing and validating data.Version Warning:Due to apply_default decorator removal, this version of the provider requires Airflow 2.1.0+. If your Airflow version is < 2.1.0, and you want to install this provider version, first upgrade Airflow to at least version 2.1.0. Otherwise, your Airflow package version will be upgraded automatically, and you will have to manually run airflow upgrade db to complete the migration.Notes on compatibilityThis operator currently works with the Great Expectations V3 Batch Request API only. If you would like to use the operator in conjunction with the V2 Batch Kwargs API, you must use a version below 0.1.0This operator uses Great Expectations Checkpoints instead of the former ValidationOperators.Because of the above, this operator requires Great Expectations >=v0.13.9, which is pinned in the requirements.txt starting with release 0.0.5.Great Expectations version 0.13.8 contained a bug that would make this operator not work.Great Expectations version 0.13.7 and below will work with version 0.0.4 of this operator and below.This package has been most recently unit tested withapache-airflow=2.4.3andgreat-expectation=0.15.34.Formerly, there was a separate operator for BigQuery, to facilitate the use of GCP stores. This functionality is now baked into the core Great Expectations library, so the generic Operator will work with any back-end and SQL dialect for which you have a working Data Context and Datasources.InstallationPre-requisites: An environment runninggreat-expectationsandapache-airflow- these are requirements of this package that will be installed as dependencies.pip install airflow-provider-great-expectationsDepending on your use-case, you might need to addENV AIRFLOW__CORE__ENABLE_XCOM_PICKLING=trueto your Dockerfile to enable XCOM to pass data between tasks.UsageThe operator requires a DataContext to run which can be specified either as:A path to a directory in which a yaml-based DataContext configuration is locatedA Great Expectations DataContextConfig objectAdditonally, a Checkpoint may be supplied, which can be specified either as:The name of a Checkpoint already located in the Checkpoint Store of the specified DataContextA Great Expectations CheckpointConfig objectAlthough if no Checkpoint is supplied, a default one will be built.The operator also enables you to pass in a Python dictionary containing kwargs which will be added/substituted to the Checkpoint at runtime.ModulesGreat Expectations Base Operator: A base operator for Great Expectations. Import into your DAG via:from great_expectations_provider.operators.great_expectations import GreatExpectationsOperatorPreviously Available Email Alert FunctionalityThe email alert functionality available in version0.0.7has been removed, in order to keep the purpose of the operator more narrow and related to running the Great Expectations validations, etc. 
There is now avalidation_failure_callbackparameter to the base operator's constructor, which can be used for any kind of notification upon failure, given that the notification mechanisms provided by the Great Expectations framework itself doesn't suffice.ExamplesSee theexample_dagsdirectory for an example DAG with some sample tasks that demonstrate operator functionality.The example DAG can be exercised in one of two ways:With the open-source Astro CLI (recommended):Initialize a project with theAstro CLICopy the example DAG into thedags/folder of your astro projectCopy the directories in theincludefolder of this repository into theincludedirectory of your Astro projectCopy your GCPcredentials.jsonfile into the base directory of your Astro projectAdd the following to yourDockerfileto install theairflow-provider-great-expectationspackage, enable xcom pickling, and add the required Airflow variables and connection to run the example DAG:RUN pip install --user airflow_provider_great_expectations ENV AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True ENV GOOGLE_APPLICATION_CREDENTIALS=/usr/local/airflow/credentials.json ENV AIRFLOW_VAR_MY_PROJECT=<YOUR_GCP_PROJECT_ID> ENV AIRFLOW_VAR_MY_BUCKET=<YOUR_GCS_BUCKET> ENV AIRFLOW_VAR_MY_DATASET=<YOUR_BQ_DATASET> ENV AIRFLOW_VAR_MY_TABLE=<YOUR_BQ_TABLE> ENV AIRFLOW_CONN_MY_BIGQUERY_CONN_ID='google-cloud-platform://?extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&extra__google_cloud_platform__project=bombora-dev&extra__google_cloud_platform__key_path=%2Fusr%2Flocal%2Fairflow%2Fairflow-gcp.bombora-dev.iam.gserviceaccount.com.json'Runastro dev startto view the DAG on a local Airflow instance (you will need Docker running)With a vanilla Airflow installation:Add the example DAG to yourdags/folderMake thegreat_expectationsanddatadirectories ininclude/available in your environment.Change thedata_fileandge_root_dirpaths in your DAG file to point to the appropriate places.Change the paths ingreat-expectations/checkpoints/*.ymlto point to the absolute path of your data files.Change the value ofenable_xcom_picklingtotruein your airflow.cfgSet the appropriate Airflow variables and connection as detailed in the above instructions for using theastroCLIDevelopmentSetting Up the Virtual EnvironmentAny virtual environment tool can be used, but the simplest approach is likely using thevenvtool included in the Python standard library.For example, creating a virtual environment for development against this package can be done with the following (assumingbash):# Create the virtual environment using venv: $ python -m venv --prompt my-af-ge-venv .venv # Activate the virtual environment: $ . .venv/bin/activate # Install the package and testing dependencies: (my-af-ge-venv) $ pip install -e '.[tests]'Running Unit, Integration, and Functional TestsOnce the above is done, running the unit and integration tests can be done with either of the following approaches.UsingpytestThepytestlibrary and CLI is preferred by this project, and many Python developers, because of its rich API, and the additional control it gives you over things like test output, test markers, etc. 
It is included as a dependency inrequirements.txt.The simple commandpytest -p no:warnings, when run in the virtual environment created with the above process, provides a concise output when all tests pass, filtering out deprecation warnings that may be issued by Airflow, and a only as detailed as necessary output when they dont:(my-af-ge-venv) $ pytest -p no:warnings =========================================================================================== test session starts ============================================================================================ platform darwin -- Python 3.7.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 rootdir: /Users/jpayne/repos-bombora/bombora-airflow-provider-great-expectations, configfile: pytest.ini, testpaths: tests plugins: anyio-3.3.0 collected 7 items tests/operators/test_great_expectations.py ....... [100%] ============================================================================================ 7 passed in 11.99s ============================================================================================Functional TestingFunctional testing entails simply running the example DAG using, for instance, one of the approaches outlined above, only with the adjustment that the local development package be installed in the target Airflow environment.Again, the recommended approach is to use theAstro CLI**This operator is in early stages of development! Feel free to submit issues, PRs, or join the #integration-airflow channel in theGreat Expectations Slackfor feedback. Thanks toPete DeJoyand theAstronomer.ioteam for the support.
airflow-provider-hex
Hex Airflow ProviderProvides an Airflow Operator and Hook to trigger Hex project runs.ThisAirflow Provider Packageprovides Hooks and Operators for interacting with the Hex API.RequirementsAirflow >=2.2Hex API TokenInitial SetupInstall the package.pip install airflow-provider-hexAfter creating a Hex API token, set up your Airflow Connection Credentials in the Airflow UI.Connection ID:hex_defaultConnection Type:Hex ConnectionHost:https://app.hex.techHex API Token:your-token-hereOperatorsTheairflow_provider_hex.operators.hex.HexRunProjectOperatorOperator runs Hex Projects, either synchronously or asynchronously.In the synchronous mode, the Operator will start a Hex Project run and then poll the run until either an error or success status is returned, or until the poll timeout. If the timeout occurs, the default behaviour is to attempt to cancel the run.In the asynchronous mode, the Operator will request that a Hex Project is run, but will not poll for completion. This can be useful for long-running projects.The operator accepts inputs in the form of a dictionary. These can be used to override existing input elements in your Hex project.You may also optionally include notifications for a particular run. See theHex API documentationfor details.HooksTheairflow_provider_hex.hooks.hex.HexHookprovides a low-level interface to the Hex API.These can be useful for testing and development, as they provide both a genericrunmethod which sends an authenticated request to the Hex API, as well as implementations of therunmethod that provide access to specific endpoints.ExamplesA simplified example DAG demonstrates how to use theAirflow Operatorfromairflow_provider_hex.operators.heximportHexRunProjectOperatorPROJ_ID='abcdef-ghijkl-mnopq'notifications:list[NotificationDetails]=[{"type":"SUCCESS","includeSuccessScreenshot":True,"slackChannelIds":["HEX666SQG"],"userIds":[],"groupIds":[],}]...sync_run=HexRunProjectOperator(task_id="run",hex_conn_id="hex_default",project_id=PROJ_ID,dag=dag,notifications=notifications)
airflow-provider-hightouch
Apache Airflow Provider for HightouchProvides an Airflow Operator and Hook forHightouch. This allows the user to initiate a run for a sync from Airflow.InstallationPre-requisites: An environment runningapache-airflow>= 1.10, including >= 2.pip install airflow-provider-hightouchConfigurationIn the Airflow Connections UI, create a new connection for Hightouch.Conn ID:hightouch_defaultConn Type:HTTPHost:https://api.hightouch.comPassword: enter the API key for your workspace. You can generate an API key from yourWorkspace SettingsThe Operator uses thehightouch_defaultconnection id by default, but if needed, you can create additional Airflow Connections and reference them in the operatorModulesHightouchTriggerSyncOperatorStarts a Hightouch Sync Run. Requires thesync_idor thesync_slugfor the sync you wish to run.Returns thesync_run_idof the sync it triggers.The run is synchronous by default, and the task will be marked complete once the sync is successfully completed.However, you can request a asynchronous request instead by passingsynchronous=Falseto the operator.If the API key is not authorized or if the request is invalid the task will fail. If a run is already in progress, a new run will be triggered following the completion of the existing run.HightouchSyncRunSensorMonitors a Hightouch Sync Run. Requires thesync_idand thesync_run_idof the sync you wish to monitor. To obtain thesync_run_idof a sync triggered in Airflow, we recommend using XComs to pass the return value ofHightouchTriggerSyncOperator.ExamplesCreating a run is as simple as importing the operator and providing it with a sync_id. Anexample dagis available as well.from airflow_provider_hightouch.operators.hightouch import HightouchTriggerSyncOperator with DAG(....) as dag: ... my_task = HightouchTriggerSyncOperator(task_id="run_my_sync", sync_id="123") my_other_task = HightouchTriggerSyncOperator(task_id="run_my_sync", sync_slug="my-sync-slug")IssuesPlease submitissuesandpull requestsin our official repo:https://github.com/hightouchio/airflow-provider-hightouchWe are happy to hear from you, for any feedback please email the authors [email protected] thanks toFivetranfor their provider andMarcos Marx's Airbyte contribution in the core Airflow repo for doing this before we had to so we could generously learn from their hard work.
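The sensor section above recommends passing the operator's return value through XCom; a minimal sketch of that pattern follows. The sensor's import path and the assumption that sync_run_id accepts a Jinja template are inferred from the module descriptions above rather than documented here.

from airflow_provider_hightouch.operators.hightouch import HightouchTriggerSyncOperator
from airflow_provider_hightouch.sensors.hightouch import HightouchSyncRunSensor  # assumed module path

# Trigger the sync asynchronously and capture the returned sync_run_id via XCom.
start_sync = HightouchTriggerSyncOperator(
    task_id="start_my_sync",
    sync_id="123",
    synchronous=False,
)

# Wait for that specific run to finish before continuing the DAG.
wait_for_sync = HightouchSyncRunSensor(
    task_id="wait_for_my_sync",
    sync_id="123",
    sync_run_id="{{ task_instance.xcom_pull(task_ids='start_my_sync') }}",  # assumes a templated field
)

start_sync >> wait_for_sync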
airflow-provider-huawei-cloud
Failed to fetch description. HTTP Status Code: 404
airflow-provider-huawei-cloud-demo
Apache Airflow Huawei Cloud Provider
airflow-provider-hugman
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
airflow-provider-hyperopt
HyperOpt
airflow-provider-kafka
Kafka Airflow Provider. An Airflow provider to: interact with Kafka clusters, read from topics, write to topics, and wait for specific messages to arrive on a topic. This package currently contains 3 hooks (airflow_provider_kafka.hooks): admin_client.KafkaAdminClientHook - a hook to work against the actual Kafka admin client; consumer.KafkaConsumerHook - a hook that creates a consumer and provides it for interaction; producer.KafkaProducerHook - a hook that creates a producer and provides it for interaction. 4 operators (airflow_provider_kafka.operators): await_message.AwaitKafkaMessageOperator - a deferrable operator (sensor) that waits to encounter a message in the log before triggering downstream tasks; consume_from_topic.ConsumeFromTopicOperator - an operator that reads from a topic and applies a function to each message fetched; produce_to_topic.ProduceToTopicOperator - an operator that uses an iterable to produce messages as key/value pairs to a Kafka topic; event_triggers_function.EventTriggersFunctionOperator - an operator that listens for messages on the topic and then triggers a downstream function before going back to listening. 1 trigger (airflow_provider_kafka.triggers): await_message.AwaitMessageTrigger. Quick start: pip install airflow-provider-kafka. Example usages: basic read/write/sense on a topic; event listener pattern. FAQs. Why confluent-kafka and not (other library)? A few reasons: the confluent-kafka library is guaranteed to be 1:1 functional with librdkafka, is faster, and is maintained by a company with a commercial stake in ensuring the continued quality and upkeep of it as a product. Why not release this into Airflow directly? I could probably make the PR and get it through, but the Airflow code base is getting huge and I don't want to burden the maintainers with code that they don't own for maintenance. Also, there have been multiple attempts to get a Kafka provider in before, and this is just faster. Why is most of the configuration handled in a dict? Because that's how confluent-kafka does it. I'd rather maintain interfaces that people already using Kafka are comfortable with as a starting point - I'm happy to add more options/interfaces later, but would prefer to be thoughtful about it to ensure that the differences between these operators and the actual client interface are minimal. Local Development. Unit tests: unit tests are located at tests/unit; a Kafka server isn't required to run these tests. Execute with pytest. Setup on M1 Mac: installing on an M1 chip means a brew install of the librdkafka library before you can pip install confluent-kafka: brew install librdkafka; export C_INCLUDE_PATH=/opt/homebrew/Cellar/librdkafka/1.8.2/include; export LIBRARY_PATH=/opt/homebrew/Cellar/librdkafka/1.8.2/lib; pip install confluent-kafka
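A minimal produce-then-consume sketch based on the operator descriptions above. The keyword names (topic, producer_function, kafka_config, topics, apply_function, consumer_config) are assumptions here; check the linked example DAGs for the authoritative signatures.

from airflow_provider_kafka.operators.produce_to_topic import ProduceToTopicOperator
from airflow_provider_kafka.operators.consume_from_topic import ConsumeFromTopicOperator

def producer_function():
    # Iterable of (key, value) pairs to publish to the topic.
    for i in range(5):
        yield (f"key-{i}", f"value-{i}")

def apply_function(message):
    # Applied to every message fetched from the topic (confluent-kafka Message objects).
    print(message.value())

produce = ProduceToTopicOperator(
    task_id="produce_to_topic",
    topic="test_1",
    producer_function=producer_function,
    kafka_config={"bootstrap.servers": "localhost:9092"},
)

consume = ConsumeFromTopicOperator(
    task_id="consume_from_topic",
    topics=["test_1"],
    apply_function=apply_function,
    consumer_config={
        "bootstrap.servers": "localhost:9092",
        "group.id": "airflow-consumers",
        "auto.offset.reset": "earliest",
    },
)

produce >> consume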
airflow-provider-kinetica
1. Kinetica Provider for Apache AirflowTheairflow-provider-kineticapackage provides a SQL operator and hook for Kinetica.1. Overview2. Installation3. Testing3.1. Configure Conda environment3.2. Install Airflow3.3. Start Airflow in Standalone mode3.4. Install the package in editable mode.4. Building5. See Also1. OverviewFeatures included in this package are:Airflow hookKineticaSqlHookAirflow operatorKineticaSqlOperatorCustom connection type with customized connection UI.Relevant files are:FileDescriptionkinetica_provider/get_provider_info.pyProvider infoexample_dags/kinetica_sql_example.pyExample DAG with operator and hook.kinetica_provider/operator/sql.pyContains KineticaSqlHookkinetica_provider/hooks/sql.pyContains KineticaSqlOperator2. InstallationThis step assumes that you have an existing.whldistribution of the package. You can eitherbuild the distributionor download it from the assets section of theGithub release.$pipinstall./dist/airflow_provider_kinetica-1.0.0-py3-none-any.whl[...]Successfullyinstalledairflow-provider-kinetica-1.0.0You will need to create a default connection namedkinetica_default. You can do this in the web UI or with the following syntax:$airflowconnectionsadd'kinetica_default'\--conn-type'kinetica'\--conn-login'admin'\--conn-password'???'\--conn-host'http://hostname:9191/'3. TestingThis section explains how to setup an environment used for build and test.3.1. Configure Conda environmentTo run Airflow we need a specific version of python with its dependencies and so we will use miniconda.The following steps show how to install miniconda on Linux. You should check theMiniconda documentationfor the most recent install instructions.[~]$wgethttps://repo.anaconda.com/miniconda/Miniconda3-py38_23.3.1-0-Linux-x86_64.sh[~]$bashMiniconda3-py38_23.3.1-0-Linux-x86_64.shAfter installing make sure you are in thebaseconda environment. Next we crate anairflowconda environment.(base)[~]$condacreate--nameairflowpython=3.8(base)[~]$condaactivateairflow(airflow)[~]$3.2. Install AirflowThese steps will show how to configure astandalone Airflow environment.Note: Before starting make sure you have activated theairflowconda envionmnet.Determine the download URL of the airflow installer.(airflow)[~]$AIRFLOW_VERSION=2.6.1(airflow)[~]$PYTHON_VERSION="$(python--version|cut-d" "-f2|cut-d"."-f1-2)"(airflow)[~]$CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"(airflow)[~]$echo$CONSTRAINT_URLhttps://raw.githubusercontent.com/apache/airflow/constraints-2.6.1/constraints-3.8.txtInstall the Airflow package.(airflow)[~]$pipinstall--upgradepip(airflow)[~]$pipinstall"apache-airflow==${AIRFLOW_VERSION}"--constraint"${CONSTRAINT_URL}"3.3. Start Airflow in Standalone modeYou must provide a location that will be used for the$AIRFLOW_HOME. We set this in the conda environment.(airflow)[~]$condaenvconfigvarssetAIRFLOW_HOME=~/fsq-airflow/airflow/standalone(airflow)[~]$condaenvconfigvarslist-nairflowAIRFLOW_HOME=~/fsq-airflow/airflow/standaloneYou must re-activate the environment for the variable to get loaded.(airflow)[~]$condaactivateairflow(airflow)[~]$echo$AIRFLOW_HOME~/fsq-airflow/airflow/standaloneWhen you startup airflow in standalone mode it will copy files into$AIRFLOW_HOMEif they do not already exist. 
When startup is complete it will show the admin and user password for the webserver.(airflow)[~]$cd$AIRFLOW_HOME(airflow)[standalone]$airflowstandalone[...]standalone|Airflowisready standalone|Loginwithusername:adminpassword:39FrRzqzRYTK3pc9 standalone|AirflowStandaloneisfordevelopmentpurposesonly.Donotusethisinproduction!You can edit theairflow.cfgfile if you need to change any ports.3.4. Install the package in editable mode.When a package is installed for edit the contents of the specified directory get registered with the python environment. This allows for changes to be made without the need for reinstalling.Change to the location of the package and install it as editable.(airflow)[~]$cd~/fsq-airflow/airflow/airflow-provider-kinetica(airflow)[airflow-provider-kinetica]$pipinstall--editable.Now you can restart airflow to see the installed provider. Uninstall the package when you are done.(airflow)[airflow-provider-kinetica]$pythonsetup.pydevelop--uninstall4. BuildingThe conda environment created for testing can also be used for building. You will need thebuildpackage.(airflow)[~]$pipinstallbuildFrom the location of the provider execute the build process.(airflow)[~]$cd~/fsq-airflow/airflow/airflow-provider-kinetica(airflow)[airflow-provider-kinetica]$python-mbuild[...]Successfullybuiltairflow-provider-kinetica-1.0.0.tar.gzandairflow_provider_kinetica-1.0.0-py3-none-any.whlIt will create a "wheel" distribution package and you can use this to install the provider. If you have an editable version of the provider from the above section you should uninstall it first.(airflow)[airflow-provider-kinetica]$ls-1./dist airflow_provider_kinetica-1.0.0-py3-none-any.whl airflow-provider-kinetica-1.0.0.tar.gz(airflow)[airflow-provider-kinetica]$pipinstall./dist/airflow_provider_kinetica-1.0.0-py3-none-any.whl5. See AlsoKinetica DocsKinetica Python APIKinetica SQLAirflow DocsAirflow QuickstartManaging ConnectionsSQL OperatorsBuilding a ProviderAirflow Provider SamplePython Build ModuleSetuptools Wheels
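To complement the hook and operator listed in the overview above, here is a minimal sketch of running SQL through the kinetica_default connection created during installation. The import path and keyword names (sql, kinetica_conn_id) are assumptions and should be checked against the operator module in kinetica_provider.

# Module path and keyword names below are assumptions based on the file layout described above.
from kinetica_provider.operators.sql import KineticaSqlOperator

create_and_load = KineticaSqlOperator(
    task_id="create_and_load",
    kinetica_conn_id="kinetica_default",   # the connection created in the Installation section
    sql="""
        CREATE TABLE IF NOT EXISTS demo.airflow_test (id INT, note VARCHAR(64));
        INSERT INTO demo.airflow_test VALUES (1, 'hello from airflow');
    """,
)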
airflow-provider-lakefs
lakeFS airflow providerlakeFS airflow provider enables a smooth integration of lakeFS with airflow's DAGs. "Use the lakeFS provider to create branches, commit objects, wait for files to be written, and more."For usage example, check out theexample DAGWhat is lakeFSlakeFS is an open source layer that delivers resilience and manageability to object-storage based data lakes.With lakeFS you can build repeatable, atomic and versioned data lake operations - from complex ETL jobs to data science and analytics.lakeFS supports AWS S3, Azure Blob Storage and Google Cloud Storage as its underlying storage service. It is API compatible with S3, and works seamlessly with all modern data frameworks such as Spark, Hive, AWS Athena, Presto, etc.For more information see theofficial lakeFS documentation.CapabilitiesDevelopment Environment for DataExperimentation- try tools, upgrade versions and evaluate code changes in isolation.Reproducibility- go back to any point of time to a consistent version of your data lake.Continuous Data IntegrationIngest new data safely by enforcing best practices- make sure new data sources adhere to your lake’s best practices such as format and schema enforcement, naming convention, etc.Metadata validation- prevent breaking changes from entering the production data environment.Continuous Data DeploymentInstantly revert changes to data- if low quality data is exposed to your consumers, you can revert instantly to a former, consistent and correct snapshot of your data lake.Enforce cross collection consistency- provide to consumers several collections of data that must be synchronized, in one atomic, revertible, action.Prevent data quality issues by enablingTesting of production data before exposing it to users / consumers.Testing of intermediate results in your DAG to avoid cascading quality issues.PublishingThe repository include GitHub workflow that is trigger on publish event and will build and push the package to PyPI.Use the following steps to release:Updatesetup.pywith the new package versionUpdateCHANGELOG.mdwith changes for the new releaseUse GitHub release, use semver vX.X.XCommunityStay up to date and get lakeFS support via:Slack(to get help from our team and other users).Twitter(follow for updates and news)YouTube(learn from video tutorials)Contact us(for anything)More informationlakeFS documentationIf you would like to contribute, check out ourcontributing guide.RoadmapLicensinglakeFS is completely free and open source and licensed under theApache 2.0 License.
airflow-provider-logbroker
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
airflow-provider-mesos
Apache Mesos Provider for Apache Airflow 2.x. This provider for Apache Airflow contains the following features: MesosExecuter - a scheduler to run Airflow DAGs on Mesos; MesosOperator - to execute Airflow tasks on Mesos.
airflow-provider-mlflow
An Apache Airflow provider to interact with MLflow using Operators and Hooks for the following: Registry, Deployments, Pyfunc (https://mlflow.org/docs/latest/index.html). Quick Start: install and update using pip: pip install airflow-provider-mlflow. Setting up Connections: Connection Type: HTTP. Local MLflow - Host: http://localhost (if running Airflow in Docker: http://host.docker.internal), Port: 5000. Hosted with Username/Password - Connection Type: HTTP, Host: your MLflow host URL, Login: your MLflow username, Password: your MLflow password. Databricks - Host: your Databricks host URL (https://<instance-name>.cloud.databricks.com), Login: 'token', Password: your Databricks token. Examples can be found in the example_dags directory of the repo. Changelog: we follow Semantic Versioning for releases; check CHANGELOG.rst for the latest changes. License: Apache License 2.0.
airflow-provider-nessie
Nessie Airflow Provider. Documentation: https://projectnessie.github.io/nessie_provider. Source Code: https://github.com/projectnessie/nessie_provider. Usage: to use in Airflow, install via pip: pip install airflow-provider-nessie. See the Nessie Documentation for instructions on starting and using a Nessie server. Operators and Hooks: to interact with Nessie from an Airflow DAG you have the following options: Nessie Hook - register as a connection with Airflow and store your Nessie URL and credentials; Create reference operator - create a Branch or Tag as part of an Airflow DAG; Delete reference operator - delete a Branch or Tag as part of an Airflow DAG; Commit operator - commit objects to the Nessie database on a given branch; Merge operator - merge one branch into another. These can be seen in action by looking at the Example DAGs: the basic_nessie.py DAG shows each operator in action, and the spark_nessie_iceberg.py DAG shows a more complicated example of performing an Iceberg transaction in Nessie from the Spark operator. Development. Set up the environment: you should have Pipenv installed. Then, you can install the dependencies with: pipenv install --dev. After that, activate the virtual environment: pipenv shell. Run unit tests: you can run all the tests with: make test. Alternatively, you can run pytest yourself: pytest. Format the code: execute the following command to apply isort and black formatting: make format. License: this project is licensed under the terms of the Apache Software License 2.0.
airflow-provider-openmldb
Airflow OpenMLDB Provider. Overview: the Airflow OpenMLDB Provider supports connecting to OpenMLDB, specifically to the OpenMLDB API Server. Operators: OpenMLDBLoadDataOperator, OpenMLDBSelectIntoOperator, OpenMLDBDeployOperator, and OpenMLDBSQLOperator (the underlying implementation of the operators above; it supports all SQL). Only operators and a hook, no sensors. Build: to build the OpenMLDB provider, follow the steps below: clone the repo; cd into the provider directory; run python3 -m pip install build; run python3 -m build to build the wheel; find the .whl file in /dist/*.whl. How to use: write the DAG using OpenMLDB operators (refer to the simple OpenMLDB operator DAG example); create the connection in Airflow with the name matching the openmldb_conn_id you set; trigger the DAG.
airflow-provider-optuna
Optuna
airflow-provider-paradime-dbt
airflow-provider-paradime-dbtThis is the provider for Paradime to run and manage dbt™ jobs in production. The provider enables interaction with Paradime’s Bolt scheduler and management APIs.UsageCreate a connectionGenerate your API key, secret and endpoint from Paradime Workspace settings.Create a connection in Airflow, as shown below.Create a DAGHere is one example:fromairflow.decoratorsimportdagfromparadime_dbt_provider.operators.paradimeimportParadimeBoltDbtScheduleRunArtifactOperator,ParadimeBoltDbtScheduleRunOperatorfromparadime_dbt_provider.sensors.paradimeimportParadimeBoltDbtScheduleRunSensorPARADIME_CONN_ID="your_paradime_conn_id"# Update this to your connection idBOLT_SCHEDULE_NAME="your_schedule_name"# Update this to your schedule name@dag(default_args={"conn_id":PARADIME_CONN_ID},)defrun_schedule_and_download_manifest():# Run the schedule and return the run id as the xcom return valuetask_run_schedule=ParadimeBoltDbtScheduleRunOperator(task_id="run_schedule",schedule_name=BOLT_SCHEDULE_NAME)# Get the run id from the xcom return valuerun_id="{{ task_instance.xcom_pull(task_ids='run_schedule') }}"# Wait for the schedule to complete before continuingtask_wait_for_schedule=ParadimeBoltDbtScheduleRunSensor(task_id="wait_for_schedule",run_id=run_id)# Download the manifest.json file from the schedule run and return the path as the xcom return valuetask_download_manifest=ParadimeBoltDbtScheduleRunArtifactOperator(task_id="download_manifest",run_id=run_id,artifact_path="target/manifest.json")# Get the path to the manifest.json file from the xcom return valueoutput_path="{{ task_instance.xcom_pull(task_ids='download_manifest') }}"task_run_schedule>>task_wait_for_schedule>>task_download_manifestrun_schedule_and_download_manifest()Refer to theexample DAGsin this repository for more examples.
airflow-provider-pulumi
Pulumi Airflow ProviderAn airflow provider to:preview infrastructure resources before deploymentdeploy infrastructure resources via Pulumidestroy infrastructure resourcesThis package currently contains1 hook :airflow_provider_pulumi.hooks.automation.PulumiAutoHook- a hook to setup the Pulumi backend connection.4 operators :airflow_provider_pulumi.operators.base.BasePulumiOperator- the base operator for Pulumi.airflow_provider_pulumi.operators.preview.PulumiPreviewOperator- an operator that previews the deployment of infrastructure resources with Pulumi.airflow_provider_pulumi.operators.up.PulumiUpOperator- an operator that deploys infrastructure resources with Pulumi.airflow_provider_pulumi.operators.destroy.PulumiDestroyOperator- an operator that destroys infrastructure resources with Pulumi.RequirementsThese operators require the Pulumi client to be installed. Use the following script to install the Pulumi client in your Airflow environment:curl-fsSLhttps://get.pulumi.com|shexportPATH="$HOME/.pulumi/bin:$PATH"Quick startpip install airflow-provider-pulumi# example_pulumi_dag.pyfromdatetimeimportdatetimefromairflow.decoratorsimportdagfromairflow_provider_pulumi.operators.destroyimportPulumiDestroyOperatorfromairflow_provider_pulumi.operators.previewimportPulumiPreviewOperatorfromairflow_provider_pulumi.operators.upimportPulumiUpOperator@dag(schedule_interval=None,start_date=datetime(2022,1,1),tags=["example"],)defexample_pulumi():defcreate_s3_bucket():importpulumiimportpulumi_awsasaws# Creates an AWS resource (S3 Bucket)bucket=aws.s3.Bucket("my-bucket")# Exports the DNS name of the bucketpulumi.export("bucket_name",bucket.bucket_domain_name)preview_s3_create_task=PulumiPreviewOperator(task_id="preview_s3_create",pulumi_program=create_s3_bucket,stack_config={"aws:region":"us-west-2"},plugins={"aws":"v5.0.0"},)s3_create_task=PulumiUpOperator(task_id="s3_create",pulumi_program=create_s3_bucket,stack_config={"aws:region":"us-west-2"},plugins={"aws":"v5.0.0"},)s3_destroy_task=PulumiDestroyOperator(task_id="s3_destroy",pulumi_program=create_s3_bucket,stack_config={"aws:region":"us-west-2"},plugins={"aws":"v5.0.0"},)preview_s3_create_task>>s3_create_task>>s3_destroy_taskexample_pulumi_dag=example_pulumi()DevelopmentUnit TestsUnit tests are located attests, the Pulumi client is required to run these tests. Execute withpytest.
airflow-provider-rabbitmq
RabbitMQ Provider for Apache Airflow. Configuration: in the Airflow user interface, configure a connection with the Conn Type set to RabbitMQ. Configure the following fields: Conn Id - how you wish to reference this connection (the default value is rabbitmq_default); login - login for the RabbitMQ server; password - password for the RabbitMQ server; port - port for the RabbitMQ server, typically 5672; host - host of the RabbitMQ server; vhost - the virtual host you wish to connect to. Modules. RabbitMQ Operator: the RabbitMQOperator publishes a message to your specified RabbitMQ server. Import into your DAG using: from rabbitmq_provider.operators.rabbitmq import RabbitMQOperator. RabbitMQ Sensor: the RabbitMQSensor checks a given queue for a message. Once it has found a message, the sensor triggers downstream processes in your DAG. Import into your DAG using: from rabbitmq_provider.sensors.rabbitmq import RabbitMQSensor. Testing: to run unit tests, use: poetry run pytest. A RabbitMQ instance is required to run the tests; use the following command: docker run --rm -it --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
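A minimal sketch combining the operator and sensor described above. The keyword names (exchange, routing_key, message, queue_name) are assumptions drawn from typical RabbitMQ publishing semantics, not from the provider's documented signatures, so verify them against the imported classes.

from rabbitmq_provider.operators.rabbitmq import RabbitMQOperator
from rabbitmq_provider.sensors.rabbitmq import RabbitMQSensor

# Publish a message using the rabbitmq_default connection configured above.
publish = RabbitMQOperator(
    task_id="publish_message",
    rabbitmq_conn_id="rabbitmq_default",
    exchange="",                  # default exchange (assumed keyword)
    routing_key="my_queue",       # queue name when using the default exchange (assumed keyword)
    message="hello from airflow",
)

# Trigger downstream tasks once a message appears on the queue.
wait_for_message = RabbitMQSensor(
    task_id="wait_for_message",
    rabbitmq_conn_id="rabbitmq_default",
    queue_name="my_queue",        # assumed keyword
)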
airflow-provider-ray
Apache Airflow Provider for Ray. A provider you can install into your Airflow environment to access custom Ray XCom backends, Ray Hooks, and Ray Operators. 🧪 Experimental Version: this provider is an experimental alpha containing the necessary components to orchestrate and schedule Ray tasks using Airflow. It is actively maintained and being developed to bring production-ready workflows to Ray using Airflow. This release contains everything needed to begin building these workflows using the Airflow taskflow API. Current Release: 0.2.1. Requirements: visit the Ray Project page for more info on Ray. ⚠️ The server version and client version (build) of Ray MUST be the same. Python Version >= 3.7; Airflow Version >= 2.0.0; Ray Version == 1.3.0; Filelock >= 3.0.0. Modules. Ray XCom Backend: custom XCom backend to assist operators in moving data between tasks using the Ray API with its internal Plasma store, thereby allowing for in-memory distributed processing and handling of large data objects. Ray Hook: extension of the Http hook that uses the Ray client to provide connections to the Ray Server. Ray Decorator: task decorator to be used with the taskflow API, wrapping the existing Airflow @task decorator with ray.remote functionality, thereby executing each task on the Ray cluster. Configuration and Usage: add the provider package wheel file to the root directory of your Airflow project. In your Airflow Dockerfile, you will need to add an environment variable to specify your custom backend, along with the provider wheel install. Add the following: FROM quay.io/astronomer/ap-airflow:2.0.2-1-buster-onbuild USER root RUN pip uninstall astronomer-airflow-version-check -y USER astro ENV AIRFLOW__CORE__XCOM_BACKEND=ray_provider.xcom.ray_backend.RayBackend (check the ap-airflow version; if unsure, change to ap-airflow:latest-onbuild). We are using Ray 1.3.0 and Python version 3.7. To get a bleeding-edge version of Ray, you can follow this format to build the wheel URL in your requirements.txt file: pip install airflow-provider-ray. Configure Ray Locally.
To run ray locally, you'll need a minimum 6GB of free memory.To start, in your environment with ray installed, run:(venv)$raystart--num-cpus=8--object-store-memory=7000000000--headIf you have extra resources, you can bump the memory up.You should now be able to open the ray dashboard athttp://127.0.0.1:8265/.Start your Airflow environment and open the UI.In the Airflow UI, add anAirflow Poolwith the following:Pool(name):ray_worker_pool Slots:25In the Airflow UI, add anAirflow Connectionwith the following:ConnId:ray_cluster_connection ConnType:HTTP Host:ClusterIPAddress,withbasicAuthparamsifneeded Port:10001In your Airflow DAG python file, you must include the following in yourdefault_argsdictionary:fromray_provider.xcom.ray_backendimportRayBackend...default_args={'on_success_callback':RayBackend.on_success_callback,'on_failure_callback':RayBackend.on_failure_callback,...}@dag(default_args=default_args,..)defray_example_dag():# do stuffUsing the taskflow API, your airflow task should now use the@ray_taskdecorator for any ray task and add theray_conn_id, parameter astask_args, like:fromray_provider.decoratorsimportray_taskdefault_args={'on_success_callback':RayBackend.on_success_callback,'on_failure_callback':RayBackend.on_failure_callback,...}task_args={"ray_conn_id":"ray_cluster_connection"}...@dag(default_args=default_args,..)defray_example_dag():@ray_task(**task_args)defsum_cols(df:pd.DataFrame)->pd.DataFrame:returnpd.DataFrame(df.sum()).TProject Contributors and MaintainersThis project is built in collaboration betweenAstronomerandAnyscale, with active contributions from:Pete DeJoyDaniel ImbermanRob DeebRichard LiawCharles GreerWill DrevoThis project is formatted viablack:pipinstallblack black.ConnectionsTBD - [Info on building a connection to Ray]
airflow-providers-aliyun-rocketmq
Airflow Provider for Aliyun RocketMQExamplefromaliyun_rocketmq_provider.hooks.aliyun_rocketmqimportAliyunRocketMQHookmessage_push_topic=AliyunRocketMQHook(topic="message-push")message_push_topic.run("helloWorld",fail_silently=True)
airflow-provider-sample
Airflow Sample ProviderGuidelines on building, deploying, and maintaining provider packages that will help Airflow users interface with external systems. Maintained with ❤️ by Astronomer.This repository provides best practices for building, structuring, and deploying Airflow provider packages as independent python modules available on PyPI.Provider repositories must be public on Github and follow the structural and technical guidelines laid out in this Readme. Ensure that all of these requirements have been met before submitting a provider package for community review.Here, you'll find information on requirements and best practices for key aspects of your project:File formattingDevelopmentAirflow integrationDocumentationTestingFormatting StandardsBefore writing and testing the functionality of your provider package, ensure that your project follows these formatting conventions.Package nameThe highest level directory in the provider package should be named in the following format:airflow-provider-<provider-name>Repository structureAll provider packages must adhere to the following file structure:├──LICENSE# A license is required, MIT or Apache is preferred.├──README.md ├──sample_provider# Your package import directory. This will contain all Airflow modules and example DAGs.│├──__init__.py │├──example_dags ││├──__init__.py ││└──sample-dag.py │├──hooks ││├──__init__.py ││└──sample_hook.py │├──operators ││├──__init__.py ││└──sample_operator.py │└──sensors │├──__init__.py │└──sample_sensor.py ├──setup.py# A setup.py file to define dependencies and how the package is built and shipped. If you'd like to use setup.cfg, that is fine as well.└──tests# Unit tests for each module.├──__init__.py├──hooks│├──__init__.py│└──sample_hook_test.py├──operators│├──__init__.py│└──sample_operator_test.py└──sensors├──__init__.py└──sample_sensor_test.pyDevelopment StandardsIf you followed the formatting guidelines above, you're now ready to start editing files to include standard package functionality.Python Packaging ScriptsYoursetup.pyfile should contain all of the appropriate metadata and dependencies required to build your package. Use thesamplesetup.pyfilein this repository as a starting point for your own project.If some of the options for building your package are variables or user-defined, you can specify asetup.cfgfile instead.Managing DependenciesWhen building providers, these guidelines will help you avoid potential for dependency conflicts:It is important that the providers do not include dependencies that conflict with the underlying dependencies for a particular Airflow version. All of the default dependencies included in the core Airflow project can be found in the Airflowsetup.py file.Keep all dependencies relaxed at the upper bound. At the lower bound, specify minor versions (for example,depx >=2.0.0, <3).VersioningUse standard semantic versioning for releasing your package. When cutting a new release, be sure to update all of the relevant metadata fields in your setup file.Building ModulesAll modules must follow a specific set of best practices to optimize their performance with Airflow:All classes should always be able to run without access to the internet.The Airflow Scheduler parses DAGs on a regular schedule. Every time that parse happens, Airflow will execute whatever is contained in theinitmethod of your class. 
If thatinitmethod contains network requests, such as calls to a third party API, there will be problems due to repeated network calls.Init methods should never call functions which return valid objects only at runtime. This will cause a fatal import error when trying to import a module into a DAG. A common best practice for referencing connectors and variables within DAGs is to useJinja Templating.All operator modules need anexecutemethod.This method defines the logic that the operator will implement.Modules should also take advantage of native Airflow features that allow your provider to:Register custom connection types, which improve the user experience when connecting to your tool.Includeextra-linksthat link your provider back to its page on the Astronomer Registry. This provides users easy access to documentation and example DAGs.Refer to theAirflow Integration Standardssection for more information on how to build in these extra features.Unit testingYour top-leveltests/folder should include unit tests for all modules that exist in the repository. You can write tests in the framework of your choice, but the Astronomer team and Airflow community typically usepytest.You can test this package by running:python3 -m unittestfrom the top-level of the directory.Airflow Integration StandardsAirflow exposes a number of plugins to interface from your provider package. We highly encourage provider maintainers to add these plugins because they significantly improve the user experience when connecting to a provider.Defining an entrypointTo enable custom connections, you first need to define anapache_airflow_providerentrypoint in yoursetup.pyorsetup.cfgfile:entry_points={ "apache_airflow_provider": [ "provider_info=sample_provider.__init__:get_provider_info" ] }Next, you need to add aget_provider_infomethod to the__init__file in your top-level provider folder. This function needs to return certain metadata associated with your package in order for Airflow to use it at runtime:defget_provider_info():return{"package-name":"airflow-provider-sample'","name":"Sample Airflow Provider",# Required"description":"A sample template for airflow providers.",# Required"hook-class-names":["sample_provider.hooks.sample_hook.SampleHook"],"extra-links":["sample_provider.operators.sample_operator.ExtraLink"],"versions":["0.0.1"]# Required}Once you define the entrypoint, you can use native Airflow features to expose custom connection types in the Airflow UI, as well as additional links to relevant documentation.Adding Custom Connection FormsAirflow enables custom connection forms through discoverable hooks. The following is an example of a custom connection form for the Fivetran provider:Add code to the hook class to initiate a discoverable hook and create a custom connection form. 
The following code defines a hook and a custom connection form:classExampleHook(BaseHook):"""ExampleHook docstring..."""conn_name_attr='example_conn_id'default_conn_name='example_default'conn_type='example'hook_name='Example'@staticmethoddefget_connection_form_widgets()->Dict[str,Any]:"""Returns connection widgets to add to connection form"""fromflask_appbuilder.fieldwidgetsimportBS3PasswordFieldWidget,BS3TextFieldWidgetfromflask_babelimportlazy_gettextfromwtformsimportPasswordField,StringField,BooleanFieldreturn{"extra__example__bool":BooleanField(lazy_gettext('Example Boolean')),"extra__example__account":StringField(lazy_gettext('Account'),widget=BS3TextFieldWidget()),"extra__example__secret_key":PasswordField(lazy_gettext('Secret Key'),widget=BS3PasswordFieldWidget()),}@staticmethoddefget_ui_field_behaviour()->Dict:"""Returns custom field behaviour"""importjsonreturn{"hidden_fields":['port'],"relabeling":{},"placeholders":{'extra':json.dumps({"example_parameter":"parameter",},indent=1,),'host':'example hostname','schema':'example schema','login':'example username','password':'example password','extra__example__account':'example account name','extra__example__secret_key':'example secret key',},}Some notes about using custom connections:get_connection_form_widgets()creates extra fields using flask_appbuilder. Extra fields are defined in the following format:extra__<conn_type>__<field_name>A variety of field types can be created using this function, such as strings, passwords, booleans, and integers.get_ui_field_behaviour()is a JSON schema describing the form field behavior. Fields can be hidden, relabeled, and given placeholder values.To connect a form to Airflow, add the hook class name of a discoverable hook to"hook-class-names"in theget_provider_infomethod as mentioned inDefining an entrypoint.Adding Custom LinksOperators can add custom links that users can click to reach an external source when interacting with an operator in the Airflow UI. This link can be created dynamically based on the context of the operator. The following code example shows how to initiate an extra link within an operator:classExampleLink(BaseOperatorLink):"""Link for ExmpleOperator"""name='Example Link'defget_link(self,operator,dttm):"""Get link to registry page."""registry_link="https://{example}.com"returnregistry_link.format(example='example')classExampleOperator(BaseOperator):"""ExampleOperator docstring..."""operator_extra_links=(ExampleLink(),)To connect custom links to Airflow, add the operator class name to"extra-links"in theget_provider_infomethod mentioned above.Documentation StandardsCreating excellent documentation is essential for explaining the purpose of your provider package and how to use it.Inline Module DocumentationEvery Python module, including all hooks, operators, sensors, and transfers, should be documented inline viasphinx-templated docstrings. These docstrings should be included at the top of each module file and contain three sections separated by blank lines:A one-sentence description explaining what the module does.A longer description explaining how the module works. This can include details such as code blocks or blockquotes. 
For more information Sphinx markdown directives, read theSphinx documentation.A declarative definition of parameters that you can pass to the module, templated per the example below.For a full example of inline module documentation, see theexample operator in this repository.READMEThe README for your provider package should give users an overview of what your provider package does. Specifically, it should include:High-level documentation about the provider's service.Steps for building a connection to the service from Airflow.What modules exist within the package.An exact set of dependencies and versions that your provider has been tested with.Guidance for contributing to the provider package.Functional Testing StandardsTo build your repo into a python wheel that can be tested, follow the steps below:Clone the provider repo.cdinto provider directory.Runpython3 -m pip install build.Runpython3 -m buildto build the wheel.Find the .whl file in/dist/*.whl.Download theAstro CLI.Create a new project directory, cd into it, and runastro dev initto initialize a new astro project.Ensure the Dockerfile contains the Airflow 2.0 image:FROM quay.io/astronomer/ap-airflow:2.0.0-buster-onbuildCopy the.whlfile to the top level of your project directory.Install.whlin your containerized environment by adding the following to your Dockerfile:RUN pip install --user airflow_provider_<PROVIDER_NAME>-0.0.1-py3-none-any.whlCopy your sample DAG to thedags/folder of your astro project directory.Runastro dev startto build the containers and run Airflow locally (you'll need Docker on your machine).When you're done, runastro dev stopto wind down the deployment. Runastro dev killto kill the containers and remove the local Docker volume. You can also useastro dev killto stop the environment before rebuilding with a new.whlfile.Note: If you are having trouble accessing the Airflow webserver locally, there could be a bug in your wheel setup. To debug, rundocker ps, grab the container ID of the scheduler, and rundocker logs <scheduler-container-id>to inspect the logs.Once you have built and tested your provider package as a Python wheel, you're ready tosend us your repoto be published onThe Astronomer Registry.
airflow-provider-sapiq
SAP IQ Provider for Apache Airflow. Table of Contents: Installation, Configuration, License. Installation: pip install airflow-provider-sapiq. Configuration: in the Airflow user interface, configure a connection with the Conn Type set to SAP IQ. Configure the following fields: Conn Id - how you wish to reference this connection (the default value is sapiq_default); userid - login for the SAP IQ server; password - password for the SAP IQ server; port - port for the SAP IQ server, typically 2638; host - host of the SAP IQ server. Modules. SAP IQ Operator: the SapIqOperator executes SQL queries on the SAP IQ server. Import into your DAG using: from sapiq_provider.operators.SapIqOperator import SapIqOperator. License: airflow-provider-sapiq is distributed under the terms of the Apache-2.0 license.
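A minimal sketch of running a query through the operator and the sapiq_default connection described above. The keyword names (sql, sapiq_conn_id) are assumptions and should be verified against the operator's actual signature.

from sapiq_provider.operators.SapIqOperator import SapIqOperator

row_count = SapIqOperator(
    task_id="count_rows",
    sapiq_conn_id="sapiq_default",                   # the connection configured above (assumed keyword)
    sql="SELECT COUNT(*) FROM my_schema.my_table",   # placeholder query
)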
airflow-providers-clickhouse
apache-airflow-providers-clickhouse

This provider allows Apache Airflow to connect to a Yandex ClickHouse database and run queries using the ClickhouseOperator.
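Since the description above only names the ClickhouseOperator, the snippet below is purely illustrative: the import path, `sql`, and connection-id parameter names are all assumptions rather than documented API.

```python
from datetime import datetime

from airflow import DAG
# Hypothetical import path; the provider's actual module layout is not documented above.
from clickhouse_provider.operators.clickhouse import ClickhouseOperator

with DAG(
    dag_id="clickhouse_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
) as dag:
    # 'clickhouse_conn_id' and 'sql' are assumed parameter names, for illustration only.
    select_task = ClickhouseOperator(
        task_id="select_one",
        clickhouse_conn_id="clickhouse_default",
        sql="SELECT 1",
    )
```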
airflow-providers-clickhouse-kh
Failed to fetch description. HTTP Status Code: 404
airflow-provider-servicenow
Failed to fetch description. HTTP Status Code: 404
airflow-providers-hive-zk
airflow-providers-hive-zk

Provider for Hive for Airflow 2.X. It uses the ZooKeeper (ZK) discovery service to find the Hive server.

Build and install locally:

```console
python3 -m build
pip install airflow-providers-hive-zk --no-index --find-links file:///<path>/git/airflow_provider_hive_zk/dist/
```

Start Airflow:

```console
airflow webserver
```

Check that the provider is registered and that the new connection type has appeared.

Install from PyPI:

```console
pip install airflow-providers-hive-zk
```
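One way to perform the registration check mentioned above is from a Python shell inside the Airflow environment. The sketch below uses Airflow's ProvidersManager; it assumes the package exposes a hook with a custom connection type, whose exact type name is not given in the README.

```python
from airflow.providers_manager import ProvidersManager

pm = ProvidersManager()

# List installed provider packages; the hive-zk package should appear here.
for name, provider in pm.providers.items():
    print(name, provider.version)

# Connection types contributed by registered hooks; look for the new Hive/ZK type here.
print(sorted(pm.hooks.keys()))
```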
airflow-provider-sifflet
Sifflet Provider for Apache Airflow

This package provides operators and a hook that integrate Sifflet into Apache Airflow. All classes for this provider package are in the sifflet_provider Python package.

Installation

You can install this package on top of an existing Airflow 2.1+ installation:

```console
pip install airflow-provider-sifflet
```

The package supports the following Python versions: 3.7, 3.8, 3.9, 3.10

Configuration

In the Airflow user interface, you can configure a Connection for Sifflet in Admin -> Connections -> Add a new record.

You will need to fill out the following:

    Connection Id: sifflet_default
    Connection Type: Sifflet
    Sifflet Tenant: <your_tenant_name> (for SaaS deployment)
    Sifflet Backend URL: <your_backend_url> (for Self-hosted deployment)
    Sifflet Token: <your_sifflet_access_token>

<your_sifflet_access_token>: you can find more information on how to generate it here.

One of the following is required:

- <your_tenant_name>: if you access Sifflet with "https://abcdef.siffletdata.com", then your tenant would be abcdef
- <your_backend_url>: full URL to the Sifflet backend on your deployment, for instance: https://sifflet-backend.mycompany.com

Modules

Operators

SiffletDbtIngestOperator

SiffletDbtIngestOperator sends your DBT artifacts to the Sifflet application.

Example usage:

```python
from sifflet_provider.operators.dbt import SiffletDbtIngestOperator

sifflet_dbt_ingest = SiffletDbtIngestOperator(
    task_id="sifflet_dbt_ingest",
    input_folder="<path to dbt project folder>",
    target="prod",
    project_name="<dbt project name>",
)
```

SiffletRunRuleOperator

SiffletRunRuleOperator runs one or several Sifflet rules - requires rule id(s).

Example usage:

```python
from sifflet_provider.operators.rule import SiffletRunRuleOperator

sifflet_run_rule = SiffletRunRuleOperator(
    task_id="sifflet_run_rule",
    rule_ids=[
        "3e2e2687-cd20-11ec-b38b-06bb20181849",
        "3e19eb3e-cd20-11ec-b38b-06bb20181849",
        "3e1a86f1-cd20-11ec-b38b-06bb20181849",
        "3e2e1fc3-cd20-11ec-b38b-06bb20181849",
    ],
    error_on_rule_fail=True,
)
```
airflow-provider-skypilot
Apache Airflow Provider for SkyPilot

A provider that lets you use multiple clouds on Apache Airflow through SkyPilot.

Installation

The SkyPilot provider for Apache Airflow was developed and tested on an environment with the following dependencies installed:

- Apache Airflow >= 2.6.0
- SkyPilot >= 0.4.1

The installation of the SkyPilot provider may start from the Airflow environment configured with Docker as instructed in "Running Airflow in Docker". Based on that Docker configuration, add a pip install command to the Dockerfile and build your own Docker image.

```dockerfile
RUN pip install --user airflow-provider-skypilot
```

Then, make sure that SkyPilot is properly installed and initialized on the same environment. The initialization includes cloud account setup and access verification. Please refer to SkyPilot Installation for more information.

Configuration

A SkyPilot provider process runs on an Airflow worker, but it stores its metadata in the Airflow master node. This scheme allows a set of consecutive sky tasks to run across multiple workers by sharing the metadata.

The following settings in the docker-compose.yaml define the data sharing, including cloud credentials, metadata, and workspace:

```yaml
x-airflow-common:
  environment:
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
    # mount cloud credentials
    - ${HOME}/.aws:/opt/airflow/sky_home_dir/.aws
    - ${HOME}/.azure:/opt/airflow/sky_home_dir/.azure
    - ${HOME}/.config/gcloud:/opt/airflow/sky_home_dir/.config/gcloud
    - ${HOME}/.scp:/opt/airflow/sky_home_dir/.scp
    # mount sky metadata
    - ${HOME}/.sky:/opt/airflow/sky_home_dir/.sky
    - ${HOME}/.ssh:/opt/airflow/sky_home_dir/.ssh
    # mount sky working dir
    - ${HOME}/sky_workdir:/opt/airflow/sky_home_dir/sky_workdir
```

This example mounts the cloud credentials for AWS, Azure, GCP, and SCP, which have been created by the SkyPilot cloud account setup. For SkyPilot metadata, check that .sky/ and .ssh/ are placed in your ${HOME} directory and mount them. Additionally, you can mount your own directory like sky_workdir/ for user resources, including user code and yaml task definition files for SkyPilot execution.

Note that all Sky directories are mounted under sky_home_dir/. They will be symbolic-linked to ${HOME}/ in workers where a SkyPilot provider process actually runs.

Usage

The SkyPilot provider includes the following operators:

- SkyLaunchOperator
- SkyExecOperator
- SkyDownOperator
- SkySSHOperator
- SkyRsyncUpOperator
- SkyRsyncDownOperator

SkyLaunchOperator creates a cloud cluster and executes a Sky task, as shown below:

```python
sky_launch_task = SkyLaunchOperator(
    task_id="sky_launch_task",
    sky_task_yaml="~/sky_workdir/my_task.yaml",
    cloud="cheapest",  # aws|azure|gcp|scp|ibm ...
    gpus="A100:1",
    minimum_cpus=16,
    minimum_memory=32,
    auto_down=False,
    sky_home_dir='/opt/airflow/sky_home_dir',  # set by default
    dag=dag,
)
```

Once SkyLaunchOperator creates a Sky cluster with auto_down=False, the created cluster can be utilized by the other Sky operators. Please refer to an example dag for multiple Sky operators running on a single Sky cluster.
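Building on the SkyLaunchOperator example above, the chain below sketches how the other operators might reuse the cluster left running with auto_down=False. Only SkyLaunchOperator's parameters are documented above; the import path and the parameter names used here for SkyExecOperator and SkyDownOperator are assumptions for illustration, not the provider's confirmed API.

```python
# Hypothetical continuation of the SkyLaunchOperator example above.
# The module path and all non-documented parameter names are assumptions.
from airflow_provider_skypilot.operators import SkyDownOperator, SkyExecOperator

sky_exec_task = SkyExecOperator(
    task_id="sky_exec_task",
    sky_task_yaml="~/sky_workdir/my_task.yaml",  # assumed: same style as SkyLaunchOperator
    dag=dag,
)

sky_down_task = SkyDownOperator(
    task_id="sky_down_task",  # assumed to tear down the cluster created by the launch task
    dag=dag,
)

# Reuse the cluster created with auto_down=False, then tear it down at the end.
sky_launch_task >> sky_exec_task >> sky_down_task
```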
airflow-providers-lokalise
Airflow Provider Lokalise

This repository provides a hook and an operator to connect to the Lokalise API using the Lokalise Python SDK.

Connection

The hook and operator use the following parameters to connect to the Lokalise API:

- lokalise_conn_id: name of the connection in Airflow
- password: personal API token to connect to the API. Can be obtained by following this documentation
- host: name of the project in Lokalise.

Repo organization

- The hook is located in the lokalise_provider/hooks folder.
- The operator is located in the lokalise_provider/operator folder.
- Tests for the hook and operator are located in the tests folder.

Dependencies

- Python >= 3.10
- Airflow >= 2.7
- python-lokalise-api >= 2.1.0

Additional dependencies are described in the pyproject.toml file.
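As a rough illustration of how such a hook might be used from a task, here is a sketch. The module folder follows the repo layout described above, but the hook's class name, module file, constructor argument, and a client-returning get_conn() method are all assumptions rather than documented API.

```python
from airflow.decorators import task

# Assumed class name and module file; the repo layout above only confirms the hooks/ folder.
from lokalise_provider.hooks.lokalise import LokaliseHook


@task
def check_lokalise_connection():
    # 'lokalise_conn_id' mirrors the connection parameter listed above.
    # A get_conn() method returning a Lokalise SDK client is an assumption.
    hook = LokaliseHook(lokalise_conn_id="lokalise_default")
    client = hook.get_conn()
    print(type(client))
```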
airflow-provider-snowservices
Snow Services
airflow-providers-oraclethick-hook
airflow-providers-oraclethick

Provider for Oracle DB for Airflow 2.X. It just initializes the Oracle thick client and creates the connection type OracleThick.

Build and install locally:

```console
python3 -m build
pip install airflow-providers-oraclethick-hook --no-index --find-links file:///<path>/git/airflow_provider_oracle_thick/dist/
```

Start Airflow:

```console
airflow webserver
```

Check that the provider is registered and that the new connection type has appeared.

Install from PyPI:

```console
pip install airflow-providers-oraclethick-hook
```
airflow-providers-prima
No description available on PyPI.
airflow-provider-sqream-blue
apache-airflow-providers-sqream-blue

Apache Airflow is a popular open source orchestration tool. It allows users to write complex workflows composed of multiple kinds of actions and services using DAGs, and to schedule, debug and monitor workflow runs.

Different kinds of actions are represented with specialized Python classes, called "Operators". Each SaaS database vendor can build one or more customized Operators for performing different DB operations (e.g. execute arbitrary SQL statements, schedule DB job runs etc.).

This package is an Operator for executing SQL statements on SQream Blue using the Python connector.

Requirements

* Python 3.9+

Installing the Airflow-provider-sqream-blue

The Airflow provider sqream-blue is available via `PyPi <https://pypi.org/project/airflow-provider-sqream-blue/>`_.

Install the connector with pip3:

.. code-block:: console

    pip3 install airflow-provider-sqream-blue

pip3 automatically installs all necessary libraries and modules.

How to use the Airflow-provider-sqream-blue

Create a connection -
After the installation of the package on the Airflow server, refresh the server and create a new connection. sqream-blue will appear in the connection-type list.

.. image:: images/create_connection.png
   :width: 800

Click test and save after entering the connection params.

Create a dag -
Create a python dag file and copy it to the dags folder on the airflow server.
To find the dag folder, run this command:

.. code-block:: console

    airflow config list | grep dags_folder

Example of a python dag file:

.. code-block:: python

    import logging
    from datetime import timedelta

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator
    from airflow.utils.dates import days_ago
    from sqream_blue.operators.sqream_blue import SQreamBlueSqlOperator
    from sqream_blue.hooks.sqream_blue import SQreamBlueHook

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    with DAG(
        dag_id='Test_Dag',
        schedule_interval='0 0 * * *',
        start_date=days_ago(2),
        dagrun_timeout=timedelta(minutes=60),
        template_searchpath=['/home/sqream/'],
        tags=['Test'],
    ) as dag:

        list_operator = SQreamBlueSqlOperator(
            task_id='create_and_insert',
            sql=[
                'create or replace table t_a(x int not null)',
                'insert into t_a values (1)',
                'insert into t_a values (2)',
            ],
            sqream_blue_conn_id="sqream_blue_connection",
            dag=dag,
        )

        simple_operator = SQreamBlueSqlOperator(
            task_id='just_select',
            sql='select * from t_a',
            sqream_blue_conn_id="sqream_blue_connection",
            dag=dag,
        )

        sql_file_operator = SQreamBlueSqlOperator(
            task_id='sql_file',
            sql='daniel.sql',
            sqream_blue_conn_id="sqream_blue_connection",
            dag=dag,
        )

        def count_python(**context):
            dwh_hook = SQreamBlueHook(sqream_blue_conn_id="sqream_blue_connection")
            result = dwh_hook.get_first("select count(*) from public.t_a")
            logging.info("Number of rows in `public.t_a` - %s", result[0])

        count_through_python_operator_query = PythonOperator(
            task_id="log_row_count",
            python_callable=count_python,
        )

        list_operator >> simple_operator >> count_through_python_operator_query >> sql_file_operator

The execution of the Dag File -

.. image:: images/execution_dag.png
   :width: 600
airflow-providers-siafi
airflow-providers-siafi

Provider for interactions with SIAFI and its derived systems.

Installation

```console
pip install airflow-providers-siafi
```

Contents

- Hook and connection type "Conta do SIAFI" (SIAFI account)

Usage

```python
from airflow.providers.siafi.hooks.siafi import SIAFIHook

with SIAFIHook('id_conexao') as hook:
    cpf = hook.cpf
    senha = hook.senha

    ...
```