package — string, lengths 1–122
package-description — string, lengths 0–1.3M
air-df
base_handler -- tornado handler base class; mainly provides parameter checking and initializes the get, post, delete, options, etc. methods.

plugins -- plugin collection:
1. OrmClient -- SQLAlchemy manager. Supported backends: MySQL, SQL Server, PostgreSQL. The __enter__ method returns a session object.
2. MongoClient -- MongoDB connection manager.
3. RedisClient -- Redis manager; overrides the push, set (with custom folder support), get, and delete methods.
4. LocalFaker -- random data generator based on Faker; generates names, addresses, emails, banks, companies, occupations, md5 hashes, booleans, passwords, phone numbers, and ID-card numbers.
5. AlchemyEncoder -- converts SQLAlchemy models to dicts, handling special column types; used as json.dumps(dict, cls=AlchemyEncoder) (a usage sketch follows this entry).
6. CryptoHelper -- cryptography helper providing md5 hashing, AES encryption, and AES decryption.
7. Tools -- collection of common utilities: get_birth_from, make_captcha, transform_unix, transfrom_datetime, make_logger, get_excel_data, save_xls, excel_color.
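A minimal sketch of how the AlchemyEncoder described above might be used. The import path and the `User` model are assumptions for illustration; only the `json.dumps(..., cls=AlchemyEncoder)` call is taken from the description, and whether the encoder is applied to the model instance directly is also an assumption.

```python
# Hedged sketch: import path and the User model are hypothetical, not the package's documented API.
import json

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

from air_df.plugins import AlchemyEncoder  # assumed import path

Base = declarative_base()


class User(Base):  # hypothetical model for illustration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="alice"))
    session.commit()
    user = session.query(User).first()
    # The custom encoder serializes the SQLAlchemy model that json cannot handle by itself.
    print(json.dumps(user, cls=AlchemyEncoder))
```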
airdialogue-essentials
AirDialogue

AirDialogue is a benchmark dataset for goal-oriented dialogue generation research. This Python library contains a collection of toolkits that come with the dataset.

- AirDialogue paper
- AirDialogue dataset
- Reference implementation: AirDialogue Model

What's New

- Jul 13, 2020: Fixed a bug in BLEU evaluation. The current version gives higher BLEU scores. Support evaluation for different roles and add a KL-divergence metric (see --infer_metrics).
- Jul 12, 2020: We updated the AirDialogue dataset to version v1.1. We fixed typos and misalignment between the KB file and the dialogue file. Please download and use the new data.

Prerequisites

General: python (verified on 3.7), wget
Python packages: tensorflow (tested on 1.15.0), tqdm, nltk, flask (for visualization)

Install

To install the bleeding edge from GitHub, use:

python setup.py install

Quick Start

Scoring: the official scoring function evaluates the predictive results for a trained model and compares them to the AirDialogue dataset.

airdialogue score --true_data PATH_TO_DATA_FILE --true_kb PATH_TO_KB_FILE \
  --infer_metrics bleu

--infer_metrics can be one of (bleu:all | rouge:all | kl:all | bleu:brief | kl:brief). brief mode gives a single-number metric; (bleu | kl) is equivalent to (bleu:brief | kl:brief).

Context Generation: the context generator generates a valid context-action pair without conversation history.

airdialogue contextgen \
  --output_data PATH_TO_OUTPUT_DATA_FILE \
  --output_kb PATH_TO_OUTPUT_KB_FILE \
  --num_samples 100

Preprocessing: the AirDialogue preprocess tool tokenizes dialogues. Preprocessing the AirDialogue data requires 50 GB of RAM. The job_type parameter is a set of 5 bits separated by |, which represents train|eval|infer|sp-train|sp-eval. The input_type parameter can be either context for context-only data or dialogue for dialogue data with full history.

airdialogue prepro \
  --data_file PATH_TO_DATA_FILE \
  --kb_file PATH_TO_KB_FILE \
  --output_dir "./data/airdialogue/" \
  --output_prefix 'train' --job_type '0|0|0|1|0' --input_type context

Simulator: the simulator is built on top of the context generator and provides not only a context-action pair but also a full conversation history generated by two templated chatbot agents.

airdialogue sim \
  --output_data PATH_TO_OUTPUT_DATA_FILE \
  --output_kb PATH_TO_OUTPUT_KB_FILE \
  --num_samples 100

Visualization: the visualization tool displays the content of the raw JSON files.

airdialogue vis --data_path ./data/airdialogue/json/

Codalab simulator: to simulate running the Codalab selfplay workflow, you can run the following script, which replicates the bundle workflow for the competition. This requires a model/scripts/codalab_selfplay_step.sh that can be run as:

sh scripts/codalab_selfplay_step.sh out.txt data.json [kb.json]

More details can be found on the AirDialogue competition tutorial worksheet on Codalab.

bash airdialogue/codalab/simulate_codalab.sh <path_to_data>/json/dev_data.json <path_to_data>/json/dev_kb.json <model_folder>
airdistance
This package calculates the air distance between two points. You should provide it with the latitude and longitude of the first and second point.
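The package's exact API is not documented here, but the underlying computation is the standard great-circle (haversine) distance between two latitude/longitude pairs. A self-contained Python sketch of that formula, independent of the package:

```python
# Haversine great-circle distance between two (latitude, longitude) points, in kilometres.
# Illustrates the computation only; it is not the airdistance package's API.
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    earth_radius_km = 6371.0
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))


# Example: approximate air distance between Paris and Berlin (~880 km).
print(round(haversine_km(48.8566, 2.3522, 52.5200, 13.4050)))
```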
airdistance-between-two-points
Failed to fetch description. HTTP Status Code: 404
airdot
🚀 Airdot Deployer

Deploy your ML models in minutes, not weeks. Detailed documentation can be found here.

Airdot Deployer will automatically:
- Restructure your Python code (from Jupyter notebooks or local IDEs) into modules.
- Build a REST API around your code.
- Containerize the app.
- Spin up the required hardware (local, K8s, or cloud).
- Monitor for model/data drift and performance (in development).

Take your ML model from local to production with one line of code:

from airdot.deployer import Deployer
deployer_obj = Deployer().run(<your-ml-predictor>)

Once deployed, your model will be up and running on the intra/internet, accessible to your users. No more worrying about complex server setups or manual configuration. Airdot Deployer does all the heavy lifting for you.

curl -X POST <url> -H 'Content-Type: application/json' -d '{"args": "some-value"}'

Whether you're a data scientist, developer, or tech enthusiast, Airdot Deployer empowers you to showcase your machine learning prowess and share your creations effortlessly.

What does Airdot Deployer support?
- Local deployment with Docker
- K8s deployment with Seldon Core

Want to try Airdot? Follow the setup instructions below.

📋 Setup Instructions

Before we get started, you'll need Docker, Docker Compose, and s2i installed on your machine. If you don't have these installed yet, no worries! Follow the steps below to get them set up.

Docker install: please visit the appropriate link to install Docker on your machine: for macOS, visit here; for Windows, visit here; for Linux, visit here.

S2I install: for Mac, you can either follow the installation instructions for Linux (and use the darwin-amd64 link) or install source-to-image with Homebrew:

$ brew install source-to-image

For Linux, just run the following command:

curl -s https://api.github.com/repos/openshift/source-to-image/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi -

For Windows, please follow the instructions here.

💻 Airdot Deployer Installation

Install the Airdot Deployer package using pip:

pip install "git+https://github.com/airdot-io/airdot-deployer.git@main#egg=airdot"

or

pip install airdot

🎯 Let's try it out

Local deployments

Run the following in a terminal to set up MinIO and Redis on your machine:

docker network create minio-network && wget https://raw.githubusercontent.com/airdot-io/airdot-deployer/main/docker-compose.yaml && docker-compose -p airdot up

Train your model:

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from airdot.deployer import Deployer
from sklearn import datasets
import pandas as pd
import numpy as np

iris = datasets.load_iris()
iris = pd.DataFrame(data=np.c_[iris['data'], iris['target']], columns=iris['feature_names'] + ['target'])
X = iris.drop(['target'], axis=1)
X = X.to_numpy()[:, (2, 3)]
y = iris['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)

Test your model:

def predict(value):
    return log_reg.predict(value)

Deploy in one step 🤯

deployer_obj = Deployer().run(predict)

Use your deployed model:

curl -X POST http://127.0.0.1:8000 -H 'Content-Type: application/json' -d '{"value": [[4.7, 1.2]]}'

Want to stop your deployment?

deployer_obj.stop('predict')  # stops the container

Deployment on k8s using seldon-core

Note: this method will use your current cluster and uses seldon-core to deploy.

from airdot import Deployer
import pandas as pd

# this uses the default seldon-deployment configuration
config = {
    'deployment_type': 'seldon',
    'bucket_type': 'minio',
    'image_uri': '<registry>/get_value_data:latest'
}
deployer = Deployer(deployment_configuration=config)

df2 = pd.DataFrame(data=[[10, 20], [10, 40]], columns=['1', '2'])

def get_value_data(cl_idx='1'):
    return df2[cl_idx].values.tolist()

deployer.run(get_value_data)

You can also deploy using a custom Seldon configuration:

from airdot import Deployer
import pandas as pd

config = {
    'deployment_type': 'seldon',
    'bucket_type': 'minio',
    'image_uri': '<registry>/get_value_data:latest',
    'seldon_configuration': ''  # your custom seldon configuration
}
deployer = Deployer(deployment_configuration=config)

df2 = pd.DataFrame(data=[[10, 20], [10, 40]], columns=['1', '2'])

def get_value_data(cl_idx='1'):
    return df2[cl_idx].values.tolist()

deployer.run(get_value_data)
air-drf-relation
AIR-DRF-RELATION

Table of Contents: Installation, About, AirRelatedField (pk_only, hidden), AirModelSerializer (user, extra_kwargs, hidden_fields), Kwargs by actions (action_read_only_fields, action_hidden_fields, action_extra_kwargs), Priority extra_kwargs, Filter nested querysets.

Installation

$ pip install air-drf-relation

About

air-drf-relation adds flexibility and convenience when working with ModelSerializer.

AirRelatedField

Used to extend the functionality of PrimaryKeyRelatedField.

class BookSerializer(ModelSerializer):
    # author = PrimaryKeyRelatedField(queryset=Author.objects) - default usage
    author = AirRelatedField(AuthorSerializer)
    city = AirRelatedField(CitySerializer)

    class Meta:
        model = Book
        fields = ('uuid', 'name', 'author', 'city')

AirRelatedField allows you to pass not only the pk but also an object containing the pk, which will be looked up.

{
    "name": "demo",
    "author": {"id": 1},
    "city": 1
}

pk_only

By default AirRelatedField returns a serialized object. If you only want to use the pk, you must specify the pk_only key.

author = AirRelatedField(AuthorSerializer, pk_only=True)

hidden

Hidden fields are not used for serialization and validation. The data will be returned without these fields. Usually used together with AirModelSerializer.

author = AirRelatedField(AuthorSerializer, hidden=True)

Important: you cannot use hidden and pk_only in a plain ModelSerializer or with extra_kwargs.

AirModelSerializer

Used to extend the functionality of ModelSerializer.

class BookSerializer(AirModelSerializer):  # full example
    author = AirRelatedField(AuthorSerializer)
    city = AirRelatedField(CitySerializer)

    class Meta:
        model = Book
        fields = ('uuid', 'name', 'author', 'city')
        hidden_fields = ()
        read_only_fields = ()  # default read_only_fields
        extra_kwargs = {}  # default extra_kwargs with support for custom keys
        action_read_only_fields = {
            'create': {},
            '_': {}  # used for other actions
        }
        action_hidden_fields = {
            'create': (),
            '_': ()
        }
        action_extra_kwargs = {
            'list': {},
            '_': {}
        }
        nested_save_fields = ()

user

The user is automatically taken from the request if available. You can also set the user manually.

class DemoSerializer(AirModelSerializer):
    class Meta:
        fields = ('id', 'name')

    def validate_name(self, value):
        if not self.user:
            return None
        return value

Manually set the user:

serializer = DemoSerializer(data={'name': 'demo'}, user=request.user)

extra_kwargs

Extends the standard extra_kwargs behaviour by adding support for additional attributes. You can also pass extra_kwargs manually.

class BookSerializer(AirModelSerializer):
    author = AirRelatedField(AuthorSerializer)

    class Meta:
        fields = ('id', 'name', 'author')
        extra_kwargs = {
            'author': {'pk_only': True},
            'name': {'hidden': True}
        }

hidden_fields

Hides fields from validation and serialization.

class BookSerializer(AirModelSerializer):
    class Meta:
        fields = ('id', 'name', 'author')
        hidden_fields = ('name', 'author')

Kwargs by actions

Kwargs by actions are applied only when the current action matches. You can pass multiple actions separated by ','. For actions that don't match, you can use the '_' key. The action is set automatically from the ViewSet, or it can be passed manually.

class DemoViewSet(ModelViewSet):
    queryset = Demo.objects.all()
    serializer_class = DemoSerializer

    def perform_create(self, serializer):
        action = serializer.action  # action is 'create'
        serializer.save()

    @action(methods=['POST'], detail=False)
    def demo_action(self, request):
        serializer = self.get_serializer_class()
        action = serializer.action  # action is 'demo_action'

Manually set the action:

serializer = DemoSerializer(data={'name': 'demo'}, action='custom_action')
action = serializer.action  # action is 'custom_action'

action_read_only_fields

Sets read_only_fields by action in the serializer.

class BookSerializer(AirModelSerializer):
    class Meta:
        fields = ('id', 'name', 'author')
        action_read_only_fields = {
            'create,update': ('name', 'author')
        }

action_hidden_fields

Sets hidden_fields by action in the serializer.

class BookSerializer(AirModelSerializer):
    class Meta:
        fields = ('id', 'name', 'author')
        action_hidden_fields = {
            'custom_action': ('author',),
            '_': ('id',)
        }

action_extra_kwargs

Expands extra_kwargs by action in the serializer.

class BookSerializer(AirModelSerializer):
    author = AirRelatedField(AuthorSerializer, pk_only=True, null=True)

    class Meta:
        fields = ('id', 'name', 'author')
        action_extra_kwargs = {
            'create,custom_update': {
                'author': {'pk_only': False, 'null': True}
            }
        }

Priority extra_kwargs

Below are the priorities of the extra_kwargs extensions in ascending order:
1. extra_kwargs in Meta
2. hidden_fields
3. action_hidden_fields
4. action_read_only_fields
5. action_extra_kwargs
6. extra_kwargs passed manually

Filter nested querysets

AirModelSerializer allows you to filter the querysets of nested fields.

class BookSerializer(AirModelSerializer):
    city = AirRelatedField(CitySerializer, queryset_function_name='custom_filter')

    def queryset_author(self, queryset):
        return queryset.filter(active=True, created_by=self.user)

    def filter_city_by_active(self, queryset):
        return queryset.filter(active=True)

    class Meta:
        model = Book
        fields = ('uuid', 'name', 'author', 'city')
airdrop-panda
No description available on PyPI.
airdrops
No description available on PyPI.
airdropteatest
1. Installation (Python 3.7 or later is recommended)

pip install dubborequests

2. Upgrading the package

pip install --upgrade dubborequests

3. Examples

Get details of a Dubbo service:

# import
import dubborequests
# get details of a dubbo service
data = dubborequests.search('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')

Get all methods of a service:

# import
import dubborequests
# get all methods of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')
# get a specific method of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService', 'login')

Get the service's IP and port via ZooKeeper and test the Dubbo interface with a Telnet command:

import dubborequests
from dubborequests import Config

# configure the ZooKeeper registry addresses first
Config.zookeeper_url_list = ['192.168.240.15:2181', '192.168.240.15:2182', '192.168.240.15:2183']

invoke_data = {
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# get the service's ip and port via zookeeper and test the dubbo interface with a telnet command
data = dubborequests.zk_invoke(*invoke_data)

Test a Dubbo interface with a Telnet command:

import dubborequests

invoke_data = {
    "ip": 'xxxx',
    "port": 7777,
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# test the dubbo interface with a telnet command
data = dubborequests.telnet_invoke(*invoke_data)
airdroptest
1. Installation (Python 3.7 or later is recommended)

pip install dubborequests

2. Upgrading the package

pip install --upgrade dubborequests

3. Examples

Get details of a Dubbo service:

# import
import dubborequests
# get details of a dubbo service
data = dubborequests.search('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')

Get all methods of a service:

# import
import dubborequests
# get all methods of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')
# get a specific method of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService', 'login')

Get the service's IP and port via ZooKeeper and test the Dubbo interface with a Telnet command:

import dubborequests
from dubborequests import Config

# configure the ZooKeeper registry addresses first
Config.zookeeper_url_list = ['192.168.240.15:2181', '192.168.240.15:2182', '192.168.240.15:2183']

invoke_data = {
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# get the service's ip and port via zookeeper and test the dubbo interface with a telnet command
data = dubborequests.zk_invoke(*invoke_data)

Test a Dubbo interface with a Telnet command:

import dubborequests

invoke_data = {
    "ip": 'xxxx',
    "port": 7777,
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# test the dubbo interface with a telnet command
data = dubborequests.telnet_invoke(*invoke_data)
airdroptesttea
1. Installation (Python 3.7 or later is recommended)

pip install dubborequests

2. Upgrading the package

pip install --upgrade dubborequests

3. Examples

Get details of a Dubbo service:

# import
import dubborequests
# get details of a dubbo service
data = dubborequests.search('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')

Get all methods of a service:

# import
import dubborequests
# get all methods of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')
# get a specific method of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService', 'login')

Get the service's IP and port via ZooKeeper and test the Dubbo interface with a Telnet command:

import dubborequests
from dubborequests import Config

# configure the ZooKeeper registry addresses first
Config.zookeeper_url_list = ['192.168.240.15:2181', '192.168.240.15:2182', '192.168.240.15:2183']

invoke_data = {
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# get the service's ip and port via zookeeper and test the dubbo interface with a telnet command
data = dubborequests.zk_invoke(*invoke_data)

Test a Dubbo interface with a Telnet command:

import dubborequests

invoke_data = {
    "ip": 'xxxx',
    "port": 7777,
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {
        "account": "xxxx",
        "password": "xxxx"
    }
}
# test the dubbo interface with a telnet command
data = dubborequests.telnet_invoke(*invoke_data)
airduct
airduct

Simple pipeline scheduler in Python.

Links: Github, Documentation

Installing

$ pip install airduct
or
$ poetry add airduct

Quickstart

Create a file and put it into a folder/python-module:

from airduct import schedule, task

schedule(
    name='ExampleFlow',
    run_at='* * * * *',
    flow=[
        task('e1f1'),
        [task('e1f2'), task('e1f3', can_fail=True)],
        [task('e1f4')]
    ]
)

async def e1f1():
    print('e1f1 - An async function!')

def e1f2():
    print('e1f2 - Regular functions work too')

async def e1f3():
    print('e1f3')

async def e1f4():
    print('e1f4')

Run:

$ airduct schedule --path /path/to/folder

By default it uses an in-memory SQLite database. If using the in-memory database, it will also automatically run as a worker in addition to a scheduler. If you wish to use a non-in-memory SQLite database, you will need to also run a worker (on the same box or a separate one). See the documentation for more info.
ai-reader
Failed to fetch description. HTTP Status Code: 404
aireamhan
No description available on PyPI.
airelle
No description available on PyPI.
aireplication
ai-replication

Check it out at: https://pypi.org/manage/project/aireplication/releases/

Usage

from aireplication.ultils.data import TimeSeriesGenerator, Dataset

config = {
    "dataset_name": "GYEONGGI2955",
    "features": ["Amount of Consumption", "Temperature"],
    "prediction_feature": "Amount of Consumption",  # feature to use for prediction
    "input_width": 168,
    "output_length": 1,
    "train_ratio": 0.9
}

dataset = Dataset(dataset_name=config["dataset_name"])
# data = dataset.dataloader.export_a_single_sequence()
data = dataset.dataloader.export_the_sequence(config["features"])

print("Building time series generator...")
tsf = TimeSeriesGenerator(data=data, config=config, normalize_type=1, shuffle=False)

# Get model
model = get_model(model_name=args.model_name, config=config)

# Train model
history = model.fit(
    x=tsf.data_train[0],  # [number_record, input_len, number_feature]
    y=tsf.data_train[1],  # [number_record, output_len, number_feature]
    validation_data=tsf.data_valid
)

Available dataset configurations:

config1 = {
    "dataset_name": "GYEONGGI2955",
    "features": ["Amount of Consumption", "Temperature"],
    "prediction_feature": "Amount of Consumption",  # feature to use for prediction
    "input_width": 168,
    "output_length": 1,
    "train_ratio": 0.9
}

config2 = {
    "dataset_name": "CNU_ENGINEERING_7",
    "features": ["temperatures", "humidity", "pressure", "energy"],  # features to use for training
    "prediction_feature": "energy",  # feature to use for prediction
    "input_width": 168,
    "output_length": 1,
    "train_ratio": 0.9
}

Publishing the package

pip install twine
python setup.py sdist
twine upload dist/*

Note: for test uploads, use twine upload --repository testpypi dist/*
aireport
No description available on PyPI.
aireq
No description available on PyPI.
airflags
Air flags library to manage Python feature flags
airflint
airflint

Enforce best practices for all your Airflow DAGs. ⭐

⚠️ airflint is still in alpha stage and has not been tested with real-world Airflow DAGs. Please report any issues you face via GitHub Issues, thank you. 🙏

🧑‍🏫 Rules

- Use function-level imports instead of top-level imports[^1][^2] (see Top level Python Code)
- Use Jinja template syntax instead of Variable.get (see Airflow Variables)

(A small before/after sketch of both rules follows at the end of this entry.)

[^1]: There is a PEP for Lazy Imports, targeted to arrive in Python 3.12, which would supersede this rule.
[^2]: To remove top-level imports after running the UseFunctionLevelImports rule, use a tool such as autoflake.

Based on the official Best Practices.

Requirements

airflint is tested with:

|                | Main version (dev)                | Released version (0.3.1-alpha) |
|----------------|-----------------------------------|--------------------------------|
| Python         | 3.9, 3.10, 3.11.0-alpha - 3.11.0  | 3.9, 3.10                      |
| Apache Airflow | >= 2.0.0                          | >= 2.3.0                       |

🚀 Get started

To install it from PyPI run:

pip install airflint

NOTE: It is recommended to install airflint into your existing airflow environment with all your providers included. This way the UseJinjaVariableGet rule can detect all template_fields and airflint works as expected.

Then just call it like this:

pre-commit

Alternatively you can add the following repo to your pre-commit-config.yaml:

- repo: https://github.com/feluelle/airflint
  rev: v0.3.1-alpha
  hooks:
    - id: airflint
      args: ["-a"]  # Use -a to apply the suggestions
      additional_dependencies:
        # Add all package dependencies you have in your dags, preferably with version spec
        - apache-airflow
        - apache-airflow-providers-cncf-kubernetes

To complete the UseFunctionLevelImports rule, please add the autoflake hook after the airflint hook, as below:

- repo: https://github.com/pycqa/autoflake
  rev: v1.4
  hooks:
    - id: autoflake
      args: ["--remove-all-unused-imports", "--in-place"]

This will remove unused imports.

❤️ Contributing

I am looking for contributors who are interested in:
- testing airflint with real-world Airflow DAGs and reporting issues as soon as they face them
- optimizing the AST traversal for existing rules
- adding new rules based on best practices or bottlenecks you have experienced during Airflow DAG authoring
- documenting what is supported by each rule
- defining supported Airflow versions, i.e. some rules are bound to specific Airflow features and versions

For questions, please don't hesitate to open a GitHub issue.
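A toy before/after illustration of the two rules above, assuming Airflow 2.x and the TaskFlow API. This is not airflint's own output, only the patterns the rules target:

```python
# Illustration of the two airflint rules; not produced by the tool itself.
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago

# Rule 1 flags top-level imports such as:   import pandas as pd
# Rule 2 flags parse-time lookups such as:  Variable.get("bucket")


@dag(schedule_interval=None, start_date=days_ago(1))
def example():
    @task
    def load(bucket: str):
        import pandas as pd  # function-level import, resolved only at run time
        print(bucket, pd.__version__)

    # Jinja resolves the Variable at run time instead of on every DAG parse.
    load(bucket="{{ var.value.bucket }}")


example_dag = example()
```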
airflow
Old Airflow packagePlease install Airflow usingpip install apache-airflowinstead ofpip install airflow.
airflow-add-ons
# airflow-add-ons

Airflow utilities to extend Airflow operators and connectors.

## Where to get it

The source code is currently hosted on GitHub at: https://github.com/pualien/airflow-add-ons

Binary installers for the latest released version are available at the [Python package index](https://pypi.org/project/airflow-add-ons/).

```sh
pip install airflow-add-ons
```
airflow-ad-query
Airflow Data Query Plugin

A user-friendly data query tool for Apache Airflow. With this plugin, you can execute SQL queries against databases connected in your Airflow environment and view the results directly in the Airflow UI.

Features
- Connect to databases using existing Airflow connections.
- Execute SQL queries and view results directly in the Airflow UI.
- Save query results as CSV files.

Usage
1. Install the plugin using pip: pip install airflow-data-query
2. Add the plugin to the Airflow plugins folder: cp -r airflow_data_query $AIRFLOW_HOME/plugins
3. Navigate to the Airflow UI and click on "Data Query".
4. Connect to a database using an existing Airflow connection and start executing your SQL queries.

Thanks: Apache Airflow. Based on this.

License: This project is licensed under the Apache 2.0 License.
airflow-aggua-plugin
Airflow Aggua API - Plugin

An Apache Airflow plugin that exposes aggua secure API endpoints similar to the official Airflow API (Stable) (1.0.0), providing richer capabilities. Apache Airflow version 2.1.0 or higher is necessary.

Requirements: apache-airflow, marshmallow

Installation

python3 -m pip install airflow-aggua-api

Authentication

The Airflow Aggua API plugin uses the same auth mechanism as the Airflow API (Stable) (1.0.0). So, by default, APIs exposed via this plugin respect the auth mechanism used by your Airflow webserver and also comply with the existing RBAC policies. Note that you will need to pass credentials data as part of the request. Here is a snippet from the official docs when basic authorization is used:

curl -X POST 'http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/dags/{dag_id}?update_mask=is_paused' \
  -H 'Content-Type: application/json' \
  --user "username:password" \
  -d '{"is_paused": true}'

Using the Custom API

All the supported endpoints are exposed in the below format:

http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/{ENDPOINT_NAME}

Following are the names of endpoints which are currently supported: serializedDags.

serializedDags

Description: get the serialized representation of a DAG.

Endpoint: http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/serializedDags
Method: GET
GET request query parameters:
- limit (optional) - number - the number of items to return. Default = 10.
- offset (optional) - number - the number of items to skip before starting to collect the result set.

Endpoint: http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/serializedDags/{dag_id}
Method: GET
GET request path parameter:
- dag_id - string - the DAG ID.
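For illustration, a small Python client call against the serializedDags endpoint described above, using basic auth. The host, port, and credentials are placeholders, and the `requests` library is an assumed extra, not something the plugin itself requires:

```python
# Hedged sketch: lists serialized DAGs via the plugin endpoint described above.
# Host, port, and credentials are placeholders; install `requests` separately.
import requests

AIRFLOW_HOST = "localhost"  # placeholder
AIRFLOW_PORT = 8080         # placeholder

response = requests.get(
    f"http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/serializedDags",
    params={"limit": 10, "offset": 0},  # query parameters from the docs above
    auth=("username", "password"),      # basic auth, as with the stable Airflow API
    timeout=30,
)
response.raise_for_status()
print(response.json())
```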
airflow-alt-ldap
An alternative LDAP backend for airflow
=======================================

The default LDAP backend works with OpenLDAP if the memberOf overlay is activated (see http://www.openldap.org/doc/admin24/overlays.html#Reverse%20Group%20Membership%20Maintenance). I.e., users must present the `memberOf` attribute to know what group they belong to. If your LDAP server only has groups with `memberUid` (or any other key like `member`) listing the users belonging to the group, then you need something different. This is what this module attempts to provide.

Installation
============

Using pip:

```
pip install airflow-alt-ldap
```

Configuration
=============

Activate authentication via this LDAP backend in the `airflow.cfg` config:

```
[webserver]
authenticate = True
auth_backend = airflow-alt-ldap.auth.backend.ldap_auth
```

Then you can configure that module using the following keys (example conf to be adapted):

```
uri = ldap://localhost:389
user_basedn = ou=people,dc=nexmo,dc=com
user_filter = uid=*
user_name_attr = uid
group_basedn = ou=groups,dc=nexmo,dc=com
group_member_attr = memberUid
group_filter = cn=*
superuser_filter = cn=admingroup
data_profiler_filter = cn=datagroup
bind_user = uid=binddn,dc=example,dc=com
bind_password = MyAwesomePassword
# cacert = /etc/ca/ldap_ca.crt
# Set search_scope to one of them: BASE, LEVEL, SUBTREE
# Set search_scope to SUBTREE if using Active Directory, and not specifying an Organizational Unit
search_scope = SUBTREE
```
airflow-api
Failed to fetch description. HTTP Status Code: 404
airflow-api-plugin
Airflow Aggua API - Plugin

An Apache Airflow plugin that exposes aggua secure API endpoints similar to the official Airflow API (Stable) (1.0.0), providing richer capabilities. Apache Airflow version 2.1.0 or higher is necessary.

Requirements: apache-airflow, marshmallow

Installation

python3 -m pip install airflow-aggua-api

Authentication

The Airflow Aggua API plugin uses the same auth mechanism as the Airflow API (Stable) (1.0.0). So, by default, APIs exposed via this plugin respect the auth mechanism used by your Airflow webserver and also comply with the existing RBAC policies. Note that you will need to pass credentials data as part of the request. Here is a snippet from the official docs when basic authorization is used:

curl -X POST 'http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/dags/{dag_id}?update_mask=is_paused' \
  -H 'Content-Type: application/json' \
  --user "username:password" \
  -d '{"is_paused": true}'

Using the Custom API

All the supported endpoints are exposed in the below format:

http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/{ENDPOINT_NAME}

Following are the names of endpoints which are currently supported: serializedDags.

serializedDags

Description: get the serialized representation of a DAG.

Endpoint: http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/serializedDags
Method: GET
GET request query parameters:
- limit (optional) - number - the number of items to return. Default = 10.
- offset (optional) - number - the number of items to skip before starting to collect the result set.

Endpoint: http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/aggua/serializedDags/{dag_id}
Method: GET
GET request path parameter:
- dag_id - string - the DAG ID.
airflow-arcgis-plugin
airflow-arcgis

Simple hooks and operators for exporting data from ArcGIS. Import PostgreSQL table data into an ArcGIS feature layer or perform incremental updates. WIP.

Features
- PostgresToArcGISOperator - exports/syncs a PostgreSQL table to ArcGIS

Install

Using pip:

pip3 install airflow-arcgis-plugin

Usage

Create a connection of type HTTP in Airflow named http_ago to store your ArcGIS base URL (e.g. https://detroitmi.maps.arcgis.com/), username, and password. You can also pass in an override connection name in your DAG definition.

This plugin is published as a pip package. Refer to the example DAG for available parameters.
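As an alternative to creating the http_ago connection through the UI, a hedged sketch of defining it from the environment. This relies on Airflow's standard AIRFLOW_CONN_* mechanism (JSON-encoded connections require Airflow 2.3+), not on anything specific to this plugin, and the credentials are placeholders; the variable would normally be set in the scheduler/webserver environment rather than in a DAG file.

```python
# Hedged sketch: defines the `http_ago` connection via an environment variable.
# Assumes Airflow 2.3+ (JSON-encoded connections); credentials are placeholders.
import json
import os

os.environ["AIRFLOW_CONN_HTTP_AGO"] = json.dumps(
    {
        "conn_type": "http",
        "host": "https://detroitmi.maps.arcgis.com/",  # ArcGIS base URL from the usage notes
        "login": "my_arcgis_user",                     # placeholder
        "password": "my_arcgis_password",              # placeholder
    }
)
```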
airflow-aws-cost-explorer
Airflow AWS Cost Explorer PluginA plugin forApache Airflowthat allows you to exportAWS Cost ExplorerasS3metrics to local file or S3 in Parquet, JSON, or CSV format.System RequirementsAirflow Versions1.10.3 or newerpyarrow or fastparquet (optional, for writing Parquet files)Deployment InstructionsInstall the pluginpip install airflow-aws-cost-explorerOptional for writing Parquet files - Install pyarrow or fastparquetpip install pyarroworpip install fastparquetRestart the Airflow Web ServerConfigure the AWS connection (Conn type = 'aws')Optional for S3 - Configure the S3 connection (Conn type = 's3')OperatorsAWSCostExplorerToS3Operator:param day: Date to be exported as string in YYYY-MM-DD format or date/datetime instance (default: yesterday) :type day: str, date or datetime :param aws_conn_id: Cost Explorer AWS connection id (default: aws_default) :type aws_conn_id: str :param region_name: Cost Explorer AWS Region :type region_name: str :param s3_conn_id: Destination S3 connection id (default: s3_default) :type s3_conn_id: str :param s3_bucket: Destination S3 bucket :type s3_bucket: str :param s3_key: Destination S3 key :type s3_key: str :param file_format: Destination file format (parquet, json or csv default: parquet) :type file_format: str or FileFormat :param metrics: Metrics (default: UnblendedCost, BlendedCost) :type metrics: listAWSCostExplorerToLocalFileOperator:param day: Date to be exported as string in YYYY-MM-DD format or date/datetime instance (default: yesterday) :type day: str, date or datetime :param aws_conn_id: Cost Explorer AWS connection id (default: aws_default) :type aws_conn_id: str :param region_name: Cost Explorer AWS Region :type region_name: str :param destination: Destination file complete path :type destination: str :param file_format: Destination file format (parquet, json or csv default: parquet) :type file_format: str or FileFormat :param metrics: Metrics (default: UnblendedCost, BlendedCost) :type metrics: listAWSBucketSizeToS3Operator:param day: Date to be exported as string in YYYY-MM-DD format or date/datetime instance (default: yesterday) :type day: str, date or datetime :param aws_conn_id: Cost Explorer AWS connection id (default: aws_default) :type aws_conn_id: str :param region_name: Cost Explorer AWS Region :type region_name: str :param s3_conn_id: Destination S3 connection id (default: s3_default) :type s3_conn_id: str :param s3_bucket: Destination S3 bucket :type s3_bucket: str :param s3_key: Destination S3 key :type s3_key: str :param file_format: Destination file format (parquet, json or csv default: parquet) :type file_format: str or FileFormat :param metrics: Metrics (default: bucket_size, number_of_objects) :type metrics: listAWSBucketSizeToLocalFileOperator:param day: Date to be exported as string in YYYY-MM-DD format or date/datetime instance (default: yesterday) :type day: str, date or datetime :param aws_conn_id: Cost Explorer AWS connection id (default: aws_default) :type aws_conn_id: str :param region_name: Cost Explorer AWS Region :type region_name: str :param destination: Destination file complete path :type destination: str :param file_format: Destination file format (parquet, json or csv default: parquet) :type file_format: str or FileFormat :param metrics: Metrics (default: bucket_size, number_of_objects) :type metrics: listExample#!/usr/bin/env python import airflow from airflow import DAG from airflow_aws_cost_explorer import AWSCostExplorerToLocalFileOperator from datetime import timedelta default_args = { 'owner': 'airflow', 
'depends_on_past': False, 'start_date': airflow.utils.dates.days_ago(1), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=30) } dag = DAG('cost_explorer', default_args=default_args, schedule_interval=None, concurrency=1, max_active_runs=1, catchup=False ) aws_cost_explorer_to_file = AWSCostExplorerToLocalFileOperator( task_id='aws_cost_explorer_to_file', day='{{ yesterday_ds }}', destination='/tmp/{{ yesterday_ds }}.parquet', file_format='parquet', dag=dag) if __name__ == "__main__": dag.cli()LinksApache Airflow -https://github.com/apache/airflowApache Arrow -https://github.com/apache/arrowfastparquet -https://github.com/dask/fastparquetAWS Cost Explorer -https://aws.amazon.com/aws-cost-management/aws-cost-explorer/API ReferenceS3 CloudWatch Metrics -https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.html
airflow-aws-executors
Apache Airflow: Native AWS ExecutorsThis is an AWS Executor that delegates every task to a scheduled container on either AWS Batch, AWS Fargate, or AWS ECS.pipinstallairflow-aws-executorsGetting StartedForAWS Batch:Getting Started with AWS Batch ReadMeForAWS ECS/Fargate:Getting Started with AWS ECS/Fargate ReadMeBut Why?There's so much to unpack here.In a Nut-Shell:Pay for what you use.Simplicity in Setup.No new libraries are introduced to AirflowServers require up-keep and maintenance. For example, just one CPU-bound or memory-bound Airflow Task could overload the resources of a server and starve out the celery or scheduler thread;thus causing the entire server to go down. All of these executors don't have this problem.The Case for AWS BatchAWS Batch can be seen as very similar to the Celery Executor, but WITH Autoscaling. AWS will magically provision and take-down instances. AWS will magically monitor each container store their status for ~24 hours. AWS will determine when to autoscale based off of amount of time and number of tasks in queue.In contrast, Celery can scale up, but doesn't have a good scaling-down story (based off of personal experience). If you look at Celery's Docs about Autoscaling you'll find APIs about scaling the number of threads on one server; that doesn't even work. Each Celery workers is the user's responsibility to provision and maintain at fixed capacity. The Celery Backend and worker queue also need attention and maintenance. I've tried autoscaling an ECS cluster on my own with CloudWatch Alarms on SQS, triggering CloudWatch Events, triggering capacity providers, triggering Application Autoscaling groups, and it was a mess that I never got to work properly.The Case for AWS Batch on AWS Fargate, and AWS FargateIf you're on the Fargate executor it may take ~2.5 minutes for a task to pop up, but at least it's a constant O(1) time. This way, the concept of tracking DAG Landing Times becomes unnecessary. If you have more than 2000 concurrent tasks (which is a lot) then you can always contact AWS to provide an increase in this soft-limit.AWS Batch v AWS ECS v AWS Fargate?I almost always recommend that you go the AWS Batch route. Especially since, as of Dec 2020, AWS Batch supports Fargate deployments. So unless you need some very custom flexibility provided by ECS, or have a particular reason to use AWS Fargate directly, then go with AWS Batch.AWS Batch- Is built on top of ECS, but has additional features for Batch-Job management. Including auto-scaling up and down servers on an ECS cluster based on jobs submitted to a queue. Generally easier to configure and setup than either option.AWS Fargate- Is a serverless container orchestration service; comparable to a proprietary AWS version of Kubernetes. Launching a Fargate Task is like saying "I want these containers to be launched somewhere in the cloud with X CPU and Y memory, and I don't care about the server". AWS Fargate is built on top of AWS ECS, and is easier to manage and maintain. 
However, it provides less flexibility.AWS ECS- Is known as "Elastic Container Service", which is a container orchestration service that uses a designated cluster of EC2 instances that you operate, own, and maintain.BatchFargateECSStart-up per taskCombines both, depending on if the job queue is Fargate serverless2-3 minutes per task; O(1) constant timeInstant 3s, or until capacity is available.MaintenanceYou patch the own, operate, and patch the servers OR Serverless (as of Dec 2020)ServerlessYou patch the own, operate, and patch the serversCapacityAutoscales to configurable Max vCPUs in compute environment~2000 containers. See AWS LimitsFixed. Not auto-scaling.FlexibilityCombines both, depending on if the job queue is Fargate serverlessLow. Can only do what AWS allows in FargateHigh. Almost anything that you can do on an EC2Fractional CPUs?Yes, as of Dec 2020 a task can have 0.25 vCPUs.Yes. A task can have 0.25 vCPUs.Yes. A task can have 0.25 vCPUs.Optional Container RequirementsThis means that you can specify CPU, Memory, env vars, and GPU requirements on a task.AWS BatchSpecifying an executor config will be merged directly into theBatch.submit_job()request kwarg.For example:task=PythonOperator(python_callable=lambda*args,**kwargs:print('hello world'),task_id='say_hello',executor_config=dict(vcpus=1,memory=512),dag=dag)AWS ECS/FargateSpecifying an executor config will be merged into theECS.run_task()request kwargs as a container override for the airflow container.Refer to AWS' documentation for Container Override for a full list of kwargsFor example:task=PythonOperator(python_callable=lambda*args,**kwargs:print('hello world'),task_id='say_hello',executor_config=dict(cpu=256,# 0.25 fractional CPUsmemory=512),dag=dag)Airflow ConfigurationsBatch[batch]regiondescription: The name of AWS Regionmandatory: even with a custom run_task_kwargsexample: us-east-1job_namedescription: The name of airflow jobexample: airflow-job-namejob_queuedescription: The name of AWS Batch Queue in which tasks are submittedexample: airflow-job-queuejob_definitiondescription: The name of the AWS Batch Job Definition; optionally includes revision numberexample: airflow-job-definition or airflow-job-definition:2submit_job_kwargsdescription: This is the default configuration for calling the Batchsubmit_job functionAPI. To change the parameters used to run a task in Batch, the user can overwrite the path to specify another python dictionary. More documentation can be found in theExtensibilitysection below.default: airflow_aws_executors.conf.BATCH_SUBMIT_JOB_KWARGSECS & FARGATE[ecs_fargate]regiondescription: The name of AWS Regionmandatory: even with a custom run_task_kwargsexample: us-east-1clusterdescription: Name of AWS ECS or Fargate clustermandatory: even with a custom run_task_kwargscontainer_namedescription: Name of registered Airflow container within your AWS cluster. This container will receive an airflow CLI command as an additional parameter to its entrypoint. 
For more info see url to Boto3 docs above.mandatory: even with a custom run_task_kwargstask_definitiondescription: Name of AWS Task Definition.For more info see Boto3.launch_typedescription: Launch type can either be 'FARGATE' OR 'EC2'.For more info see Boto3.default: FARGATEplatform_versiondescription: AWS Fargate is versioned.See this page for more detailsdefault: LATESTassign_public_ipdescription: Assign public ip.For more info see Boto3.security_groupsdescription: Security group ids for task to run in (comma-separated).For more info see Boto3.example: sg-AAA,sg-BBBsubnetsdescription: Subnets for task to run in (comma-separated).For more info see Boto3.example: subnet-XXXXXXXX,subnet-YYYYYYYYrun_task_kwargsdescription: This is the default configuration for calling the ECSrun_task functionAPI. To change the parameters used to run a task in FARGATE or ECS, the user can overwrite the path to specify another python dictionary. More documentation can be found in theExtensibilitysection below.default: airflow_aws_executors.conf.ECS_FARGATE_RUN_TASK_KWARGSNOTE: Modify airflow.cfg or export environmental variables. For example:AIRFLOW__ECS_FARGATE__REGION="us-west-2"ExtensibilityThere are many different ways to schedule an ECS, Fargate, or Batch Container. You may want specific container overrides, environmental variables, subnets, retries, etc. This project doesnotattempt to wrap around the AWS API. These technologies are ever evolving, and it would be impossible to keep up with AWS's innovations. Instead, it allows the user to offer their own configuration in the form of Python dictionaries, which are then directly passed to Boto3'srun_taskorsubmit_jobfunction as **kwargs. This allows for maximum flexibility and little maintenance.AWS BatchIn this example we will modify the defaultsubmit_job_kwargsconfig. Note, however, there is nothing that's stopping us from completely overriding it and providing our own config. If we do so, be sure to specify the mandatory Airflow configurations in the section above.For example:# exporting env vars in this way is like modifying airflow.cfgexportAIRFLOW__BATCH__SUBMIT_JOB_KWARGS="custom_module.CUSTOM_SUBMIT_JOB_KWARGS"# filename: AIRFLOW_HOME/plugins/custom_module.pyfromairflow_aws_executors.confimportBATCH_SUBMIT_JOB_KWARGSfromcopyimportdeepcopy# Add retries & timeout to default configCUSTOM_SUBMIT_JOB_KWARGS=deepcopy(BATCH_SUBMIT_JOB_KWARGS)CUSTOM_SUBMIT_JOB_KWARGS['retryStrategy']={'attempts':3}CUSTOM_SUBMIT_JOB_KWARGS['timeout']={'attemptDurationSeconds':24*60*60*60}"I need more levers!!! I should be able to make changes to how the API gets called at runtime!"classCustomBatchExecutor(AwsBatchExecutor):def_submit_job_kwargs(self,task_id,cmd,queue,exec_config)->dict:submit_job_api=super()._submit_job_kwargs(task_id,cmd,queue,exec_config)ifqueue=='long_tasks_queue':submit_job_api['retryStrategy']={'attempts':3}submit_job_api['timeout']={'attemptDurationSeconds':24*60*60*60}returnsubmit_job_apiAWS ECS/FargateIn this example we will modify the defaultsubmit_job_kwargs. Note, however, there is nothing that's stopping us from completely overriding it and providing our own config. 
If we do so, be sure to specify the mandatory Airflow configurations in the section above.For example:# exporting env vars in this way is like modifying airflow.cfgexportAIRFLOW__BATCH__SUBMIT_JOB_KWARGS="custom_module.CUSTOM_SUBMIT_JOB_KWARGS"# filename: AIRFLOW_HOME/plugins/custom_module.pyfromairflow_aws_executors.confimportECS_FARGATE_RUN_TASK_KWARGSfromcopyimportdeepcopy# Add environmental variables to contianer overridesCUSTOM_RUN_TASK_KWARGS=deepcopy(ECS_FARGATE_RUN_TASK_KWARGS)CUSTOM_RUN_TASK_KWARGS['overrides']['containerOverrides'][0]['environment']=[{'name':'CUSTOM_ENV_VAR','value':'enviornment variable value'}]"I need more levers!!! I should be able to make changes to how the API gets called at runtime!"classCustomFargateExecutor(AwsFargateExecutor):def_run_task_kwargs(self,task_id,cmd,queue,exec_config)->dict:run_task_api=super()._run_task_kwargs(task_id,cmd,queue,exec_config)ifqueue=='long_tasks_queue':run_task_api['retryStrategy']={'attempts':3}run_task_api['timeout']={'attemptDurationSeconds':24*60*60*60}returnrun_task_apiIssues & BugsPlease file a ticket in GitHub for issues. Be persistent and be polite.Contribution & DevelopmentThis repository uses Github Actions for CI, pytest for Integration/Unit tests, and isort+pylint for code-style. Pythonic Type-Hinting is encouraged. From the bottom of my heart, thank you to everyone who has contributed to making Airflow better.
airflow_aws_shared_secrets
airflow-aws-shared-secretsSecretsManagerBackend with cross-account access.Expected properties where:shared_account: account_id from your core aws-accountaws_region: aws_region where the core secrets are stored in{"connections_prefix":"airflow/connections/${environment}","connections_prefix_shared":"airflow/core/connections/${environment}","shared_account":"123456789012","aws_region":"eu-central-1"}UsageWe recommend setting up the SharedSecretsManager in theAirflow Helmby configuring the following config values.### [secrets]AIRFLOW__SECRETS__BACKEND:'airflow_aws_shared_secrets.secret_manager.SharedSecretsManagerBackend'AIRFLOW__SECRETS__BACKEND_KWARGS:'{"connections_prefix": "airflow/connections/${environment}", "connections_prefix_shared" : "airflow/core/connections/${environment}", "shared_account": "<my_core_aws_account_id>", "aws_region": "eu-central-1"}'Checkconfigurations-reffor more Airflow configuration possibilities.
airflow-azure-xcom-backend
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
airflow-backfill-plugin
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
airflow-bigabig-core
Failed to fetch description. HTTP Status Code: 404
airflow-bigquerylogger
BigQuery logger handler for Airflow

Installation

pip install airflow-bigquerylogger

Configuration

AIRFLOW__CORE__REMOTE_LOGGING='true'
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER='gs://bucket/path'
AIRFLOW__CORE__REMOTE_LOG_CONN_ID='gcs_log'
AIRFLOW__CORE__LOGGING_CONFIG_CLASS='bigquerylogger.config.LOGGING_CLASS'
AIRFLOW__CORE__LOG_BIGQUERY_DATASET='dataset.table'
AIRFLOW__CORE__LOG_BIGQUERY_LIMIT=50

Google Cloud BigQuery

Rows that were recently written to a table via streaming (using the tabledata.insertall method) cannot be modified using UPDATE, DELETE, or MERGE statements. I recommend setting up table retention!

Credits

Thanks to the Bluecore engineering team for this useful article.
airflow-caching-google-secret-manager-backend
Airflow Caching Secret Manager BackendWarning:This project is unmaintained.
airflow-cdk
airflow-cdk

This project makes it simple to deploy Airflow via ECS Fargate using the AWS CDK in Python.

Usage

There are two main ways that this package can be used.

Standalone package

For those already familiar with the AWS CDK, add this project as a dependency, i.e. pip install airflow-cdk and/or add it to requirements.txt, and use the FargateAirflow construct like so:

from aws_cdk import core
from airflow_cdk import FargateAirflow

app = core.App()

FargateAirflow(
    app,
    "airflow-cdk",
    postgres_password="replacethiswithasecretpassword")

app.synth()

cdk deploy

That's it.

Cloning

You can also clone this repository and alter the FargateAirflow construct to your heart's content. That also provides you an added benefit of utilizing the tasks.py tasks with invoke to do things like create new dags easily, i.e.

inv new-dag

You would then also easily be able to use the existing docker-compose for local development with some minor modifications for your setup.

The easiest way to get started would be just a one-line change to the app.py example above and to the docker-compose.yml file:

from aws_cdk import aws_ecs, core
from airflow_cdk import FargateAirflow

app = core.App()

FargateAirflow(
    app,
    "airflow-cdk",
    postgres_password="replacethiswithasecretpassword",
    # this is the only change to make when cloning
    base_image=aws_ecs.ContainerImage.from_asset("."))

app.synth()

Then, in the docker-compose.yml file, simply delete, comment out, or change the image name for the image: knowsuchagency/airflow-cdk line in x-airflow.

Now the same container that would be created by docker-compose build will be deployed to ECS for your web, worker, and scheduler images by cdk deploy.

Components

The following AWS resources will be deployed as ECS tasks within the same cluster and VPC by default:

- an airflow webserver task and an internet-facing application load balancer
- an airflow scheduler task
- an airflow worker task (note: it will auto-scale based on CPU and memory usage up to a total of 16 instances at a time by default, starting from 1)
- a rabbitmq broker
- an application load balancer that will allow you to log in to the rabbitmq management console with the default user/pw guest/guest
- an rds instance
- an s3 bucket for logs

Why is this awesome?

Apart from the fact that we're able to describe our infrastructure using the same language and codebase we use to author our dags? Since we're using CloudFormation under the hood, whenever we change a part of our code or infrastructure, only those changes that are different from our last deployment will be deployed. Meaning, if all we do is alter the code we want to run on our deployment, we simply re-build and publish our docker container (which is done for us if we use aws_ecs.ContainerImage.from_asset(".")) prior to cdk deploy!

Existing users of airflow will know how tricky it can be to manage deployments when you want to distinguish between pushing changes to your codebase, i.e. dags, and actual infrastructure deployments. We just have to be careful not to deploy while we have some long-running worker task we don't want to interrupt, since Fargate will replace those worker instances with new ones running our updated code. Now there's basically no distinction.

Notes

Before running cdk destroy, you will want to empty the s3 bucket that's created, otherwise the command may fail at that stage and the bucket can be left in a state that makes it difficult to delete later on.

TODOs

- create a custom component to deploy airflow to an ec2 cluster
- improve documentation
- (possibly) subsume the airflow stable helm chart as a cdk8s chart

Contributions welcome!
airflow-census
No description available on PyPI.
airflow-census-jeetendra
No description available on PyPI.
airflow-clickhouse-plugin
Airflow ClickHouse Plugin🔝 The most popularApache Airflowplugin for ClickHouse, ranked in the top 1% of downloadson PyPI. Based on awesomemymarilyn/clickhouse-driver.This plugin provides two families of operators: richerclickhouse_driver.Client.execute-basedand standardizedcompatible with Python DB API 2.0.Both operators' families are fully supported and covered with tests for different versions of Airflow and Python.clickhouse-driverfamilyClickHouseOperatorClickHouseHookClickHouseSensorThese operators are based onmymarilyn/clickhouse-driver'sClient.executemethod and arguments. They offer a full functionality ofclickhouse-driverand are recommended if you are starting fresh with ClickHouse in Airflow.FeaturesSQL Templating: SQL queries and other parameters are templated.Multiple SQL Queries: execute run multiple SQL queries within a singleClickHouseOperator. The result of the last query is pushed to XCom (configurable bydo_xcom_push).Logging: executed queries are logged in a visually pleasing format, making it easier to track and debug.Efficient Native ClickHouse Protocol: Utilizes efficientnativeClickHouse TCP protocol, thanks toclickhouse-driver.Does not support HTTP protocol.Custom Connection Parameters: Supports additional ClickHouseconnection parameters, such as various timeouts,compression,secure, through the AirflowConnection.extraproperty.See reference and examplesbelow.Installation and dependenciespip install -U airflow-clickhouse-pluginDependencies: onlyapache-airflowandclickhouse-driver.Python DB API 2.0 familyOperators:ClickHouseSQLExecuteQueryOperatorClickHouseSQLColumnCheckOperatorClickHouseSQLTableCheckOperatorClickHouseSQLCheckOperatorClickHouseSQLValueCheckOperatorClickHouseSQLIntervalCheckOperatorClickHouseSQLThresholdCheckOperatorClickHouseBranchSQLOperatorClickHouseDbApiHookClickHouseSqlSensorThese operators combineclickhouse_driver.dbapiwithapache-airflow-providers-common-sql. While they have limited functionality compared toClient.execute(not all arguments are supported), they provide a standardized interface. This is useful when porting Airflow pipelines to ClickHouse from another SQL provider backed bycommon.sqlAirflow package, such as MySQL, Postgres, BigQuery, and others.The feature set of this version is fully based oncommon.sqlAirflow provider: refer to itsreferenceandexamplesfor details.An example is also availablebelow.Installation and dependenciesAddcommon.sqlextra when installing the plugin:pip install -U airflow-clickhouse-plugin[common.sql]— to enable DB API 2.0 operators.Dependencies:apache-airflow-providers-common-sql(usually pre-packed with Airflow) in addition toapache-airflowandclickhouse-driver.Python and Airflow versions supportDifferent versions of the plugin support different combinations of Python and Airflow versions. WeprimarilysupportAirflow 2.0+ and Python 3.8+. If you need to use the plugin with older Python-Airflow combinations, pick a suitable plugin version:airflow-clickhouse-plugin versionAirflow versionPython version1.1.0>=2.0.0,<2.8.0~=3.81.0.0>=2.0.0,<2.7.0~=3.80.11.0~=2.0.0,>=2.2.0,<2.7.0~=3.70.10.0,0.10.1~=2.0.0,>=2.2.0,<2.6.0~=3.70.9.0,0.9.1~=2.0.0,>=2.2.0,<2.5.0~=3.70.8.2>=2.0.0,<2.4.0~=3.70.8.0,0.8.1>=2.0.0,<2.3.0~=3.60.7.0>=2.0.0,<2.2.0~=3.60.6.0~=2.0.1~=3.6>=0.5.4,<0.6.0~=1.10.6>=2.7 or >=3.5.*>=0.5.0,<0.5.4==1.10.6>=2.7 or >=3.5.*~=means compatible release, seePEP 440for an explanation.Previous versions of the plugin might requirepandasextra:pip install airflow-clickhouse-plugin[pandas]==0.11.0. 
Check out earlier versions ofREADME.mdfor details.UsageTo see examplesscroll down. To run them,create an Airflow connection to ClickHouse.ClickHouseOperator referenceTo importClickHouseOperatorusefrom airflow_clickhouse_plugin.operators.clickhouse import ClickHouseOperator.Supported arguments:sql(templated, required): query (if argument is a singlestr) or multiple queries (iterable ofstr). Supports files with.sqlextension.clickhouse_conn_id: Airflow connection id. Connection schema is describedbelow. Default connection id isclickhouse_default.Arguments ofclickhouse_driver.Client.executemethod:parameters(templated): passedparamsof theexecutemethod. (Renamed to avoid name conflict with Airflow tasks'paramsargument.)dictforSELECTqueries.list/tuple/generator forINSERTqueries.If multiple queries are provided viasqlthen theparametersare passed toallof them.with_column_types(not templated).external_tables(templated).query_id(templated).settings(templated).types_check(not templated).columnar(not templated).For the documentation of these arguments, refer toclickhouse_driver.Client.executeAPI reference.database(templated): if present, overridesschemaof Airflow connection.Other arguments (including a requiredtask_id) are inherited from AirflowBaseOperator.Result of thelastquery is pushed to XCom (disable usingdo_xcom_push=Falseargument).In other words, the operator simply wrapsClickHouseHook.executemethod.Seeexamplebelow.ClickHouseHook referenceTo importClickHouseHookusefrom airflow_clickhouse_plugin.hooks.clickhouse import ClickHouseHook.Supported kwargs of constructor (__init__method):clickhouse_conn_id: Airflow connection id. Connection schema is describedbelow. Default connection id isclickhouse_default.database: if present, overridesschemaof Airflow connection.DefinesClickHouseHook.executemethod which simply wrapsclickhouse_driver.Client.execute. It has all the same arguments, except of:sql(instead ofexecute'squery): query (if argument is a singlestr) or multiple queries (iterable ofstr).ClickHouseHook.executereturns a result of thelastquery.Also, the hook definesget_conn()method which returns an underlyingclickhouse_driver.Clientinstance.Seeexamplebelow.ClickHouseSensor referenceTo importClickHouseSensorusefrom airflow_clickhouse_plugin.sensors.clickhouse import ClickHouseSensor.This class wrapsClickHouseHook.executemethodinto anAirflow sensor. Supports all the arguments ofClickHouseOperatorand additionally:is_success: a callable which accepts a single argument — a return value ofClickHouseHook.execute. If a return value ofis_successis truthy, the sensor succeeds. By default, the callable isbool: i.e. if the return value ofClickHouseHook.executeis truthy, the sensor succeeds. Usually,executeis a list of records returned by query: thus, by default it is falsy if no records are returned.is_failure: a callable which accepts a single argument — a return value ofClickHouseHook.execute. If a return value ofis_failureis truthy, the sensor raisesAirflowException. By default,is_failureisNoneand no failure check is performed.Seeexamplebelow.How to create an Airflow connection to ClickHouseAs atypeof a new connection, chooseSQLiteor any other SQL database. There isnospecial ClickHouse connection type yet, so we use any SQL as the closest one.All the connection attributes are optional: default host islocalhostand other credentialshave defaultsdefined byclickhouse-driver. 
If you use non-default values, set them according to theconnection schema.If you use a secure connection to ClickHouse (this requires additional configurations on ClickHouse side), setextrato{"secure":true}. Allextraconnection parameters are passed toclickhouse_driver.Clientas-is.ClickHouse connection schemaclickhouse_driver.Clientis initialized with attributes stored in AirflowConnection attributes:Airflow Connection attributeClient.__init__argumenthosthostport(int)portschemadatabaseloginuserpasswordpasswordextra**kwargsdatabaseargument ofClickHouseOperator,ClickHouseHook,ClickHouseSensor, and others overridesschemaattribute of the Airflow connection.Extra argumentsYou may set non-standard arguments ofclickhouse_driver.Client, such as timeouts,compression,secure, etc. using Airflow'sConnection.extraattribute. The attribute should contain a JSON object which will bedeserializedand all of its properties will be passed as-is to theClient.For example, if Airflow connection containsextra='{"secure": true}'then theClient.__init__will receivesecure=Truekeyword argument in addition to other connection attributes.CompressionYou should install specific packages to support compression. For example, for lz4:pip3installclickhouse-cityhashlz4Then you should includecompressionparameter in airflow connection uri:extra='{"compression":"lz4"}'. You can get additional information about extra options fromofficial documentation of clickhouse-driver.Connection URI with compression will look likeclickhouse://login:password@host:port/?compression=lz4.Seeofficial documentationto learn more about connections management in Airflow.Default ValuesIf some Airflow connection attribute is not set, it is not passed toclickhouse_driver.Client. In such cases, the plugin uses a default value from the correspondingclickhouse_driver.Connectionargument. For instance,userdefaults to'default'.This means that the plugin itself does not define any default values for the ClickHouse connection. 
You may fully rely on default values of theclickhouse-driverversion you use.The only exception ishost: if the attribute of Airflow connection is not set then'localhost'is used.Default connectionBy default, the plugin uses Airflow connection with id'clickhouse_default'.ExamplesClickHouseOperator examplefromairflowimportDAGfromairflow_clickhouse_plugin.operators.clickhouseimportClickHouseOperatorfromairflow.operators.pythonimportPythonOperatorfromairflow.utils.datesimportdays_agowithDAG(dag_id='update_income_aggregate',start_date=days_ago(2),)asdag:ClickHouseOperator(task_id='update_income_aggregate',database='default',sql=('''INSERT INTO aggregateSELECT eventDt, sum(price * qty) AS income FROM salesWHERE eventDt = '{{ ds }}' GROUP BY eventDt''','''OPTIMIZE TABLE aggregate ON CLUSTER {{ var.value.cluster_name }}PARTITION toDate('{{ execution_date.format('%Y-%m-01') }}')''','''SELECT sum(income) FROM aggregateWHERE eventDt BETWEEN'{{ execution_date.start_of('month').to_date_string() }}'AND '{{ execution_date.end_of('month').to_date_string() }}'''',# result of the last query is pushed to XCom),# query_id is templated and allows to quickly identify query in ClickHouse logsquery_id='{{ ti.dag_id }}-{{ ti.task_id }}-{{ ti.run_id }}-{{ ti.try_number }}',clickhouse_conn_id='clickhouse_test',)>>PythonOperator(task_id='print_month_income',python_callable=lambdatask_instance:# pulling XCom value and printing itprint(task_instance.xcom_pull(task_ids='update_income_aggregate')),)ClickHouseHook examplefromairflowimportDAGfromairflow_clickhouse_plugin.hooks.clickhouseimportClickHouseHookfromairflow.providers.sqlite.hooks.sqliteimportSqliteHookfromairflow.operators.pythonimportPythonOperatorfromairflow.utils.datesimportdays_agodefsqlite_to_clickhouse():sqlite_hook=SqliteHook()ch_hook=ClickHouseHook()records=sqlite_hook.get_records('SELECT * FROM some_sqlite_table')ch_hook.execute('INSERT INTO some_ch_table VALUES',records)withDAG(dag_id='sqlite_to_clickhouse',start_date=days_ago(2),)asdag:dag>>PythonOperator(task_id='sqlite_to_clickhouse',python_callable=sqlite_to_clickhouse,)Important note: don't try to insert values usingch_hook.execute('INSERT INTO some_ch_table VALUES (1)')literal form.clickhouse-driverrequiresvalues forINSERTquery to be provided viaparametersdue to specifics of the native ClickHouse protocol.ClickHouseSensor examplefromairflowimportDAGfromairflow_clickhouse_plugin.sensors.clickhouseimportClickHouseSensorfromairflow_clickhouse_plugin.operators.clickhouseimportClickHouseOperatorfromairflow.utils.datesimportdays_agowithDAG(dag_id='listen_warnings',start_date=days_ago(2),)asdag:dag>>ClickHouseSensor(task_id='poke_events_count',database='monitor',sql="SELECT count() FROM warnings WHERE eventDate = '{{ ds }}'",is_success=lambdacnt:cnt>10000,)>>ClickHouseOperator(task_id='create_alert',database='alerts',sql='''INSERT INTO events SELECT eventDate, count()FROM monitor.warnings WHERE eventDate = '{{ ds }}'''',)DB API 2.0: ClickHouseSqlSensor and ClickHouseSQLExecuteQueryOperator examplefromairflowimportDAGfromairflow_clickhouse_plugin.sensors.clickhouse_dbapiimportClickHouseSqlSensorfromairflow_clickhouse_plugin.operators.clickhouse_dbapiimportClickHouseSQLExecuteQueryOperatorfromairflow.utils.datesimportdays_agowithDAG(dag_id='listen_warnings',start_date=days_ago(2),)asdag:dag>>ClickHouseSqlSensor(task_id='poke_events_count',hook_params=dict(schema='monitor'),sql="SELECT count() FROM warnings WHERE eventDate = '{{ ds }}'",success=lambdacnt:cnt>10000,conn_id=None,# required by common.sql 
SqlSensor; use None for default)>>ClickHouseSQLExecuteQueryOperator(task_id='create_alert',database='alerts',sql='''INSERT INTO events SELECT eventDate, count()FROM monitor.warnings WHERE eventDate = '{{ ds }}'''',)How to run testsUnit tests:python3 -m unittest discover -t tests -s unitIntegration tests require access to a ClickHouse server. Here is how to set up a local test environment using Docker:Run ClickHouse server in a local Docker container:docker run -p 9000:9000 --ulimit nofile=262144:262144 -it clickhouse/clickhouse-serverRun tests with Airflow connection details setvia environment variable:PYTHONPATH=src AIRFLOW_CONN_CLICKHOUSE_DEFAULT=clickhouse://localhost python3 -m unittest discover -t tests -s integrationStop the container after running the tests to deallocate its resources.Run all (unit&integration) tests with ClickHouse connection defined:PYTHONPATH=src AIRFLOW_CONN_CLICKHOUSE_DEFAULT=clickhouse://localhost python3 -m unittest discover -s testsGitHub ActionsGitHub Actionis configured for this project.Run all tests inside DockerStart a ClickHouse server inside Docker:docker exec -it $(docker run --rm -d clickhouse/clickhouse-server) bashThe above command will openbashinside the container.Install dependencies into container and run tests (execute inside container):apt-getupdate apt-getinstall-ypython3python3-pipgitmake gitclonehttps://github.com/whisklabs/airflow-clickhouse-plugin.gitcdairflow-clickhouse-plugin python3-mpipinstall-rrequirements.txtPYTHONPATH=srcAIRFLOW_CONN_CLICKHOUSE_DEFAULT=clickhouse://localhostpython3-munittestdiscover-stestsStop the container.ContributorsCreated by Anton Bryzgalov,@bryzgaloff, originally atWhisk, SamsungInspired by Viktor Taranenko,@viktortnk(Whisk, Samsung)Community contributors:Danila Ganchar,@d-gancharMikhail,@gladerAlexander Chashnikov,@ne1r0nSimone Brundu,@saimon46@gkargStanislav Morozov,@r3b-fishSergey Bychkov,@SergeyBychkov@was-avMaxim Tarasov,@MaximTar@dvnrvnGiovanni Corsetti,@CorsettiSDmytro Zhyzniev,@1ng4liptAnton Bezdenezhnykh,@GaMeRaM
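The reference above describes how query parameters are passed, but none of the examples show a parameterized `SELECT`. Below is a minimal sketch, not taken from the plugin's own examples, that runs such a query through `ClickHouseHook`. The `sales` table and `eventDt` column reuse the names from the examples above; the `%(name)s` placeholder style is `clickhouse-driver`'s standard parameter syntax, and parameters are passed to the hook the same way `clickhouse_driver.Client.execute` takes them.

```python
from airflow_clickhouse_plugin.hooks.clickhouse import ClickHouseHook


def count_sales(ds: str) -> int:
    # ClickHouseHook.execute wraps clickhouse_driver.Client.execute, so query
    # parameters are passed the way the driver expects: a dict for SELECT
    # queries, referenced in the SQL text with %(name)s placeholders.
    hook = ClickHouseHook(clickhouse_conn_id='clickhouse_default')
    result = hook.execute(
        "SELECT count() FROM sales WHERE eventDt = %(event_dt)s",
        {'event_dt': ds},
    )
    # execute() returns the result of the last query: here, a list with one row tuple.
    return result[0][0]
```

Such a callable can be wrapped in a `PythonOperator`, or the same query can be given to `ClickHouseOperator` together with its templated `parameters` argument.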
airflow-code-editor
Airflow Code Editor PluginA plugin forApache Airflowthat allows you to edit DAGs in browser. It provides a file managing interface within specified directories and it can be used to edit, upload, and download your files. If git support is enabled, the DAGs are stored in a Git repository. You may use it to view Git history, review local changes and commit.System RequirementsAirflow Versions1.10.3 or newergit Versions (git is not required if git support is disabled)2.0 or newerScreenshotsFile ManagerEditorSearchGit HistoryGit WorkspaceInstall InstructionsDocker ImagesFor the ease of deployment, use the production-ready reference container image. The image is based on the reference images for Apache Airflow.You can find the following images there:andreax79/airflow-code-editor:latest- the latest released Airflow Code Editor image with the latest Apache Airflow versionandreax79/airflow-code-editor:2.7.0- the latest released Airflow Code Editor with specific Airflow versionandreax79/airflow-code-editor:2.7.0-7.5.0- specific version of Airflow and Airflow Code EditorInstalling from PyPIInstall the pluginpipinstallairflow-code-editorInstall optional dependenciesblack - Black Python code formatterfs-s3fs - S3FS Amazon S3 Filesystemfs-gcsfs - Google Cloud Storage Filesystem... other filesystems supported by PyFilesystem - seehttps://www.pyfilesystem.org/page/index-of-filesystems/pipinstallblackfs-s3fsfs-gcsfsRestart the Airflow Web ServerOpen Admin - DAGs Code EditorConfig OptionsYou can set options editing the Airflow's configuration file or setting environment variables. You can edit yourairflow.cfgadding any of the following settings in the [code_editor] section. All the settings are optional.enabledenable this plugin (default: True).git_enabledenable git support (default: True). If git is not installed, disable this option.git_cmdgit command (path)git_default_argsgit arguments added to each call (default: -c color.ui=true)git_author_namehuman-readable name in the author/committer (default logged user first and last names)git_author_emailemail for the author/committer (default: logged user email)git_init_repoinitialize a git repo in DAGs folder (default: True)root_directoryroot folder (default: Airflow DAGs folder)line_lengthPython code formatter - max line length (default: 88)string_normalizationPython code formatter - if true normalize string quotes and prefixes (default: False)mount,mount1, ... 
configure additional folder (mount point) - format: name=xxx,path=yyyignored_entriescomma-separated list of entries to be excluded from file/directory list (default: .*,__pycache__)[code_editor] enabled = True git_enabled = True git_cmd = /usr/bin/git git_default_args = -c color.ui=true git_init_repo = False root_directory = /home/airflow/dags line_length = 88 string_normalization = False mount = name=data,path=/home/airflow/data mount1 = name=logs,path=/home/airflow/logs mount2 = name=data,path=s3://exampleMount Options:name: mount name (destination)path: local path or PyFilesystem FS URLs - seehttps://docs.pyfilesystem.org/en/latest/openers.htmlExample:name=ftp_server,path=ftp://user:[email protected]/privatename=data,path=s3://examplename=tmp,path=/tmpYou can also set options with the following environment variables:AIRFLOW__CODE_EDITOR__ENABLEDAIRFLOW__CODE_EDITOR__GIT_ENABLEDAIRFLOW__CODE_EDITOR__GIT_CMDAIRFLOW__CODE_EDITOR__GIT_DEFAULT_ARGSAIRFLOW__CODE_EDITOR__GIT_AUTHOR_NAMEAIRFLOW__CODE_EDITOR__GIT_AUTHOR_EMAILAIRFLOW__CODE_EDITOR__GIT_INIT_REPOAIRFLOW__CODE_EDITOR__ROOT_DIRECTORYAIRFLOW__CODE_EDITOR__LINE_LENGTHAIRFLOW__CODE_EDITOR__STRING_NORMALIZATIONAIRFLOW__CODE_EDITOR__MOUNT, AIRFLOW__CODE_EDITOR__MOUNT1, AIRFLOW__CODE_EDITOR__MOUNT2, ...AIRFLOW__CODE_EDITOR__IGNORED_ENTRIESExample:export AIRFLOW__CODE_EDITOR__STRING_NORMALIZATION=True export AIRFLOW__CODE_EDITOR__MOUNT='name=data,path=/home/airflow/data' export AIRFLOW__CODE_EDITOR__MOUNT1='name=logs,path=/home/airflow/logs' export AIRFLOW__CODE_EDITOR__MOUNT2='name=tmp,path=/tmp'Development InstructionsFork the repoClone it on the local machinegitclonehttps://github.com/andreax79/airflow-code-editor.gitcdairflow-code-editorCreate dev imagemakedev-imageSwitch node versionnvmuseMake changes you need. Build npm package with:makenpm-buildYou can start Airflow webserver with:makewebserverRun testsmaketestCommit and push changesCreatepull requestto the original repoLinksApache AirflowCodemirror, In-browser code editorGit WebUI, A standalone local web based user interface for git repositoriesBlack, The Uncompromising Code Formatterpss, power-tool for searching source filesVue.jsVue-good-table, data table for VueJSVue-tree, TreeView control for VueJSVue-universal-modal Universal modal plugin for Vue@3Vue-simple-context-menuSplitpanesAxios, Promise based HTTP client for the browser and node.jsPyFilesystem2, Python's Filesystem abstraction layerAmazon S3 PyFilesystemGoogle Cloud Storage PyFilesystem
airflow-common-operators
airflow-common-operators
Common Operators / Tasks for Apache Airflow
airflow-commons
airflow-commons

A Python package that contains common functionalities for Airflow.

Installation

Use the package manager pip to install airflow-commons.

pip install airflow-commons

Modules

bigquery_operator: With this module you can manage your Google BigQuery operations.
mysql_operator: Using this module, you can connect to your MySQL data source and manage your data operations.
s3_operator: This operator connects to your S3 bucket and lets you manage your bucket.
glossary: This module consists of constants used across the project.
sql_resources: Template BigQuery and MySQL queries such as merge, delete, select etc. are located here.
utils: Generic methods like connection, querying etc. are implemented in this module.

Usage

Sample deduplication code works like:

from airflow_commons import bigquery_operator

bigquery_operator.deduplicate(
    service_account_file="path_to_file",
    start_date="01-01-2020 14:00:00",
    end_date="01-01-2020 15:00:00",
    project_id="bigquery_project_id",
    source_dataset="source_dataset",
    source_table="source_table",
    target_dataset="target_dataset",
    target_table="target_table",
    oldest_allowable_target_partition="01-01-2015 00:00:00",
    primary_keys=["primary_keys"],
    time_columns=["time_columns"],
    allow_partition_pruning=True,
)

A sketch of running this call from an Airflow task follows below.
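Since airflow-commons is meant to be used from Airflow, a common pattern is to wrap the call above in a task. The sketch below is not from the package's documentation; it simply schedules the documented `deduplicate` call with a standard `PythonOperator`, and every argument value remains a placeholder.

```python
from airflow import DAG
from airflow.operators.python import PythonOperator  # airflow.operators.python_operator on Airflow 1.x
from airflow.utils.dates import days_ago
from airflow_commons import bigquery_operator


def run_deduplication():
    # Same call as in the sample above; every value is a placeholder.
    bigquery_operator.deduplicate(
        service_account_file="path_to_file",
        start_date="01-01-2020 14:00:00",
        end_date="01-01-2020 15:00:00",
        project_id="bigquery_project_id",
        source_dataset="source_dataset",
        source_table="source_table",
        target_dataset="target_dataset",
        target_table="target_table",
        oldest_allowable_target_partition="01-01-2015 00:00:00",
        primary_keys=["primary_keys"],
        time_columns=["time_columns"],
        allow_partition_pruning=True,
    )


with DAG(dag_id="bigquery_deduplication", start_date=days_ago(1), schedule_interval="@hourly") as dag:
    PythonOperator(task_id="deduplicate", python_callable=run_deduplication)
```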
airflow-config
airflow-config
Apache Airflow utilities for configuration of many DAGs and DAG environments
airflow-connection-plugin
Templating for Airflow connections

The connection plugin contains an Airflow macro for templating connections in tasks. You can use it like this:

# prints 'mysql'
{{ macros.connection_plugin.get_conn('airflow_db').host }}

connection_plugin.get_conn returns the Connection object that you can interact with as described in the documentation.

Installation

pip install airflow-connection-plugin

Demo

To start the docker container, simply run the following command in the root directory:

cd example && docker-compose up

After that you can reach the Airflow frontend via http://localhost:8080. You will find an example DAG that demonstrates how to retrieve different connection information; a minimal DAG using the macro is also sketched below.

Attention: Be especially careful when using passwords in templates.
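As a sketch of the macro in context (assuming the plugin is installed and an 'airflow_db' connection exists, as in the example above), here is a minimal DAG that echoes a connection's host from a templated field; the DAG and task names are made up:

```python
from airflow import DAG
from airflow.operators.bash import BashOperator  # airflow.operators.bash_operator on Airflow 1.x
from airflow.utils.dates import days_ago

with DAG(dag_id="connection_macro_demo", start_date=days_ago(1), schedule_interval=None) as dag:
    # bash_command is a templated field, so the macro is rendered at runtime
    BashOperator(
        task_id="print_connection_host",
        bash_command="echo {{ macros.connection_plugin.get_conn('airflow_db').host }}",
    )
```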
airflow-connections-manager
Airflow connections manager

Interface to access Airflow connections. Currently, it works in Airflow environments with the API enabled using Basic Authentication.

Usage

You must have the following environment variables declared:

AIRFLOW_API_URL=<your Airflow API url, like http://localhost:8080/api/v1>
AIRFLOW_API_TOKEN=<your Airflow Basic Auth token, like Basic YXRtaW46YHRt8W4=>

Sample

from airflow_connections_manager import AirflowConnectionsManager
connections = AirflowConnectionsManager.list_connections()

Building for PyPI deployment

https://packaging.python.org/en/latest/tutorials/packaging-projects/

python -m pip install --upgrade build
python -m build
python -m twine upload --repository testpypi dist/*
airflow-contrib
Failed to fetch description. HTTP Status Code: 404
airflowconversion
Python utility tool for migration of workflows from Oozie to Airflow

Inputs required:

Input folder path (where the XML files to be converted are stored)
Output folder path (where the converted Python files are to be stored)
Queue name (for the DAG)

How to use this library:

from airflowconversion.ParseXML import conversion
conversion(r"<input_folder_path>", r"<output_folder_path>", '<queue_name>')
airflowctl
airflowctlairflowctlis a command-line tool for managing Apache Airflow™ projects. It provides a set of commands to initialize, build, start, stop, and manage Airflow projects. Withairflowctl, you can easily set up and manage your Airflow projects, install specific versions of Apache Airflow, and manage virtual environments.The main goal ofairflowctlis for first-time Airflow users to install and setup Airflow using a single command and for existing Airflow users to manage multiple Airflow projects with different Airflow versions on the same machine.FeaturesProject Initialization with Connections & Variables:Initialize a new Airflow project with customizable project name, Apache Airflow version, and Python version. It also allows you to manage Airflow connections and variables.Automatic Virtual Environment Management:Automatically create and manage virtual environments for your Airflow projects, even for Python versions that are not installed on your system.Airflow Version Management:Install and manage specific versions of Apache Airflow.Background Process Management:Start and stop Airflow in the background with process management capabilities.Live Logs Display:Continuously display live logs of background Airflow processes with optional log filtering.Table of ContentsInstallationQuickstartUsageStep 1: Initialize a New ProjectStep 2: Build the ProjectStep 3: Start AirflowStep 4: Monitor LogsStep 5: Stop AirflowStep 6: List Airflow ProjectsStep 7: Show Project InfoStep 8: Running Airflow CommandsStep 9: Changing Airflow configurationUsing with other Airflow toolsAstro CLIInstallationpipinstallairflowctlQuickstartTo initialize a new Airflow project with the latest airflow version, build a Virtual environment and run the project, run the following command:airflowctlinitmy_airflow_project--build-startThis will start Airflow and display the logs in the terminal. You can access the Airflow UI athttp://localhost:8080. To stop Airflow, pressCtrl+C.UsageStep 1: Initialize a New ProjectTo create a new Apache Airflow project, use the init command. 
This command sets up the basic project structure, including configuration files, directories, and sample DAGs.airflowctlinit<project_name>--airflow-version<version>--python-version<version>Example:airflowctlinitmy_airflow_project--airflow-version2.6.3--python-version3.8This creates a new project directory with the following structure:my_airflow_project ├──.env ├──.gitignore ├──dags │└──example_dag_basic.py ├──plugins ├──requirements.txt └──settings.yamlDescription of the files and directories:.envfile contains the environment variables for the project..gitignorefile contains the default gitignore settings.dagsdirectory contains the sample DAGs.pluginsdirectory contains the sample plugins.requirements.txtfile contains the project dependencies.settings.yamlfile contains the project settings, including the project name, Airflow version, Python version, and virtual environment path.In our examplesettings.yamlfile would look like this:# Airflow version to be installedairflow_version:"2.6.3"# Python version for the projectpython_version:"3.8"# Path to a virtual environment to be used for the projectmode:virtualenv:venv_path:"PROJECT_DIR/.venv"# Airflow connectionsconnections:# Example connection# - conn_id: example# conn_type: http# host: http://example.com# port: 80# login: user# password: pass# schema: http# extra:# example_extra_field: example-value# Airflow variablesvariables:# Example variable# - key: example# value: example-value# description: example-descriptionEdit thesettings.yamlfile to customize the project settings.Step 2: Build the ProjectThe build command creates the virtual environment, installs the specified Apache Airflow version, and sets up the project dependencies.Run the build command from the project directory:cdmy_airflow_project airflowctlbuildThe CLI relies onpyenvto download and install a Python version if the version is not already installed.Example, if you have Python 3.8 installed but you specify Python 3.7 in thesettings.yamlfile, the CLI will install Python 3.7 usingpyenvand create a virtual environment with Python 3.7 first.Optionally, you can choose custom virtual environment path in case you have already installed apache-airflow package and other dependencies. Pass the existing virtualenv path using--venv_pathoption to theinitcommand or insettings.yamlfile. Make sure the existing virtualenv has same airflow and python version as yoursettings.yamlfile states.Step 3: Start AirflowTo start Airflow services, use the start command. This command activates the virtual environment and launches the Airflow web server and scheduler.Example:airflowctlstartmy_airflow_projectYou can also start Airflow in the background with the--backgroundflag:airflowctlstartmy_airflow_project--backgroundStep 4: Monitor LogsTo monitor logs from the background Airflow processes, use the logs command. 
This command displays live logs and provides options to filter logs for specific components.Exampleairflowctllogsmy_airflow_projectTo filter logs for specific components:# Filter logs for schedulerairflowctllogsmy_airflow_project-s# Filter logs for webserverairflowctllogsmy_airflow_project-w# Filter logs for triggererairflowctllogsmy_airflow_project-t# Filter logs for scheduler and webserverairflowctllogsmy_airflow_project-s-wStep 5: Stop AirflowTo stop Airflow services if they are still running, use the stop command.Example:airflowctlstopmy_airflow_projectStep 6: List Airflow ProjectsTo list all Airflow projects, use the list command.Example:airflowctllistStep 7: Show Project InfoTo show project info, use the info command.Example:# From the project directoryairflowctlinfo# From outside the project directoryairflowctlinfomy_airflow_projectStep 8: Running Airflow commandsTo run Airflow commands, use theairflowctl airflowcommand. All the commands afterairflowctl airfloware passed to the Airflow CLI.:# From the project directoryairflowctlairflow<airflow_command>Example:$airflowctlairflowversion2.6.3You can also runairflowctl airflow --helpto see the list of available commands.$airflowctlairflow--help Usage:airflowctlairflow[OPTIONS]COMMAND[ARGS]...RunAirflowcommands. PositionalArguments:GROUP_OR_COMMANDGroups:celeryCelerycomponentsconfigViewconfigurationconnectionsManageconnectionsdagsManageDAGsdbDatabaseoperationsjobsManagejobskubernetesToolstohelpruntheKubernetesExecutorpoolsManagepoolsprovidersDisplayprovidersrolesManagerolestasksManagetasksusersManageusersvariablesManagevariablesCommands:cheat-sheetDisplaycheatsheetdag-processorStartastandaloneDagProcessorinstanceinfoShowinformationaboutcurrentAirflowandenvironmentkerberosStartakerberosticketrenewerpluginsDumpinformationaboutloadedpluginsrotate-fernet-keyRotateencryptedconnectioncredentialsandvariablesschedulerStartaschedulerinstancestandaloneRunanall-in-onecopyofAirflowsync-permUpdatepermissionsforexistingrolesandoptionallyDAGstriggererStartatriggererinstanceversionShowtheversionwebserverStartaAirflowwebserverinstance Options:-h,--helpshowthishelpmessageandexitExample:# Listing dags$airflowctlairflowdagslist dag_id|filepath|owner|paused==================+======================+=========+=======example_dag_basic|example_dag_basic.py|airflow|True# Running standalone$airflowctlairflowstandaloneOr you can activate the virtual environment first and then run the commands as shown below.Example:# From the project directorysource.venv/bin/activate# Source all the environment variablessource.env airflowversionTo add a new DAG, add the DAG file to thedagsdirectory.To edit an existing DAG, edit the DAG file in thedagsdirectory. The changes will be reflected in the Airflow web server.Step 9: Changing Airflow Configurationsairflowctlby default uses SQLite as the backend database andSequentialExecutoras the executor. 
However, if you want to use other databases or executors, you can stop the project and either a) edit theairflow.cfgfile or b) add environment variables to the.envfile.Example:# Stop the projectairflowctlstopmy_airflow_project# Changing the executor to LocalExecutor# Change the database to PostgreSQL if you already have it installedecho"AIRFLOW__DATABASE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@localhost:5432/airflow">>.envecho"AIRFLOW__CORE__EXECUTOR=LocalExecutor">>.env# Start the projectairflowctlstartmy_airflow_projectCheck theAirflow documentationfor all the available Airflow configurations.Using Local Executor with SQLiteFor Airflow >= 2.6, you can runLocalExecutorwithsqliteas the backend database by adding the following environment variable to the.envfile:_AIRFLOW__SKIP_DATABASE_EXECUTOR_COMPATIBILITY_CHECK=1AIRFLOW__CORE__EXECUTOR=LocalExecutorThis should automatically happen for you when you runairflowctl airflowcommand if Airflow version==2.6.*.[!WARNING] Sqlite is not recommended for production use. Use it only for development and testing only.Other CommandsFor more information and options, you can use the--helpflag with each command.Using with other Airflow toolsairflowctlcan be used with other Airflow projects as long as the project structure is the same.Astro CLIairflowctlcan be used withAstro CLIprojects too.Whileairflowctlis a tool that allows you to run Airflow locally using virtual environments, Astro CLI allows you to run Airflow locally using docker.airflowctlcan read theairflow_settings.yamlfile generated by Astro CLI for reading connections & variables. It will then re-use it assettingsfile forairflowctl.For example, if you have an Astro CLI project:Run theairflowctl init . --build-startcommand to initializeairflowctlfrom the project directory. Pressyto continue when prompted.It will then ask you for the Airflow version, enter the version you are using, by default uses the latest Airflow version, press enter to continueIt will use the installed Python version as the project's python version. If you want to use a different Python version, you can specify it in theairflow_settings.yamlfile in thepython_versionfield.# From the project directory$cdastro_project $airflowctlinit.--build-start Directory/Users/xyz/astro_projectisnotempty.Continue?[y/N]:y Project/Users/xyz/astro_projectaddedtotracking. Airflowprojectinitializedin/Users/xyz/astro_project DetectedAstroproject.UsingAstrosettingsfile(/Users/kaxilnaik/Desktop/proj1/astro_project/airflow_settings.yaml).'airflow_version'notfoundinairflow_settings.yamlfile.WhatistheAirflowversion?[2.6.3]: Virtualenvironmentcreatedat/Users/xyz/astro_project/.venv ... ...If you see an error like the following, removeairflow.cfgfile from the project directory and removeAIRFLOW_HOMEfrom.envfile if it exists and try again.Error:theremightbeaproblemwithyourprojectstartingup. Thewebserverhealthchecktimedoutafter1m0sbutyourprojectwillcontinuetryingtostart. Run'astro dev logs --webserver | --scheduler'fordetails.LicenseThis project is licensed under the terms of theApache 2.0 License
airflow-cust-base
No description available on PyPI.
airflow-customs-by-novigi
Novigi Custom Airflow Operators, Hooks and Plugins

This repo mainly contains custom Airflow operators and hooks written by Novigi Pty Ltd.

Version No - 1.0.10

Bitbucket Link - https://bitbucket.org/novigi/nov20011-airflow-common-extensions/src/master/

How do I get set up?

Open a terminal and type "sudo pip install airflow_customs_by_novigi".

Who do I talk to?

[email protected]
airflow-cyberark-secrets-backend
airflow-cyberark-secrets-backend

This is a secrets backend for CyberArk CCP (Central Credential Provider) for the Apache Airflow platform. It allows you to pull connections and variables from your CyberArk safes via the CCP.

This library has been tested with Airflow 1.10.14.

Documentation for CyberArk CCP can be found here. Documentation for Airflow secrets backends can be found here.

Usage

pip install airflow-cyberark-secrets-backend

Update your airflow.cfg with the following:

[secrets]
backend = airflow_cyberark_secrets_backend.CyberArkSecretsBackend
backend_kwargs = {"app_id": "/files/var.json", "ccp_url": "/files/conn.json", "safe": "", "verify": "/path/to/ssl/cert.pem" }

The backend_kwargs:

app_id : The application ID for CCP
ccp_url : The host URL for CCP AIM, excluding query params
safe : The secrets safe
verify : The SSL cert path for CCP SSL; can be False to disable verification, can be set via the env var CYBERARK_SSL, default False

This library expects and requires your CyberArk response to have the following properties (mapped to the Airflow keys shown). This mapping is a band-aid required by the limited configuration CyberArk PAM (11.xx) allows.

AccountDescription : svc_account
ApplicationName : schema
Address : host
Comment : extra
Content : password
LogonDomain : login
Port : port

The AccountDescription : svc_account field is used to fetch the password from a rotating secret when the fetched secret is static, i.e. if you fetch secret1, which is static, and you specify the CCP URL for secret2, which rotates, it will fetch metadata for secret1 and fill in the password from secret2 in its response.

Development

PRs welcomed.

The following will install in editable mode with all required development tools.

pip install -e '.[dev]'

Please format (black) and lint (pylint) before submitting a PR.
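Once the backend is configured, secrets are resolved through Airflow's normal lookup APIs rather than through this package directly. A minimal sketch follows; the connection and variable ids are placeholders, not part of this library:

```python
from airflow.hooks.base_hook import BaseHook  # airflow.hooks.base in Airflow 2.x
from airflow.models import Variable

# Both lookups consult the configured secrets backend (CyberArk CCP) before
# falling back to the Airflow metadata database.
conn = BaseHook.get_connection("my_cyberark_backed_connection")
print(conn.host, conn.login, conn.port)

api_key = Variable.get("my_cyberark_backed_variable")
```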
airflow-dag
airflow-dagA tool to manage Airflow dags.InstallationYou can usepipto installairflow-dag:$ pip install airflow-dagUsageYou can use thebuildcommand to convert a yaml config to an Airflow dag:$ airflow-dag build -t examples/ -c examples/notebook.yml -o examples/out$ airflow-dag build --help Usage: airflow-dag build [OPTIONS] Convert a yaml config to an Airflow dag. Options: -t, --template-dir TEXT Path to dag templates -c, --config TEXT Path to dag config -o, --output-dir TEXT Output path --help Show this message and exit.If a template path is not provided,airflow-dagwill look into thedefault templates.You can define your own dag templates too, and put them in atemplatesdirectory in Airflow's home folder. The dag yaml configs can be placed in aconfigsdirectory in the same home folder, and the output path can then be the Airflow dags folder. The usage will look like:$ airflow-dag build -t airflow/templates -c airflow/configs/dag.yml -o airflow/dagsVersioningairflow-dagusesSemantic Versioning. For the available versions, see the tags on the GitHub repository.LicenseThis project is licensed under the Apache License, see theLICENSEfile for details.
airflow-dag-artifact
Welcome to airflow_dag_artifact Documentation

Many serverless AWS services support versioning and aliases for deployment, which makes blue/green deployments, canary deployments, and rollbacks easy:

AWS Lambda Versioning and Alias
AWS StepFunction Versioning and Alias
AWS SageMaker Model Registry Versioning

However, Airflow DAGs do not support this feature yet. This library provides a way to manage Airflow DAG versions and aliases so you can deploy Airflow DAGs with confidence. Please read this tutorial to learn how to use this library.

It also has native AWS MWAA support for DAG deployment automation with DAG version management, which is not officially supported by Apache Airflow. Please read this example to learn how to use this library with AWS MWAA.

Install

airflow_dag_artifact is released on PyPI, so all you need is:

$ pip install airflow-dag-artifact

To upgrade to the latest version:

$ pip install --upgrade airflow-dag-artifact
airflow-dag-deployer
Installation

pip install airflow-dag-deployer

Deploy dags with the command line

Dags can be deployed as a zip archive or as independent Python files prefixed by the project name.

Deploying dags as a zip archive

deploydag --project=<project_name> --source=<dags_dir> --destination=<airflow_home> --method=zip

Deploying dags as a file

deploydag --project=<project_name> --source=<dags_dir> --destination=<airflow_home> --method=file

Deploying dags with a config file for different environments

To set up different deployment environments (dev/test/prod), have a deploydag.json (or any filename with JSON settings) file like this:

{"dev": {"project": "testproject", "source": "dags", "destination": "airflowhome", "method": "zip"}}

Run the command like this:

deploydag --config=deploydag.json --env=dev
airflowdaggenerator
What is AirflowDAGGenerator?Dynamically generates Python Airflow DAG file based on given Jinja2 Template and YAML configuration to encourage reusable code. It also validates the correctness (by checking DAG contains cyclic dependency between tasks, invalid tasks, invalid arguments, typos etc.) of the generated DAG automatically by leveraging airflow DagBag, therefore it ensures the generated DAG is safe to deploy into Airflow.Why is it useful?Most of the time the Data processing DAG pipelines are same except the parameters like source, target, schedule interval etc. So having a dynamic DAG generator using a templating language can greatly benefit when you have to manage a large number of pipelines at enterprise level. Also it ensures code re-usability and standardizing the DAG, by having a standardized template. It also improves the maintainability and testing effort.How is it Implemented?By leveraging the de-facto templating language used in Airflow itself, that is Jinja2 and the standard YAML configuration to provide the parameters specific to a use case while generating the DAG.RequirementsPython 3.6 or laterNote: Tested on 3.6, 3.7 and 3.8 python environments, see tox.ini for detailsHow to use this Package?First install the package using:pipinstallairflowdaggeneratorAirflow Dag Generator should now be available as a command line tool to execute. To verify runairflowdaggenerator-hAirflow Dag Generator can also be run as follows:python-mairflowdaggenerator-hSample Usage:If you have installed the package then:airflowdaggenerator\-config_yml_pathpath/to/config_yml_file\-config_yml_file_nameconfig_yml_file\-template_pathpath/to/jinja2_template_file\-template_file_namejinja2_template_file\-dag_pathpath/to/generated_output_dag_py_file\-dag_file_namegenerated_output_dag_py_fileORpython-mairflowdaggenerator\-config_yml_pathpath/to/config_yml_file\-config_yml_file_nameconfig_yml_file\-template_pathpath/to/jinja2_template_file\-template_file_namejinja2_template_file\-dag_pathpath/to/generated_output_dag_py_file\-dag_file_namegenerated_output_dag_py_fileIf you have cloned the project source code then you have sample jinja2 template and YAML configuration file present under tests/data folder, so you can test the behaviour by opening a terminal window under project root directory and run the following command:python-mairflowdaggenerator\-config_yml_path./tests/data\-config_yml_file_namedag_properties.yml\-template_path./tests/data\-template_file_namesample_dag_template.py.j2\-dag_path./tests/data/output\-dag_file_nametest_dag.pyAnd you can see that test_dag.py is created under ./tests/data/output folder.TroubleshootingIn case you get some error while generating the dag using this package like (sqlite3.OperationalError)…, then please execute following command:airflowinitdb
airflow-dataform-parser
Failed to fetch description. HTTP Status Code: 404
airflow-data-validation
No description available on PyPI.
airflow-db-logger
Please see readme.md @https://github.com/LamaAni/AirflowDBLogger
airflow-dbt
airflow-dbtThis is a collection ofAirflowoperators to provide easy integration withdbt.fromairflowimportDAGfromairflow_dbt.operators.dbt_operatorimport(DbtSeedOperator,DbtSnapshotOperator,DbtRunOperator,DbtTestOperator)fromairflow.utils.datesimportdays_agodefault_args={'dir':'/srv/app/dbt','start_date':days_ago(0)}withDAG(dag_id='dbt',default_args=default_args,schedule_interval='@daily')asdag:dbt_seed=DbtSeedOperator(task_id='dbt_seed',)dbt_snapshot=DbtSnapshotOperator(task_id='dbt_snapshot',)dbt_run=DbtRunOperator(task_id='dbt_run',)dbt_test=DbtTestOperator(task_id='dbt_test',retries=0,# Failing tests would fail the task, and we don't want Airflow to try again)dbt_seed>>dbt_snapshot>>dbt_run>>dbt_testInstallationInstall from PyPI:pipinstallairflow-dbtIt will also need access to thedbtCLI, which should either be on yourPATHor can be set with thedbt_binargument in each operator.UsageThere are five operators currently implemented:DbtDocsGenerateOperatorCallsdbt docs generateDbtDepsOperatorCallsdbt depsDbtSeedOperatorCallsdbt seedDbtSnapshotOperatorCallsdbt snapshotDbtRunOperatorCallsdbt runDbtTestOperatorCallsdbt testEach of the above operators accept the following arguments:profiles_dirIf set, passed as the--profiles-dirargument to thedbtcommandtargetIf set, passed as the--targetargument to thedbtcommanddirThe directory to run thedbtcommand infull_refreshIf set toTrue, passes--full-refreshvarsIf set, passed as the--varsargument to thedbtcommand. Should be set as a Python dictionary, as will be passed to thedbtcommand as YAMLmodelsIf set, passed as the--modelsargument to thedbtcommandexcludeIf set, passed as the--excludeargument to thedbtcommandselectIf set, passed as the--selectargument to thedbtcommanddbt_binThedbtCLI. Defaults todbt, so assumes it's on yourPATHverboseThe operator will log verbosely to the Airflow logswarn_errorIf set toTrue, passes--warn-errorargument todbtcommand and will treat warnings as errorsTypically you will want to use theDbtRunOperator, followed by theDbtTestOperator, as shown earlier.You can also use the hook directly. 
Typically this can be used for when you need to combine thedbtcommand with another task in the same operators, for example runningdbt docsand uploading the docs to somewhere they can be served from.Building LocallyTo install from the repository: First it's recommended to create a virtual environment:python3-mvenv.venvsource.venv/bin/activateInstall usingpip:pipinstall.TestingTo run tests locally, first create a virtual environment (seeBuilding Locallysection)Install dependencies:pipinstall.pytestRun the tests:pytesttests/Code styleThis project usesflake8.To check your code, first create a virtual environment (seeBuilding Locallysection):pipinstallflake8 flake8airflow_dbt/tests/setup.pyPackage managementIf you use dbt's package manager you should include all dependencies before deploying your dbt project.For Docker users, packages specified inpackages.ymlshould be included as part your docker image by callingdbt depsin yourDockerfile.Amazon Managed Workflows for Apache Airflow (MWAA)If you use MWAA, you just need to update therequirements.txtfile and addairflow-dbtanddbtto it.Then you can have your dbt code inside a folder{DBT_FOLDER}in the dags folder on S3 and configure the dbt task like below:dbt_run=DbtRunOperator(task_id='dbt_run',dbt_bin='/usr/local/airflow/.local/bin/dbt',profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',dir='/usr/local/airflow/dags/{DBT_FOLDER}/')License & ContributingThis is available as open source under the terms of theMIT License.Bug reports and pull requests are welcome on GitHub athttps://github.com/gocardless/airflow-dbt.GoCardless ♥ open source. If you do too, comejoin us.
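As a sketch of the "generate docs and publish them" pattern mentioned above, the DAG below pairs the documented DbtDocsGenerateOperator with a plain PythonOperator. It assumes DbtDocsGenerateOperator is importable from the same module as the other operators shown earlier; the upload step and all paths are placeholders, not part of airflow-dbt.

```python
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
from airflow_dbt.operators.dbt_operator import DbtDocsGenerateOperator


def publish_docs():
    # e.g. upload ./target/index.html, manifest.json and catalog.json to object
    # storage or a static file server -- implementation intentionally left out.
    ...


with DAG(dag_id='dbt_docs', start_date=days_ago(0), schedule_interval='@daily') as dag:
    generate_docs = DbtDocsGenerateOperator(task_id='dbt_docs_generate', dir='/srv/app/dbt')
    publish = PythonOperator(task_id='publish_docs', python_callable=publish_docs)
    generate_docs >> publish
```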
airflow-dbt-cta
airflow-dbtNOTE: this repository was forked fromhttps://github.com/gocardless/airflow-dbtin order to release an updated version to PyPi.This is a collection ofAirflowoperators to provide easy integration withdbt.fromairflowimportDAGfromairflow_dbt_cta.operators.dbt_operatorimport(DbtSeedOperator,DbtSnapshotOperator,DbtRunOperator,DbtTestOperator,DbtCleanOperator,DbtBuildOperator,)fromairflow.utils.datesimportdays_agodefault_args={'dir':'/srv/app/dbt','start_date':days_ago(0)}withDAG(dag_id='dbt',default_args=default_args,schedule_interval='@daily')asdag:dbt_seed=DbtSeedOperator(task_id='dbt_seed',)dbt_snapshot=DbtSnapshotOperator(task_id='dbt_snapshot',)dbt_run=DbtRunOperator(task_id='dbt_run',)dbt_build=DbtBuildOperator(task_id='dbt_build',)dbt_test=DbtTestOperator(task_id='dbt_test',retries=0,# Failing tests would fail the task, and we don't want Airflow to try again)dbt_clean=DbtCleanOperator(task_id='dbt_clean',)dbt_seed>>dbt_snapshot>>dbt_run>>dbt_build>>dbt_test>>dbt_cleanInstallationInstall from PyPI:pipinstallairflow-dbtIt will also need access to thedbtCLI, which should either be on yourPATHor can be set with thedbt_binargument in each operator.UsageThere are six operators currently implemented:DbtDocsGenerateOperatorCallsdbt docs generateDbtDepsOperatorCallsdbt depsDbtSeedOperatorCallsdbt seedDbtSnapshotOperatorCallsdbt snapshotDbtRunOperatorCallsdbt runDbtTestOperatorCallsdbt testDbtCleanOperatorCallsdbt cleanDbtBuildOperatorCallsdbt buildEach of the above operators accept the following arguments:envIf set as a kwarg dict, passed the given environment variables as the arguments to the dbt taskprofiles_dirIf set, passed as the--profiles-dirargument to thedbtcommandtargetIf set, passed as the--targetargument to thedbtcommanddirThe directory to run thedbtcommand infull_refreshIf set toTrue, passes--full-refreshvarsIf set, passed as the--varsargument to thedbtcommand. Should be set as a Python dictionary, as will be passed to thedbtcommand as YAMLmodelsIf set, passed as the--modelsargument to thedbtcommandexcludeIf set, passed as the--excludeargument to thedbtcommandselectIf set, passed as the--selectargument to thedbtcommandselectorIf set, passed as the--selectorargument to thedbtcommanddbt_binThedbtCLI. Defaults todbt, so assumes it's on yourPATHverboseThe operator will log verbosely to the Airflow logswarn_errorIf set toTrue, passes--warn-errorargument todbtcommand and will treat warnings as errorsTypically you will want to use theDbtRunOperator, followed by theDbtTestOperator, as shown earlier.You can also use the hook directly. 
Typically this can be used for when you need to combine thedbtcommand with another task in the same operators, for example runningdbt docsand uploading the docs to somewhere they can be served from.Building LocallyTo install from the repository: First it's recommended to create a virtual environment:python3-mvenv.venvsource.venv/bin/activateInstall usingpip:pipinstall.TestingTo run tests locally, first create a virtual environment (seeBuilding Locallysection)Install dependencies:pipinstall.pytestRun the tests:pytesttests/Code styleThis project usesflake8.To check your code, first create a virtual environment (seeBuilding Locallysection):pipinstallflake8 flake8airflow_dbt/tests/setup.pyPackage managementIf you use dbt's package manager you should include all dependencies before deploying your dbt project.For Docker users, packages specified inpackages.ymlshould be included as part your docker image by callingdbt depsin yourDockerfile.Amazon Managed Workflows for Apache Airflow (MWAA)If you use MWAA, you just need to update therequirements.txtfile and addairflow-dbtanddbtto it.Then you can have your dbt code inside a folder{DBT_FOLDER}in the dags folder on S3 and configure the dbt task like below:dbt_run=DbtRunOperator(task_id='dbt_run',dbt_bin='/usr/local/airflow/.local/bin/dbt',profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',dir='/usr/local/airflow/dags/{DBT_FOLDER}/')Templating and parsing environments variablesIf you would like to run DBT using custom profile definition template with environment-specific variables, like for example profiles.yml using jinja:<profile_name>:outputs:<source>:database:"{{env_var('DBT_ENV_SECRET_DATABASE')}}"password:"{{env_var('DBT_ENV_SECRET_PASSWORD')}}"schema:"{{env_var('DBT_ENV_SECRET_SCHEMA')}}"threads:"{{env_var('DBT_THREADS')}}"type:<type>user:"{{env_var('USER_NAME')}}_{{env_var('ENV_NAME')}}"target:<source>You can pass the environment variables via theenvkwarg parameter:importos...dbt_run=DbtRunOperator(task_id='dbt_run',env={'DBT_ENV_SECRET_DATABASE':'<DATABASE>','DBT_ENV_SECRET_PASSWORD':'<PASSWORD>','DBT_ENV_SECRET_SCHEMA':'<SCHEMA>','USER_NAME':'<USER_NAME>','DBT_THREADS':os.getenv('<DBT_THREADS_ENV_VARIABLE_NAME>'),'ENV_NAME':os.getenv('ENV_NAME')})License & ContributingThis is available as open source under the terms of theMIT License.Bug reports and pull requests are welcome on GitHub athttps://github.com/gocardless/airflow-dbt.GoCardless ♥ open source. If you do too, comejoin us.
airflow-dbt-dinigo
airflow-dbtThis is a collection ofAirflowoperators to provide easy integration withdbt.fromairflowimportDAGfromairflow_dbt.operators.dbt_operatorimport(DbtSeedOperator,DbtSnapshotOperator,DbtRunOperator,DbtTestOperator)fromairflow.utils.datesimportdays_agodefault_args={'dir':'/srv/app/dbt','start_date':days_ago(0)}withDAG(dag_id='dbt',default_args=default_args,schedule_interval='@daily')asdag:dbt_seed=DbtSeedOperator(task_id='dbt_seed',)dbt_snapshot=DbtSnapshotOperator(task_id='dbt_snapshot',)dbt_run=DbtRunOperator(task_id='dbt_run',)dbt_test=DbtTestOperator(task_id='dbt_test',retries=0,# Failing tests would fail the task, and we don't want Airflow to try again)dbt_seed>>dbt_snapshot>>dbt_run>>dbt_testInstallationInstall from PyPI:pipinstallairflow-dbtIt will also need access to thedbtCLI, which should either be on yourPATHor can be set with thedbt_binargument in each operator.UsageThere are five operators currently implemented:DbtDocsGenerateOperatorCallsdbt docs generateDbtDepsOperatorCallsdbt depsDbtSeedOperatorCallsdbt seedDbtSnapshotOperatorCallsdbt snapshotDbtRunOperatorCallsdbt runDbtTestOperatorCallsdbt testEach of the above operators accept the arguments inhere (dbt_command_config). The main ones being:profiles_dirIf set, passed as the--profiles-dirargument to thedbtcommandtargetIf set, passed as the--targetargument to thedbtcommanddirThe directory to run thedbtcommand infull_refreshIf set toTrue, passes--full-refreshvarsIf set, passed as the--varsargument to thedbtcommand. Should be set as a Python dictionary, as will be passed to thedbtcommand as YAMLmodelsIf set, passed as the--modelsargument to thedbtcommandexcludeIf set, passed as the--excludeargument to thedbtcommandselectIf set, passed as the--selectargument to thedbtcommanddbt_binThedbtCLI. Defaults todbt, so assumes it's on yourPATHverboseThe operator will log verbosely to the Airflow logswarn_errorIf set toTrue, passes--warn-errorargument todbtcommand and will treat warnings as errorsTypically you will want to use theDbtRunOperator, followed by theDbtTestOperator, as shown earlier.You can also use the hook directly. Typically this can be used for when you need to combine thedbtcommand with another task in the same operators, for example runningdbt docsand uploading the docs to somewhere they can be served from.A more advanced example:If want to run yourdbtproject other tan in the airflow worker you can use theDbtCloudBuildHookand apply it to theDbtBaseOperatoror simply use the providedDbtCloudBuildOperator:fromairflow_dbt.hooksimportDbtCloudBuildHookfromairflow_dbt.operatorsimportDbtBaseOperator,DbtCloudBuildOperatorDbtBaseOperator(task_id='provide_hook',command='run',use_colors=False,config={'profiles_dir':'./jaffle-shop','project_dir':'./jaffle-shop',},dbt_hook=DbtCloudBuildHook(gcs_staging_location='gs://my-bucket/compressed-dbt-project.tar.gz'))DbtCloudBuildOperator(task_id='default_hook_cloudbuild',gcs_staging_location='gs://my-bucket/compressed-dbt-project.tar.gz',command='run',use_colors=False,config={'profiles_dir':'./jaffle-shop','project_dir':'./jaffle-shop',},)You can either define the dbt params/config/flags in the operator or you can group them into aconfigparam. They both have validation, but only the config has templating. 
The following two tasks are equivalent:fromairflow_dbt.operators.dbt_operatorimportDbtBaseOperatorDbtBaseOperator(task_id='config_param',command='run',config={'profiles_dir':'./jaffle-shop','project_dir':'./jaffle-shop','dbt_bin':'/usr/local/airflow/.local/bin/dbt','use_colors':False})DbtBaseOperator(task_id='flat_config',command='run',profiles_dir='./jaffle-shop',project_dir='./jaffle-shop',dbt_bin='/usr/local/airflow/.local/bin/dbt',use_colors=False)Building LocallyTo install from the repository: First it's recommended to create a virtual environment:python3-mvenv.venvsource.venv/bin/activateInstall usingpip:pipinstall.TestingTo run tests locally, first create a virtual environment (seeBuilding Locallysection)Install dependencies:pipinstall.pytestRun the tests:pytesttests/Code styleThis project usesflake8.To check your code, first create a virtual environment (seeBuilding Locallysection):pipinstallflake8 flake8airflow_dbt/tests/setup.pyPackage managementIf you use dbt's package manager you should include all dependencies before deploying your dbt project.For Docker users, packages specified inpackages.ymlshould be included as part your docker image by callingdbt depsin yourDockerfile.Amazon Managed Workflows for Apache Airflow (MWAA)If you use MWAA, you just need to update therequirements.txtfile and addairflow-dbtanddbtto it.Then you can have your dbt code inside a folder{DBT_FOLDER}in the dags folder on S3 and configure the dbt task like below:fromairflow_dbt.operators.dbt_operatorimportDbtRunOperatordbt_run=DbtRunOperator(task_id='dbt_run',dbt_bin='/usr/local/airflow/.local/bin/dbt',profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',dir='/usr/local/airflow/dags/{DBT_FOLDER}/')License & ContributingThis is available as open source under the terms of theMIT License.Bug reports and pull requests are welcome on GitHub athttps://github.com/gocardless/airflow-dbt.GoCardless ♥ open source. If you do too, comejoin us.
airflow-dbt-doc-plugin
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
airflow-dbt-python
airflow-dbt-pythonA collection ofAirflowoperators, hooks, and utilities to executedbtcommands.Read thedocumentationfor examples, installation instructions, and more details.InstallationRequirementsBefore usingairflow-dbt-python, ensure you meet the following requirements:Adbtproject usingdbt-coreversion 1.4.0 or later.An Airflow environment using version 2.2 or later.If using any managed service, like AWS MWAA, ensure your environment is created with a supported version of Airflow.If self-hosting, Airflow installation instructions can be found in theirofficial documentation.Running Python 3.8 or later in your Airflow environment.WarningEven though we don't impose any upper limits on versions of Airflow anddbt, it's possible that new versions are not supported immediately after release, particularly fordbt. We recommend testing the latest versions before upgrading andreporting any issues.NoteOlder versions of Airflow anddbtmay work withairflow-dbt-python, although we cannot guarantee this. Our testing pipeline runs the latestdbt-corewith the latest Airflow release, and the latest version supported byAWS MWAA.From PyPIairflow-dbt-pythonis available inPyPIand can be installed withpip:pipinstallairflow-dbt-pythonAs a convenience, somedbtadapters can be installed by specifying extras. For example, if requiring thedbt-redshiftadapter:pipinstallairflow-dbt-python[redshift]From this repoairflow-dbt-pythoncan also be built from source by cloning this GitHub repository:gitclonehttps://github.com/tomasfarias/airflow-dbt-python.gitcdairflow-dbt-pythonAnd installing withPoetry:poetryinstallIn AWS MWAAAddairflow-dbt-pythonto yourrequirements.txtfile and edit your Airflow environment to use this newrequirements.txtfile, or upload it as a plugin.Read thedocumentationfor more a more detailed AWS MWAA installation breakdown.In other managed servicesairflow-dbt-pythonshould be compatible with most or all Airflow managed services. Consult the documentation specific to your provider.If you notice an issue when installingairflow-dbt-pythonin a specific managed service, please open anissue.Featuresairflow-dbt-pythonaims to make dbt afirst-class citizenof Airflow by supporting additional features that integrate both tools. As you would expect,airflow-dbt-pythoncan run all your dbt workflows in Airflow with the same interface you are used to from the CLI, but without being a mere wrapper:airflow-dbt-pythondirectly communicates with internaldbt-coreclasses, bridging the gap between them and Airflow's operator interface. Essentially, we are attempting to usedbtas a library.As this integration was completed, several features were developed toextend the capabilities of dbtto leverage Airflow as much as possible. Can you think of a waydbtcould leverage Airflow that is not currently supported? Let us know in aGitHub issue!Independent task executionAirflow executesTasksindependent of one another: even though downstream and upstream dependencies between tasks exist, the execution of an individual task happens entirely independently of any other task execution (see:Tasks Relationships).In order to work with this constraint,airflow-dbt-pythonruns each dbt command in atemporary and isolated directory. Before execution, all the relevant dbt files are copied from supported backends, and after executing the command any artifacts are exported. 
This ensures dbt can work with any Airflow deployment, including most production deployments as they are usually runningRemote Executorsand do not guarantee any files will be shared by default between tasks, since each task may run in a completely different environment.Download dbt files from a remote storageThe dbt parametersprofiles_dirandproject_dirwould normally point to a directory containing aprofiles.ymlfile and a dbt project in the local environment respectively (defined by the presence of adbt_project.ymlfile).airflow-dbt-pythonextends these parameters to also accept an URL pointing to a remote storage.Currently, we support the following remote storages:AWS S3(identified by as3scheme).Remote git repositories, like those stored in GitHub (bothhttpsandsshschemes are supported).If a remote URL is used forproject_dir, then this URL must point to a location in your remote storage containing adbtproject to run. Adbtproject is identified by the prescence of adbt_project.yml, and contains all yourresources. All of the contents of this remote location will be downloaded and made available for the operator. The URL may also point to an archived file containing all the files of a dbt project, which will be downloaded, uncompressed, and made available for the operator.If a remote URL is used forprofiles_dir, then this URL must point to a location in your remote storage that contains aprofiles.ymlfile. Theprofiles.ymlfile will be downloaded and made available for the operator to use when running. Theprofiles.ymlmay be part of yourdbtproject, in which case this argument may be ommitted.This feature is intended to work in line with Airflow'sdescription of the task concept:Tasks don’t pass information to each other by default, and run entirely independently.We interpret this as meaning a task should be responsible of fetching all thedbtrelated files it needs in order to run independently, as already described inIndependent Task Execution.Push dbt artifacts to XComEach dbt execution produces one or moreJSON artifactsthat are valuable to produce meta-metrics, build conditional workflows, for reporting purposes, and other uses.airflow-dbt-pythoncan push these artifacts toXComas requested via thedo_xcom_push_artifactsparameter, which takes a list of artifacts to push.Use Airflow connections as dbt targets (without a profiles.yml)Airflow connectionsallow users to manage and store connection information, such as hostname, port, username, and password, for operators to use when accessing certain applications, like databases. Similarly, adbtprofiles.ymlfile stores connection information under each target key.airflow-dbt-pythonbridges the gap between the two and allows you to use connection information stored as an Airflow connection by specifying the connection id as thetargetparameter of any of thedbtoperators it provides. 
What's more, if using an Airflow connection, theprofiles.ymlfile may be entirely omitted (although keep in mind aprofiles.ymlfile contains a configuration block besides target connection information).See an example DAGhere.MotivationAirflow running in a managed environmentAlthoughdbtis meant to be installed and used as a CLI, we may not have control of the environment where Airflow is running, disallowing us the option of usingdbtas a CLI.This is exactly what happens when usingAmazon's Managed Workflows for Apache Airflowor MWAA: although a list of Python requirements can be passed, the CLI cannot be found in the worker's PATH.There is a workaround which involves using Airflow'sBashOperatorand running Python from the command line:fromairflow.operators.bashimportBashOperatorBASH_COMMAND="python -c 'from dbt.main import main; main()' run"operator=BashOperator(task_id="dbt_run",bash_command=BASH_COMMAND,)But it can get cumbersome when appending all potential arguments adbt runcommand (or other subcommand) can take.That's whereairflow-dbt-pythoncomes in: it abstracts the complexity of interfacing withdbt-coreand exposes one operator for eachdbtsubcommand that can be instantiated with all the corresponding arguments that thedbtCLI would take.An alternative toairflow-dbtthat works without thedbtCLIThe alternativeairflow-dbtpackage, by default, would not work if thedbtCLI is not in PATH, which means it would not be usable in MWAA. There is a workaround via thedbt_binargument, which can be set to"python -c 'from dbt.main import main; main()' run", in similar fashion as theBashOperatorexample. Yet this approach is not without its limitations:airflow-dbtworks by wrapping thedbtCLI, which makes our code dependent on the environment in which it runs.airflow-dbtdoes not support the full range of arguments a command can take. For example,DbtRunOperatordoes not have an attribute forfail_fast.airflow-dbtdoes not offer access todbtartifacts created during execution.airflow-dbt-pythondoes so by pushing any artifacts toXCom.UsageCurrently, the followingdbtcommands are supported:cleancompiledebugdepsdocs generatelsparserunrun-operationseedsnapshotsourcetestExamplesAll example DAGs are tested against the latest Airflow version. Some changes, like modifyingimportstatements or changing types, may be required for them to work in other versions.importdatetimeasdtimportpendulumfromairflowimportDAGfromairflow_dbt_python.operators.dbtimport(DbtRunOperator,DbtSeedOperator,DbtTestOperator,)args={"owner":"airflow",}withDAG(dag_id="example_dbt_operator",default_args=args,schedule="0 0 * * *",start_date=pendulum.today("UTC").add(days=-1),dagrun_timeout=dt.timedelta(minutes=60),tags=["example","example2"],)asdag:dbt_test=DbtTestOperator(task_id="dbt_test",selector_name="pre-run-tests",)dbt_seed=DbtSeedOperator(task_id="dbt_seed",select=["/path/to/first.csv","/path/to/second.csv"],full_refresh=True,)dbt_run=DbtRunOperator(task_id="dbt_run",select=["/path/to/models"],full_refresh=True,fail_fast=True,)dbt_test>>dbt_seed>>dbt_runMore examples can be found in theexamples/directory and thedocumentation.DevelopmentSee thedevelopment documentationfor a more in-depth dive into setting up a development environment, running the test-suite, and general commentary on working onairflow-dbt-python.TestingTests are run withpytest, can be located intests/. To run them locally, you may usePoetry:poetryrunpytesttests/-vvLicenseThis project is licensed under the MIT license. SeeLICENSE.
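The XCom and remote-storage features described above can be combined in a single operator call. The following sketch runs dbt from a project stored in S3 and pushes two standard dbt artifacts to XCom via do_xcom_push_artifacts; the bucket paths are placeholders.

```python
import pendulum
from airflow import DAG
from airflow_dbt_python.operators.dbt import DbtRunOperator

with DAG(
    dag_id="example_dbt_artifacts_to_xcom",
    schedule="0 0 * * *",
    start_date=pendulum.today("UTC").add(days=-1),
) as dag:
    dbt_run = DbtRunOperator(
        task_id="dbt_run",
        project_dir="s3://my-bucket/dbt/project/",    # placeholder remote project location
        profiles_dir="s3://my-bucket/dbt/profiles/",  # placeholder; may be omitted if profiles.yml lives in the project
        select=["/path/to/models"],
        do_xcom_push_artifacts=["manifest.json", "run_results.json"],
    )
```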
airflow-dbt-winwin
airflow-dbtThis is a collection ofAirflowoperators to provide easy integration withdbt.fromairflowimportDAGfromairflow_dbt.operators.dbt_operatorimport(DbtSeedOperator,DbtSnapshotOperator,DbtRunOperator,DbtTestOperator,DbtCleanOperator,)fromairflow.utils.datesimportdays_agodefault_args={'dir':'/srv/app/dbt','start_date':days_ago(0)}withDAG(dag_id='dbt',default_args=default_args,schedule_interval='@daily')asdag:dbt_seed=DbtSeedOperator(task_id='dbt_seed',)dbt_snapshot=DbtSnapshotOperator(task_id='dbt_snapshot',)dbt_run=DbtRunOperator(task_id='dbt_run',)dbt_test=DbtTestOperator(task_id='dbt_test',retries=0,# Failing tests would fail the task, and we don't want Airflow to try again)dbt_clean=DbtCleanOperator(task_id='dbt_clean',)dbt_seed>>dbt_snapshot>>dbt_run>>dbt_test>>dbt_cleanInstallationInstall from PyPI:pipinstallairflow-dbtIt will also need access to thedbtCLI, which should either be on yourPATHor can be set with thedbt_binargument in each operator.UsageThere are five operators currently implemented:DbtDocsGenerateOperatorCallsdbt docs generateDbtDepsOperatorCallsdbt depsDbtSeedOperatorCallsdbt seedDbtSnapshotOperatorCallsdbt snapshotDbtRunOperatorCallsdbt runDbtTestOperatorCallsdbt testDbtCleanOperatorCallsdbt cleanEach of the above operators accept the following arguments:envIf set as a kwarg dict, passed the given environment variables as the arguments to the dbt taskprofiles_dirIf set, passed as the--profiles-dirargument to thedbtcommandtargetIf set, passed as the--targetargument to thedbtcommanddirThe directory to run thedbtcommand infull_refreshIf set toTrue, passes--full-refreshvarsIf set, passed as the--varsargument to thedbtcommand. Should be set as a Python dictionary, as will be passed to thedbtcommand as YAMLmodelsIf set, passed as the--modelsargument to thedbtcommandexcludeIf set, passed as the--excludeargument to thedbtcommandselectIf set, passed as the--selectargument to thedbtcommandselectorIf set, passed as the--selectorargument to thedbtcommanddbt_binThedbtCLI. Defaults todbt, so assumes it's on yourPATHverboseThe operator will log verbosely to the Airflow logswarn_errorIf set toTrue, passes--warn-errorargument todbtcommand and will treat warnings as errorsTypically you will want to use theDbtRunOperator, followed by theDbtTestOperator, as shown earlier.You can also use the hook directly. 
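A minimal sketch of direct hook usage is shown below; the hook name (DbtCliHook), its import path, and the run_cli method reflect the package's implementation as I understand it, so treat them as assumptions and verify against your installed version:

```python
from airflow.operators.python_operator import PythonOperator
from airflow_dbt.hooks.dbt_hook import DbtCliHook  # assumed import path; verify in your version

def dbt_run_plus_extras():
    # Run `dbt run` through the hook, then do any extra Python work in the same task.
    DbtCliHook(dir='/srv/app/dbt', target='prod').run_cli('run')
    # ... e.g. post-processing or notifications could go here ...

combined = PythonOperator(
    task_id='dbt_run_plus_extras',
    python_callable=dbt_run_plus_extras,
)
```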
Typically this can be used for when you need to combine thedbtcommand with another task in the same operators, for example runningdbt docsand uploading the docs to somewhere they can be served from.Building LocallyTo install from the repository: First it's recommended to create a virtual environment:python3-mvenv.venvsource.venv/bin/activateInstall usingpip:pipinstall.TestingTo run tests locally, first create a virtual environment (seeBuilding Locallysection)Install dependencies:pipinstall.pytestRun the tests:pytesttests/Code styleThis project usesflake8.To check your code, first create a virtual environment (seeBuilding Locallysection):pipinstallflake8 flake8airflow_dbt/tests/setup.pyPackage managementIf you use dbt's package manager you should include all dependencies before deploying your dbt project.For Docker users, packages specified inpackages.ymlshould be included as part your docker image by callingdbt depsin yourDockerfile.Amazon Managed Workflows for Apache Airflow (MWAA)If you use MWAA, you just need to update therequirements.txtfile and addairflow-dbtanddbtto it.Then you can have your dbt code inside a folder{DBT_FOLDER}in the dags folder on S3 and configure the dbt task like below:dbt_run=DbtRunOperator(task_id='dbt_run',dbt_bin='/usr/local/airflow/.local/bin/dbt',profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',dir='/usr/local/airflow/dags/{DBT_FOLDER}/')Templating and parsing environments variablesIf you would like to run DBT using custom profile definition template with environment-specific variables, like for example profiles.yml using jinja:<profile_name>:outputs:<source>:database:"{{env_var('DBT_ENV_SECRET_DATABASE')}}"password:"{{env_var('DBT_ENV_SECRET_PASSWORD')}}"schema:"{{env_var('DBT_ENV_SECRET_SCHEMA')}}"threads:"{{env_var('DBT_THREADS')}}"type:<type>user:"{{env_var('USER_NAME')}}_{{env_var('ENV_NAME')}}"target:<source>You can pass the environment variables via theenvkwarg parameter:importos...dbt_run=DbtRunOperator(task_id='dbt_run',env={'DBT_ENV_SECRET_DATABASE':'<DATABASE>','DBT_ENV_SECRET_PASSWORD':'<PASSWORD>','DBT_ENV_SECRET_SCHEMA':'<SCHEMA>','USER_NAME':'<USER_NAME>','DBT_THREADS':os.getenv('<DBT_THREADS_ENV_VARIABLE_NAME>'),'ENV_NAME':os.getenv('ENV_NAME')})License & ContributingThis is available as open source under the terms of theMIT License.Bug reports and pull requests are welcome on GitHub athttps://github.com/gocardless/airflow-dbt.GoCardless ♥ open source. If you do too, comejoin us.
airflow-declarative
Airflow declarative DAGs via YAML. Compatibility: Python 2.7 / 3.5+, Airflow 1.10.4+. Key Features: Declarative DAGs in plain-text YAML help a lot in understanding what the DAG will look like. Made for humans, not programmers. It makes it extremely hard to turn your DAGs into a code mess. Even if you build a complicated YAML generator, the result remains readable for humans. No more guilt about coupling business logic with the task management system (Airflow); they can now coexist separately. Static analysis becomes a trivial task. It's a good abstraction for creating your own scheduler/worker compatible with the original Airflow one. Examples: Check the tests/dags directory for examples of DAGs which will work and which won't. Use the src/airflow_declarative/schema.py module as the reference for the YAML file schema. It should be self descriptive. Don't be shy to experiment: trafaret-config will help you understand what went wrong, and why and where. Usage: We provide support for two installation options: as a complementary side package for the upstream Airflow, or as built-in Airflow functionality using patches for Airflow. Upstream Airflow: The idea is to put a Python script into the dags_folder which would load the declarative dags via airflow_declarative. More details: Installation using Upstream Airflow. importosimportairflow_declarative# Assuming that the yaml dags are located in the same directory# as this Python module:root=os.path.dirname(__file__)dags_list=[airflow_declarative.from_path(os.path.join(root,item))foriteminos.listdir(root)ifitem.endswith((".yml",".yaml"))]globals().update({dag.dag_id:dagfordagsindags_listfordagindags}) Patched Airflow: We provide ready-to-use patches in the patches directory. To use them you will need to apply a patch to the corresponding Airflow version and then build it yourself. More details: Installation using Patched Airflow.
airflow-diagrams
airflow-diagrams: Auto-generated Diagrams from Airflow DAGs. 🔮 🪄 This project aims to easily visualise your Airflow DAGs at the service level, from providers like AWS, GCP, Azure, etc., via diagrams. Before / After screenshots. 🚀 Get started: To install it from PyPI run: pip install airflow-diagrams. NOTE: Make sure you have Graphviz installed. Then just call it like this: Examples of generated diagrams can be found in the examples directory. 🤔 How it Works: ℹ️ It connects, by using the official Apache Airflow Python Client, to your Airflow installation to retrieve all DAGs (in case you don't specify any dag_id) and all Tasks for the DAG(s). 🪄 It processes every DAG and its Tasks and 🔮 tries to find a diagram node for every DAG's task, using Fuzzy String Matching to pick the node that matches best. If you are unhappy with the match you can also provide a mapping.yml file to statically map from an Airflow task to a diagram node. 🎨 It renders the results into a Python file which can then be executed to retrieve the rendered diagram. 🎉 ❤️ Contributing: Contributions are very welcome. Please go ahead and raise an issue if you have one or open a PR. Thank you.
airflow-ditto
DittoDitto is a framework which allows you to apply transformations to an Airflow DAG, converting it into another DAG which is flow-isomorphic with the original DAG. That is, it will orchestrate a flow of operators which yields the same results, but has been transformed to run in another environment or platform. The framework was built to transform EMR DAGs to run on Azure HDInsight, but you can extend the rich API for any other kind of transformation. In fact you can transform DAGs such that the result is not isomorphic too if you want (although at that point you're better off writing a whole new DAG). The purpose of the framework is to allow you to maintain one codebase and be able to run your airflow DAGs on different execution environments (e.g. on different clouds, or even different container frameworks - spark on YARN vs kubernetes). It is not meant for a one-time transformation, but for continuous and parallel DAG deployments, although you can use it for that purpose too. At its heart, Ditto is a graph manipulation library, with extendable APIs for the actual transformation logic. It does come with out-of-the-box support for EMR to HDInsight transformation though. Installation: pip install airflow-ditto. A quick example: Ditto was created for quickly and conveniently transforming a large number of DAGs which follow a similar pattern. Here's how easy it is to use Ditto: ditto=ditto.AirflowDagTransformer(DAG(dag_id='transformed_dag',default_args=DEFAULT_DAG_ARGS),transformer_resolvers=[ClassTransformerResolver({SlackAPIOperator:TestTransformer1,S3CopyObjectOperator:TestTransformer2,BranchPythonOperator:TestTransformer3,PythonOperator:TestTransformer4})])new_dag=ditto.transform(original_dag) You can put the above call in any Python file which is visible to airflow and the resultant dag loads up thanks to how airflow's dagbag finds DAGs. Source DAG (airflow view) / Transformed DAG. Read the detailed documentation here
airflow-django
No description available on PyPI.
airflow-docker
airflow-dockerDescriptionAn opinionated implementation of exclusively using airflow DockerOperators for all Operators.Default Operatorfromairflow_docker.operatorimportOperatortask=Operator(image='some-image:latest',...)Default Sensorfromairflow_docker.operatorimportSensorsensor=Sensor(image='some-image:latest',...)Task Codefromairflow_docker_helperimportclientclient.sensor(True)Branch OperatorDag Taskfromairflow_docker.operatorimportBranchOperatorbranching_task=BranchOperator(image='some-image:latest',...)Task Codefromairflow_docker_helperimportclientclient.branch_to_tasks(['task1','task2'])Short Circuit OperatorDag Taskfromairflow_docker.operatorimportShortCircuitOperatorshort_circuit=ShortCircuitOperator(image='some-image:latest',...)Task Codefromairflow_docker_helperimportclientclient.short_circuit()# This task will short circuit if this function gets calledContext UsageDag Taskfromairflow_docker.operatorimportOperatortask=Operator(image='some-image:latest',provide_context=True,...)Task Codefromairflow_docker_helperimportclientcontext=client.context()ConfigurationThe following operator defaults can be set under theairflowdockernamespace:force_pull (boolean true/false)auto_remove (boolean true/false)network_modeFor example, to setforce_pullto False by default set the following environment variable like so:exportAIRFLOW__AIRFLOWDOCKER__FORCE_PULL=falsePluginThis package works as an airflow plugin as well. When installed and running airflow, dags can import like sofromairflow.{type,like"operators","sensors"}.{namespecificedinsidethepluginclass}import*i.e.fromairflow.operators.airflow_dockerimportOperatorTestsWe also ship anairflowdocker/testerimage to verify the integrity of your DAG definitions before committing them.One can run the tests against your own dags like so:dockerrun-it-v"${pwd}/dags:/airflow/dags"airflowdocker/testeror else see theairflow-docker-composeproject which ships with atestsubcommand for precisely this purpose.
airflow-docker-compose
airflow-docker-composeDescriptionA reasonably light wrapper arounddocker-composeto make it simple to start a local airflow instance in docker.Usageairflow-docker-compose--help airflow-docker-composeupConfigurationNote, this library assumes thedocker-composeutility is available in your path.In order to use this tool, you should have a localdagsfolder containing your dags. You should also have apyproject.tomlfile which minimally looks like[tool.airflow-docker-compose]docker-network='network-name'In order to set airflow configuration, you can use theairflow-environment-variableskey. This allows you to set anyairflow.cfgvariables like so:[tool.airflow-docker-compose]airflow-environment-variables={AIRWFLOW_WORKER_COUNT=4AIRFLOW__AIRFLOWDOCKER__FORCE_PULL='false'}
airflow-docker-helper
Airflow Docker HelperDescriptionA light sdk to be used by the operators in airflow-docker and in task code to participate in host/container communication.Installationpipinstallairflow-docker-helperUsageSensorfromairflow_docker_helperimportclientifsensed:client.sensor(True)else:client.sensor(False)Short Circuitfromairflow_docker_helperimportclientifshould_short_circuit:client.short_circuit()BranchingYou can pass a list...fromairflow_docker_helperimportclientbranch_to_task_ids=['foo','bar']client.branch_to_tasks(branch_to_task_ids)... or a string.fromairflow_docker_helperimportclientclient.branch_to_tasks('some-other-task')TestingThis library ships with a test client that mocks out all io and filesystem calls. This client also provides all of the relevant mocked out files to allow for assertions around the io.Some higher level assertions are provided. These assertions are based on the lower level file mocks.fromairflow_docker_helper.testingimporttest_clientclient=test_client()client.assert_not_short_circuited()# Passesclient.short_circuit()client.assert_short_circuited()# Passesclient.sensor(True)client.assert_sensor_called_with(True)# Passesclient.assert_sensor_called_with(False)# Failsclient.assert_branched_to_tasks([])# Passesclient.branch_to_tasks(['foo','bar'])client.assert_branched_to_tasks(['bar','foo'])# PassesFor power users, the mocks may be used directly:>>>fromairflow_docker_helper.testingimporttest_client>>>client=test_client()>>>client.branch_to_tasks(['foo','bar'])>>>client._mock_branch_to_tasks_file.mock_calls[call('./__AIRFLOW_META__/branch_operator.txt','wb'),call().__enter__(),call().write(b'["foo", "bar"]'),call().__exit__(None,None,None)]>>>client.short_circuit()>>>client._mock_short_circuit_file.mock_calls[call('./__AIRFLOW_META__/short_circuit.txt','wb'),call().__enter__(),call().write(b'false'),call().__exit__(None,None,None)]>>>client.sensor(True)>>>client._mock_sensor_file.mock_calls[call('./__AIRFLOW_META__/sensor.txt','wb'),call().__enter__(),call().write(b'true'),call().__exit__(None,None,None)]
airflow-duckdb
Airflow DuckDB on KubernetesDuckDBis an in-memory analytical database to run analytical queries on large data sets.Apache Airflowis an open-source platform for developing, scheduling, and monitoring batch-oriented workflows.Apache Airflow is not an ETL tool, but more of a workflow scheduler that can be used to schedule and monitor ETL jobs. Airflow users create DAGs to schedule Spark, Hive, Athena, Trino, BigQuery, and other ETL jobs to process their data.By using DuckDB with Airflow, the users can run analytical queries on local or remote large data sets and store the results without the need to use these ETL tools.To use DuckDB with Airflow, the users can use the PythonOperator with the DuckDB Python library, the BashOperator with the DuckDB CLI, or one of the available Airflow operators that support DuckDB (e.g.airflow-provider-duckdbdeveloped by Astronomer). All of these operators will be running in the worker pod and limited by its resources, for that reason, some users use the Kubernetes Executor to run the tasks in a dedicated Kubernetes pod to request more resources when needed.Setting up Kubernetes Executor could be a bit challenging for some users, especially maintaining the workers docker image. This project provides an alternative solution to run DuckDB with Airflow using the KubernetesPodOperator.How to useThe developed operator is completely based on the KubernetesPodOperator, so it needs cncf-kubernetes provider to be installed in the Airflow environment (preferably the latest version to profit from all the features).Install the packageTo use the operator, you need to install the package in your Airflow environment. You can install the package using pip:pipinstallairflow-duckdbUse the operatorThe operators supports all the parameters of the KubernetesPodOperator, and it has some additional parameters to simplify the usage of DuckDB.Here is an example of how to use the operator:withDAG("duckdb_dag",...)asdag:DuckDBPodOperator(task_id="duckdb_task",query="SELECT MAX(col1) AS FROM READ_PARQUET('s3://my_bucket/data.parquet');",do_xcom_push=True,s3_fs_config=S3FSConfig(access_key_id="{{ conn.duckdb_s3.login }}",secret_access_key="{{ conn.duckdb_s3.password }}",),container_resources=k8s.V1ResourceRequirements(requests={"cpu":"1","memory":"8Gi"},limits={"cpu":"1","memory":"8Gi"},),)FeaturesThe current version of the operator supports the following features:Running one or more DuckDB queries in a Kubernetes podConfiguring the pod resources (requests and limits) to run the queriesConfiguring the S3 credentials securely with a Kubernetes secret to read and write data from/to S3 (AWS S3, MinIO or GCS with S3 compatibility)Using Jinja templating to configure the queryLoading the queries from a filePushing the query result to XComThe project also provides a Docker image with DuckDB CLI and some extensions to use it with Airflow.
airflow-e2e
airflow-e2eThis packages aims to set up the scripts to run Airflow DAGs E2E tests.Installationpipinstallairflow-e2eUsagePre-requisitesBefore generating and running the E2E test scripts, the following folders and files are required to be present in your repository:A folder that contains the Airflow DAGs under testA folder that contains the E2E test suite(s)Optionally, we can have arequirements.txtfile at the root of your repository, which contains all Python packages required by your Airflow scheduler and workers to perform the tasks under tests.In addition, we can optionally have arequirements-dev.txtfile at the root of your repository, which contains all the Python packages required by the test runner to run your E2E test suites.Generating the test scriptsTo generate the Airflow E2E test scripts, run the following command at the root of your repository:Generating Airflow E2E test scripts withoutrequirements.txtandrequirements-dev.txt:airflow-e2e--dagsdags/--teststests/e2eIf you have packages to be installed in the Airflow services:airflow-e2e--dagsdags/--teststests/e2e--with-custom-airflow-packagesIf you have packages to be installed in the test runner service:airflow-e2e--dagsdags/--teststests/e2e--with-custom-test-packagesIf you would like to have a MongoDB service to be set up together:airflow-e2e--dagsdags/--teststests/e2e--with-mongoThis will generate adockerfolder at the root of your repository, and it will contain the following files:<root_of_repository> |- docker/ |- airflow_connections_and_variables_seeder/ | |- connections.yml | |- variables.json |- .envrc |- docker-compose.yml |- docker-compose-dev.yml |- docker-compose-extras.yml |- docker-compose-manual-testing.yml |- docker-compose-tests.ymlIn addition, for your convenience, the followingmakecommands are printed on the console, should you be interested to use them:clean:source./docker/.envrc&&\docker-compose\-f./docker/docker-compose.yml\-f./docker/docker-compose-dev.yml\-f./docker/docker-compose-tests.yml\-f./docker/docker-compose-extras.yml\down--remove-orphans--volumesdev:cleansource./docker/.envrc&&\docker-compose\-f./docker/docker-compose.yml\-f./docker/docker-compose-dev.yml\-f./docker/docker-compose-extras.yml\up-dwait_for_airflow_web_to_be_healthy:until[$$(dockerinspect-f'{{.State.Health.Status}}'airflow-web)="healthy"];do\sleep1;\doneseeded_dev:devwait_for_airflow_web_to_be_healthydockerexecairflow-schedulersh-c\"airflow connections import /tmp/seed/connections.yaml && airflow variables import /tmp/seed/variables.json"e2e:source./docker/.envrc&&\docker-compose\-f./docker/docker-compose.yml\-f./docker/docker-compose-tests.yml\-f./docker/docker-compose-extras.yml\up--exit-code-fromtest-runnerSetting up the E2E testsA.envrcfile is generated in thedocker/folder as well. Replace the values of the fields with the placeholder<SECRET_STRING_TO_BE_FILLED_IN>with actual values of your choice. 
Please remember to add the following to your source code versioning tool ignore file (.gitignorefor Git, for example):.envrc*Even though we may be using dummy credentials for our tests, we should still be vigilant when it comes to committing secrets.Running the E2E testsTo run the E2E tests, you can run the following command:source./docker/.envrc&&\docker-compose\-f./docker/docker-compose.yml\-f./docker/docker-compose-tests.yml\up--exit-code-fromtest-runnerOr, if you have copied the convenientmakecommand from before, you can run:makee2eLicenseGNU GENERAL PUBLIC LICENSE v3TestingTo run the tests, run the following command at the root of the repository:maketestChangelogRefer toCHANGELOG.md
airflow-ecr-plugin
Airflow AWS ECR PluginThis plugin exposes an operator that refreshes ECR login token at regular intervals.AboutAmazon ECRis a AWS managed Docker registry to host private Docker container images. Access to Docker repositories hosted on ECR can be controlled with resource based permissions using AWS IAM.To push/pull images, Docker client must authenticate to ECR registry as an AWS user. An authorization token can be generated using AWS CLIget-login-passwordcommand that can be passed todocker logincommand to authenticate to ECR registry. For instructions on setting up ECR and obtaining login token to authenticate Docker client, clickhere.The authorization token obtained usingget-login-passwordcommand is only valid for 12 hours and Docker client needs to authenticate with fresh token after every 12 hours to make sure it can access Docker images hosted on ECR. Moreover, ECR registries are region specific and separate token should be obtained to authenticate to each registry.The whole process can be quite cumbersome when combined with Apache Airflow. Airflow'sDockerOperatoracceptsdocker_conn_idparameter that it uses to authenticate and pull images from private repositories. In case this private registry is ECR, a connection can be created with login token obtained fromget-login-passwordcommand and the corresponding ID can be passed toDockerOperator. However, since the token is only valid for 12 hours,DockerOperatorwill fail to fetch images from ECR once token is expired.This plugin implementsRefreshEcrDockerConnectionOperatorAirflow operator that can automatically update the ECR login token at regular intervals.InstallationPypipipinstallairflow-ecr-pluginPoetrypoetryaddairflow-ecr-plugin@latestGetting StartedOnce installed, plugin can be loaded viasetuptools entrypointmechanism.Update your package's setup.py as below:fromsetuptoolsimportsetupsetup(name="my-package",...entry_points={'airflow.plugins':['aws_ecr = airflow_ecr_plugin:AwsEcrPlugin']})If you are using Poetry, plugin can be loaded by adding it under[tool.poetry.plugin."airflow.plugins"]section as below:[tool.poetry.plugins."airflow.plugins"]"aws_ecr"="airflow_ecr_plugin:AwsEcrPlugin"Once plugin is loaded, same will be available for import in python modules.Now create a DAG to refresh ECR tokens,fromdatetimeimporttimedeltaimportairflowfromairflow.operatorsimportaws_ecrDEFAULT_ARGS={"depends_on_past":False,"retries":0,"owner":"airflow",}REFRESH_ECR_TOKEN_DAG=airflow.DAG(dag_id="Refresh_ECR_Login_Token",description=("Fetches the latest token from ECR and updates the docker ""connection info."),default_args=DEFAULT_ARGS,schedule_interval=<token_refresh_interval>,# Set start_date to past date to make sure airflow picks up the tasks for# execution.start_date=airflow.utils.dates.days_ago(2),catchup=False,)# Add below operator for each ECR connection to be refreshed.aws_ecr.RefreshEcrDockerConnectionOperator(task_id=<task_id>,ecr_docker_conn_id=<docker_conn_id>,ecr_region=<ecr_region>,aws_conn_id=<aws_conn_id>,dag=REFRESH_ECR_TOKEN_DAG,)Placeholder parameters in above code snippet are defined below:token_refresh_interval: Time interval to refresh ECR login tokens. This should be less than 12 hours to prevent any access issues.task_id: Unique ID for this task.docker_conn_id: The Airflow Docker connection ID corresponding to ECR registry, that will be updated when this operator runs. The same connection ID should be passed toDockerOperatorthat pulls image from ECR registry. 
If the connection does not exist in the Airflow DB, the operator will automatically create it. ecr_region: AWS region of the ECR registry. aws_conn_id: Airflow connection ID corresponding to the AWS user credentials that will be used to authenticate and retrieve a new login token from ECR. This user should at minimum have ecr:GetAuthorizationToken permissions. A filled-in example of the DAG above appears at the end of this entry. Known Issues: If you are running Airflow v1.10.7 or earlier, the operator will fail due to AIRFLOW-3014. The workaround is to update the Airflow connection table password column length to 5000 characters. Acknowledgements: The operator is inspired by Brian Campbell's post on Using Airflow's Docker operator with ECR.
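To make the placeholder descriptions above concrete, here is a filled-in sketch of the same DAG; every ID, region, and interval below is illustrative:

```python
from datetime import timedelta

import airflow
from airflow.operators import aws_ecr

DEFAULT_ARGS = {"depends_on_past": False, "retries": 0, "owner": "airflow"}

# Refresh the token every 8 hours, comfortably under the 12 hour token lifetime.
REFRESH_ECR_TOKEN_DAG = airflow.DAG(
    dag_id="Refresh_ECR_Login_Token",
    description="Fetches the latest token from ECR and updates the docker connection info.",
    default_args=DEFAULT_ARGS,
    schedule_interval=timedelta(hours=8),
    start_date=airflow.utils.dates.days_ago(2),
    catchup=False,
)

aws_ecr.RefreshEcrDockerConnectionOperator(
    task_id="refresh_ecr_token_us_east_1",
    ecr_docker_conn_id="docker_ecr_us_east_1",  # same conn id passed to DockerOperator tasks
    ecr_region="us-east-1",
    aws_conn_id="aws_default",
    dag=REFRESH_ECR_TOKEN_DAG,
)
```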
airflow-ecs-fargate-executor
AWS ECS and Fargate Executor: This is an AWS Executor that delegates every task to a scheduled container on either AWS ECS or AWS Fargate. By default, AWS Fargate will let you run 2000 simultaneous containers, with each container representing 1 Airflow Task. pip install airflow-ecs-fargate-executor Getting Started: In your $AIRFLOW_HOME/plugins folder create a file called ecs_fargate_plugin.py. fromairflow.plugins_managerimportAirflowPluginfromairflow_ecs_fargate_executorimportEcsFargateExecutorclassEcsFargatePlugin(AirflowPlugin):"""AWS ECS & AWS FARGATE Plugin"""name="aws_ecs_plugin"executors=[EcsFargateExecutor] For more information on any of these execution parameters, see the link below: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.run_task For boto3 credential management, see https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html How It Works: Every time Apache Airflow wants to run a task, this executor will use Boto3's ECS.run_task() function to schedule a container on an existing cluster. Then, on every scheduler heartbeat, the executor will check the status of every running container and sync it with Airflow. But Why? Pay for what you use. With Celery, there is no predefined concept of auto-scaling. Therefore the number of worker servers one must constantly provision, pay for, and maintain is a static number. Due to the sporadic nature of batch processing, most of the time most of these servers are not in use. However, during peak hours, these servers become overloaded. Servers require upkeep and maintenance. For example, just one cpu-bound or memory-bound Airflow Task could overload the resources of a server and starve out the celery or scheduler thread, thus causing the entire server to go down. This executor mitigates this risk. Simplicity in Setup. No new libraries are introduced. If you're on the Fargate executor it may take 4 minutes for a task to pop up, but at least it's a constant number. This way, the concept of tracking DAG Landing Times becomes unnecessary. If you have more than 2000 concurrent tasks (which is a lot) then you can always contact AWS to request an increase in this soft limit. AWS ECS v AWS Fargate? AWS Fargate is a serverless container orchestration service, comparable to a proprietary AWS version of Kubernetes. Launching a Fargate Task is like saying "I want these containers to be launched somewhere in the cloud with X CPU and Y memory, and I don't care about the server". AWS Fargate is built on top of AWS ECS, and is easier to manage and maintain. However, it provides less flexibility. AWS ECS is known as "Elastic Container Service", a container orchestration service that uses a designated cluster of EC2 instances that you operate, own, and maintain. I almost always recommend that you go the AWS Fargate route unless you need the custom flexibility provided by ECS. At a glance: Start-up per task: ECS is instantaneous if capacity is available, while Fargate takes 2-4 minutes per task (a constant, O(1) time). Maintenance: with ECS you own, operate, and patch the servers, while Fargate is serverless. Capacity: ECS depends on the number of machines with available space in the cluster, while Fargate allows ~2000 containers (see AWS Limits). Flexibility: ECS is high, Fargate is low. Airflow Configurations: [ecs_fargate] region description: The name of the AWS Region mandatory: even with a custom run_task_template example: us-east-1 cluster description: Name of AWS ECS or Fargate cluster mandatory: even with a custom run_task_template container_name description: Name of the registered Airflow container within your AWS cluster.
This container will receive an airflow CLI command as an additional parameter to its entrypoint. For more info see url to Boto3 docs above.mandatory: even with a custom run_task_templatetask_definitiondescription: Name of AWS Task Definition. For more info see url to Boto3 docs above.launch_typedescription: Launch type can either be 'FARGATE' OR 'EC2'. For more info see url to Boto3 docs above.default: FARGATEplatform_versiondescription: AWS Fargate is versioned.See this page for more detailsdefault: LATESTassign_public_ipdescription: Assign public ip. For more info see url to Boto3 docs above.security_groupsdescription: Security group ids for task to run in (comma-separated). For more info see url to Boto3 docs above.subnetsdescription: Subnets for task to run in (comma-separated). For more info see url to Boto3 docs above.example: subnet-XXXXXXXX,subnet-YYYYYYYYrun_task_templatedescription: This is the default configuration for calling the ECSrun_taskfunction API (see url above). To change the parameters used to run a task in FARGATE or ECS, the user can overwrite the path to specify another jinja-templated JSON. More documentation can be found in theExtensibilitysection below.mandatory: even with a custom run_task_templatedefault: default_aws_ecs.DEFAULT_AWS_ECS_CONFIGNOTE: Modify airflow.cfg or export environmental variables. For example:AIRFLOW__ECS_FARGATE__REGION="us-west-2"ExtensibilityThere are many different ways to run an ECS or Fargate Container. You may want specific container overrides, environmental variables, subnets, etc. This project does not attempt to wrap around the AWS API. Instead, it allows the user to offer their own configuration in the form of Python dictionary, which are then passed in to Boto3's run_task function as **kwargs.In this example we will modify the DEFAULT_AWS_ECS_CONFIG. Note, however, there is nothing that's stopping us from complete overriding it and providing our own config. If we do so, the only manditory Airflow Configurations areregion,cluster,container_name, andrun_task_template.For example:exportAIRFLOW__AWS_ECS__RUN_TASK_TEMPLATE="aws_ecs_configs.AWS_ECS_CONFIG"# filename: AIRFLOW_HOME/plugins/aws_ecs_config.pyfromaws_ecs_default_configsimportDEFAULT_AWS_ECS_CONFIG# Add environmental variables to contianer overridesAWS_ECS_CONFIG=DEFAULT_AWS_ECS_CONFIGAWS_ECS_CONFIG['overrides']['containerOverrides'][0]['environment']=['SOME_ENV_A','SOME_ENV_B']Custom Container RequirementsThis means that you can specify CPU, Memory, and GPU requirements on a task.task=PythonOperator(python_callable=lambda*args,**kwargs:print('hello world'),task_id='say_hello',executor_config=dict(cpu=256,memory=512),dag=dag)
airflow-env-patch
Patch airflowenv_varsset env variables to - os.environ - spark.yarn.appMasterEnv - spark.executorEnv
airflow-exporter
Airflow prometheus exporter: Exposes dag and task based metrics from Airflow to a Prometheus compatible endpoint. Compatibility with Airflow versions: >=2.0: the current version is compatible with Airflow 2.0+. <=1.10.14, >=1.10.3: version v1.3.2 is compatible. Note for Airflow 1.10.14 with Python 3.8 users: you should install the importlib-metadata package in order for the plugin to be loaded. See #85 for details. <1.10.3: version v0.5.4 is compatible. Install: pip install airflow-exporter. That's it. You're done. Exporting extra labels to Prometheus: It is possible to add extra labels to DAG-related metrics by providing a labels dict to the DAG params. Example: dag=DAG('dummy_dag',schedule_interval=timedelta(hours=5),default_args=default_args,catchup=False,params={'labels':{'env':'test'}}) Label env with value test will be added to all metrics related to dummy_dag: airflow_dag_status{dag_id="dummy_dag",env="test",owner="owner",status="running"} 12.0 Metrics: Metrics will be available at http://<your_airflow_host_and_port>/admin/metrics/ (a short Python snippet for fetching this endpoint is included at the end of this entry). airflow_task_status Labels: dag_id, task_id, owner, status. Value: number of tasks in a specific status. airflow_dag_status Labels: dag_id, owner, status. Value: number of dags in a specific status. airflow_dag_run_duration Labels: dag_id: unique identifier for a given DAG. Value: duration in seconds of the longest DAG Run for the given DAG. This metric is not available for DAGs that have already finished. airflow_dag_last_status Labels: dag_id, owner, status. Value: 0 or 1 depending on whether the current state of each dag_id is status. License: Distributed under the BSD license. See LICENSE for more information.
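As mentioned above, here is a short Python snippet for fetching the metrics endpoint, e.g. from a smoke test; the host and port are illustrative, and if your webserver enforces authentication you will need to supply credentials as well:

```python
import requests

# Fetch the Prometheus-compatible endpoint exposed by the exporter and print
# only the airflow_* series. Host and port below are illustrative.
resp = requests.get("http://localhost:8080/admin/metrics/", timeout=10)
resp.raise_for_status()
for line in resp.text.splitlines():
    if line.startswith("airflow_"):
        print(line)
```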
airflow-extended-api
中文版文档Airflow Extended API PluginAirflow Extended API, which exportairflow CLI commandas REST-ful API to extend the ability of airflow official API.This plugin is available for airflow 2.x Version and extensible, as you can easily define your own API to execute any Airflow CLI command so that it fits your demand.Current Supported CommandsThe following commands are supported now, and more is coming.airflow dags backfillairflow tasks runairflow tasks clearPlugin InstallInstall the plugin viapippipinstallairflow-extended-apiRestart the Airflow Web ServerOpen Airflow UI inDocs - Extended API OpenAPIorhttp://localhost:8080/to view extended API details in swagger UI.UsageExamplescurl request example:curl-XPOST--user"airflow:airflow"https://localhost:8080/api/extended/clear-H"Content-Type: application/json"-d'{"dagName": "string","downstream": true,"endDate": "2019-08-24T14:15:22Z","jobName": "string","startDate": "2019-08-24T14:15:22Z","username": "Extended API"}'Response Schema:{"executed_command":"string","exit_code":0,"output_info":["string"],"error_info":["string"]}curl without Credentials dataNote that you will need to pass credentials' data in--user "{username}:{password}"format, or you will get an Unauthorized error.curl-XPOSThttp://127.0.0.1:8080/api/extended/clear-H"Content-Type: application/json"-d'{"dagName": "string","downstream": true,"endDate": "2019-08-24T14:15:22Z","jobName": "string","startDate": "2019-08-24T14:15:22Z","username": "Extended API"}'response{"detail":null,"status":401,"title":"Unauthorized","type":"https://airflow.apache.org/docs/apache-airflow/2.2.5/stable-rest-api-ref.html#section/Errors/Unauthenticated"}curl with wrong CLI Commandcurl-XPOST--user"airflow:airflow"http://127.0.0.1:8080/api/extended/clear-H"Content-Type: application/json"-d'{"dagName": "string","downstream": true,"endDate": "2019-08-24T14:15:22Z","jobName": "string","startDate": "2019-08-24T14:15:22Z","username": "Extended API"}'response{"error_info":["Traceback (most recent call last):"," File \"/home/airflow/.local/bin/airflow\", line 8, in <module>"," sys.exit(main())"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py\", line 48, in main"," args.func(args)"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py\", line 48, in command"," return func(*args, **kwargs)"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py\", line 92, in wrapper"," return f(*args, **kwargs)"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py\", line 506, in task_clear"," dags = get_dags(args.subdir, args.dag_id, use_regex=args.dag_regex)"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py\", line 203, in get_dags"," return [get_dag(subdir, dag_id)]"," File \"/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py\", line 193, in get_dag"," f\"Dag {dag_id!r} could not be found; either it does not exist or it failed to parse.\"","airflow.exceptions.AirflowException: Dag 'string' could not be found; either it does not exist or it failed to parse.",""],"executed_command":"airflow tasks clear string -e 2019-08-24T14:15:22+00:00 -s 2019-08-24T14:15:22+00:00 -t string -y -d","exit_code":1,"output_info":["[\u001b[34m2022-04-22 10:05:50,538\u001b[0m] {\u001b[34mdagbag.py:\u001b[0m500} INFO\u001b[0m - Filling up the DagBag from /opt/airflow/dags\u001b[0m",""]}Project Plansupport custom configurationLinks and ReferencesAirflow configuration documentationAirflow 
CLI command documentationThis project was inspired by the following projects:andreax79/airflow-code-editorairflow-plugins/airflow_api_pluginContact email: Eric [email protected]
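For reference, the curl call shown earlier can also be issued from Python with requests; the host, credentials, and payload below simply mirror that example and are illustrative:

```python
import requests

# Mirror of the curl example above: call the extended API's /clear endpoint
# with HTTP basic auth. Host, credentials, and payload values are illustrative.
response = requests.post(
    "http://localhost:8080/api/extended/clear",
    auth=("airflow", "airflow"),
    json={
        "dagName": "string",
        "downstream": True,
        "endDate": "2019-08-24T14:15:22Z",
        "jobName": "string",
        "startDate": "2019-08-24T14:15:22Z",
        "username": "Extended API",
    },
    timeout=60,
)
print(response.status_code, response.json())
```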
airflow-extended-metrics
Airflow ExtensionsExtended MetricsThis package is to help integrate any project with extended metrics. It currently supports Google's Stackdriver API as a way to monitor metrics. There is also easy integration between the API and an Airflow project.
airflow-extension-metrics
No description available on PyPI.
airflow-extension-triggers
No description available on PyPI.
airflow-faculty-plugin
No description available on PyPI.
airflow-file-to-bq
No description available on PyPI.
airflow-framework
Failed to fetch description. HTTP Status Code: 404
airflow-fs
airflow-fs: airflow-fs is a Python package that provides hooks and operators for manipulating files across a variety of file systems using Apache Airflow. Why airflow-fs? Airflow-fs implements a single interface for different file system hooks, in contrast to Airflow's builtin file system hooks/operators. This approach allows us to interact with files independently of the underlying file system, using a common set of operators for performing general operations such as copying and deleting files (see the sketch at the end of this entry). Currently, airflow-fs supports the following file systems: local, FTP, HDFS, S3 and SFTP. Support for additional file systems can be added by implementing an additional file system hook that adheres to the required hook interface. See the documentation for more details. Documentation: Detailed documentation is available at: https://jrderuiter.github.io/airflow-fs. License: This software is freely available under the MIT license. History: Version 0.1.0: Initial release supporting local, FTP, HDFS, S3 and SFTP file systems.
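As an illustration of the common-interface idea described above, here is a sketch of copying a file between two file systems. The hook and operator names follow the pattern used in the package's documentation, but treat the exact class names, argument names, and signatures as assumptions and check the linked docs before relying on them:

```python
# Hypothetical sketch only: verify class and argument names against the
# airflow-fs documentation linked above before using.
from airflow_fs.hooks import FtpHook, S3Hook
from airflow_fs.operators import CopyFileOperator

copy_csv = CopyFileOperator(
    task_id="copy_csv_ftp_to_s3",
    src_path="/exports/*.csv",                 # path on the source file system
    dest_path="my-bucket/exports/",            # path on the destination file system
    src_hook=FtpHook(conn_id="ftp_default"),   # any two hooks can be combined
    dest_hook=S3Hook(conn_id="s3_default"),
)
```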
airflow-gcpsecretmanager-adapter
airflow-gcpsecretmanager-adapter Overview: This repository contains a Python package to allow Airflow variables that do not align with the GCP Secret Manager naming convention to be used with the secrets backend. GCP Secret Manager has constraints on the characters that may be used when naming a secret (alphanumerics, hyphens and underscores only), however variables in Airflow are not restricted in any way. Therefore in some cases it may be necessary to transform the name of the Airflow variable to a format acceptable for Secret Manager to use. For example the Airflow variable name global::myvar is not permitted as a GCP secret name, so this must instead be translated to global-myvar, which is permitted. This package can be installed as an alternative secrets backend within Airflow, as described here: https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html#roll-your-own-secrets-backend Installation: pip install airflow-gcpsecretmanager-adapter Current replacements: the source character sequences :: and : are each replaced with - (a small sketch of this rule appears at the end of this entry). Deployment: Follow the instructions here: https://packaging.python.org/en/latest/tutorials/packaging-projects/ to build and deploy a new version of the adapter library.
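To make the replacement rule above concrete, the transformation amounts to rewriting the Airflow variable name into a form Secret Manager accepts, roughly like this (an illustration of the rule as described above, not the adapter's actual code):

```python
# Illustration of the naming rule described above, not the adapter's implementation:
# '::' and ':' in an Airflow variable name are replaced with '-' so the result
# contains only characters that Secret Manager accepts.
def to_secret_manager_name(airflow_key: str) -> str:
    return airflow_key.replace("::", "-").replace(":", "-")

assert to_secret_manager_name("global::myvar") == "global-myvar"
```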
airflow-gitlab-webhook
Airflow Gitlab Webhook Plugin Description: A plugin for Apache Airflow that exposes a REST endpoint for Gitlab Webhooks. System Requirements: Airflow Versions 1.10.2 or newer. Deployment Instructions: Install the plugin: pip install airflow-gitlab-webhook Update the airflow.cfg configuration file adding the gitlab_plugin section: [gitlab_plugin] repository_url = http://example.com/mike/diaspora.git token = 62b32508-b1ad-44d2-97d1-80021a8d7576 dag = tutorial (Optional) Configure other repositories: repository_url1 = http://example.com/bla.git token1 = my-secret dag1 = git_update repository_url = Gitlab repository URL token = Optional Secure Token dag = DAG to be run when the push event is received Configure a Gitlab Webhook (push event) for the repository: https://docs.gitlab.com/ee/user/project/integrations/webhooks.html Restart the Airflow Web Server. Endpoints: push: Gitlab Push Event: POST - https://{HOST}:{PORT}/webhooks/gitlab/push
airflow-google-ads-api-report-fetcher
Using gaarf in Airflow: If you want to use Apache Airflow to run any gaarf-based projects you can use the airflow-google-ads-api-report-fetcher package. Install it with pip install airflow-google-ads-api-report-fetcher - it will make the airflow_gaarf library available. Install the latest development version with pip install -e git+https://github.com/google/ads-api-report-fetcher.git#egg=airflow-google-ads-api-report-fetcher\&subdirectory=py/airflow_gaarf The library comes with two operators - GaarfOperator and GaarfBqOperator - which can be used to simplify executing google_ads_queries and bq_queries respectively. Setup: Connections: The template pipeline expects two types of connections. Go to Admin - Connections, add a new connection (type Generic) and in Extra add the values specified below: google_ads_default: {"google_ads_client": {"developer_token": "", "client_id": "", "client_secret": "", "refresh_token": "", "login_customer_id": "", "client_customer_id": "", "use_proto_plus": "true" } } gcp_conn_id: {"cloud": {"project_id": "your-project"} } Examples: Once the above connections are set up you may proceed to configuring the DAG. The examples folder contains several DAGs you might find useful: 01_gaarf_console_reader_console_writer.py - a simple DAG which consists of a single GaarfOperator that fetches data from an inline query and outputs results to the console. 02_gaarf_file_reader_csv_writer.py - a DAG that reads a query from a file (local or remote storage) and saves results to CSV. 03_gaarf_read_solution_directory - a DAG that reads queries from a directory and builds its own task for each query.
airflow-google-cloud-run-plugin
airflow-google-cloud-run-pluginAirflow plugin for orchestratingGoogle Cloud Run jobs.FeaturesEasier to use alternative toKubernetesPodOperatorSecurely use sensitive data stored in Google Cloud Secrets ManagerCreate tasks with isolated dependenciesEnables polyglot workflowsResourcesCore OperatorsCloudRunJobOperatorCRUD-Based OperatorsCloudRunCreateJobOperatorCloudRunGetJobOperator🔜CloudRunUpdateJobOperator🔜CloudRunDeleteJobOperatorCloudRunListJobsOperator🔜HooksCloudRunJobHookSensorsCloudRunJobExecutionSensor🔜UsageSimple Job LifecyclefromairflowimportDAGfromairflow_google_cloud_run_plugin.operators.cloud_runimportCloudRunJobOperatorwithDAG(dag_id="example_dag")asdag:job=CloudRunJobOperator(task_id="example-job",name="example-job",location="us-central1",project_id="example-project",image="gcr.io/gcp-runtimes/ubuntu_18_0_4",command=["echo"],cpu="1000m",memory="512Mi",create_if_not_exists=True,delete_on_exit=True)CRUD Job LifecyclefromairflowimportDAGfromairflow_google_cloud_run_plugin.operators.cloud_runimport(CloudRunJobOperator,CloudRunCreateJobOperator,CloudRunDeleteJobOperator,)withDAG(dag_id="example_dag")asdag:create_job=CloudRunCreateJobOperator(task_id="create",name="example-job",location="us-central1",project_id="example-project",image="gcr.io/gcp-runtimes/ubuntu_18_0_4",command=["echo"],cpu="1000m",memory="512Mi")run_job=CloudRunJobOperator(task_id="run",name="example-job",location="us-central1",project_id="example-project")delete_job=CloudRunDeleteJobOperator(task_id="delete",name="example-job",location="us-central1",project_id="example-project")create_job>>run_job>>delete_jobUsing Environment VariablesfromairflowimportDAGfromairflow_google_cloud_run_plugin.operators.cloud_runimportCloudRunJobOperator# Simple environment variableFOO={"name":"FOO","value":"not_so_secret_value_123"}# Environment variable from Secret ManagerBAR={"name":"BAR","valueFrom":{"secretKeyRef":{"name":"super_secret_password","key":"1"# or "latest" for latest secret version}}}withDAG(dag_id="example_dag")asdag:job=CloudRunJobOperator(task_id="example-job",name="example-job",location="us-central1",project_id="example-project",image="gcr.io/gcp-runtimes/ubuntu_18_0_4",command=["echo"],args=["$FOO","$BAR"],env_vars=[FOO,BAR],cpu="1000m",memory="512Mi",create_if_not_exists=True,delete_on_exit=True)Improvement SuggestionsAdd support for Cloud Run servicesNicer user experience for defining args and commandsUse approach from other GCP operators once this issue is resolvedhttps://github.com/googleapis/python-run/issues/64Add operators for all CRUD operationsAdd run sensor (seelink)Enable volume mounts (seeTaskSpec)Allow user to configure resource requirementsrequests( seeResourceRequirements)Add remaining container options (seeContainer)Allow non-default credentials and for user to specify service account ( seelink)Allow failure threshold. If more than one task is specified, user should be allowed to specify number of failures allowedAdd custom links for log URIsAdd wrapper class for easier environment variable definition. Similar toSecretfrom Kubernetes provider ( seelink)Add slight time padding between job create and runAdd ability to choose to replace the job with new config values if values have changed
airflow-gpg-plugin
Airflow GPG Plugin: an Airflow plugin with hooks and operators to work with GPG encryption and decryption. Installation: Use the package manager pip to install airflow-gpg-plugin: pip install airflow-gpg-plugin Usage: Add an airflow connection from the shell. login is the email address in the GPG key. password is the passphrase of the GPG key. airflowconnectionsadd'gpg_default_conn'\--conn-type'gpg'\--conn-login'[email protected]'\--conn-password'gpgexamplepassphrase'\--conn-host''\--conn-port''\--conn-extra'{"key_file": "tests/resources/gpgexamplepassphrase.asc"}' Using operators to encrypt and decrypt files: importosfromdatetimeimportdatetimefromairflowimportDAGfromairflow_gpg_plugin.operators.gpg_decrypt_file_operatorimportGPGDecryptFileOperatorfromairflow_gpg_plugin.operators.gpg_encrypt_file_operatorimportGPGEncryptFileOperatorgpg_conn_id="gpg_default_conn"dag=DAG(dag_id="gpg_example",start_date=datetime(2021,1,1),schedule_interval=None)encrypt=GPGEncryptFileOperator(task_id="gpg_encrypt",dag=dag,conn_id=gpg_conn_id,input_file_path=os.curdir+"/README.md",output_file_path=os.curdir+"/README.md.gpg")decrypt=GPGDecryptFileOperator(task_id="gpg_decrypt",dag=dag,conn_id=gpg_conn_id,input_file_path=os.curdir+"/README.md.gpg",output_file_path=os.curdir+"/README.md.txt")encrypt>>decrypt Contributing: Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate. License: MIT
airflow-grpc
Airflow Grpc Operator: forked from the gRPC operator in Airflow 2.0; fixes using the gRPC operator on Airflow 1.10.x. Requirements: apache-airflow, grpcio, protobuf. How to install: pip install airflow-grpc How to use: airflow 1.10.x: from airflow_grpc.grpc_operator import GrpcOperator args = { 'owner': 'Airflow', 'start_date': days_ago(1), } dag = DAG( dag_id='dag_id', default_args=args, schedule_interval=None ) def callback(response: Any, **context): return response run_this = GrpcOperator(task_id='task_id', dag=dag, grpc_conn_id='your_grpc_connection_id_on_admin_connections', stub_class=YOUR_GRPC_STUB_CLASS, call_func='your_grpc_stub_function', request_data_func=YOUR_GRPC_MESSAGE_FOR_REQUEST, response_callback=YOUR_RESPONSE_METHOD, xcom_task_id='XCOM_TASK_ID', data=YOUR_REQUEST_DATA_DICT ) airflow 2.0.0: First, you must implement the stub_class method: import xxx_pb2_grpc, xxx_pb2 from utils.grpc_hook import BaseGrpcHook class xxxGrpcHook(BaseGrpcHook): def stub_class(self, channel): return xxx_pb2_grpc.xxxServiceStub(channel) Then use the implemented class: xxxGrpcHook().run('MethodName', {'request': xxx_pb2.xxxRequestMethod(**data_dict)})
airflow-hdinsight
airflow-hdinsightA set of airflow hooks, operators and sensors to allow airflow DAGs to operate with the Azure HDInsight platform, for cluster creation and monitoring as well as job submission and monitoring. Also included are some enhanced Azure Blob and Data Lake sensors.This project is both an amalgamation and enhancement of existing open source airflow extensions, plus new extensions to solve the problem.Installationpip install airflow-hdinsightExtensionsairflowhdiTypeNameWhat it doesHookAzureHDInsightHookUses the HDInsightManagementClient from theHDInsight SDK for Pythonto expose several operations on an HDInsight cluster - get cluster state, create, delete.OperatorAzureHDInsightCreateClusterOperatorUse the AzureHDInsightHook to create a clusterOperatorAzureHDInsightDeleteClusterOperatorUse the AzureHDInsightHook to delete a clusterOperatorConnectedAzureHDInsightCreateClusterOperatorExtends the AzureHDInsightCreateClusterOperator to allow fetching of the security credentials and cluster creation spec from an airflow connectionOperatorAzureHDInsightSshOperatorUses the AzureHDInsightHook and SSHHook to run an SSH command on the master node of the given HDInsight clusterSensorAzureHDInsightClusterSensorA sensor to monitor the provisioning state or running state (can switch between either mode) of a given HDInsight cluster. Uses the AzureHDInsightHook.SensorWasbWildcardPrefixSensorAn enhancement to theWasbPrefixSensorto support sensing on a wildcard prefixSensorAzureDataLakeStorageGen1WebHdfsSensorUses airflow'sAzureDataLakeHookto sense a glob path (which implicitly supports wildcards) on ADLS Gen 1. ADLS Gen 2 is not yet supported in airflow.airflowlivyTypeNameWhat it doesHookLivyBatchHookUses the Apache LivyBatch APIto submit spark jobs to a livy server, get batch state, verify batch state by quering either the spark history server or yarn resource manager, spill the logs of the spark job post completion, etc.OperatorLivyBatchOperatorUses the LivyBatchHook to submit a spark job to a livy serverSensorLivyBatchSensorUses the LivyBatchHook to sense termination and verify completion, spill logs of a spark job submitted earlier to a livy serverOrigins of the HDinsight operator workThe HDInsight operator work is loosely inspired fromalikemalocalan/airflow-hdinsight-operators, however that has a huge number of defects, as to why it wasnever acceptedto bemergedinto airflow in the first place. This project solves all of those issues and more, and is frankly a full rewrite.Origins of the livy workThe livy batch operator is based on the work bypanovvv's projectairfllow-livy-operators. 
It makes some necessary changes: Separates the operator into a hook (LivyBatchHook), an operator (LivyBatchOperator) and a sensor (LivyBatchSensor). Adds additional verification and log spilling to the sensor (the original sensor does not). Removes additional verification and log spilling from the operator, hence allowing an async pattern akin to the EMR add step operator and step sensor. Creates livy, spark and YARN airflow connections dynamically from an Azure HDInsight connection. Returns the batch ID from the operator so that a sensor can use it after being passed through XCom. Changes logging to LoggingMixin calls. Allows templatization of fields. State of airflow livy operators in the wild: As it stands today (June of 2020), there are multiple airflow livy operator projects out there: panovvv/airflow-livy-operators: the project which this project bases its work on. The official livy provider in airflow 2.0, with a backport available for airflow 1.10.x: alas the official provider has very limited functionality - it does not spill the job's logs, and it does not do additional verification for job completion using the spark history server or yarn resource manager, amongst other limitations. rssanders3/airflow-spark-operator-plugin: this is the oldest livy operator, which only supports livy sessions and not batches. There's a copy of this in alikemalocalan/airflow-hdinsight-operators.
airflow-helper
About the Airflow Helper: It's pretty fresh. The docs may not be clear yet, keep calm! I will update them soon :) Airflow Helper is a tool that currently allows setting up Airflow Variables, Connections, and Pools from a YAML configuration file. It supports YAML inheritance & can obtain all settings from an existing Airflow server! In the future, it can be extended with other helpful features. I'm open to any suggestions and feature requests. Just open an issue and describe what you want. Motivation: This project allows you to set up Connections & Variables & Pools for Airflow from a YAML config, and to export them to one config file. Yeah, I know, I know… secrets backend… But I want to have all variables on my local machine too, without the need to connect to any secrets backend. And in tests as well! So I want a tool with which I can define all needed connections & variables once in a config file & forget about them when initialising a new environment on a local machine or running tests in CI. Some of the functionality looks like it duplicates the normal airflow CLI, but no.. I tried to use, for example, the airflow connections export command, but it exports dozens of default connections that I'm not interested in - I don't want them, I want only those connections that were created by me. Airflow Versions Support: You can see the github pipeline that tests the library against each Airflow version. I can only guarantee that the library works 100% with the Apache Airflow versions that are added to the CI/CD pipeline, but there is a good chance it works with all 2.x Apache Airflow versions. How to use: Installation: With Python in a virtualenv from PyPI: https://pypi.org/project/airflow-helper/ pip install airflow-helper airflow-helper --version With the docker image from Docker Hub: https://hub.docker.com/repository/docker/xnuinside/airflow-helper/ # pull image docker pull xnuinside/airflow-helper:latest # sample how to run command docker run -it xnuinside/airflow-helper:latest --help Example of how to use it in docker-compose: example/docker-compose-example.yaml Default settings: All arguments required in the CLI or Python code have a 'default' setting; you can check all of them in the file 'airflow_helper/settings.py'. Airflow Helper settings & flags: You can configure how you want to use the config - overwrite existing variables/connections/pools with values from the config, just skip them, or raise an error if they already exist. In the CLI (or as arguments to the Python main class, if you use the helper directly from Python) there are several useful flags that you can use: airflow-helper load [OPTIONS] [FILE_PATH] #options: --url TEXT Apache Airflow full url to connect. You can provide it or host & port separately.
[default: None]--host TEXT Apache Airflow server host form that obtain existed settings [default: http://localhost] --port TEXT Apache Airflow server port form that obtain existed settings [default: 8080] --user -u TEXT Apache Airflow user with read rights [default: airflow] --password -p TEXT Apache Airflow user password [default: airflow] --overwrite -o Overwrite Connections & Pools if they already exists --skip-existed -se Skip `already exists` errors --help -h Show this message and exit.airflow-helper create [OPTIONS] COMMAND [ARGS]#commands:from-server Create config with values from existed Airflow Server new Create new empty config#options--help -h Show this message and exit.What if I already have Airflow server with dozens of variables??Obtain current Variables, Connections, Pools from existed serverNote: you should provide host url with protocol like: ‘https://path-to-your-airflow-server.com’ if protocol not in url, it will add ‘http://’ as default protocolGenerate config from existed Airflow Server - it is simple. Just provide creds with read access to existed Airflow Server like. We use Airflow REST API under the hood, so we need:- server host & port or just url in format 'http://path-to-airflow:8080' - user login - user passwordAnd use Airflow Helper:From cli# to get help airflow-helper create -h # to use command airflow-helper create path/where/to/save/airflow_settings.yaml --host https://your-airflow-host --port 8080 -u airflow-user -p airflow-passwordFrom python codefromairflow_helperimportRemoteConfigObtainter# by default it will save config in file airflow_settings.yamlRemoteConfigObtainter(user='airflow_user',password='airflow_user_pass',url='https://path-to-airflow:8080').dump_config()# but you can provide your own path like:RemoteConfigObtainter(user='airflow_user',password='airflow_user_pass',url='https://path-to-airflow:8080').dump_config(file_path='any/path/to/future/airflow_config.yaml')It will create airflow_settings.yaml with all Variables, Pools & Connections inside!Define config from ScratchYou can init empty config with cliairflow-helper create new path/airflow_settings.yamlIt will create empty sample-file with pre-defined config values.Define airflow_settings.yaml file. You can check examples as a files in example/ folder in this git repo (check ‘Config keys’ to see that keys are allowed - or check example/ folder)About connections: Note that ‘type’ it is not Name of Connection type. It is type id check them here -https://github.com/search?q=repo%3Aapache%2Fairflow%20conn_type&type=codeairflow:connections:-conn_type:fsconnection_id:fs_defaulthost:localhostlogin:fs_defaultport:nullpools:-description:Default poolinclude_deferred:falsename:default_poolslots:120-description:''include_deferred:truename:deferredslots:0variables:-description:nullkey:variable-namevalue:"variable-value"Run Airflow Helper to load configRequired settings:path to config file (by default it searchairflow_settings.yamlfile)Airflow Server address (by default it tries to connect to localhost:8080)Airflow user login (with admin rights that allowed to set up Pools, Variables, Connections)Airflow user password (for login upper)2.1 Run Airflow Helper from cli#togethelpairflow-helper load -h#toloadconfigairflow-helper load path/to/airflow_settings.yaml --host https://your-airflow-host --port 8080 -u airflow-user -p airflow-password 2.2. 
Define config from scratch

You can init an empty config with the CLI:

airflow-helper create new path/airflow_settings.yaml

It will create an empty sample file with pre-defined config values.

Define the airflow_settings.yaml file. You can check example files in the example/ folder of the git repo (check 'Config keys' to see which keys are allowed, or check the example/ folder).

About connections: note that 'type' is not the name of the connection type, it is the type id; check them here: https://github.com/search?q=repo%3Aapache%2Fairflow%20conn_type&type=code

airflow:
  connections:
    - conn_type: fs
      connection_id: fs_default
      host: localhost
      login: fs_default
      port: null
  pools:
    - description: Default pool
      include_deferred: false
      name: default_pool
      slots: 120
    - description: ''
      include_deferred: true
      name: deferred
      slots: 0
  variables:
    - description: null
      key: variable-name
      value: "variable-value"

Run Airflow Helper to load the config

Required settings:

- path to the config file (by default it searches for an airflow_settings.yaml file)
- Airflow server address (by default it tries to connect to localhost:8080)
- Airflow user login (with admin rights that allow setting up Pools, Variables, Connections)
- Airflow user password (for the login above)

2.1 Run Airflow Helper from the CLI

# to get help
airflow-helper load -h
# to load the config
airflow-helper load path/to/airflow_settings.yaml --host https://your-airflow-host --port 8080 -u airflow-user -p airflow-password

2.2 Run Airflow Helper from Python code

from airflow_helper import ConfigUploader

# you can provide only url, or host & port
ConfigUploader(file_path=file_path, url=url, host=host, port=port, user=user, password=password).upload_config_to_server()

Inheritance (include one config in another)

I love inheritance, so you can use it too. If you have some base vars/pools/connections for all environments and you don't want to copy-paste the same settings into multiple files, just use the include: property at the start of your config.

Note that include allows you to include a list of files; they will be inherited one by one in the order you define under the include arg, from top to bottom.

Example: define your 'base' config, for example airflow_settings_base.yaml:

connections:
  - conn_type: fs
    connection_id: fs_default
    host: localhost
    login: fs_default
    port: null
pools:
  - description: Default pool
    include_deferred: false
    name: default_pool
    slots: 120

Now create your dev-env config, airflow_settings_dev.yaml (names can be anything you want), and use the include: property inside it:

include:
  - "airflow_settings_base.yaml"

# here put only dev-specific variables/connections/pools
airflow:
  variables:
    pass

This means that the final config uploaded to the server will contain the base settings plus the settings that you defined directly in the airflow_settings_dev.yaml config.

Library Configuration

Airflow Helper uses a bunch of 'default' settings under the hood. Because the library uses pydantic-settings, you can also overwrite those settings with environment variables or by monkey-patching in Python code.

To get the full list of possible default settings, check the file airflow_helper/settings.py. If you have never heard about pydantic-settings, check https://docs.pydantic.dev/latest/concepts/pydantic_settings/.

For example, to overwrite the default Airflow host you should provide an environment variable with the prefix AIRFLOW_HELPER_ and the name HOST, so the variable name should look like AIRFLOW_HELPER_HOST.

TODO

- Documentation website
- Getting Variables, Pools, Connections directly from the Airflow DB (currently available only via the Airflow REST API)
- Load configs from S3 and other cloud object storages
- Load configs from git
- Create overwrite mode for settings upload

Changelog

0.2.0
- Added a check for variables: now, if a variable already exists on the server, Airflow Helper raises an error if you try to overwrite it from the config. To overwrite existing Variables, Connections, and Pools, use the '--overwrite' flag, or the argument with the same name if you use Airflow Helper from Python.
- Added the --skip-existed flag to avoid raising an error if variables/connections/pools already exist on the Airflow server; it will just add new ones from the config file.

0.1.2
- Do not fail if some sections are missing from the config.

0.1.1
- Overwrite option added to the airflow-helper load command
airflow-hop-plugin
Hop Airflow plugin

This is an Apache Hop plugin for Apache Airflow that makes it possible to orchestrate Apache Hop pipelines and workflows from Airflow.

Requirements

Before setting up the plugin you must have completely set up your Hop environment:

- Configured Hop server
- Configured remote pipeline configuration
- Configured remote workflow configuration

To do so, go to the metadata window in the Hop UI by pressing Ctrl+Shift+M or clicking the metadata icon, then:

- Double click Hop server to create a new configuration.
- Double click Pipeline Run Configuration to create a new configuration.
- Double click Workflow Run Configuration to create a new configuration.

Set up guide

The following content is a "how to set up the plugin" guide, plus some requirements and restrictions regarding its usage.

1. Generate metadata.json

For the correct configuration of this plugin, a file containing all of Hop's metadata must be created inside each project directory. This can be done by exporting it from the Hop UI. Please note that this process must be repeated each time the metadata of a project is modified.

2. Install the plugin

The first step to get this plugin working is to install the package using the following command:

pip install airflow-hop-plugin

3. Hop directory structure

Due to some technical limitations, it's really important for the Hop home directory to have the following structure:

hop # This is the hop home directory
├── ...
├── config
│   ├── hop-config.json
│   ├── example_environment.json # This is where you should save your environment files
│   ├── metadata
│   │   └── ...
│   └── projects
│       ├── ...
│       └── example_project # This is how your project's directory should look
│           ├── metadata.json
│           ├── metadata
│           │   └── ...
│           ├── example_directory
│           │   └── example_workflow.hwl
│           └── example_pipeline.hpl
├── ...

Moreover, please remember to save all projects inside the "projects" directory and to set a path relative to the Hop home directory when configuring them (the upstream README illustrates this with a screenshot of the project configuration dialog).

4. Create an Airflow connection

To correctly use the operators you must create a new Airflow connection. There are multiple ways to do so, and you can use whichever you prefer, but it should have these values for the following attributes:

- Connection ID: 'hop_default'
- Connection Type: 'http'
- Login: apache_hop_username
- Password: apache_hop_password
- Host: apache_hop_server
- Port: apache_hop_port
- Extra: "hop_home": "/path/to/hop-home/"

Example of a new Airflow connection using Airflow's CLI:

airflow connections add 'hop_default' \
    --conn-json '{
        "conn_type": "http",
        "login": "cluster",
        "password": "cluster",
        "host": "0.0.0.0",
        "port": 8080,
        "schema": "",
        "extra": {
            "hop_home": "/home/user/hop"
        }
    }'
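As an alternative to the CLI example above, and purely as an illustration that is not part of this plugin's documentation, the same connection could be supplied through Airflow's AIRFLOW_CONN_<CONN_ID> environment-variable mechanism, which in Airflow 2.3+ accepts a JSON-serialized connection. The values simply mirror the CLI example.

import json
import os

# Hedged sketch: define the 'hop_default' connection via an environment
# variable instead of the CLI. Assumes Airflow 2.3+ (JSON-serialized
# connections in env vars); values mirror the CLI example above.
os.environ["AIRFLOW_CONN_HOP_DEFAULT"] = json.dumps({
    "conn_type": "http",
    "login": "cluster",
    "password": "cluster",
    "host": "0.0.0.0",
    "port": 8080,
    "schema": "",
    "extra": {"hop_home": "/home/user/hop"},
})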
5. Creating a DAG

Here's an example of a DAG:

from datetime import datetime

from airflow import DAG  # imports needed for the example below
from airflow_hop.operators import HopPipelineOperator
from airflow_hop.operators import HopWorkflowOperator

# ...

with DAG('sample_dag', start_date=datetime(2022, 7, 26), schedule_interval='@daily', catchup=False) as dag:
    # Define a pipeline
    first_pipe = HopPipelineOperator(
        task_id='first_pipe',
        pipeline='pipelines/first_pipeline.hpl',
        pipe_config='remote hop server',
        project_name='default',
        log_level='Basic')

    # Define a pipeline with parameters
    second_pipe = HopPipelineOperator(
        task_id='second_pipe',
        pipeline='pipelines/second_pipeline.hpl',
        pipe_config='remote hop server',
        project_name='default',
        log_level='Basic',
        params={'DATE': '{{ ds }}'})  # Date in yyyy-mm-dd format

    # Define a workflow with parameters
    work_test = HopWorkflowOperator(
        task_id='work_test',
        workflow='workflows/workflow_example.hwf',
        project_name='default',
        log_level='Basic',
        params={'DATE': '{{ ds }}'})  # Date in yyyy-mm-dd format

    first_pipe >> second_pipe >> work_test

It's important to point out that both the workflow and pipeline parameters within their respective operators must be relative paths starting from the project's directory.

Development

Deploy an Apache Hop server using Docker

Requirements:

- docker
- docker-compose

If you want to use Docker to create the server, you can use the following docker-compose configuration as a template:

services:
  apache-hop:
    image: apache/hop:latest
    ports:
      - 8080:8080
    volumes:
      - hop_path:/home/hop
    environment:
      HOP_SERVER_USER: cluster
      HOP_SERVER_PASS: cluster
      HOP_SERVER_PORT: 8080
      HOP_SERVER_HOSTNAME: 0.0.0.0

Once done, the Hop server can be started using docker compose.

License

Copyright 2022 Aneior Studio, SL

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
airflow-hop-plugin-custom
Hop Airflow plugin

This is an Apache Hop plugin for Apache Airflow that makes it possible to orchestrate Apache Hop pipelines and workflows from Airflow.

Requirements

Before setting up the plugin you must have completely set up your Hop environment:

- Configured Hop server
- Configured remote pipeline configuration
- Configured remote workflow configuration

To do so, go to the metadata window in the Hop UI by pressing Ctrl+Shift+M or clicking the metadata icon, then:

- Double click Hop server to create a new configuration.
- Double click Pipeline Run Configuration to create a new configuration.
- Double click Workflow Run Configuration to create a new configuration.

Set up guide

The following content is a "how to set up the plugin" guide, plus some requirements and restrictions regarding its usage.

1. Generate metadata.json

For the correct configuration of this plugin, a file containing all of Hop's metadata must be created inside each project directory. This can be done by exporting it from the Hop UI. Please note that this process must be repeated each time the metadata of a project is modified.

2. Install the plugin

The first step to get this plugin working is to install the package using the following command:

pip install airflow-hop-plugin-custom

3. Hop directory structure

Due to some technical limitations, it's really important for the Hop home directory to have the following structure:

hop # This is the hop home directory
├── ...
├── config
│   ├── hop-config.json
│   ├── example_environment.json # This is where you should save your environment files
│   ├── metadata
│   │   └── ...
│   └── projects
│       ├── ...
│       └── example_project # This is how your project's directory should look
│           ├── metadata.json
│           ├── metadata
│           │   └── ...
│           ├── example_directory
│           │   └── example_workflow.hwl
│           └── example_pipeline.hpl
├── ...

Moreover, please remember to save all projects inside the "projects" directory and to set a path relative to the Hop home directory when configuring them (the upstream README illustrates this with a screenshot of the project configuration dialog).

4. Create an Airflow connection

To correctly use the operators you must create a new Airflow connection. There are multiple ways to do so, and you can use whichever you prefer, but it should have these values for the following attributes:

- Connection ID: 'hop_default'
- Connection Type: 'http'
- Login: apache_hop_username
- Password: apache_hop_password
- Host: apache_hop_server
- Port: apache_hop_port
- Extra: "hop_home": "/path/to/hop-home/"

Example of a new Airflow connection using Airflow's CLI:

airflow connections add 'hop_default' \
    --conn-json '{
        "conn_type": "http",
        "login": "cluster",
        "password": "cluster",
        "host": "0.0.0.0",
        "port": 8080,
        "schema": "",
        "extra": {
            "hop_home": "/home/user/hop"
        }
    }'

5. Creating a DAG

Here's an example of a DAG:

from datetime import datetime

from airflow import DAG  # imports needed for the example below
from airflow_hop.operators import HopPipelineOperator
from airflow_hop.operators import HopWorkflowOperator

# ...

with DAG('sample_dag', start_date=datetime(2022, 7, 26), schedule_interval='@daily', catchup=False) as dag:
    # Define a pipeline
    first_pipe = HopPipelineOperator(
        task_id='first_pipe',
        pipeline='pipelines/first_pipeline.hpl',
        pipe_config='remote hop server',
        project_name='default',
        log_level='Basic')

    # Define a pipeline with parameters
    second_pipe = HopPipelineOperator(
        task_id='second_pipe',
        pipeline='pipelines/second_pipeline.hpl',
        pipe_config='remote hop server',
        project_name='default',
        log_level='Basic',
        params={'DATE': '{{ ds }}'})  # Date in yyyy-mm-dd format

    # Define a workflow with parameters
    work_test = HopWorkflowOperator(
        task_id='work_test',
        workflow='workflows/workflow_example.hwf',
        project_name='default',
        log_level='Basic',
        params={'DATE': '{{ ds }}'})  # Date in yyyy-mm-dd format

    first_pipe >> second_pipe >> work_test

It's important to point out that both the workflow and pipeline parameters within their respective operators must be relative paths starting from the project's directory.

Development

Deploy an Apache Hop server using Docker

Requirements:

- docker
- docker-compose

If you want to use Docker to create the server, you can use the following docker-compose configuration as a template:

services:
  apache-hop:
    image: apache/hop:latest
    ports:
      - 8080:8080
    volumes:
      - hop_path:/home/hop
    environment:
      HOP_SERVER_USER: cluster
      HOP_SERVER_PASS: cluster
      HOP_SERVER_PORT: 8080
      HOP_SERVER_HOSTNAME: 0.0.0.0

Once done, the Hop server can be started using docker compose.

License

Copyright 2022 Aneior Studio, SL

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
airflow-imaging-plugins
UNKNOWN
airflow-impatient
No description available on PyPI.
airflow-indexima
airflow-indexima

Versions follow Semantic Versioning.

Overview

Indexima Airflow integration based on PyHive.

This project is used in our prod environment with success. As it is a young project, take care with changes; any help is welcome :)

Setup

Requirements

- Python 3.6+

Installation

Install this library directly into an activated virtual environment:

$ pip install airflow-indexima

or add it to your Poetry project:

$ poetry add airflow-indexima

or you could use it as an Airflow plugin.

Usage

After installation, the package can be imported:

$ python
>>> import airflow_indexima
>>> airflow_indexima.__version__

See the API documentation.

A simple query:

from airflow_indexima.operators import IndeximaQueryRunnerOperator

...

with dag:
    ...
    op = IndeximaQueryRunnerOperator(
        task_id='my-task-id',
        sql_query='DELETE FROM Client WHERE GRPD = 1',
        indexima_conn_id='my-indexima-connection')
    ...

A load into Indexima:

from airflow_indexima.operators.indexima import IndeximaLoadDataOperator

...

with dag:
    ...
    op = IndeximaLoadDataOperator(
        task_id='my-task-id',
        indexima_conn_id='my-indexima-connection',
        target_table='Client',
        source_select_query='select * from dsi.client',
        truncate=True,
        load_path_uri='jdbc:redshift://my-private-instance.com:5439/db_client?ssl=true&user=airflow-user&password=XXXXXXXX')
    ...

Get load path URI from Connection

In order to get a JDBC URI from an Airflow Connection, you could use:

- get_redshift_load_path_uri
- get_postgresql_load_path_uri

from the module airflow_indexima.uri.

Both methods have this profile: Callable[[str, Optional[ConnectionDecorator]], str]

Example:

get_postgresql_load_path_uri(connection_id='my_conn') >> 'jdbc:postgresql://my-db:5432/db_client?ssl=true&user=airflow-user&password=XXXXXXXX'

Indexima Connection

Authentication

PyHive supported authentication modes:

- 'NONE': needs a username without password
- 'CUSTOM': needs a username and password (default mode)
- 'LDAP': needs a username and password
- 'KERBEROS': needs a kerberos service name
- 'NOSASL': corresponds to hive.server2.authentication=NOSASL in hive-site.xml

Configuration

You could set those parameters:

- host (str): the host to connect to.
- port (int): the (TCP) port to connect to.
- timeout_seconds ([int]): defines the socket timeout in seconds (default None)
- socket_keepalive ([bool]): enables TCP keepalive, default false.
- auth (str): authentication mode
- username ([str]): username to log in
- password ([str]): password to log in
- kerberos_service_name ([str]): kerberos service name

host, port, username and password come from the Airflow Connection configuration. The timeout_seconds, socket_keepalive, auth and kerberos_service_name parameters can come from:

- an attribute on the Hook/Operator class
- the Airflow Connection's extra parameter, like this: '{"auth": "CUSTOM", "timeout_seconds": 90, "socket_keepalive": true}'

An attribute that is set overrides the Airflow connection configuration.

You could add a decorator function in order to post-process the Connection before usage. This decorator will be executed after connection configuration (see the next section).
Customize Connection credential access

If you use another backend to store your password (like AWS SSM), you could define a decorator and use it as a function in your DAG.

from airflow.models import Connection
from airflow import DAG

from airflow_indexima.uri import define_load_path_factory, get_redshift_load_path_uri


def my_decorator(conn: Connection) -> Connection:
    # the conn instance will not be shared, and is used only for the connection request
    # (get_ssm_parameter is your own helper that reads the secret, e.g. from AWS SSM)
    conn.password = get_ssm_parameter(param_name=f'{conn.conn_id}.{conn.login}')
    return conn


dag = DAG(
    dag_id='my_dag',
    user_defined_macros={
        # we define a macro get_load_path_uri
        'get_load_path_uri': define_load_path_factory(
            conn_id='my-redshift-connection',
            decorator=my_decorator,
            factory=get_redshift_load_path_uri)
    },
    ...
)

with dag:
    ...
    op = IndeximaLoadDataOperator(
        task_id='my-task-id',
        indexima_conn_id='my-indexima-connection',
        target_table='Client',
        source_select_query='select * from dsi.client',
        truncate=True,
        load_path_uri='{{ get_load_path_uri() }}')
    ...

A Connection decorator must follow this type: ConnectionDecorator = Callable[[Connection], Connection]

define_load_path_factory is a function which takes:

- a connection identifier
- a decorator ConnectionDecorator
- a URI factory UriGeneratorFactory = Callable[[str, Optional[ConnectionDecorator]], str]

and returns a function with no arguments which can be called as a macro in a DAG's operator.

Optional connection parameters

On each operator you could set these members:

- auth (Optional[str]): authentication mode (default: {'CUSTOM'})
- kerberos_service_name (Optional[str]): optional kerberos service name
- timeout_seconds (Optional[Union[int, datetime.timedelta]]): defines the socket timeout in seconds (can be an int or a timedelta)
- socket_keepalive (Optional[bool]): enables TCP keepalive.

Note: if execution_timeout is set, it will be used as the default value for timeout_seconds.
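The production notes below recommend raising timeout_seconds for long-running queries. As a hedged illustration only (not taken from the project's docs, and assuming these members are accepted as constructor keyword arguments), they could be set on an operator like this; the one-hour timeout and keepalive flag are arbitrary example values.

import datetime

from airflow_indexima.operators import IndeximaQueryRunnerOperator

# Hedged sketch: pass the optional connection members documented above to an
# operator. Values are examples, not recommendations from the project.
op = IndeximaQueryRunnerOperator(
    task_id='long-running-cleanup',
    sql_query='DELETE FROM Client WHERE GRPD = 1',
    indexima_conn_id='my-indexima-connection',
    auth='CUSTOM',
    timeout_seconds=datetime.timedelta(hours=1),
    socket_keepalive=True,
)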
Production Feedback

In production you could see a few strange behaviours like the ones we have met.

"TSocket read 0 bytes"

You could hit this issue (https://github.com/dropbox/PyHive/issues/240) on long-running load queries. Try this in sequence:

- Check your operator configuration and set the timeout_seconds member to 3600 seconds, for example. You could see different behaviour when running a DAG with/without the Airflow context in a Docker container.
- If you are facing a broken pipe after 300s and you have an AWS NLB v2: read the network-load-balancers documentation again and focus on this: "Elastic Load Balancing sets the idle timeout value for TCP flows to 350 seconds. You cannot modify this value. For TCP listeners, clients or targets can use TCP keepalive packets to reset the idle timeout. TCP keepalive packets are not supported for TLS listeners."

We tried the "socket_keep_alive" option for you, and it did not work at all. Our solution was to remove our NLB and use a simple DNS A record pointing at the Indexima master.

"utf-8" or "could not read byte ..."

Feel free to add { "serialization.encoding": "utf-8"} to the hive_configuration member of IndeximaHook. This setting is set in IndeximaHook.__init__; you may override it.

Playing with Airflow without an Airflow server

When I was trying many little things and dealing with Hive stuff, I wrote a single script that helped me a lot. Feel free to use it (or not) to run your DAG by yourself:

import os
import datetime

from airflow.hooks.base_hook import BaseHook
from airflow import DAG

from airflow_indexima.operators.indexima import IndeximaLoadDataOperator

# here we create our Airflow Connection
os.environ['AIRFLOW_CONN_INDEXIMA_ID'] = 'hive://my-user:my-password@my-server:10000/default'
conn = BaseHook.get_connection('indexima_id')

dag = DAG(
    dag_id='my_dag',
    default_args={
        'start_date': datetime.datetime(year=2019, month=12, day=1),
        'depends_on_past': False,
        'email_on_failure': False,
        'email': [],
    },
)

with dag:
    load_operator = IndeximaLoadDataOperator(
        task_id='my_task',
        indexima_conn_id='indexima_id',
        target_table='my_table',
        source_select_query=(
            "select * from source_table where "
            "creation_date_tms between '2019-11-30T00:00:00+00:00' and '2019-11-30T12:59:59.000999+00:00'"
        ),
        truncate=True,
        truncate_sql=(
            "DELETE FROM my_table WHERE "
            "creation_date_tms between '2019-11-30T00:00:00+00:00' and '2019-11-30T12:59:59.000999+00:00'"
        ),
        load_path_uri='jdbc:postgresql://myserver:5439/db_common?user=etl_user&password=a_strong_password&ssl=true',
        retries=2,
        execution_timeout=datetime.timedelta(hours=3),
        sla=datetime.timedelta(hours=1, minutes=30),
    )

    # here we run the dag
    load_operator.execute(context={})

del os.environ['AIRFLOW_CONN_INDEXIMA_ID']

License

The MIT License (MIT)

Contributing

See Contributing.

Thanks

Thanks to @bartosz25 for his help with hive connection details...
airflow-installer
Airflow Installer

A command-line tool to simplify the installation of Apache Airflow in a virtual environment.

Features

- Install Apache Airflow with optional dependencies in a virtual environment.
- Automatically detect the latest version of Apache Airflow from PyPI.
- Manage version constraints using constraints files.
- Easy-to-use command-line interface (CLI) for seamless installation.

Installation

You can install airflow-installer using pip:

pip install airflow-installer

Options

Usage: airflow_installer [OPTIONS]

--version TEXT                        Apache Airflow version to install. Defaults to latest. [default: 2.6.3]
--constraints-url TEXT                URL of the constraints file. Defaults to the latest version constraints.
--extras TEXT                         Extras or additional requirements to install with Apache Airflow.
--requirements TEXT                   Path to a requirements.txt file to be used during installation.
--venv-path TEXT                      Path where the virtual environment will be created [default: .venv/airflow]
--recreate-venv / --no-recreate-venv  Recreate the virtual environment if it already exists. [default: no-recreate-venv]
--verbose / --no-verbose              Enable verbose debugging output. [default: no-verbose]
--install-completion                  Install completion for the current shell.
--show-completion                     Show completion for the current shell, to copy it or customize the installation.
--help                                Show this message and exit.

Usage

# Install the latest version of Apache Airflow in the default virtual environment
airflow-installer

# Install a specific version of Apache Airflow in a custom virtual environment
airflow-installer --version 2.6.3 --venv-path .venv/my-airflow

# Install Apache Airflow with specific extras and constraints
airflow-installer --version 2.5.2 --extras "[celery,crypto]" --constraints-url "https://raw.githubusercontent.com/apache/airflow/constraints-2.5.2/constraints-3.7.txt"

# Install Apache Airflow using a requirements.txt file
airflow-installer --version 2.6.0 --requirements requirements.txt

# Recreate the virtual environment if it already exists
airflow-installer --recreate-venv

# Enable verbose output for debugging
airflow-installer --verbose
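The "detect the latest version from PyPI" feature could work roughly as follows. This is a hedged sketch using the public PyPI JSON API, not the tool's actual implementation; the constraints URL pattern is simply the one shown in the usage examples above, with an assumed Python version of 3.7.

import json
import urllib.request

# Hedged sketch (not airflow-installer's real code): look up the newest
# apache-airflow release on PyPI and build the matching constraints URL,
# following the URL pattern shown in the usage examples above.
with urllib.request.urlopen("https://pypi.org/pypi/apache-airflow/json") as resp:
    latest_version = json.load(resp)["info"]["version"]

python_version = "3.7"  # assumed; a real tool would detect the interpreter version
constraints_url = (
    "https://raw.githubusercontent.com/apache/airflow/"
    f"constraints-{latest_version}/constraints-{python_version}.txt"
)

print(latest_version, constraints_url)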