status
stringclasses 1
value | repo_name
stringlengths 9
24
| repo_url
stringlengths 28
43
| issue_id
int64 1
104k
| updated_files
stringlengths 8
1.76k
| title
stringlengths 4
369
| body
stringlengths 0
254k
⌀ | issue_url
stringlengths 37
56
| pull_url
stringlengths 37
54
| before_fix_sha
stringlengths 40
40
| after_fix_sha
stringlengths 40
40
| report_datetime
timestamp[ns, tz=UTC] | language
stringclasses 5
values | commit_datetime
timestamp[us, tz=UTC] |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 14,279 | ["airflow/providers/amazon/aws/example_dags/example_s3_bucket.py", "airflow/providers/amazon/aws/example_dags/example_s3_bucket_tagging.py", "airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/operators/s3_bucket.py", "airflow/providers/amazon/aws/operators/s3_bucket_tagging.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/s3.rst", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py"] | Add AWS S3 Bucket Tagging Operator | **Description**
Add the missing AWS Operators for the three (get/put/delete) AWS S3 bucket tagging APIs, including testing.
**Use case / motivation**
I am looking to add an Operator that will implement the existing API functionality to manage the tags on an AWS S3 bucket.
**Are you willing to submit a PR?**
Yes
**Related Issues**
None that I saw
| https://github.com/apache/airflow/issues/14279 | https://github.com/apache/airflow/pull/14402 | f25ec3368348be479dde097efdd9c49ce56922b3 | 8ced652ecf847ed668e5eed27e3e47a51a27b1c8 | 2021-02-17T17:07:01Z | python | 2021-02-28T20:50:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,270 | ["airflow/task/task_runner/standard_task_runner.py", "tests/task/task_runner/test_standard_task_runner.py"] | Specify that exit code -9 is due to RAM | Related to https://github.com/apache/airflow/issues/9655
It would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that.
I have found the code where the -9 is assigned but have no idea how to add a logging message.
self.process = None
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9 | https://github.com/apache/airflow/issues/14270 | https://github.com/apache/airflow/pull/15207 | eae22cec9c87e8dad4d6e8599e45af1bdd452062 | 18e2c1de776c8c3bc42c984ea0d31515788b6572 | 2021-02-17T09:01:05Z | python | 2021-04-06T19:02:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,264 | ["airflow/sensors/external_task.py", "tests/dags/test_external_task_sensor_check_existense.py", "tests/sensors/test_external_task_sensor.py"] | AirflowException: The external DAG was deleted when external_dag_id references zipped DAG | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.16.3
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow 3.10.0-1127.10.1.el7.x86_64 #1 SMP Tue May 26 15:05:43 EDT 2020 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
ExternalTaskSensor with check_existence=True referencing an external DAG inside a .zip file raises the following exception:
```
ERROR - The external DAG dag_a /opt/airflow-data/dags/my_dags.zip/dag_a.py was deleted.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/base.py", line 228, in execute
while not self.poke(context):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/external_task.py", line 159, in poke
self._check_for_existence(session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/external_task.py", line 184, in _check_for_existence
raise AirflowException(f'The external DAG {self.external_dag_id} was deleted.')
airflow.exceptions.AirflowException: The external DAG dag_a /opt/airflow-data/dags/my_dags.zip/dag_a.py was deleted.
```
**What you expected to happen**:
The existence check should PASS.
**How to reproduce it**:
1. Create a file *dag_a.py* with the following contents:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.timezone import datetime
DEFAULT_DATE = datetime(2015, 1, 1)
with DAG("dag_a", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag:
task_a = DummyOperator(task_id="task_a", dag=dag)
```
2. Create a file *dag_b.py* with contents:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.timezone import datetime
DEFAULT_DATE = datetime(2015, 1, 1)
with DAG("dag_b", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag:
sense_a = ExternalTaskSensor(
task_id="sense_a",
external_dag_id="dag_a",
external_task_id="task_a",
check_existence=True
)
task_b = DummyOperator(task_id="task_b", dag=dag)
sense_a >> task_b
```
3. `zip my_dags.zip dag_a.py dag_b.py`
4. Load *my_dags.zip* into airflow and run *dag_b*
5. Task *sense_a* will fail with exception above.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14264 | https://github.com/apache/airflow/pull/27056 | 911d90d669ab5d1fe1f5edb1d2353c7214611630 | 99a6bf783412432416813d1c4bb41052054dd5c6 | 2021-02-17T00:16:48Z | python | 2022-11-16T12:53:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,260 | ["UPDATING.md", "airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/models/dag.py", "airflow/models/taskinstance.py", "tests/sensors/test_external_task_sensor.py"] | Clearing using ExternalTaskMarker will not activate external DagRuns | **Apache Airflow version**:
2.0.1
**What happened**:
When clearing task across dags using `ExternalTaskMarker` the dag state of the external `DagRun` is not set to active. So cleared tasks in the external dag will not automatically start if the `DagRun` is a Failed or Succeeded state.
**What you expected to happen**:
The external `DagRun` run should also be set to Running state.
**How to reproduce it**:
Clear tasks in an external dag using an ExternalTaskMarker.
**Anything else we need to know**:
Looking at the code is has:
https://github.com/apache/airflow/blob/b23fc137812f5eabf7834e07e032915e2a504c17/airflow/models/dag.py#L1323-L1335
It seems like it intentionally calls the dag member method `set_dag_run_state` instead of letting the helper function `clear_task_insntances` set the `DagRun` state. But the member method will only change the state of `DagRun`s of dag where the original task is, while I believe `clear_task_instances` would correctly change the state of all involved `DagRun`s.
| https://github.com/apache/airflow/issues/14260 | https://github.com/apache/airflow/pull/15382 | f75dd7ae6e755dad328ba6f3fd462ade194dab25 | 2bca8a5425c234b04fdf32d6c50ae3a91cd08262 | 2021-02-16T14:06:32Z | python | 2021-05-29T15:01:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,252 | ["airflow/models/baseoperator.py", "tests/core/test_core.py"] | Unable to clear Failed task with retries |
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA
**Environment**: Windows WSL2 (Ubuntu) Local
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04
- **Kernel** (e.g. `uname -a`): Linux d255bce4dcd5 5.4.72-microsoft-standard-WSL2
- **Install tools**: Docker -compose
- **Others**:
**What happened**:
I have a dag with tasks:
Task1 - Get Date
Task 2 - Get data from Api call (Have set retires to 3)
Task 3 - Load Data
Task 2 had failed after three attempts. I am unable to clear the task Instance and get the below error in UI.
[Dag Code](https://github.com/anilkulkarni87/airflow-docker/blob/master/dags/covidNyDaily.py)
```
Python version: 3.8.7
Airflow version: 2.0.1rc2
Node: d255bce4dcd5
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1547, in clear
return self._clear_dag_tis(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1475, in _clear_dag_tis
count = dag.clear(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 1324, in clear
clear_task_instances(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 160, in clear_task_instances
ti.max_tries = ti.try_number + task_retries - 1
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
**What you expected to happen**:
I expected to clear the Task Instance so that the task could be scheduled again.
**How to reproduce it**:
1) Clone the repo link shared above
2) Follow instructions to setup cluster.
3) Change code to enforce error in Task 2
4) Execute and try to clear task instance after three attempts.

| https://github.com/apache/airflow/issues/14252 | https://github.com/apache/airflow/pull/16415 | 643f3c35a6ba3def40de7db8e974c72e98cfad44 | 15ff2388e8a52348afcc923653f85ce15a3c5f71 | 2021-02-15T22:27:00Z | python | 2021-06-13T00:29:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,249 | ["airflow/models/dagrun.py"] | both airflow dags test and airflow backfill cli commands got same error in airflow Version 2.0.1 | **Apache Airflow version: 2.0.1**
**What happened:**
Running an airflow dags test or backfill CLI command shown in tutorial, produces the same error.
**dags test cli command result:**
```
(airflow_venv) (base) app@lunar_01:airflow$ airflow dags test tutorial 2015-06-01
[2021-02-16 04:29:22,355] {dagbag.py:448} INFO - Filling up the DagBag from /home/app/Lunar/src/airflow/dags
[2021-02-16 04:29:22,372] {example_kubernetes_executor_config.py:174} WARNING - Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
[2021-02-16 04:29:22,373] {example_kubernetes_executor_config.py:175} WARNING - Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Traceback (most recent call last):
File "/home/app/airflow_venv/bin/airflow", line 10, in <module>
sys.exit(main())
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 389, in dag_test
dag.run(executor=DebugExecutor(), start_date=args.execution_date, end_date=args.execution_date)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dag.py", line 1706, in run
job.run()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 805, in _execute
session=session,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 715, in _execute_for_run_dates
tis_map = self._task_instances_for_dag_run(dag_run, session=session)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 359, in _task_instances_for_dag_run
dag_run.refresh_from_db()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dagrun.py", line 178, in refresh_from_db
DR.run_id == self.run_id,
File "/home/app/airflow_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3500, in one
raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
```
**backfill cli command result:**
```
(airflow_venv) (base) app@lunar_01:airflow$ airflow dags backfill tutorial --start-date 2015-06-01 --end-date 2015-06-07
/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py:62 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2021-02-16 04:30:16,979] {dagbag.py:448} INFO - Filling up the DagBag from /home/app/Lunar/src/airflow/dags
[2021-02-16 04:30:16,996] {example_kubernetes_executor_config.py:174} WARNING - Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
[2021-02-16 04:30:16,996] {example_kubernetes_executor_config.py:175} WARNING - Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Traceback (most recent call last):
File "/home/app/airflow_venv/bin/airflow", line 10, in <module>
sys.exit(main())
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 116, in dag_backfill
run_backwards=args.run_backwards,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dag.py", line 1706, in run
job.run()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 805, in _execute
session=session,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 715, in _execute_for_run_dates
tis_map = self._task_instances_for_dag_run(dag_run, session=session)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 359, in _task_instances_for_dag_run
dag_run.refresh_from_db()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dagrun.py", line 178, in refresh_from_db
DR.run_id == self.run_id,
File "/home/app/airflow_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3500, in one
raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
``` | https://github.com/apache/airflow/issues/14249 | https://github.com/apache/airflow/pull/16809 | 2b7c59619b7dd6fd5031745ade7756466456f803 | faffaec73385db3c6910d31ccea9fc4f9f3f9d42 | 2021-02-15T20:42:29Z | python | 2021-07-07T11:04:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,222 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/scheduler_command.py", "chart/templates/scheduler/scheduler-deployment.yaml", "tests/cli/commands/test_scheduler_command.py"] | Scheduler Logging Unimplemented in Helm Chart with Airflow V2 & SequentialExecutor (serve_log) | ## Problem
The helm chart does not implement a way for SequentialExecutor Airflow 2 deployments to serve logs; without using elasticsearch.
## Details
Prior implementations utilize the CLI function `serve_logs`. This function has been deprecated as of v2.
In `airflow/templates/scheduler/scheduler-deployment.yaml`:
```yaml
181 {{- if and $local (not $elasticsearch) }}
182 # Start the sidecar log server if we're in local mode and
183 # we don't have elasticsearch enabled.
184 - name: scheduler-logs
185 image: {{ template "airflow_image" . }}
186 imagePullPolicy: {{ .Values.images.airflow.pullPolicy }}
187 args: ["serve_logs"]
```
This will cause the helm deployment to break; and the scheduler will perpetually fail to start the `scheduler-logs` container inside of the scheduler deployment.
Snippet from airflow [upgrade guide](https://airflow.apache.org/docs/apache-airflow/stable/upgrading-to-2.html).
```
Remove serve_logs command from CLI
The serve_logs command has been deleted. This command should be run only by internal application mechanisms and there is no need for it to be accessible from the CLI interface.
```
## Partial Solution
Not sure how the non-elastic method for serving logs going forward.
Astronomer branches yaml by:
```yaml
{{- if semverCompare ">=1.10.12" .Values.airflowVersion }}
...
{{- else }}
...
{{- end }}
``` | https://github.com/apache/airflow/issues/14222 | https://github.com/apache/airflow/pull/15557 | 053d903816464f699876109b50390636bf617eff | 414bb20fad6c6a50c5a209f6d81f5ca3d679b083 | 2021-02-13T22:53:22Z | python | 2021-04-29T15:06:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,208 | ["airflow/configuration.py", "docs/apache-airflow/howto/set-up-database.rst"] | Python 3.8 - Sqlite3 version error | python 3.8 centos 7 Docker image
Can't update sqlite3. Tried Airflow 2.0.1 and 2.0.0. Same issue on Python 3.6 with Airflow 2.0.0. I was able to force the install on 2.0.0 but when running a task it failed because of the sqlite3 version mismatch.
Am I just stupid? #13496
#```
(app-root) airflow db init
Traceback (most recent call last):
File "/opt/app-root/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/opt/app-root/lib64/python3.8/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/opt/app-root/lib64/python3.8/site-packages/airflow/settings.py", line 37, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 1007, in <module>
conf.validate()
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 209, in validate
self._validate_config_dependencies()
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 246, in _validate_config_dependencies
raise AirflowConfigException(f"error: cannot use sqlite version < {min_sqlite_version}")
airflow.exceptions.AirflowConfigException: error: cannot use sqlite version < 3.15.0
(app-root) python -c "import sqlite3; print(sqlite3.sqlite_version)"
3.7.17
(app-root) python --version
Python 3.8.6
(app-root) pip install --upgrade sqlite3
ERROR: Could not find a version that satisfies the requirement sqlite3 (from versions: none)
ERROR: No matching distribution found for sqlite3
``` | https://github.com/apache/airflow/issues/14208 | https://github.com/apache/airflow/pull/14209 | 59c94c679e996ab7a75b4feeb1755353f60d030f | 4c90712f192dd552d1791712a49bcdc810ebe82f | 2021-02-12T15:57:55Z | python | 2021-02-13T17:46:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,202 | ["chart/templates/scheduler/scheduler-deployment.yaml"] | Scheduler in helm chart cannot access DAG with git sync | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**:
2.0.1
**What happened**:
When dags `git-sync` is `true` and `persistent` is `false`, `airflow dags list` returns nothing and the `DAGS Folder` is empty
<!-- (please include exact error messages if you can) -->
**What you expected to happen**:
Scheduler container should still have a volumeMount to read the volume `dags` populated by the `git-sync` container
<!-- What do you think went wrong? -->
**How to reproduce it**:
```
--set dags.persistence.enabled=false \
--set dags.gitSync.enabled=true \
```
Scheduler cannot access git-sync DAG as Scheduler's configured `DAGS Folder` path isn't mounted on the volume `dags`
<!---
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
| https://github.com/apache/airflow/issues/14202 | https://github.com/apache/airflow/pull/14203 | 8f21fb1bf77fc67e37dc13613778ff1e6fa87cea | e164080479775aca53146331abf6f615d1f03ff0 | 2021-02-12T06:56:10Z | python | 2021-02-19T01:03:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,200 | ["docs/apache-airflow/best-practices.rst", "docs/apache-airflow/security/index.rst", "docs/apache-airflow/security/secrets/secrets-backend/index.rst"] | Update Best practises doc | Update https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#variables to use Secret Backend (especially Environment Variables) as it asks user not to use Variable in top level | https://github.com/apache/airflow/issues/14200 | https://github.com/apache/airflow/pull/17319 | bcf719bfb49ca20eea66a2527307968ff290c929 | 2c1880a90712aa79dd7c16c78a93b343cd312268 | 2021-02-11T19:31:08Z | python | 2021-08-02T20:43:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,182 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Scheduler dies if executor_config isnt passed a dict when using K8s executor | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.15
**Environment**:
- **Cloud provider or hardware configuration**: k8s on bare metal
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip3
- **Others**:
**What happened**:
Scheduler dies with
```
[2021-02-10 21:09:27,469] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_schedu
ler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1384, in _run_scheduler
_loop
self.executor.heartbeat()
File "/usr/local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 158, in heartbeat
self.trigger_tasks(open_slots)
File "/usr/local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 188, in trigger_ta
sks
self.execute_async(key=key, command=command, queue=None, executor_config=ti.executor_config)
File "/usr/local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 493, in exec
ute_async
kube_executor_config = PodGenerator.from_obj(executor_config)
File "/usr/local/lib/python3.8/site-packages/airflow/kubernetes/pod_generator.py", line 175, in from_obj
k8s_legacy_object = obj.get("KubernetesExecutor", None)
AttributeError: 'V1Pod' object has no attribute 'get'
[2021-02-10 21:09:28,475] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 60
[2021-02-10 21:09:29,222] {process_utils.py:66} INFO - Process psutil.Process(pid=66, status='terminated',
started='21:09:27') (66) terminated with exit code None
[2021-02-10 21:09:29,697] {process_utils.py:206} INFO - Waiting up to 5 seconds for processes to exit...
[2021-02-10 21:09:29,716] {process_utils.py:66} INFO - Process psutil.Process(pid=75, status='terminated',
started='21:09:28') (75) terminated with exit code None
[2021-02-10 21:09:29,717] {process_utils.py:66} INFO - Process psutil.Process(pid=60, status='terminated',
exitcode=0, started='21:09:27') (60) terminated with exit code 0
[2021-02-10 21:09:29,717] {scheduler_job.py:1301} INFO - Exited execute loop
```
**What you expected to happen**:
DAG loading fails, producing an error for just that DAG, instead of crashing the scheduler.
**How to reproduce it**:
Create a task like
```
test = DummyOperator(task_id="new-pod-spec",
executor_config=k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
image="myimage",
image_pull_policy="Always"
)
]
)))
```
or
```
test = DummyOperator(task_id="new-pod-spec",
executor_config={"KubernetesExecutor": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
image="myimage",
image_pull_policy="Always"
)
]
))})
```
essentially anything where it expects a dict but gets something else, and run the scheduler using the kubernetes executor
| https://github.com/apache/airflow/issues/14182 | https://github.com/apache/airflow/pull/14323 | 68ccda38a7877fdd0c3b207824c11c9cd733f0c6 | e0ee91e15f8385e34e3d7dfc8a6365e350ea7083 | 2021-02-10T21:36:28Z | python | 2021-02-20T00:46:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,178 | ["chart/templates/configmaps/configmap.yaml", "chart/templates/configmaps/webserver-configmap.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/tests/test_webserver_deployment.py"] | Split out Airflow Configmap (webserver_config.py) | **Description**
Although the changes to FAB that include the [syncing of roles on login](https://github.com/dpgaspar/Flask-AppBuilder/commit/dbe1eded6369c199b777836eb08d829ba37634d7) hasn't been officially released, I'm proposing that we make some changes to the [airflow configmap](https://github.com/apache/airflow/blob/master/chart/templates/configmaps/configmap.yaml) in preparation for it.
Currently, this configmap contains the `airflow.cfg`, `webserver_config.py`, `airflow_local_settings.py`, `known_hosts`, `pod_template_file.yaml`, and the `krb5.conf`. With all of these tied together, changes to any of the contents across the listed files will force a restart for the flower deployment, scheduler deployment, worker deployment, and the webserver deployment through the setting of the `checksum/airflow-config` in each.
The reason I would like to split out at _least_ the `webserver_config.py` from the greater configmap is that I would like to have the opportunity to make incremental changes to the [AUTH_ROLES_MAPPING](https://github.com/dpgaspar/Flask-AppBuilder/blob/dbe1eded6369c199b777836eb08d829ba37634d7/docs/config.rst#configuration-keys) in that config without having to force restarts for all of the previously listed services apart from the webserver. Currently, if I were to add an additional group mapping that has no bearing on the workers/schedulers/flower these services would incur some down time despite not even mounting in this specific file to their pods. | https://github.com/apache/airflow/issues/14178 | https://github.com/apache/airflow/pull/14353 | a48bedf26d0f04901555187aed83296190604813 | 0891a8ea73813d878c0d00fbfdb59fa360e8d1cf | 2021-02-10T17:56:57Z | python | 2021-02-22T20:17:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,163 | ["airflow/executors/celery_executor.py", "tests/executors/test_celery_executor.py"] | TypeError: object of type 'map' has no len(): When celery executor multi-processes to get Task Instances | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**:
2.0.1
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): "18.04.1 LTS (Bionic Beaver)"
- **Kernel** (e.g. `uname -a`): 4.15.0-130-generic #134-Ubuntu
- **Install tools**:
- **Others**:
**What happened**:
I observe the following exception, in the scheduler intermittently:
```
[2021-02-10 03:51:26,651] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1354, in _run_scheduler_loop
self.adopt_or_reset_orphaned_tasks()
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1837, in adopt_or_reset_orphaned_tasks
for attempt in run_with_db_retries(logger=self.log):
File "/home/foo/bar/.env38/lib/python3.8/site-packages/tenacity/__init__.py", line 390, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/tenacity/__init__.py", line 356, in iter
return fut.result()
File "/home/foo/py_src/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/home/foo/py_src/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1882, in adopt_or_reset_orphaned_tasks
to_reset = self.executor.try_adopt_task_instances(tis_to_reset_or_adopt)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 478, in try_adopt_task_instances
states_by_celery_task_id = self.bulk_state_fetcher.get_many(
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 554, in get_many
result = self._get_many_using_multiprocessing(async_results)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 595, in _get_many_using_multiprocessing
num_process = min(len(async_results), self._sync_parallelism)
TypeError: object of type 'map' has no len()
```
**What you expected to happen**:
I think the `len` should not be called on the `async_results`, or `map` should not be used in `try_adopt_task_instances`.
**How to reproduce it**:
Not sure how I can reproduce it. But, here are the offending lines:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L479
Then, this branch gets hit:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L554
The, we see the failure, here:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L595
| https://github.com/apache/airflow/issues/14163 | https://github.com/apache/airflow/pull/14883 | aebacd74058d01cfecaf913c04c0dbc50bb188ea | 4ee442970873ba59ee1d1de3ac78ef8e33666e0f | 2021-02-10T04:13:18Z | python | 2021-04-06T09:21:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,106 | ["airflow/lineage/__init__.py", "airflow/lineage/backend.py", "docs/apache-airflow/lineage.rst", "tests/lineage/test_lineage.py"] | Lineage Backend removed for no reason | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Description**
The possibility to add a lineage backend was removed in https://github.com/apache/airflow/pull/6564 but was never reintroduced. Now that this code is in 2.0, the lineage information is only in the xcoms and the only way to get it is through an experimental API that isn't very practical either.
**Use case / motivation**
A custom callback at the time lineage gets pushed is enough to send the lineage information to whatever lineage backend the user has.
**Are you willing to submit a PR?**
I'd be willing to make a PR recovering the LineageBackend and add changes if needed, unless there is a different plan for lineage from the maintainers.
| https://github.com/apache/airflow/issues/14106 | https://github.com/apache/airflow/pull/14146 | 9ac1d0a3963b0e152cb2ba4a58b14cf6b61a73a0 | af2d11e36ed43b0103a54780640493b8ae46d70e | 2021-02-05T16:47:46Z | python | 2021-04-03T08:26:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,104 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | BACKEND: Unbound Variable issue in docker entrypoint | This is NOT a bug in Airflow, I'm writing this issue for documentation should someone come across this same issue and need to identify how to solve it. Please tag as appropriate.
**Apache Airflow version**: Docker 2.0.1rc2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: Dockered
- **Cloud provider or hardware configuration**: VMWare VM
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): 4.15.0-128-generic
- **Install tools**: Just docker/docker-compose
- **Others**:
**What happened**:
Worker, webserver and scheduler docker containers do not start, errors:
<details><summary>/entrypoint: line 71: BACKEND: unbound variable</summary>
af_worker | /entrypoint: line 71: BACKEND: unbound variable
af_worker | /entrypoint: line 71: BACKEND: unbound variable
af_worker exited with code 1
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
</details>
**What you expected to happen**:
Docker containers to start
**How to reproduce it**:
What ever docker-compose file I was copying, has a MySQL Connection String not compatible with: https://github.com/apache/airflow/blob/bc026cf6961626dd01edfaf064562bfb1f2baf42/scripts/in_container/prod/entrypoint_prod.sh#L58 -- Specifically, the connection string in the docker-compose did not have a password, and no : separator for a blank password.
Original Connection String: `mysql://root@mysql/airflow?charset=utf8mb4`
**Anything else we need to know**:
The solution is to use a password, or at the very least add the : to the user:password section
Fixed Connection String: `mysql://root:@mysql/airflow?charset=utf8mb4`
| https://github.com/apache/airflow/issues/14104 | https://github.com/apache/airflow/pull/14124 | d77f79d134e0d14443f75325b24dffed4b779920 | b151b5eea5057f167bf3d2f13a84ab4eb8e42734 | 2021-02-05T15:31:07Z | python | 2021-03-22T15:42:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,097 | ["UPDATING.md", "airflow/contrib/sensors/gcs_sensor.py", "airflow/providers/google/BACKPORT_PROVIDER_README.md", "airflow/providers/google/cloud/sensors/gcs.py", "tests/always/test_project_structure.py", "tests/deprecated_classes.py", "tests/providers/google/cloud/sensors/test_gcs.py"] | Typo in Sensor: GCSObjectsWtihPrefixExistenceSensor (should be GCSObjectsWithPrefixExistenceSensor) | Typo in Google Cloud Storage sensor: airflow/providers/google/cloud/sensors/gcs/GCSObjectsWithPrefixExistenceSensor
The word _With_ is spelled incorrectly. It should be: GCSObjects**With**PrefixExistenceSensor
**Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: Google Cloud
- **OS** (e.g. from /etc/os-release): Mac OS BigSur
| https://github.com/apache/airflow/issues/14097 | https://github.com/apache/airflow/pull/14179 | 6dc6339635f41a9fa50a987c4fdae5af0bae9fdc | e3bcaa3ba351234effe52ad380345c4e39003fcb | 2021-02-05T12:13:09Z | python | 2021-02-12T20:14:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,089 | ["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/log/s3_task_handler.py", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py"] | S3 Remote Logging Kubernetes Executor worker task keeps waiting to send log: "acquiring 0" | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.16.15
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Airflow Helm Chart
- **Others**:
**What happened**:
<!-- (please include exact error messages if you can) -->
A running task in a worker created from the Kubernetes Executor is constantly running with no progress being made. I checked the log and I see that it is "stuck" with a `[2021-02-05 01:07:17,316] {utils.py:580} DEBUG - Acquiring 0`
I see it is able to talk to S3, in particular it does a HEAD request to see if the key exists in S3, and I get a 404, which means the object does not exist in S3. And then, the logs just stop and seems to be waiting. No more logs show up about what is going on.
I am using an access point for the s3 remote base log folder, and that works in Airflow 1.10.14.
Running the following, a simple dag that should just prints a statement:
```
airflow@testdag2dbdbscouter-b7f961ff64d6490e80c5cfa2fd33a37c:/opt/airflow$ airflow tasks run test_dag-2 dbdb-scouter now --local --pool default_pool --subdir /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:22,962] {settings.py:208} DEBUG - Setting up DB connection pool (PID 185)
[2021-02-05 02:04:22,963] {settings.py:279} DEBUG - settings.prepare_engine_args(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=185
[2021-02-05 02:04:23,164] {cli_action_loggers.py:40} DEBUG - Adding <function default_action_log at 0x7f0984c30290> to pre execution callback
[2021-02-05 02:04:30,379] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f0984c30290>]
[2021-02-05 02:04:30,499] {settings.py:208} DEBUG - Setting up DB connection pool (PID 185)
[2021-02-05 02:04:30,499] {settings.py:241} DEBUG - settings.prepare_engine_args(): Using NullPool
[2021-02-05 02:04:30,500] {dagbag.py:440} INFO - Filling up the DagBag from /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:30,500] {dagbag.py:279} DEBUG - Importing /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:30,511] {dagbag.py:405} DEBUG - Loaded DAG <DAG: test_dag-2>
[2021-02-05 02:04:30,567] {plugins_manager.py:270} DEBUG - Loading plugins
[2021-02-05 02:04:30,567] {plugins_manager.py:207} DEBUG - Loading plugins from directory: /opt/airflow/plugins
[2021-02-05 02:04:30,567] {plugins_manager.py:184} DEBUG - Loading plugins from entrypoints
[2021-02-05 02:04:30,671] {plugins_manager.py:414} DEBUG - Integrate DAG plugins
Running <TaskInstance: test_dag-2.dbdb-scouter 2021-02-05T02:04:23.265117+00:00 [None]> on host testdag2dbdbscouter-b7f961ff64d6490e80c5cfa2fd33a37c
```
If I check the logs directory, and open the log, I see that the log
```
[2021-02-05 01:07:17,314] {retryhandler.py:187} DEBUG - No retry needed.
[2021-02-05 01:07:17,314] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f3e27182b80>>
[2021-02-05 01:07:17,314] {utils.py:1186} DEBUG - S3 request was previously to an accesspoint, not redirecting.
[2021-02-05 01:07:17,316] {utils.py:580} DEBUG - Acquiring 0
```
If I do a manual keyboard interrupt to terminate the running task, I see the following:
```
[2021-02-05 02:11:30,103] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f097f048110>
[2021-02-05 02:11:30,103] {retryhandler.py:187} DEBUG - No retry needed.
[2021-02-05 02:11:30,103] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f097f0293d0>>
[2021-02-05 02:11:30,103] {utils.py:1187} DEBUG - S3 request was previously to an accesspoint, not redirecting.
[2021-02-05 02:11:30,105] {utils.py:580} DEBUG - Acquiring 0
[2021-02-05 02:11:30,105] {futures.py:277} DEBUG - TransferCoordinator(transfer_id=0) cancel(cannot schedule new futures after interpreter shutdown) called
[2021-02-05 02:11:30,105] {s3_task_handler.py:193} ERROR - Could not write logs to s3://arn:aws:s3:us-west-2:<ACCOUNT>:accesspoint:<BUCKET,PATH>/2021-02-05T02:04:23.265117+00:00/1.log
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/log/s3_task_handler.py", line 190, in s3_write
encrypt=conf.getboolean('logging', 'ENCRYPT_S3_LOGS'),
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 547, in load_string
self._upload_file_obj(file_obj, key, bucket_name, replace, encrypt, acl_policy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 638, in _upload_file_obj
client.upload_fileobj(file_obj, bucket_name, key, ExtraArgs=extra_args)
File "/home/airflow/.local/lib/python3.7/site-packages/boto3/s3/inject.py", line 538, in upload_fileobj
extra_args=ExtraArgs, subscribers=subscribers)
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/manager.py", line 313, in upload
call_args, UploadSubmissionTask, extra_main_kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/manager.py", line 471, in _submit_transfer
main_kwargs=main_kwargs
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/futures.py", line 467, in submit
future = ExecutorFuture(self._executor.submit(task))
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 165, in submit
raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
```
My Airflow config:
```
[logging]
base_log_folder = /opt/airflow/logs
remote_logging = True
remote_log_conn_id = S3Conn
google_key_path =
remote_base_log_folder = s3://arn:aws:s3:us-west-2:<ACCOUNT>:accesspoint:<BUCKET>/logs
encrypt_s3_logs = False
logging_level = DEBUG
fab_logging_level = WARN
```
**What you expected to happen**:
<!-- What do you think went wrong? -->
I expected for the log to be sent to S3, but the task just waits at this point.
**How to reproduce it**:
<!---
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If you are using kubernetes, please attempt to recreate the issue using minikube or kind.
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
Extended the docker image, and baked in the test dag:
```
FROM apache/airflow:2.0.0-python3.7
COPY requirements.txt /requirements.txt
RUN pip install --user -r /requirements.txt
ENV AIRFLOW_DAG_FOLDER="/usr/airflow"
COPY --chown=airflow:root ./airflow ${AIRFLOW_DAG_FOLDER}
```
**Anything else we need to know**:
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
| https://github.com/apache/airflow/issues/14089 | https://github.com/apache/airflow/pull/14414 | 3dc762c8177264001793e20543c24c6414c14960 | 0d6cae4172ff185ec4c0fc483bf556ce3252b7b0 | 2021-02-05T02:30:53Z | python | 2021-02-24T13:42:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,077 | ["airflow/providers/google/marketing_platform/hooks/display_video.py", "airflow/providers/google/marketing_platform/operators/display_video.py", "tests/providers/google/marketing_platform/hooks/test_display_video.py", "tests/providers/google/marketing_platform/operators/test_display_video.py"] | GoogleDisplayVideo360Hook.download_media does not pass the resourceName correctly | **Apache Airflow version**: 1.10.12
**Environment**: Google Cloud Composer 1.13.3
- **Cloud provider or hardware configuration**:
- Google Cloud Composer
**What happened**:
The GoogleDisplayVideo360Hook.download_media hook tries to download media using the "resource_name" argument. However, [per the API spec](https://developers.google.com/display-video/api/reference/rest/v1/media/download) it should pass "resourceName" Thus, it breaks every time and can never download media.
Error: `ERROR - Got an unexpected keyword argument "resource_name"`
**What you expected to happen**: The hook should pass in the correct resourceName and then download the media file.
**How to reproduce it**: Run any workflow that tries to download any DV360 media.
**Anything else we need to know**: I have written a patch that fixes the issue and will submit it shortly. | https://github.com/apache/airflow/issues/14077 | https://github.com/apache/airflow/pull/20528 | af4a2b0240fbf79a0a6774a9662243050e8fea9c | a6e60ce25d9f3d621a7b4089834ca5e50cd123db | 2021-02-04T16:35:25Z | python | 2021-12-30T12:48:55Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,075 | ["airflow/providers/google/marketing_platform/hooks/display_video.py", "airflow/providers/google/marketing_platform/operators/display_video.py", "tests/providers/google/marketing_platform/operators/test_display_video.py"] | GoogleDisplayVideo360SDFtoGCSOperator does not pass the correct resource_name to the download_media hook | **Apache Airflow version**: 1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**: Google Cloud Composer 1.13.3
- **Cloud provider or hardware configuration**:
- Google Cloud Composer
**What happened**:
The GoogleDisplayVideo360SDFtoGCSOperator is not able to download media. The operator calls the [download_media hook](https://github.com/apache/airflow/blob/10c026cb7a7189d9573f30f2f2242f0f76842a72/airflow/providers/google/marketing_platform/hooks/display_video.py#L237) and should pass in the name of the media resource to download. However, it is currently passing in the full resource object. This breaks the API call to download_media. The [API spec is here](https://developers.google.com/display-video/api/reference/rest/v1/media/download), for reference. Thus, the operator will fail every time.
An example error -- as you can see, it's requesting the full object in the path instead of just "sdfdownloadtasks/media/25447314'":
`ERROR - <HttpError 404 when requesting https://displayvideo.googleapis.com/download/%7B'name':%20'sdfdownloadtasks/operations/25447314',%20'metadata':%20%7B'@type':%20'type.googleapis.com/google.ads.displayvideo.v1.SdfDownloadTaskMetadata',%20'createTime':%20'2021-02-03T16:57:20.950Z',%20'endTime':%20'2021-02-03T16:57:52.898Z',%20'version':%20'SDF_VERSION_5_1'%7D,%20'done':%20True,%20'response':%20%7B'@type':%20'type.googleapis.com/google.ads.displayvideo.v1.SdfDownloadTask',%20'resourceName':%20'sdfdownloadtasks/media/25447314'%7D%7D?alt=media returned "Resource "{'name': 'sdfdownloadtasks/operations/25447314', 'metadata': {'@type': 'type.googleapis.com/google.ads.displayvideo.v1.SdfDownloadTaskMetadata', 'createTime': '2021-02-03T16:57:20.950Z', 'endTime': '2021-02-03T16:57:52.898Z', 'version': 'SDF_VERSION_5_1'}, 'done': True, 'response': {'@type': 'type.googleapis.com/google.ads.displayvideo.v1.SdfDownloadTask', 'resourceName': 'sdfdownloadtasks/media/25447314'}}" cannot be found.">`
**What you expected to happen**: The GoogleDisplayVideo360SDFtoGCSOperator should only pass in the resourceName and correctly download the file.
**How to reproduce it**: Run any workflow requesting to download an SDF file.
**Anything else we need to know**: I have written a patch and will submit it shortly. | https://github.com/apache/airflow/issues/14075 | https://github.com/apache/airflow/pull/22479 | 0f0a1a7d22dffab4487c35d3598b3b6aaf24c4c6 | 38fde2ea795f69ebd5f4ecc5668e162ce4694ac4 | 2021-02-04T16:21:54Z | python | 2022-03-23T13:38:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,071 | ["airflow/providers/jenkins/operators/jenkins_job_trigger.py", "tests/providers/jenkins/operators/test_jenkins_job_trigger.py"] | Add support for UNSTABLE Jenkins status | **Description**
Don't mark dag as `failed` when `UNSTABLE` status received from Jenkins.
It can be done by adding `allow_unstable: bool` or `success_status_values: list` parameter to `JenkinsJobTriggerOperator.__init__`. For now `SUCCESS` status is hardcoded, any other lead to fail.
**Use case / motivation**
I want to restart a job (`retries` parameter) only if I get `FAILED` status. `UNSTABLE` is okay for me and it's no need to restart.
**Are you willing to submit a PR?**
Yes
**Related Issues**
No
| https://github.com/apache/airflow/issues/14071 | https://github.com/apache/airflow/pull/14131 | f180fa13bf2a0ffa31b30bb21468510fe8a20131 | 78adaed5e62fa604d2ef2234ad560eb1c6530976 | 2021-02-04T15:20:47Z | python | 2021-02-08T21:43:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,054 | ["airflow/providers/samba/hooks/samba.py", "docs/apache-airflow-providers-samba/index.rst", "setup.py", "tests/providers/samba/hooks/test_samba.py"] | SambaHook using old unmaintained library |
**Description**
The [SambaHook](https://github.com/apache/airflow/blob/master/airflow/providers/samba/hooks/samba.py#L26) currently using [pysmbclient](https://github.com/apache/airflow/blob/master/setup.py#L408) this library hasn't been updated since 2017 https://pypi.org/project/PySmbClient/
I think worth moving to https://pypi.org/project/smbprotocol/ which newer and maintained.
| https://github.com/apache/airflow/issues/14054 | https://github.com/apache/airflow/pull/17273 | 6cc252635db6af6b0b4e624104972f0567f21f2d | f53dace36c707330e01c99204e62377750a5fb1f | 2021-02-03T23:05:43Z | python | 2021-08-01T21:38:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,051 | ["docs/build_docs.py", "docs/exts/docs_build/spelling_checks.py", "docs/spelling_wordlist.txt"] | Docs Builder creates SpellingError for Sphinx error unrelated to spelling issues | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**:
- **Cloud provider or hardware configuration**: n/a
- **OS** (e.g. from /etc/os-release): n/a
- **Kernel** (e.g. `uname -a`): n/a
- **Install tools**: n/a
- **Others**: n/a
**What happened**:
A Sphinx warning unrelated to spelling, raised while running `sphinx-build`, resulted in an instance of `SpellingError` and caused a docs build failure.
```
SpellingError(
file_path=None,
line_no=None,
spelling=None,
suggestion=None,
context_line=None,
message=(
f"Sphinx spellcheck returned non-zero exit status: {completed_proc.returncode}."
)
)
# sphinx.errors.SphinxWarning: /opt/airflow/docs/apache-airflow-providers-google/_api/drive/index.rst:document isn't included in any toctree
```
The actual issue was that I failed to include an `__init__.py` file in a directory that I created.
**What you expected to happen**:
I think an exception unrelated to spelling should be raised: preferably one indicating that a directory is missing an `__init__.py` file, but at least a generic error that is not a spelling error.
**How to reproduce it**:
Create a new plugin directory (e.g. `airflow/providers/google/suite/sensors`) and don't include an `__init__.py` file, and run `./breeze build-docs -- --docs-only -v`
**Anything else we need to know**:
I'm specifically referring to lines 139 to 150 in `docs/exts/docs_build/docs_builder.py`
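A sketch of the kind of change I have in mind for that block (the names `spelling_errors`, `build_errors` and `DocBuildError` are illustrative here, not necessarily the existing symbols):
```python
if completed_proc.returncode != 0 and not spelling_errors:
    # Sphinx failed for a reason other than spelling (e.g. a toctree warning),
    # so report a generic build error instead of a SpellingError.
    build_errors.append(
        DocBuildError(
            file_path=None,
            line_no=None,
            message=f"Sphinx spellcheck returned non-zero exit status: {completed_proc.returncode}.",
        )
    )
```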
| https://github.com/apache/airflow/issues/14051 | https://github.com/apache/airflow/pull/14196 | e31b27d593f7379f38ced34b6e4ce8947b91fcb8 | cb4a60e9d059eeeae02909bb56a348272a55c233 | 2021-02-03T16:46:25Z | python | 2021-02-12T23:46:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,050 | ["airflow/jobs/scheduler_job.py", "airflow/serialization/serialized_objects.py", "tests/jobs/test_scheduler_job.py", "tests/serialization/test_dag_serialization.py"] | SLA mechanism does not work | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
I have the following DAG:
```py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
with DAG(
dag_id="sla_trigger",
schedule_interval="* * * * *",
start_date=datetime(2020, 2, 3),
) as dag:
BashOperator(
task_id="bash_task",
bash_command="sleep 30",
sla=timedelta(seconds=2),
)
```
In my understanding this DAG should result in an SLA miss every time it is triggered (every minute). However, after a few minutes of running I don't see any SLA misses...
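For reference, one way to make misses easier to observe is an SLA-miss callback; `sla_miss_callback` is an existing DAG argument, and the callback body below is just illustrative logging (sketch, reusing the imports from the DAG above):
```python
def log_sla_miss(dag, task_list, blocking_task_list, slas, blocking_tis):
    print(f"SLA missed in {dag.dag_id}: {task_list}")

with DAG(
    dag_id="sla_trigger",
    schedule_interval="* * * * *",
    start_date=datetime(2020, 2, 3),
    sla_miss_callback=log_sla_miss,
) as dag:
    BashOperator(
        task_id="bash_task",
        bash_command="sleep 30",
        sla=timedelta(seconds=2),
    )
```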
**What you expected to happen**:
I expect to see an SLA miss when the task takes longer than expected.
**How to reproduce it**:
Use the dag from above.
**Anything else we need to know**:
N/A
| https://github.com/apache/airflow/issues/14050 | https://github.com/apache/airflow/pull/14056 | 914e9ce042bf29dc50d410f271108b1e42da0add | 604a37eee50715db345c5a7afed085c9afe8530d | 2021-02-03T14:58:32Z | python | 2021-02-04T01:59:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,046 | ["airflow/www/templates/airflow/tree.html"] | Day change flag is in wrong place | **Apache Airflow version**: 2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
In tree view, the "day marker" is shifted and the last dagrun of the previous day is included in the new day. See:
<img width="398" alt="Screenshot 2021-02-03 at 14 14 55" src="https://user-images.githubusercontent.com/9528307/106752180-7014c100-662a-11eb-9342-661a237ed66c.png">
The tooltip is on the 4th dagrun, but the day flag is on the same line as the 3rd one.
**What you expected to happen**:
I expect to see the day flag between the two days, not earlier.
**How to reproduce it**:
Create a DAG with `schedule_interval="5 8-23/1 * * *"`
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14046 | https://github.com/apache/airflow/pull/14141 | 0f384f0644c8cfe55ca4c75d08b707be699b440f | 6dc6339635f41a9fa50a987c4fdae5af0bae9fdc | 2021-02-03T13:19:58Z | python | 2021-02-12T18:50:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,045 | ["docs/apache-airflow/stable-rest-api-ref.rst"] | Inexistant reference in docs/apache-airflow/stable-rest-api-ref.rst |
**Apache Airflow version**: 2.1.0.dev0 (master branch)
**What happened**:
in file `docs/apache-airflow/stable-rest-api-ref.rst` there is a reference to a file that no longer exists: `/docs/exts/sphinx_redoc.py`.
The whole text:
```
It's a stub file. It will be converted automatically during the build process
to the valid documentation by the Sphinx plugin. See: /docs/exts/sphinx_redoc.py
```
**What you expected to happen**:
A reference to `docs/conf.py`, which I think is where the contents are now replaced during the build process.
**How to reproduce it**:
Go to the file in question.
**Anything else we need to know**:
I would've made a PR, but I'm not 100% sure this is wrong or whether I simply can't find the referenced file.
| https://github.com/apache/airflow/issues/14045 | https://github.com/apache/airflow/pull/14079 | 2bc9b9ce2b9fdca2d29565fc833ddc3a543daaa7 | e8c7dc3f7a81fb3a7179e154920b2350f4e992c6 | 2021-02-03T11:48:55Z | python | 2021-02-05T12:50:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,010 | ["airflow/www/templates/airflow/task.html"] | Order of items not preserved in Task instance view | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
The order of items is not preserved in Task Instance information:
<img width="542" alt="Screenshot 2021-02-01 at 16 49 09" src="https://user-images.githubusercontent.com/9528307/106482104-6a45a100-64ad-11eb-8d2f-e478c267bce9.png">
<img width="542" alt="Screenshot 2021-02-01 at 16 49 43" src="https://user-images.githubusercontent.com/9528307/106482167-7df10780-64ad-11eb-9434-ba3e54d56dec.png">
**What you expected to happen**:
I expect the order to always be the same. Otherwise the UX is bad.
**How to reproduce it**:
Seems to happen randomly. But once seen the order is then consistent for given TI.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14010 | https://github.com/apache/airflow/pull/14036 | 68758b826076e93fadecf599108a4d304dd87ac7 | fc67521f31a0c9a74dadda8d5f0ac32c07be218d | 2021-02-01T15:51:38Z | python | 2021-02-05T15:38:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,989 | ["airflow/providers/telegram/operators/telegram.py", "tests/providers/telegram/operators/test_telegram.py"] | AttributeError: 'TelegramOperator' object has no attribute 'text' | Hi there 👋
I was playing with the **TelegramOperator** and stumbled upon a bug with the `text` field. It is supposed to be a template field, but in reality the instance of the **TelegramOperator** does not have this attribute, so every time I try to execute the code I get the error:
> AttributeError: 'TelegramOperator' object has no attribute 'text'
```python
TelegramOperator(
task_id='send_message_telegram',
telegram_conn_id='telegram_conn_id',
text='Hello from Airflow!'
)
``` | https://github.com/apache/airflow/issues/13989 | https://github.com/apache/airflow/pull/13990 | 9034f277ef935df98b63963c824ba71e0dcd92c7 | 106d2c85ec4a240605830bf41962c0197b003135 | 2021-01-30T19:25:35Z | python | 2021-02-10T12:06:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,988 | ["airflow/www/utils.py", "airflow/www/views.py"] | List and Dict template fields are rendered as JSON. | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**: Linux
- **Cloud provider or hardware configuration**: amd64
- **OS** (e.g. from /etc/os-release): Centos 7
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip
- **Others**:
**What happened**:
The field `sql` is rendered as a serialized json `["select 1 from dual", "select 2 from dual"]` instead of a list of syntax-highlighted SQL statements.

**What you expected to happen**:
`lists` and `dicts` should be rendered as lists and dicts rather than serialized json unless the `template_field_renderer` is `json`

**How to reproduce it**:
```
from airflow import DAG
from airflow.providers.oracle.operators.oracle import OracleOperator
with DAG("demo", default_args={owner='airflow'}, start_date= pendulum.yesterday(), schedule_interval='@daily',) as dag:
OracleOperator(task_id='single', sql='select 1 from dual')
OracleOperator(task_id='list', sql=['select 1 from dual', 'select 2 from dual'])
```
**Anything else we need to know**:
Introduced by #11061.
A quick and dirty work-around:
Edit file [airflow/www/views.py](https://github.com/PolideaInternal/airflow/blob/13ba1ec5494848d4a54b3291bd8db5841bfad72e/airflow/www/views.py#L673)
```
if renderer in renderers:
- if isinstance(content, (dict, list)):
+ if isinstance(content, (dict, list)) and renderer is renderers['json']:
content = json.dumps(content, sort_keys=True, indent=4)
html_dict[template_field] = renderers[renderer](content)
``` | https://github.com/apache/airflow/issues/13988 | https://github.com/apache/airflow/pull/14024 | 84ef24cae657babe3882d7ad6eecc9be9967e08f | e2a06a32c87d99127d098243b311bd6347ff98e9 | 2021-01-30T19:04:12Z | python | 2021-02-04T08:01:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,985 | ["airflow/www/static/js/connection_form.js"] | Can't save any connection if provider-provided connection form widgets have fields marked as InputRequired | **Apache Airflow version**: 2.0.0 with the following patch: https://github.com/apache/airflow/pull/13640
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: AMD Ryzen 3900X (12C/24T), 64GB RAM
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04.1 LTS
- **Kernel** (e.g. `uname -a`): 5.9.8-050908-generic
- **Install tools**: N/A
- **Others**: N/A
**What happened**:
If there are custom hooks that implement the `get_connection_form_widgets` method that return fields using the `InputRequired` validator, saving breaks for all types of connections on the "Edit Connections" page.
In Chrome, the following message is logged to the browser console:
```
An invalid form control with name='extra__hook_name__field_name' is not focusable.
```
This happens because the field is marked as `<input required>` but is hidden using CSS when the connection type exposed by the custom hook is not selected.
**What you expected to happen**:
Should be able to save other types of connections.
In particular, either one of the following should happen:
1. The fields not belonging to the currently selected connection type should not just be hidden using CSS, but should be removed from the DOM entirely.
2. Remove the `required` attribute if the form field is hidden.
**How to reproduce it**:
Create a provider, and add a hook with something like:
```python
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form."""
return {
'extra__my_hook__client_id': StringField(
lazy_gettext('OAuth2 Client ID'),
widget=BS3TextFieldWidget(),
validators=[wtforms.validators.InputRequired()],
),
}
```
Go to the Airflow Web UI, click the "Add" button in the connection list page, then choose a connection type that's not the type exposed by the custom hook. Fill in details and click "Save".
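Until the form handles hidden required fields, one interim workaround on the provider side is to avoid `InputRequired` and validate in the hook instead (sketch mirroring the snippet above; imports are the same as there):
```python
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
    """Same widget as above, but with an Optional validator as a workaround."""
    return {
        'extra__my_hook__client_id': StringField(
            lazy_gettext('OAuth2 Client ID'),
            widget=BS3TextFieldWidget(),
            validators=[wtforms.validators.Optional()],
        ),
    }
```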
**Anything else we need to know**: N/A
| https://github.com/apache/airflow/issues/13985 | https://github.com/apache/airflow/pull/14052 | f9c9e9c38f444a39987478f3d1a262db909de8c4 | 98bbe5aec578a012c1544667bf727688da1dadd4 | 2021-01-30T16:21:53Z | python | 2021-02-11T13:59:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,971 | ["UPDATING.md", "airflow/www/app.py", "tests/www/test_app.py"] | airflow webserver error when updated to airflow 2.0 |
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: MAC
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: PIP3
- **Others**:
**What happened**:
```
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.9.0
Airflow version: 2.0.0
Node: 192-168-1-101.tpgi.com.au
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1953, in full_dispatch_request
return self.finalize_request(rv)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1970, in finalize_request
response = self.process_response(response)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2269, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/usr/local/lib/python3.9/site-packages/flask/sessions.py", line 379, in save_session
response.set_cookie(
File "/usr/local/lib/python3.9/site-packages/werkzeug/wrappers/base_response.py", line 468, in set_cookie
dump_cookie(
File "/usr/local/lib/python3.9/site-packages/werkzeug/http.py", line 1217, in dump_cookie
raise ValueError("SameSite must be 'Strict', 'Lax', or 'None'.")
ValueError: SameSite must be 'Strict', 'Lax', or 'None'.
```
**How to reproduce it**:
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/13971 | https://github.com/apache/airflow/pull/14183 | 61b613359e2394869070b3ad94f64dfda3efac74 | 4336f4cfdbd843085672b8e49367cf1b9ab4a432 | 2021-01-29T08:05:21Z | python | 2021-02-11T00:20:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,924 | ["scripts/in_container/_in_container_utils.sh"] | Improve error messages and propagation in CI builds | Airflow version: dev
The error information in `Backport packages: wheel` is not that easy to find.
Here is the end of the step that failed and end of its log:
<img width="1151" alt="Screenshot 2021-01-27 at 12 02 01" src="https://user-images.githubusercontent.com/9528307/105982515-aa64e800-6097-11eb-91c8-9911448d1301.png">
but in fact the error happened some 500 lines earlier:
<img width="1151" alt="Screenshot 2021-01-27 at 12 01 47" src="https://user-images.githubusercontent.com/9528307/105982504-a769f780-6097-11eb-8873-02c1d9b2d670.png">
**What you expect to happen?**
I would expect the error to appear at the end of the step. Otherwise the message `The previous step completed with error. Please take a look at output above ` is slightly misleading.
| https://github.com/apache/airflow/issues/13924 | https://github.com/apache/airflow/pull/15190 | 041a09f3ee6bc447c3457b108bd5431a2fd70ad9 | 7c17bf0d1e828b454a6b2c7245ded275b313c792 | 2021-01-27T11:07:09Z | python | 2021-04-04T20:20:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,918 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator with pod_template_file = No Metadata & Wrong Pod Name | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** 1.15.15
**What happened**:
If you use the **KubernetesPodOperator** with the **LocalExecutor** and a **pod_template_file**, the pod created doesn't have metadata like:
- dag_id
- task_id
- ...
I want to have a pod with ``privileged_escalation=True``, launched by a KubernetesPodOperator but without the KubernetesExecutor. Is it possible?
**What you expected to happen**:
Have the pod launched with privilege escalation, the metadata, and a correct pod-name override.
**How to reproduce it**:
* have a pod template file :
**privileged_runner.yaml** :
```yaml
apiVersion: v1
kind: Pod
metadata:
name: privileged-pod
spec:
containers:
- name: base
securityContext:
allowPrivilegeEscalation: true
privileged: true
```
* have a DAG file with KubernetesOperator in it :
**my-dag.py** :
```python
##=========================================================================================##
## CONFIGURATION
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.kubernetes.secret import Secret
from kubernetes.client import models as k8s
from airflow.models import Variable
from datetime import datetime, timedelta
from airflow import DAG
env = Variable.get("process_env")
namespace = Variable.get("namespace")
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
##==============================##
## DAG definition
dag = DAG(
'transfert-files-to-nexus',
start_date=datetime.utcnow(),
schedule_interval="0 2 * * *",
default_args=default_args,
max_active_runs=1
)
##=========================================================================================##
## Task definitions
start = DummyOperator(task_id='start', dag=dag)
end = DummyOperator(task_id='end', dag=dag)
transfertfile = KubernetesPodOperator(namespace=namespace,
task_id="transfertfile",
name="transfertfile",
image="registrygitlab.fr/docker-images/python-runner:1.8.22",
image_pull_secrets="registrygitlab-curie",
pod_template_file="/opt/bitnami/airflow/dags/git-airflow-dags/privileged_runner.yaml",
is_delete_operator_pod=False,
get_logs=True,
dag=dag)
## Task chaining
start >> transfertfile >> end
```
**Anything else we need to know**:
I know that we have to use the ``KubernetesExecutor`` in order to have the **metadata**, but even if you use the ``KubernetesExecutor``, the fact that you have to use the **pod_template_file** for the ``KubernetesPodOperator`` makes no change, because in either ``LocalExecutor`` / ``KubernetesExecutor``you will endup with no pod name override correct & metadata. | https://github.com/apache/airflow/issues/13918 | https://github.com/apache/airflow/pull/15492 | def1e7c5841d89a60f8972a84b83fe362a6a878d | be421a6b07c2ae9167150b77dc1185a94812b358 | 2021-01-26T20:27:09Z | python | 2021-04-23T22:54:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,905 | ["setup.py"] | DockerOperator fails to pull an image | **Apache Airflow version**: 2.0
**Environment**:
- **OS** (from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (`uname -a`): Linux 37365fa0b59b 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 GNU/Linux
- **Others**: running inside a docker container, forked puckel/docker-airflow
**What happened**:
`DockerOperator` does not attempt to pull an image unless force_pull is set to True, instead displaying a misleading 404 error.
**What you expected to happen**:
`DockerOperator` should attempt to pull an image when it is not present locally.
**How to reproduce it**:
Make sure you don't have an image tagged `debian:buster-slim` present locally.
```
DockerOperator(
task_id=f'try_to_pull_debian',
image='debian:buster-slim',
command=f'''echo hello''',
force_pull=False
)
```
prints: `{taskinstance.py:1396} ERROR - 404 Client Error: Not Found ("No such image: ubuntu:latest")`
This, on the other hand:
```
DockerOperator(
task_id=f'try_to_pull_debian',
image='debian:buster-slim',
command=f'''echo hello''',
force_pull=True
)
```
pulls the image and prints `{docker.py:263} INFO - hello`
**Anything else we need to know**:
I overrode `DockerOperator` to track down what I was doing wrong and found the following:
When trying to run an image that's not present locally, `self.cli.images(name=self.image)` in the line:
https://github.com/apache/airflow/blob/8723b1feb82339d7a4ba5b40a6c4d4bbb995a4f9/airflow/providers/docker/operators/docker.py#L286
returns a non-empty array even when the image has been deleted from the local machine.
In fact, `self.cli.images` appears to return non-empty arrays even when supplied with nonsense image names.
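A possible stop-gap, sketched as a custom subclass (this is not the provider's actual fix; it overrides the internal `_run_image` step that appears in the traceback below, so it may break across versions):
```python
from docker.errors import ImageNotFound

from airflow.providers.docker.operators.docker import DockerOperator


class PullIfMissingDockerOperator(DockerOperator):
    def _run_image(self):
        try:
            return super()._run_image()
        except ImageNotFound:
            self.log.info("Image %s not found locally, pulling it", self.image)
            for line in self.cli.pull(self.image, stream=True, decode=True):
                self.log.info(line.get("status", ""))
            return super()._run_image()
```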
<details><summary>force_pull_false.log</summary>
[2021-01-27 06:15:28,987] {__init__.py:124} DEBUG - Preparing lineage inlets and outlets
[2021-01-27 06:15:28,987] {__init__.py:168} DEBUG - inlets: [], outlets: []
[2021-01-27 06:15:28,987] {config.py:21} DEBUG - Trying paths: ['/usr/local/airflow/.docker/config.json', '/usr/local/airflow/.dockercfg']
[2021-01-27 06:15:28,987] {config.py:25} DEBUG - Found file at path: /usr/local/airflow/.docker/config.json
[2021-01-27 06:15:28,987] {auth.py:182} DEBUG - Found 'auths' section
[2021-01-27 06:15:28,988] {auth.py:142} DEBUG - Found entry (registry='https://index.docker.io/v1/', username='xxxxxxx')
[2021-01-27 06:15:29,015] {connectionpool.py:433} DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 851
[2021-01-27 06:15:29,060] {connectionpool.py:433} DEBUG - http://localhost:None "GET /v1.41/images/json?filter=debian%3Abuster-slim&only_ids=0&all=0 HTTP/1.1" 200 None
[2021-01-27 06:15:29,060] {docker.py:224} INFO - Starting docker container from image debian:buster-slim
[2021-01-27 06:15:29,063] {connectionpool.py:433} DEBUG - http://localhost:None "POST /v1.41/containers/create HTTP/1.1" 404 48
[2021-01-27 06:15:29,063] {taskinstance.py:1396} ERROR - 404 Client Error: Not Found ("No such image: debian:buster-slim")
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 305, in execute
return self._run_image()
File "/usr/local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 231, in _run_image
self.container = self.cli.create_container(
File "/usr/local/lib/python3.8/site-packages/docker/api/container.py", line 427, in create_container
return self.create_container_from_config(config, name)
File "/usr/local/lib/python3.8/site-packages/docker/api/container.py", line 438, in create_container_from_config
return self._result(res, True)
File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("No such image: debian:buster-slim")
</details>
<details><summary>force_pull_true.log</summary>
[2021-01-27 06:17:01,811] {__init__.py:124} DEBUG - Preparing lineage inlets and outlets
[2021-01-27 06:17:01,811] {__init__.py:168} DEBUG - inlets: [], outlets: []
[2021-01-27 06:17:01,811] {config.py:21} DEBUG - Trying paths: ['/usr/local/airflow/.docker/config.json', '/usr/local/airflow/.dockercfg']
[2021-01-27 06:17:01,811] {config.py:25} DEBUG - Found file at path: /usr/local/airflow/.docker/config.json
[2021-01-27 06:17:01,811] {auth.py:182} DEBUG - Found 'auths' section
[2021-01-27 06:17:01,812] {auth.py:142} DEBUG - Found entry (registry='https://index.docker.io/v1/', username='xxxxxxxxx')
[2021-01-27 06:17:01,825] {connectionpool.py:433} DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 851
[2021-01-27 06:17:01,826] {docker.py:287} INFO - Pulling docker image debian:buster-slim
[2021-01-27 06:17:01,826] {auth.py:41} DEBUG - Looking for auth config
[2021-01-27 06:17:01,826] {auth.py:242} DEBUG - Looking for auth entry for 'docker.io'
[2021-01-27 06:17:01,826] {auth.py:250} DEBUG - Found 'https://index.docker.io/v1/'
[2021-01-27 06:17:01,826] {auth.py:54} DEBUG - Found auth config
[2021-01-27 06:17:04,399] {connectionpool.py:433} DEBUG - http://localhost:None "POST /v1.41/images/create?tag=buster-slim&fromImage=debian HTTP/1.1" 200 None
[2021-01-27 06:17:04,400] {docker.py:301} INFO - buster-slim: Pulling from library/debian
[2021-01-27 06:17:04,982] {docker.py:301} INFO - a076a628af6f: Pulling fs layer
[2021-01-27 06:17:05,884] {docker.py:301} INFO - a076a628af6f: Downloading
[2021-01-27 06:17:11,429] {docker.py:301} INFO - a076a628af6f: Verifying Checksum
[2021-01-27 06:17:11,429] {docker.py:301} INFO - a076a628af6f: Download complete
[2021-01-27 06:17:11,480] {docker.py:301} INFO - a076a628af6f: Extracting
</details> | https://github.com/apache/airflow/issues/13905 | https://github.com/apache/airflow/pull/15731 | 7933aaf07f5672503cfd83361b00fda9d4c281a3 | 41930fdebfaa7ed2c53e7861c77a83312ca9bdc4 | 2021-01-26T05:49:03Z | python | 2021-05-09T21:05:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,891 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/migrations/versions/2c6edca13270_resource_based_permissions.py", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/security/access-control.rst", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/www/test_views.py"] | RBAC Granular DAG Permissions don't work as intended | Previous versions (before 2.0) allowed for granular can_edit DAG permissions so that different user groups can trigger different DAGs and access control is more specific. Since 2.0 it seems that this does not work as expected.
How to reproduce:
Create a copy of the VIEWER role, try adding it can dag edit on a specific DAG. **Expected Result:** user can trigger said DAG. **Actual Result:** user access is denied.
It seems to be a new parameter was added: **can create on DAG runs** and without it the user cannot run DAGs, however, with it, the user can run all DAGs without limitations and I believe this is an unintended use.
| https://github.com/apache/airflow/issues/13891 | https://github.com/apache/airflow/pull/13922 | 568327f01a39d6f181dda62ef6a143f5096e6b97 | 629abfdbab23da24ca45996aaaa6e3aa094dd0de | 2021-01-25T13:55:12Z | python | 2021-02-03T03:16:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,877 | ["airflow/migrations/versions/cf5dc11e79ad_drop_user_and_chart.py"] | Upgrading 1.10 sqlite database in 2.0 fails | While it is not an important case it might be annoying to users that if they used airflow 1.10 with sqlite, the migration to 2.0 will fail on dropping constraints in `known_event` table.
It would be great to provide some more useful message then asking the user to remove the sqlite database.
```
[2021-01-24 08:38:42,015] {db.py:678} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 03afc6b6f902 -> cf5dc11e79ad, drop_user_and_chart
Traceback (most recent call last):
File "/Users/vijayantsoni/.virtualenvs/airflow/bin/airflow", line 11, in <module>
sys.exit(main())
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 31, in initdb
db.initdb()
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/utils/db.py", line 549, in initdb
upgradedb()
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/utils/db.py", line 688, in upgradedb
command.upgrade(config, 'heads')
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/command.py", line 294, in upgrade
script.run_env()
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/script/base.py", line 481, in run_env
util.load_python_file(self.dir, "env.py")
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
module = load_module_py(module_id, path)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/util/compat.py", line 182, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/migrations/env.py", line 108, in <module>
run_migrations_online()
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/migrations/env.py", line 102, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/runtime/environment.py", line 813, in run_migrations
self.get_context().run_migrations(**kw)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/runtime/migration.py", line 560, in run_migrations
step.migration_fn(**kw)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/airflow/migrations/versions/cf5dc11e79ad_drop_user_and_chart.py", line 49, in upgrade
op.drop_constraint('known_event_user_id_fkey', 'known_event')
File "<string>", line 8, in drop_constraint
File "<string>", line 3, in drop_constraint
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/operations/ops.py", line 148, in drop_constraint
return operations.invoke(op)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/operations/base.py", line 354, in invoke
return fn(self, operation)
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/operations/toimpl.py", line 160, in drop_constraint
operations.impl.drop_constraint(
File "/Users/vijayantsoni/.virtualenvs/airflow/lib/python3.8/site-packages/alembic/ddl/sqlite.py", line 52, in drop_constraint
raise NotImplementedError(
NotImplementedError: No support for ALTER of constraints in SQLite dialectPlease refer to the batch mode feature which allows for SQLite migrations using a copy-and-move strategy.
``` | https://github.com/apache/airflow/issues/13877 | https://github.com/apache/airflow/pull/13921 | df11a1d7dcc4e454b99a71805c133c3d15c197dc | 7f45e62fdf1dd5df50f315a4ab605b619d4b848c | 2021-01-24T17:30:47Z | python | 2021-01-29T19:37:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,843 | ["airflow/api_connexion/endpoints/log_endpoint.py", "tests/api_connexion/endpoints/test_log_endpoint.py"] | Task not found exception in get logs api |
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)**: NA
**Environment**: Docker
- **OS**: CentOS Linux 7 (Core)
- **Python version**: 3.6.8
**What happened**:
Every time I call the get_log API (https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_log) to get the logs of a task instance that is no longer in the DAG, I get a TaskNotFound exception.
```Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib64/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib64/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 384, in wrapper
return function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/response.py", line 103, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
return function(**kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/security.py", line 47, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/endpoints/log_endpoint.py", line 74, in get_log
ti.task = dag.get_task(ti.task_id)
File "/usr/local/lib/python3.6/site-packages/airflow/models/dag.py", line 1527, in get_task
raise TaskNotFound(f"Task {task_id} not found")
airflow.exceptions.TaskNotFound: Task 0-1769e47c-5933-42f9-ac59-b59c7de13382 not found
```
**What you expected to happen**:
Even if the task is not in the DAG now, I expect to be able to get its log from a past run.
**How to reproduce it**:
Create a DAG with a few tasks and run it. Then remove a task from the DAG and try to get the log of the removed task for that past run using the API.
**Anything else we need to know**:
The problem is that in https://github.com/apache/airflow/blob/master/airflow/api_connexion/endpoints/log_endpoint.py at line 73 there is a call to get the task from the current DAG without catching the TaskNotFound exception.
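A sketch of such a guard (`TaskNotFound` comes from `airflow.exceptions`; the surrounding code is paraphrased from the endpoint):
```python
from airflow.exceptions import TaskNotFound

try:
    ti.task = dag.get_task(ti.task_id)
except TaskNotFound:
    # the task was removed from the DAG, but its stored logs should still be readable
    pass
```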
| https://github.com/apache/airflow/issues/13843 | https://github.com/apache/airflow/pull/13872 | f473ca7130f844bc59477674e641b42b80698bb7 | dfbccd3b1f62738e0d5be15a9d9485976b4d8756 | 2021-01-22T16:47:51Z | python | 2021-01-24T13:49:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,805 | ["airflow/cli/commands/task_command.py"] | Could not get scheduler_job_id | **Apache Airflow version:**
2.0.0
**Kubernetes version (if you are using kubernetes) (use kubectl version):**
1.18.3
**Environment:**
Cloud provider or hardware configuration: AWS
**What happened:**
When trying to run a DAG, it gets scheduled, but task is never run. When attempting to run task manually, it shows an error:
```
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.8.7
Airflow version: 2.0.0
Node: airflow-web-ffdd89d6-h98vj
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 1366, in run
executor.start()
File "/usr/local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 493, in start
raise AirflowException("Could not get scheduler_job_id")
airflow.exceptions.AirflowException: Could not get scheduler_job_id
```
**What you expected to happen:**
The task to be run successfully without errors.
**How to reproduce it:**
Haven't pinpointed what causes the issue, besides an attempted upgrade from Airflow 1.10.14 to Airflow 2.0.0
**Anything else we need to know:**
This error is encountered in an upgrade of Airflow from 1.10.14 to Airflow 2.0.0
EDIT: Formatted to fit the issue template | https://github.com/apache/airflow/issues/13805 | https://github.com/apache/airflow/pull/16108 | 436e0d096700c344e7099693d9bf58e12658f9ed | cdc9f1a33854254607fa81265a323cf1eed6d6bb | 2021-01-21T10:09:05Z | python | 2021-05-27T12:50:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,799 | ["airflow/migrations/versions/8646922c8a04_change_default_pool_slots_to_1.py", "airflow/models/taskinstance.py"] | Scheduler crashes when unpausing some dags with: TypeError: '>' not supported between instances of 'NoneType' and 'int' | **Apache Airflow version**:
2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
1.15
**Environment**:
- **Cloud provider or hardware configuration**:
GKE
- **OS** (e.g. from /etc/os-release):
Ubuntu 18.04
**What happened**:
I just migrated from 1.10.14 to 2.0.0. When I turn on some random dags, the scheduler crashes with the following error:
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1377, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1533, in _do_scheduling
num_queued_tis = self._critical_section_execute_task_instances(session=session)
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1132, in _critical_section_execute_task_instances
queued_tis = self._executable_task_instances_to_queued(max_tis, session=session)
File "/usr/local/lib/python3.6/dist-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1034, in _executable_task_instances_to_queued
if task_instance.pool_slots > open_slots:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
```
**What you expected to happen**:
I expected those dags would have their tasks scheduled without problems.
**How to reproduce it**:
Can't reproduce it yet. Still trying to figure out if this happens only with specific dags or not.
**Anything else we need to know**:
I couldn't find in which context `task_instance.pool_slots` could be None
| https://github.com/apache/airflow/issues/13799 | https://github.com/apache/airflow/pull/14406 | c069e64920da780237a1e1bdd155319b007a2587 | f763b7c3aa9cdac82b5d77e21e1840fbe931257a | 2021-01-20T22:08:00Z | python | 2021-02-25T02:56:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,797 | ["airflow/sentry.py", "airflow/utils/session.py", "tests/utils/test_session.py"] | Sentry celery dag task run error |
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Centos7
- **Kernel** (e.g. `uname -a`): Linux 3.10.0-693.5.2.el7.x86_64
- **Install tools**: celery==4.4.0, sentry-sdk==0.19.5
- **Others**: python 3.6.8
**What happened**:
We see this in the sentry error logs randomly for all dag tasks:
`TypeError in airflow.executors.celery_executor.execute_command`
```
TypeError: _run_mini_scheduler_on_child_tasks() got multiple values for argument 'session'
File "airflow/sentry.py", line 159, in wrapper
return func(task_instance, *args, session=session, **kwargs)
```
**What you expected to happen**:
No error in sentry.
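For context, the wrapper line in the snippet above injects `session` explicitly, so the error suggests the wrapped callable already received one; a defensive sketch (the decorator name is made up and this is not the actual `airflow/sentry.py` code) could look like:
```python
from airflow import settings


def tag_with_sentry(func):
    def wrapper(task_instance, *args, **kwargs):
        # reuse a session supplied by the caller instead of injecting a second one
        session = kwargs.pop("session", None) or settings.Session()
        return func(task_instance, *args, session=session, **kwargs)

    return wrapper
```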
**How to reproduce it**:
Schedule or manually run a DAG task such as a PythonOperator.
The error message will appear when Airflow runs the DAG task.
The error will not appear in the Airflow webserver logs, only on the Sentry server.
**Anything else we need to know**:
N/A
| https://github.com/apache/airflow/issues/13797 | https://github.com/apache/airflow/pull/13929 | 24aa3bf02a2f987a68d1ff5579cbb34e945fa92c | 0e8698d3edb3712eba0514a39d1d30fbfeeaec09 | 2021-01-20T19:39:49Z | python | 2021-03-19T21:40:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,774 | ["airflow/providers/amazon/aws/operators/s3_copy_object.py"] | add acl_policy to S3CopyObjectOperator |
| https://github.com/apache/airflow/issues/13774 | https://github.com/apache/airflow/pull/13773 | 9923d606d2887c52390a30639fc1ee0d4000149c | 29730d720066a4c16d524e905de8cdf07e8cd129 | 2021-01-19T21:53:18Z | python | 2021-01-20T15:16:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,761 | ["airflow/example_dags/tutorial.py", "airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/www/utils.py", "airflow/www/views.py", "docs/apache-airflow/concepts.rst", "tests/serialization/test_dag_serialization.py", "tests/www/test_utils.py"] | Markdown from doc_md is not being rendered in ui | **Apache Airflow version**: 1.10.14
**Environment**:
- **Cloud provider or hardware configuration**: docker
- **OS** (e.g. from /etc/os-release): apache/airflow:1.10.14-python3.8
- **Kernel** (e.g. `uname -a`): Linux host 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: Docker version 19.03.8, build afacb8b7f0
- **Others**:
**What happened**:
I created a DAG and set the doc_md property on the object but it isn't being rendered in the UI.
**What you expected to happen**:
I expected the markdown to be rendered in the UI
**How to reproduce it**:
Created a new container using the `airflow:1.10.14`, I have tried the following images with the same results.
- airflow:1.10.14:image-python3.8
- airflow:1.10.14:image-python3.7
- airflow:1.10.12:image-python3.7
- airflow:1.10.12:image-python3.7
```
dag_docs = """
## Pipeline
#### Purpose
This is a pipeline
"""
dag = DAG(
'etl-get_from_api',
default_args=default_args,
description='A simple dag',
schedule_interval=timedelta(days=1),
)
dag.doc_md = dag_docs
```


I have also tried with using a doc-string to populate the doc_md as well as adding some text within the constructor.
```
dag = DAG(
'etl-get_from_api',
default_args=default_args,
description='A simple dag',
schedule_interval=timedelta(days=1),
doc_md = "some text"
)
```
All of the different permutations I've tried seem to have the same result. The only thing I can change is the description, that appears to show up correctly.
**Anything else we need to know**:
I have tried multiple browsers (Firefox and Chrome) and I have also done an inspect on from both the graph view and the tree view from within the dag but I can't find any of the text within the page at all.
| https://github.com/apache/airflow/issues/13761 | https://github.com/apache/airflow/pull/15191 | 7c17bf0d1e828b454a6b2c7245ded275b313c792 | e86f5ca8fa5ff22c1e1f48addc012919034c672f | 2021-01-19T08:10:12Z | python | 2021-04-05T02:46:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,755 | ["airflow/config_templates/airflow_local_settings.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/providers/elasticsearch/log/es_task_handler.py", "tests/providers/elasticsearch/log/test_es_task_handler.py"] | Elasticsearch log retrieval fails when "host" field is not a string | **Apache Airflow version**: 2.0.0
**Kubernetes version:** 1.17.16
**OS** (e.g. from /etc/os-release): Ubuntu 18.4
**What happened**:
Webserver gets exception when reading logs from Elasticsearch when "host" field in the log is not a string. Recent Filebeat template mapping creates host as an object with "host.name", "host.os" etc.
```
[2021-01-18 23:53:27,923] {app.py:1891} ERROR - Exception on /get_logs_with_metadata [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/www/views.py", line 1054, in get_logs_with_metadata
logs, metadata = task_log_reader.read_log_chunks(ti, try_number, metadata)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/log/log_reader.py", line 58, in read_log_chunks
logs, metadatas = self.log_handler.read(ti, try_number, metadata=metadata)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/log/file_task_handler.py", line 217, in read
log, metadata = self._read(task_instance, try_number_element, metadata)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 161, in _read
logs_by_host = self._group_logs_by_host(logs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 130, in _group_logs_by_host
grouped_logs[key].append(log)
TypeError: unhashable type: 'AttrDict'
```
**What you expected to happen**:
Airflow Webserver successfully pulls the logs, replacing host value with default if needed.
The issue comes from this line. When "host" is a dictionary, it tries to insert it as a key to the `grouped_logs` dictionary, which throws `unhashable type: 'AttrDict'`.
```
def _group_logs_by_host(logs):
grouped_logs = defaultdict(list)
for log in logs:
key = getattr(log, 'host', 'default_host')
grouped_logs[key].append(log) # ---> fails when key is a dict
```
**How to reproduce it**:
I don't know how to concisely write this and make it easy to read at the same time.
1- Configure Airflow to read logs from Elasticsearch
```
[elasticsearch]
host = http://localhost:9200
write_stdout = True
json_format = True
```
2 - Load index template where host is an object [May need to add other fields to this template as well].
Filebeat adds this by default (and many more fields).
```
PUT _template/filebeat-airflow
{
"order": 1,
"index_patterns": [
"filebeat-airflow-*"
],
"mappings": {
"doc": {
"properties": {
"host": {
"properties": {
"name": {
"type": "keyword",
"ignore_above": 1024
},
"id": {
"type": "keyword",
"ignore_above": 1024
},
"architecture": {
"type": "keyword",
"ignore_above": 1024
},
"ip": {
"type": "ip"
},
"mac": {
"type": "keyword",
"ignore_above": 1024
}
}
}
}
}
}
}
```
3 - Post sample log and fill in `log_id` field for a valid dag run.
```
curl -X POST -H 'Content-Type: application/json' -i 'http://localhost:9200/filebeat-airflow/_doc' --data '{"message": "test log message", "log_id": "<fill-in-with-valid-example>", "offset": "1"}'
```
4 - Go to WebUI and try to view logs for dag_run.
**Workaround:** Remove the host field completely with Filebeat.
**Solution:** Type-check the extracted `host` field; if it is not a string, fall back to the default value.
**Solution2:** Make the host field name configurable so that it can be set to `host.name` instead of the hardcoded `'host'`.
If I have time I will submit the fix. I never submitted a commit before so I don't know how long it will take me to prepare a proper commit for this.
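A minimal sketch of the first solution, based on the `_group_logs_by_host` snippet above:
```python
from collections import defaultdict


def _group_logs_by_host(logs):
    grouped_logs = defaultdict(list)
    for log in logs:
        key = getattr(log, 'host', 'default_host')
        if not isinstance(key, str):
            # e.g. Filebeat's host object arrives as an AttrDict and cannot be used as a key
            key = 'default_host'
        grouped_logs[key].append(log)
    return grouped_logs
```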
| https://github.com/apache/airflow/issues/13755 | https://github.com/apache/airflow/pull/14625 | 86b9d3b1e8b2513aa3f614b9a8eba679cdfd25e0 | 5cd0bf733b839951c075c54e808a595ac923c4e8 | 2021-01-19T04:08:57Z | python | 2021-06-11T18:32:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,750 | ["airflow/sensors/sql.py", "tests/sensors/test_sql_sensor.py"] | Support Standard SQL in BigQuery Sensor |
**Description**
A SQL sensor that uses standard SQL, since the default one uses legacy SQL.
**Use case / motivation**
Currently (correct me if I am wrong!), the sql sensor only supports legacy sql. If I want to poke a BQ table, I do not think I can do that using standard sql right now.
**Are you willing to submit a PR?**
If community approves of this idea, sure!
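In the meantime, a workaround sketch using a `PythonSensor` with `BigQueryHook` in standard-SQL mode (the connection id and the project/dataset/table names below are placeholders):
```python
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
from airflow.sensors.python import PythonSensor


def _rows_exist():
    hook = BigQueryHook(gcp_conn_id="google_cloud_default", use_legacy_sql=False)
    df = hook.get_pandas_df("SELECT COUNT(*) AS c FROM `my_project.my_dataset.my_table`")
    return bool(df["c"][0])


# instantiate inside a DAG as usual
wait_for_rows = PythonSensor(task_id="wait_for_rows", python_callable=_rows_exist)
```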
| https://github.com/apache/airflow/issues/13750 | https://github.com/apache/airflow/pull/18431 | 83b51e53062dc596a630edd4bd01407a556f1aa6 | 314a4fe0050783ebb43b300c4c950667d1ddaa89 | 2021-01-18T19:35:41Z | python | 2021-11-26T15:04:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,746 | ["CONTRIBUTING.rst"] | Broken link on CONTRIBUTING.rst | Version Airflow 2.0, or most current version
In CONTRIBUTING.rst, under the section "How to rebase a PR" ([link to the docs section](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#id14) for reference), the Resolve conflicts link has an erroneous period at the end of the URL: [current incorrect link](https://www.jetbrains.com/help/idea/resolving-conflicts.html.)
The link should be https://www.jetbrains.com/help/idea/resolving-conflicts.html - without the period. The link works without the period.
Steps to reproduce:
Click on the Resolve conflicts link on the page CONTRIBUTING.rst in the documentation.
I would like to submit a PR to fix this, if someone would like to assist me in the review 😄 | https://github.com/apache/airflow/issues/13746 | https://github.com/apache/airflow/pull/13748 | 85a3ce1a47e0b84bac518e87481e92d266edea31 | b103a1dd0e22b67dcc8cb2a28a5afcdfb7554412 | 2021-01-18T16:35:26Z | python | 2021-01-18T18:29:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,744 | ["airflow/api_connexion/endpoints/connection_endpoint.py", "tests/api_connexion/endpoints/test_connection_endpoint.py"] | REST API Connection Endpoint doesn't return the extra field in response |
**Apache Airflow version**:
Apache Airflow: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
- **Kernel** (e.g. `uname -a`):
Linux Personal 5.4.0-62-generic #70~18.04.1-Ubuntu SMP Tue Jan 12 17:18:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
REST API doesn't return the **extra** field of the connection in the response.
**What you expected to happen**:
It should return all the fields as shown in the documentation.

**How to reproduce it**:
Create one connection with id **leads_ec2** and define values as shown in the screenshot.

Now call the below API endpoint to get the connection details. As shown in the screenshot, the response does not include the extra field.
**API Endpoint** : `http://localhost:8000/api/v1/connections/leads_ec2`

**How often does this problem occur? Once? Every time etc?**:
The same happens for other connection IDs: the extra field is never returned in the response.
| https://github.com/apache/airflow/issues/13744 | https://github.com/apache/airflow/pull/13885 | 31b956c6c22476d109c45c99d8a325c5c1e0fd45 | adf7755eaa67bd924f6a4da0498bce804da1dd4b | 2021-01-18T14:42:08Z | python | 2021-01-25T09:52:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,741 | ["airflow/stats.py", "tests/core/test_stats.py"] | Airflow 2.0 does not send metrics to statsD when Scheduler is run with Daemon mode |
**Apache Airflow version**:
2.0.0
**Environment**:
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04 LTS
- **Python version**: 3.8
- **Kernel** (e.g. `uname -a`): x86_64 x86_64 x86_64 GNU/Linux 5.4.0-58-generic #64-Ubuntu
- **Install tools**: pip
**What happened**:
Airflow 2.0 does not send metrics to statsD.
I configured Airflow following the official documentation (https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html) and this article https://dstan.medium.com/run-airflow-statsd-grafana-locally-16b372c86524 (but I used port 8125).
I turned on statsD:
```ini
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
```
But I do not see Airflow metrics at http://localhost:9102/metrics (the statsd-exporter metrics endpoint).
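One quick way to narrow this down (assuming netcat is available; flag syntax varies between netcat implementations) is to stop the exporter and listen on the StatsD UDP port to see whether the scheduler emits anything at all:

```bash
nc -u -l -p 8125
# Airflow metrics arrive as plain StatsD lines, e.g.:
# airflow.scheduler_heartbeat:1|c
```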
---
P.S. I noticed this error only with Airflow 2.0. In version 1.10.13 everything is OK in the same environment.
Thank you in advance.
| https://github.com/apache/airflow/issues/13741 | https://github.com/apache/airflow/pull/14454 | cfa1071eaf0672dbf2b2825c3fd6affaca68bdee | 0aa597e2ffd71d3587f629c0a1cb3d904e07b6e6 | 2021-01-18T12:26:52Z | python | 2021-02-26T14:45:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,713 | ["airflow/www/static/css/main.css"] | Airflow web server UI bouncing horizontally at some viewport widths | **Apache Airflow version**: 2.0.0
**Environment**: Ubuntu 20.04 LTS, Python 3.8.6 via pyenv
- **OS** (e.g. from /etc/os-release): 20.04.1 LTS (Focal Fossa)
- **Kernel** (e.g. `uname -a`): Linux DESKTOP-QBFDUA0 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: Following steps in https://airflow.apache.org/docs/apache-airflow/stable/start.html
**What happened**:
I followed the quickstart here (https://airflow.apache.org/docs/apache-airflow/stable/start.html) to start Airflow on my machine. Then, I followed the tutorial here (https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html) to create my own DAG after disabling the example DAGs via the config file. I actually noticed the bouncing problem I'm reporting as soon as I launched Airflow; I'm just explaining what steps I took to get to what you see in the GIF below.
When I opened the Airflow UI in my browser, it appeared to "bounce" left and right. This happened on multiple pages. It seemed to happen only at certain widths bigger than the mobile width. At a large width, it didn't happen. I captured a GIF to try to demonstrate it:

I didn't see any JS errors in the console in dev tools as this was happening.
**What you expected to happen**: A bounce-free **Airflow experience**™️
**What do you think went wrong?**: CSS? I'm not qualified for this magical front end stuff tbh.
**How to reproduce it**: Run the steps I described above on Ubuntu 20.04 LTS or a similar Linux operating system, using Python 3.
**Anything else we need to know**: n/a
**How often does this problem occur? Once? Every time etc?**
Every time I launch the Airflow web server and scheduler and load it at `localhost:8080`.
| https://github.com/apache/airflow/issues/13713 | https://github.com/apache/airflow/pull/13857 | b9eb51a0fb32cd660a5459d73d7323865b34dd99 | f72be51aeca5edb5696a9feb2acb4ff8f6bcc658 | 2021-01-16T03:43:42Z | python | 2021-01-25T22:03:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,704 | ["airflow/operators/branch.py", "tests/operators/test_branch_operator.py"] | BaseBranchOperator should push to xcom by default | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Not relevant
**Environment**:
Not relevant
**What happened**:
BranchPythonOperator performs xcom push by default since this is the behavior of PythonOperator.
However BaseBranchOperator doesn't do xcom push.
Note: it's impossible to push to XCom manually because BaseBranchOperator has no return in its execute method, so even using `do_xcom_push=True` won't help:
https://github.com/apache/airflow/blob/master/airflow/operators/branch.py#L52
**What you expected to happen**:
BaseBranchOperator should push the branch it chooses to follow to XCom by default, or at least support the `do_xcom_push=True` parameter.
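A minimal sketch of the expected behaviour (assuming the current `execute` just calls `skip_all_except` without returning anything, as the linked line suggests):

```python
def execute(self, context):
    branches_to_execute = self.choose_branch(context)
    self.skip_all_except(context['ti'], branches_to_execute)
    # Returning the chosen branch lets the normal do_xcom_push machinery
    # store it under the 'return_value' XCom key.
    return branches_to_execute
```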
| https://github.com/apache/airflow/issues/13704 | https://github.com/apache/airflow/pull/13763 | 3fd5ef355556cf0ad7896bb570bbe4b2eabbf46e | 3e257950990a6edd817c372036352f96d4f8a76b | 2021-01-15T17:51:00Z | python | 2021-01-21T01:16:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,702 | ["airflow/providers/google/marketing_platform/hooks/display_video.py", "tests/providers/google/marketing_platform/hooks/test_display_video.py"] | Google Marketing Platform Display and Video 360 SDF Operators Fail |
**Apache Airflow version**: 1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**: Google Cloud Composer 1.13.3
- **Cloud provider or hardware configuration**: Google Cloud Composer
**What happened**: All Google Marketing Platform Display and Video 360 SDF operations fail (other than creating the SDF). The "GoogleDisplayVideo360GetSDFDownloadOperationSensor" fails every time it runs, in every DAG I have tested, with the error `'Resource' object has no attribute 'operation'`.
**What you expected to happen**: Given a valid SDF operation object that has been created, the operator should check whether that object is ready to download. Then, the GoogleDisplayVideo360SDFtoGCSOperator should download the operation. However, the underlying hook is trying to reference an "operation" instead of "operations", per the SDF API specifications (see [here](https://developers.google.com/display-video/api/reference/rest/v1/sdfdownloadtasks.operations/get)). Thus, the requests always fail (as the API resource has no "operation" attribute).
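In other words, the hook's request chain presumably needs a one-word change along these lines (the method chain below is taken from the public API reference, not quoted from the hook's source):

```python
# display_video_service is whatever discovery client the hook builds internally.
operation = (
    display_video_service.sdfdownloadtasks()
    .operations()              # "operations", not "operation"
    .get(name=operation_name)
    .execute()
)
```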
**How to reproduce it**:
To reproduce, try to create a valid DAG that creates a new SDF file and then downloads it. I have tested with my own DAGs which are based on the example DAG provided in that code. I believe that if you run the SDF portion of the [Display and Video example DAG](https://github.com/apache/airflow/blob/master/airflow/providers/google/marketing_platform/example_dags/example_display_video.py) in the docs then the issue will occur, regardless of environment.
**Anything else we need to know**: The patch is very simple -- I have already tested by patching my code locally and confirming that the patch fixes the problem. I will submit a pull request shortly.
| https://github.com/apache/airflow/issues/13702 | https://github.com/apache/airflow/pull/13703 | 2f79fb9d37286020c172c00510d598aa819dc66b | 7ec858c4523b24e7a3d6dd1d49e3813e6eee7dff | 2021-01-15T16:40:42Z | python | 2021-01-17T12:47:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,700 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Partial subset DAGs do not update task group's used_group_ids | **Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: Docker container
- **OS** (e.g. from /etc/os-release): Debian Stretch
**What happened**:
When working on some custom DAG override logic, I noticed that invoking `DAG.partial_subset` does not properly update the corresponding `_task_group.used_group_ids` on the returned subset DAG, such that adding back a task which was excluded during the `partial_subset` operation fails.
**What you expected to happen**:
Tasks that had already been added to the original DAG can be added again to the subset DAG returned by `DAG.partial_subset`
**How to reproduce it**:
Create any DAG with a single task called, e.g. `my-task`, then invoke `dag.partial_subset(['not-my-task'], False, False)`
Note that the returned subset DAG's `_task_group.used_group_ids` still contains `my-task` even though it was not included in the subset DAG itself.
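Spelled out as a script (the operator and DAG arguments are illustrative):

```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

with DAG(dag_id="subset_demo", start_date=days_ago(1), schedule_interval=None) as dag:
    BashOperator(task_id="my-task", bash_command="echo hi")

subset = dag.partial_subset(["not-my-task"], False, False)
print(subset.task_ids)                    # [] -- 'my-task' was filtered out
print(subset._task_group.used_group_ids)  # still contains 'my-task'
```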
**Anything else we need to know**:
I was able to work around this by adding logic to update the new partial subset DAG's `_task_group.used_group_ids` manually, but this should really be done as part of the `DAG.partial_subset` logic | https://github.com/apache/airflow/issues/13700 | https://github.com/apache/airflow/pull/15308 | 42a1ca8aab905a0eb1ffb3da30cef9c76830abff | 1e425fe6459a39d93a9ada64278c35f7cf0eab06 | 2021-01-15T14:47:54Z | python | 2021-04-20T18:08:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,697 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | Email config section is incorrect |
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**: This pertains to the docs
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
I see [here](https://airflow.apache.org/docs/apache-airflow/stable/howto/email-config.html#email-configuration) that it says to set `subject_template` and `html_content_template` under the `[email]` section, but the [configuration reference](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#email) doesn't show those two fields. Have they been removed for some reason?
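For reference, this is roughly what the how-to page suggests putting in `airflow.cfg` (the template paths are placeholders):

```ini
[email]
email_backend = airflow.utils.email.send_email_smtp
subject_template = /path/to/my_subject_template_file
html_content_template = /path/to/my_html_content_template_file
```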
| https://github.com/apache/airflow/issues/13697 | https://github.com/apache/airflow/pull/13709 | 74b2cd7364df192a8b53d4734e33b07e69864acc | 1ab19b40fdea3d6399fcab4cd8855813e0d232cf | 2021-01-15T14:02:02Z | python | 2021-01-16T01:11:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,685 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | scheduler dies with "sqlalchemy.exc.IntegrityError: (MySQLdb._exceptions.IntegrityError) (1062, "Duplicate entry 'huge_demo13499411352-2021-01-15 01:04:00.000000' for key 'dag_run.dag_id'")" | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: tencent cloud
- **OS** (e.g. from /etc/os-release): centos7
- **Kernel** (e.g. `uname -a`): 3.10
- **Install tools**:
- **Others**: Server version: 8.0.22 MySQL Community Server - GPL
**What happened**:
The scheduler died when I tried to modify a DAG's schedule_interval from `None` to `"* */1 * * *"` (I edited the DAG file in the dags folder and saved it). A few minutes later I tried to start the scheduler again and it began to run.
The logs are as follows:
```
{2021-01-15 09:06:22,636} {{scheduler_job.py:1293}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
cursor.execute(statement, parameters)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 209, in execute
res = self._query(query)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 315, in _query
db.query(q)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/connections.py", line 239, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.IntegrityError: (1062, "Duplicate entry 'huge_demo13499411352-2021-01-15 01:04:00.000000' for key 'dag_run.dag_id'")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute
self._run_scheduler_loop()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1377, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1474, in _do_scheduling
self._create_dag_runs(query.all(), session)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1561, in _create_dag_runs
dag.create_dagrun(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/models/dag.py", line 1807, in create_dagrun
session.flush()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2540, in flush
self._flush(objects)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2682, in _flush
transaction.rollback(_capture_exception=True)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2642, in _flush
flush_context.execute()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
persistence.save_obj(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
_emit_insert_statements(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1135, in _emit_insert_statements
result = cached_connections[connection].execute(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
cursor.execute(statement, parameters)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 209, in execute
res = self._query(query)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 315, in _query
db.query(q)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/connections.py", line 239, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.IntegrityError: (MySQLdb._exceptions.IntegrityError) (1062, "Duplicate entry 'huge_demo13499411352-2021-01-15 01:04:00.000000' for key 'dag_run.dag_id'")
[SQL: INSERT INTO dag_run (dag_id, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, last_scheduling_decision, dag_hash) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)]
[parameters: ('huge_demo13499411352', datetime.datetime(2021, 1, 15, 1, 4), datetime.datetime(2021, 1, 15, 1, 6, 22, 629433), None, 'running', 'scheduled__2021-01-15T01:04:00+00:00', 71466, 0, <DagRunType.SCHEDULED: 'scheduled'>, b'\x80\x05}\x94.', None, '60078c379cdeecb9bc8844eed5aa9745')]
(Background on this error at: http://sqlalche.me/e/13/gkpj)
{2021-01-15 09:06:23,648} {{process_utils.py:95}} INFO - Sending Signals.SIGTERM to GPID 66351
{2021-01-15 09:06:23,781} {{process_utils.py:61}} INFO - Process psutil.Process(pid=66351, status='terminated') (66351) terminated with exit code 0
{2021-01-15 09:06:23,781} {{scheduler_job.py:1296}} INFO - Exited execute loop
```
**What you expected to happen**:
The scheduler should not die.
**How to reproduce it**:
I don't know how to reproduce it.
**Anything else we need to know**:
No
| https://github.com/apache/airflow/issues/13685 | https://github.com/apache/airflow/pull/13920 | 05fbeb16bc40cd3a710804408d3ae84156b5aae6 | 594069ee061e9839b2b12aa43aa3a23e05beed86 | 2021-01-15T01:20:15Z | python | 2021-02-01T16:06:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,680 | ["chart/files/pod-template-file.kubernetes-helm-yaml"] | "dag_id could not be found" when running airflow on KubernetesExecutor | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.19.4
**What happened**:
I get this error when try to execute tasks using kubernetes
```
[2021-01-14 19:39:17,628] {dagbag.py:440} INFO - Filling up the DagBag from /opt/airflow/dags/repo/bash.py
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/task_command.py", line 216, in task_run
dag = get_dag(args.subdir, args.dag_id)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/cli.py", line 189, in get_dag
'parse.'.format(dag_id)
airflow.exceptions.AirflowException: dag_id could not be found: bash. Either the dag did not exist or it failed to parse.
```
**What you expected to happen**:
The task should get executed and the worker pod should terminate.
**How to reproduce it**:
deploy airflow helm chart using this values.yaml:
```
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
---
# Default values for airflow.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# User and group of airflow user
uid: 50000
gid: 50000
# Airflow home directory
# Used for mount paths
airflowHome: "/opt/airflow"
# Default airflow repository -- overrides all the specific images below
defaultAirflowRepository: apache/airflow
# Default airflow tag to deploy
defaultAirflowTag: 2.0.0
# Select certain nodes for airflow pods.
nodeSelector: { }
affinity: { }
tolerations: [ ]
# Add common labels to all objects and pods defined in this chart.
labels: { }
# Ingress configuration
ingress:
# Enable ingress resource
enabled: false
# Configs for the Ingress of the web Service
web:
# Annotations for the web Ingress
annotations: { }
# The path for the web Ingress
path: ""
# The hostname for the web Ingress
host: ""
# configs for web Ingress TLS
tls:
# Enable TLS termination for the web Ingress
enabled: false
# the name of a pre-created Secret containing a TLS private key and certificate
secretName: ""
# HTTP paths to add to the web Ingress before the default path
precedingPaths: [ ]
# Http paths to add to the web Ingress after the default path
succeedingPaths: [ ]
# Configs for the Ingress of the flower Service
flower:
# Annotations for the flower Ingress
annotations: { }
# The path for the flower Ingress
path: ""
# The hostname for the flower Ingress
host: ""
# configs for web Ingress TLS
tls:
# Enable TLS termination for the flower Ingress
enabled: false
# the name of a pre-created Secret containing a TLS private key and certificate
secretName: ""
# HTTP paths to add to the flower Ingress before the default path
precedingPaths: [ ]
# Http paths to add to the flower Ingress after the default path
succeedingPaths: [ ]
# Network policy configuration
networkPolicies:
# Enabled network policies
enabled: false
# Extra annotations to apply to all
# Airflow pods
airflowPodAnnotations: { }
# Enable RBAC (default on most clusters these days)
rbacEnabled: true
# Airflow executor
# Options: SequentialExecutor, LocalExecutor, CeleryExecutor, KubernetesExecutor
executor: "KubernetesExecutor"
# If this is true and using LocalExecutor/SequentialExecutor/KubernetesExecutor, the scheduler's
# service account will have access to communicate with the api-server and launch pods.
# If this is true and using the CeleryExecutor, the workers will be able to launch pods.
allowPodLaunching: true
# Images
images:
airflow:
repository: ~
tag: ~
pullPolicy: IfNotPresent
pod_template:
repository: ~
tag: ~
pullPolicy: IfNotPresent
flower:
repository: ~
tag: ~
pullPolicy: IfNotPresent
statsd:
repository: apache/airflow
tag: airflow-statsd-exporter-2020.09.05-v0.17.0
pullPolicy: IfNotPresent
redis:
repository: redis
tag: 6-buster
pullPolicy: IfNotPresent
pgbouncer:
repository: apache/airflow
tag: airflow-pgbouncer-2020.09.05-1.14.0
pullPolicy: IfNotPresent
pgbouncerExporter:
repository: apache/airflow
tag: airflow-pgbouncer-exporter-2020.09.25-0.5.0
pullPolicy: IfNotPresent
gitSync:
repository: k8s.gcr.io/git-sync
tag: v3.1.6
pullPolicy: IfNotPresent
# Environment variables for all airflow containers
env:
- name: "AIRFLOW__KUBERNETES__GIT_SYNC_RUN_AS_USER"
value: "65533"
# Secrets for all airflow containers
secret: [ ]
# - envName: ""
# secretName: ""
# secretKey: ""
# Extra secrets that will be managed by the chart
# (You can use them with extraEnv or extraEnvFrom or some of the extraVolumes values).
# The format is "key/value" where
# * key (can be templated) is the the name the secret that will be created
# * value: an object with the standard 'data' or 'stringData' key (or both).
# The value associated with those keys must be a string (can be templated)
extraSecrets: { }
# eg:
# extraSecrets:
# {{ .Release.Name }}-airflow-connections:
# data: |
# AIRFLOW_CONN_GCP: 'base64_encoded_gcp_conn_string'
# AIRFLOW_CONN_AWS: 'base64_encoded_aws_conn_string'
# stringData: |
# AIRFLOW_CONN_OTHER: 'other_conn'
# {{ .Release.Name }}-other-secret-name-suffix: |
# data: |
# ...
# Extra ConfigMaps that will be managed by the chart
# (You can use them with extraEnv or extraEnvFrom or some of the extraVolumes values).
# The format is "key/value" where
# * key (can be templated) is the the name the configmap that will be created
# * value: an object with the standard 'data' key.
# The value associated with this keys must be a string (can be templated)
extraConfigMaps: { }
# eg:
# extraConfigMaps:
# {{ .Release.Name }}-airflow-variables:
# data: |
# AIRFLOW_VAR_HELLO_MESSAGE: "Hi!"
# AIRFLOW_VAR_KUBERNETES_NAMESPACE: "{{ .Release.Namespace }}"
# Extra env 'items' that will be added to the definition of airflow containers
# a string is expected (can be templated).
extraEnv: ~
# eg:
# extraEnv: |
# - name: PLATFORM
# value: FR
# Extra envFrom 'items' that will be added to the definition of airflow containers
# A string is expected (can be templated).
extraEnvFrom: ~
# eg:
# extraEnvFrom: |
# - secretRef:
# name: '{{ .Release.Name }}-airflow-connections'
# - configMapRef:
# name: '{{ .Release.Name }}-airflow-variables'
# Airflow database config
data:
# If secret names are provided, use those secrets
metadataSecretName: ~
resultBackendSecretName: ~
# Otherwise pass connection values in
metadataConnection:
user: postgres
pass: postgres
host: ~
port: 5432
db: postgres
sslmode: disable
resultBackendConnection:
user: postgres
pass: postgres
host: ~
port: 5432
db: postgres
sslmode: disable
# Fernet key settings
fernetKey: ~
fernetKeySecretName: ~
# In order to use kerberos you need to create secret containing the keytab file
# The secret name should follow naming convention of the application where resources are
# name {{ .Release-name }}-<POSTFIX>. In case of the keytab file, the postfix is "kerberos-keytab"
# So if your release is named "my-release" the name of the secret should be "my-release-kerberos-keytab"
#
# The Keytab content should be available in the "kerberos.keytab" key of the secret.
#
# apiVersion: v1
# kind: Secret
# data:
# kerberos.keytab: <base64_encoded keytab file content>
# type: Opaque
#
#
# If you have such keytab file you can do it with similar
#
# kubectl create secret generic {{ .Release.name }}-kerberos-keytab --from-file=kerberos.keytab
#
kerberos:
enabled: false
ccacheMountPath: '/var/kerberos-ccache'
ccacheFileName: 'cache'
configPath: '/etc/krb5.conf'
keytabPath: '/etc/airflow.keytab'
principal: '[email protected]'
reinitFrequency: 3600
config: |
# This is an example config showing how you can use templating and how "example" config
# might look like. It works with the test kerberos server that we are using during integration
# testing at Apache Airflow (see `scripts/ci/docker-compose/integration-kerberos.yml` but in
# order to make it production-ready you must replace it with your own configuration that
# Matches your kerberos deployment. Administrators of your Kerberos instance should
# provide the right configuration.
[logging]
default = "FILE:{{ template "airflow_logs_no_quote" . }}/kerberos_libs.log"
kdc = "FILE:{{ template "airflow_logs_no_quote" . }}/kerberos_kdc.log"
admin_server = "FILE:{{ template "airflow_logs_no_quote" . }}/kadmind.log"
[libdefaults]
default_realm = FOO.COM
ticket_lifetime = 10h
renew_lifetime = 7d
forwardable = true
[realms]
FOO.COM = {
kdc = kdc-server.foo.com
admin_server = admin_server.foo.com
}
# Airflow Worker Config
workers:
# Number of airflow celery workers in StatefulSet
replicas: 1
# Allow KEDA autoscaling.
# Persistence.enabled must be set to false to use KEDA.
keda:
enabled: false
namespaceLabels: { }
# How often KEDA polls the airflow DB to report new scale requests to the HPA
pollingInterval: 5
# How many seconds KEDA will wait before scaling to zero.
# Note that HPA has a separate cooldown period for scale-downs
cooldownPeriod: 30
# Maximum number of workers created by keda
maxReplicaCount: 10
persistence:
# Enable persistent volumes
enabled: true
# Volume size for worker StatefulSet
size: 100Gi
# If using a custom storageClass, pass name ref to all statefulSets here
storageClassName:
# Execute init container to chown log directory.
# This is currently only needed in KinD, due to usage
# of local-path provisioner.
fixPermissions: false
kerberosSidecar:
# Enable kerberos sidecar
enabled: false
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Grace period for tasks to finish after SIGTERM is sent from kubernetes
terminationGracePeriodSeconds: 600
# This setting tells kubernetes that its ok to evict
# when it wants to scale a node down.
safeToEvict: true
# Annotations to add to worker kubernetes service account.
serviceAccountAnnotations: { }
# Mount additional volumes into worker.
extraVolumes: [ ]
extraVolumeMounts: [ ]
# Airflow scheduler settings
scheduler:
# Airflow 2.0 allows users to run multiple schedulers,
# However this feature is only recommended for MySQL 8+ and Postgres
replicas: 1
# Scheduler pod disruption budget
podDisruptionBudget:
enabled: false
# PDB configuration
config:
maxUnavailable: 1
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# This setting can overwrite
# podMutation settings.
airflowLocalSettings: ~
# This setting tells kubernetes that its ok to evict
# when it wants to scale a node down.
safeToEvict: true
# Annotations to add to scheduler kubernetes service account.
serviceAccountAnnotations: { }
# Mount additional volumes into scheduler.
extraVolumes: [ ]
extraVolumeMounts: [ ]
# Airflow webserver settings
webserver:
allowPodLogReading: true
livenessProbe:
initialDelaySeconds: 15
timeoutSeconds: 30
failureThreshold: 20
periodSeconds: 5
readinessProbe:
initialDelaySeconds: 15
timeoutSeconds: 30
failureThreshold: 20
periodSeconds: 5
# Number of webservers
replicas: 1
# Additional network policies as needed
extraNetworkPolicies: [ ]
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Create initial user.
defaultUser:
enabled: true
role: Admin
username: admin
email: [email protected]
firstName: admin
lastName: user
password: admin
# Mount additional volumes into webserver.
extraVolumes: [ ]
# - name: airflow-ui
# emptyDir: { }
extraVolumeMounts: [ ]
# - name: airflow-ui
# mountPath: /opt/airflow
# This will be mounted into the Airflow Webserver as a custom
# webserver_config.py. You can bake a webserver_config.py in to your image
# instead
webserverConfig: ~
# webserverConfig: |
# from airflow import configuration as conf
# # The SQLAlchemy connection string.
# SQLALCHEMY_DATABASE_URI = conf.get('core', 'SQL_ALCHEMY_CONN')
# # Flask-WTF flag for CSRF
# CSRF_ENABLED = True
service:
type: NodePort
## service annotations
annotations: { }
# Annotations to add to webserver kubernetes service account.
serviceAccountAnnotations: { }
# Flower settings
flower:
# Additional network policies as needed
extraNetworkPolicies: [ ]
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# A secret containing the connection
secretName: ~
# Else, if username and password are set, create secret from username and password
username: ~
password: ~
service:
type: ClusterIP
# Statsd settings
statsd:
enabled: true
# Additional network policies as needed
extraNetworkPolicies: [ ]
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
service:
extraAnnotations: { }
# Pgbouncer settings
pgbouncer:
# Enable pgbouncer
enabled: false
# Additional network policies as needed
extraNetworkPolicies: [ ]
# Pool sizes
metadataPoolSize: 10
resultBackendPoolSize: 5
# Maximum clients that can connect to pgbouncer (higher = more file descriptors)
maxClientConn: 100
# Pgbouner pod disruption budget
podDisruptionBudget:
enabled: false
# PDB configuration
config:
maxUnavailable: 1
# Limit the resources to pgbouncerExported.
# When you specify the resource request the scheduler uses this information to decide which node to place
# the Pod on. When you specify a resource limit for a Container, the kubelet enforces those limits so
# that the running container is not allowed to use more of that resource than the limit you set.
# See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
# Example:
#
# resource:
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
resources: { }
service:
extraAnnotations: { }
# https://www.pgbouncer.org/config.html
verbose: 0
logDisconnections: 0
logConnections: 0
sslmode: "prefer"
ciphers: "normal"
ssl:
ca: ~
cert: ~
key: ~
redis:
terminationGracePeriodSeconds: 600
persistence:
# Enable persistent volumes
enabled: true
# Volume size for worker StatefulSet
size: 1Gi
# If using a custom storageClass, pass name ref to all statefulSets here
storageClassName:
resources: { }
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# If set use as redis secret
passwordSecretName: ~
brokerURLSecretName: ~
# Else, if password is set, create secret with it,
# else generate a new one on install
password: ~
# This setting tells kubernetes that its ok to evict
# when it wants to scale a node down.
safeToEvict: true
# Auth secret for a private registry
# This is used if pulling airflow images from a private registry
registry:
secretName: ~
# Example:
# connection:
# user: ~
# pass: ~
# host: ~
# email: ~
connection: { }
# Elasticsearch logging configuration
elasticsearch:
# Enable elasticsearch task logging
enabled: true
# A secret containing the connection
# secretName: ~
# Or an object representing the connection
# Example:
connection:
# user:
# pass:
host: elasticsearch-master-headless.elk.svc.cluster.local
port: 9200
# connection: {}
# All ports used by chart
ports:
flowerUI: 5555
airflowUI: 8080
workerLogs: 8793
redisDB: 6379
statsdIngest: 9125
statsdScrape: 9102
pgbouncer: 6543
pgbouncerScrape: 9127
# Define any ResourceQuotas for namespace
quotas: { }
# Define default/max/min values for pods and containers in namespace
limits: [ ]
# This runs as a CronJob to cleanup old pods.
cleanup:
enabled: false
# Run every 15 minutes
schedule: "*/15 * * * *"
# Configuration for postgresql subchart
# Not recommended for production
postgresql:
enabled: true
postgresqlPassword: postgres
postgresqlUsername: postgres
# Config settings to go into the mounted airflow.cfg
#
# Please note that these values are passed through the `tpl` function, so are
# all subject to being rendered as go templates. If you need to include a
# literal `{{` in a value, it must be expressed like this:
#
# a: '{{ "{{ not a template }}" }}'
#
# yamllint disable rule:line-length
config:
core:
dags_folder: '{{ include "airflow_dags" . }}'
load_examples: 'False'
executor: '{{ .Values.executor }}'
# For Airflow 1.10, backward compatibility
colored_console_log: 'True'
remote_logging: '{{- ternary "True" "False" .Values.elasticsearch.enabled }}'
# Authentication backend used for the experimental API
api:
auth_backend: airflow.api.auth.backend.deny_all
logging:
remote_logging: '{{- ternary "True" "False" .Values.elasticsearch.enabled }}'
colored_console_log: 'True'
logging_level: INFO
metrics:
statsd_on: '{{ ternary "True" "False" .Values.statsd.enabled }}'
statsd_port: 9125
statsd_prefix: airflow
statsd_host: '{{ printf "%s-statsd" .Release.Name }}'
webserver:
enable_proxy_fix: 'True'
expose_config: 'True'
rbac: 'True'
celery:
default_queue: celery
scheduler:
scheduler_heartbeat_sec: 5
# For Airflow 1.10, backward compatibility
statsd_on: '{{ ternary "True" "False" .Values.statsd.enabled }}'
statsd_port: 9125
statsd_prefix: airflow
statsd_host: '{{ printf "%s-statsd" .Release.Name }}'
# Restart Scheduler every 41460 seconds (11 hours 31 minutes)
# The odd time is chosen so it is not always restarting on the same "hour" boundary
run_duration: 41460
elasticsearch:
json_format: 'True'
log_id_template: "{dag_id}_{task_id}_{execution_date}_{try_number}"
elasticsearch_configs:
max_retries: 3
timeout: 30
retry_timeout: 'True'
kerberos:
keytab: '{{ .Values.kerberos.keytabPath }}'
reinit_frequency: '{{ .Values.kerberos.reinitFrequency }}'
principal: '{{ .Values.kerberos.principal }}'
ccache: '{{ .Values.kerberos.ccacheMountPath }}/{{ .Values.kerberos.ccacheFileName }}'
kubernetes:
namespace: '{{ .Release.Namespace }}'
airflow_configmap: '{{ include "airflow_config" . }}'
airflow_local_settings_configmap: '{{ include "airflow_config" . }}'
pod_template_file: '{{ include "airflow_pod_template_file" . }}/pod_template_file.yaml'
worker_container_repository: '{{ .Values.images.airflow.repository | default .Values.defaultAirflowRepository }}'
worker_container_tag: '{{ .Values.images.airflow.tag | default .Values.defaultAirflowTag }}'
delete_worker_pods: 'False'
multi_namespace_mode: '{{ if .Values.multiNamespaceMode }}True{{ else }}False{{ end }}'
# yamllint enable rule:line-length
multiNamespaceMode: false
podTemplate:
# Git sync
dags:
persistence:
# Enable persistent volume for storing dags
enabled: false
# Volume size for dags
size: 1Gi
# If using a custom storageClass, pass name here
storageClassName: gp2
# access mode of the persistent volume
accessMode: ReadWriteMany
## the name of an existing PVC to use
existingClaim: "airflow-dags"
gitSync:
enabled: true
repo: [email protected]:Tikna-inc/airflow.git
branch: main
rev: HEAD
root: "/git"
dest: "repo"
depth: 1
maxFailures: 0
subPath: ""
sshKeySecret: airflow-ssh-secret
wait: 60
containerName: git-sync
uid: 65533
```
**and this is the dag with its tasks**
```python
import logging
from datetime import timedelta
import requests
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.utils.dates import days_ago
logging.getLogger().setLevel(level=logging.INFO)
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['[email protected]'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
def get_active_customers():
    requests.get("http://localhost:8080")  # scheme added so requests accepts the URL
dag = DAG(
'bash',
default_args=default_args,
description='A simple test DAG',
schedule_interval='*/2 * * * *',
start_date=days_ago(1),
tags=['Test'],
is_paused_upon_creation=False,
catchup=False
)
t1 = BashOperator(
task_id='print_date',
bash_command='mkdir ./itsMe',
dag=dag
)
t1
```
This is the airflow.cfg file:
```cfg
[api]
auth_backend = airflow.api.auth.backend.deny_all
[celery]
default_queue = celery
[core]
colored_console_log = True
dags_folder = /opt/airflow/dags/repo/
executor = KubernetesExecutor
load_examples = False
remote_logging = False
[elasticsearch]
json_format = True
log_id_template = {dag_id}_{task_id}_{execution_date}_{try_number}
[elasticsearch_configs]
max_retries = 3
retry_timeout = True
timeout = 30
[kerberos]
ccache = /var/kerberos-ccache/cache
keytab = /etc/airflow.keytab
principal = [email protected]
reinit_frequency = 3600
[kubernetes]
airflow_configmap = airflow-airflow-config
airflow_local_settings_configmap = airflow-airflow-config
dags_in_image = False
delete_worker_pods = False
multi_namespace_mode = False
namespace = airflow
pod_template_file = /opt/airflow/pod_templates/pod_template_file.yaml
worker_container_repository = apache/airflow
worker_container_tag = 2.0.0
[logging]
colored_console_log = True
logging_level = INFO
remote_logging = False
[metrics]
statsd_host = airflow-statsd
statsd_on = True
statsd_port = 9125
statsd_prefix = airflow
[scheduler]
run_duration = 41460
scheduler_heartbeat_sec = 5
statsd_host = airflow-statsd
statsd_on = True
statsd_port = 9125
statsd_prefix = airflow
[webserver]
enable_proxy_fix = True
expose_config = True
```
This is the pod yaml file for the new tasks
```
apiVersion: v1
kind: Pod
metadata:
annotations:
dag_id: bash2
execution_date: "2021-01-14T20:16:00+00:00"
kubernetes.io/psp: eks.privileged
task_id: create_dir
try_number: "2"
labels:
airflow-worker: "38"
airflow_version: 2.0.0
dag_id: bash2
execution_date: 2021-01-14T20_16_00_plus_00_00
kubernetes_executor: "True"
task_id: create_dir
try_number: "2"
name: sss3
namespace: airflow
spec:
containers:
- args:
- airflow
- tasks
- run
- bash2
- create_dir
- "2021-01-14T20:16:00+00:00"
- --local
- --pool
- default_pool
- --subdir
- /opt/airflow/dags/repo/bash.py
env:
- name: AIRFLOW__CORE__EXECUTOR
value: LocalExecutor
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
key: fernet-key
name: airflow-fernet-key
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: connection
name: airflow-airflow-metadata
- name: AIRFLOW_CONN_AIRFLOW_DB
valueFrom:
secretKeyRef:
key: connection
name: airflow-airflow-metadata
- name: AIRFLOW_IS_K8S_EXECUTOR_POD
value: "True"
image: apache/airflow:2.0.0
imagePullPolicy: IfNotPresent
name: base
resources: { }
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/airflow/logs
name: airflow-logs
- mountPath: /opt/airflow/airflow.cfg
name: config
readOnly: true
subPath: airflow.cfg
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
subPath: ssh
- mountPath: /opt/airflow/dags
name: dags
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: airflow-worker-token-7sdtr
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- env:
- name: GIT_SSH_KEY_FILE
value: /etc/git-secret/ssh
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_KNOWN_HOSTS
value: "false"
- name: GIT_SYNC_REV
value: HEAD
- name: GIT_SYNC_BRANCH
value: main
- name: GIT_SYNC_REPO
value: [email protected]:Tikna-inc/airflow.git
- name: GIT_SYNC_DEPTH
value: "1"
- name: GIT_SYNC_ROOT
value: /git
- name: GIT_SYNC_DEST
value: repo
- name: GIT_SYNC_ADD_USER
value: "true"
- name: GIT_SYNC_WAIT
value: "60"
- name: GIT_SYNC_MAX_SYNC_FAILURES
value: "0"
- name: GIT_SYNC_ONE_TIME
value: "true"
image: k8s.gcr.io/git-sync:v3.1.6
imagePullPolicy: IfNotPresent
name: git-sync
resources: { }
securityContext:
runAsUser: 65533
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /git
name: dags
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
readOnly: true
subPath: gitSshKey
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: airflow-worker-token-7sdtr
readOnly: true
nodeName: ip-172-31-41-37.eu-south-1.compute.internal
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext:
runAsUser: 50000
serviceAccount: airflow-worker
serviceAccountName: airflow-worker
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- emptyDir: { }
name: dags
- name: git-sync-ssh-key
secret:
defaultMode: 288
secretName: airflow-ssh-secret
- emptyDir: { }
name: airflow-logs
- configMap:
defaultMode: 420
name: airflow-airflow-config
name: config
- name: airflow-worker-token-7sdtr
secret:
defaultMode: 420
secretName: airflow-worker-token-7sdtr
```
**-----------------------Important----------------------------**
**Debugging**
For debugging purposes I changed the pod args: rather than running the task, I ran it with
```
spec:
containers:
- args:
- airflow
- webserver
```
and tried to look for the DAGs, and found none. It seems like git-sync is not working for the pods triggered by the KubernetesExecutor.
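For what it's worth, the same check can be run directly against one of the stuck worker pods (the pod name is a placeholder for whatever the executor created):

```bash
kubectl -n airflow exec <worker-pod> -c base -- ls /opt/airflow/dags/repo/
kubectl -n airflow logs <worker-pod> -c git-sync
```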
Any help please ??? | https://github.com/apache/airflow/issues/13680 | https://github.com/apache/airflow/pull/13826 | 3909232fafd09ac72b49010ecdfd6ea48f06d5cf | 5f74219e6d400c4eae9134f6015c72430d6d549f | 2021-01-14T19:47:20Z | python | 2021-02-04T19:01:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,679 | ["airflow/utils/db.py"] | SQL Syntax errors on startup | **Apache Airflow version**: 2.0.0
**What happened**:
While investigating issues related to tasks getting stuck, I saw this SQL error in the Postgres logs. I am not entirely sure what it impacts, but I thought I'd let you know.
```
ERROR: column "connection.password" must appear in the GROUP BY clause or be used in an aggregate function at character 8
STATEMENT: SELECT connection.password AS connection_password, connection.extra AS connection_extra, connection.id AS connection_id, connection.conn_id AS connection_conn_id, connection.conn_type AS connection_conn_type, connection.description AS connection_description, connection.host AS connection_host, connection.schema AS connection_schema, connection.login AS connection_login, connection.port AS connection_port, connection.is_encrypted AS connection_is_encrypted, connection.is_extra_encrypted AS connection_is_extra_encrypted, count(connection.conn_id) AS count_1
FROM connection GROUP BY connection.conn_id
HAVING count(connection.conn_id) > 1
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT connection.password AS connection_password, connection.extra AS connection_extra, connection.id AS connection_id, connection.conn_id AS connection_conn_id, connection.conn_type AS connection_conn_type, connection.description AS connection_description, connection.host AS connection_host, connection.schema AS connection_schema, connection.login AS connection_login, connection.port AS connection_port, connection.is_encrypted AS connection_is_encrypted, connection.is_extra_encrypted AS connection_is_extra_encrypted
FROM connection
WHERE connection.conn_type IS NULL
```
**How to reproduce it**:
1. Run `docker-compose run initdb`
2. Run `docker-compose run upgradedb`
<details> <summary> Here's my docker-compose </summary>
```
version: "3.2"
networks:
airflow:
services:
postgres:
container_name: af_postgres
image: postgres:9.6
environment:
- POSTGRES_USER=airflow
- POSTGRES_DB=airflow
- POSTGRES_PASSWORD=airflow
volumes:
- ./postgresql/data:/var/lib/postgresql/data
command: >
postgres
-c listen_addresses=*
-c logging_collector=on
-c log_destination=stderr
networks:
- airflow
initdb:
container_name: af_initdb
image: docker.io/apache/airflow:2.0.0-python3.7
environment:
- AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@postgres:5432/airflow
depends_on:
- postgres
entrypoint: /bin/bash
command: -c "airflow db init"
networks:
- airflow
upgradedb:
container_name: af_upgradedb
image: docker.io/apache/airflow:2.0.0-python3.7
environment:
- AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@postgres:5432/airflow
depends_on:
- postgres
entrypoint: /bin/bash
command: -c "airflow db upgrade"
networks:
- airflow
```
</details>
**Anything else we need to know**:
Upon looking at the code, I believe selecting only `Connection.conn_id` [here](https://github.com/apache/airflow/blob/ab5f770bfcd8c690cbe4d0825896325aca0beeca/airflow/utils/db.py#L613) will resolve the SQL syntax error.
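A sketch of what that change could look like (the surrounding query in `airflow/utils/db.py` is paraphrased from the error above, not quoted):

```python
from sqlalchemy import func

dup_conn_ids = (
    session.query(Connection.conn_id)          # select only the grouped column
    .group_by(Connection.conn_id)
    .having(func.count(Connection.conn_id) > 1)
    .all()
)
```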
| https://github.com/apache/airflow/issues/13679 | https://github.com/apache/airflow/pull/13783 | 1602ec97c8d5bc7a7a8b42e850ac6c7a7030e47d | b4c8a0406e88f330b38e8571b5b3ea399ff6fe7d | 2021-01-14T18:15:42Z | python | 2021-01-20T07:23:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,677 | ["chart/templates/scheduler/scheduler-deployment.yaml"] | Airflow Scheduler Liveliness Probe does not support running multiple instances | ## Topic
Airflow with Kubernetes Executor
## Version
Airflow 2.0.0
## Description
Hi,
This code
https://github.com/apache/airflow/blob/1d2977f6a4c67fa6174c79dcdc4e9ee3ce06f1b1/chart/templates/scheduler/scheduler-deployment.yaml#L138
causes scheduler pods to restart at random, because the liveness probe can hit a different scheduler's hostname when more than one scheduler replica is running.
### Solution
Suggesting this change:
```
livenessProbe:
exec:
command:
- python
- '-Wignore'
- '-c'
- >
import sys
import os
from airflow.jobs.scheduler_job import SchedulerJob
from airflow.utils.session import provide_session
from airflow.utils.state import State
from airflow.utils.net import get_hostname
@provide_session
def all_running_jobs(session=None):
return session.query(SchedulerJob).filter(SchedulerJob.state == State.RUNNING).all()
os.environ['AIRFLOW__CORE__LOGGING_LEVEL'] = 'ERROR'
os.environ['AIRFLOW__LOGGING__LOGGING_LEVEL'] = 'ERROR'
all_active_schedulers = all_running_jobs()
current_scheduler = get_hostname()
for _job in all_active_schedulers:
if _job.hostname == current_scheduler and _job.is_alive():
sys.exit(0)
sys.exit(1)
``` | https://github.com/apache/airflow/issues/13677 | https://github.com/apache/airflow/pull/13705 | 808092928a66908f36aec585b881c5390d365130 | 2abfe1e1364a98e923a0967e4a989ccabf8bde54 | 2021-01-14T17:23:03Z | python | 2021-01-15T23:52:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,676 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_xcom_endpoint.py"] | API Endpoints - /xcomEntries/{xcom_key} - doesn't return value | **Apache Airflow version**: 2.0.0
**What happened**:
Using the endpoint `/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key}`, I got a response body without the `value` entry, like:
```
{
"dag_id": "string",
"execution_date": "string",
"key": "string",
"task_id": "string",
"timestamp": "string"
}
```
Instead of:
```
{
"dag_id": "string",
"execution_date": "string",
"key": "string",
"task_id": "string",
"timestamp": "string",
"value": "string"
}
```
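For reference, a request along these lines reproduces the truncated response (the dag, run, task, and key identifiers are placeholders, as are the credentials):

```bash
curl -u admin:admin \
  "http://localhost:8080/api/v1/dags/my_dag/dagRuns/my_run/taskInstances/my_task/xcomEntries/return_value"
```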
The value for the given `key` does exist. | https://github.com/apache/airflow/issues/13676 | https://github.com/apache/airflow/pull/13684 | 2fef2ab1bf0f8c727a503940c9c65fd5be208386 | dc80fa4cbc070fc6e84fcc95799d185badebaa71 | 2021-01-14T15:57:46Z | python | 2021-01-15T10:18:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,668 | ["airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/models/pool.py", "airflow/models/taskinstance.py", "airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | scheduler dies with "MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')" | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: tencent cloud
- **OS** (e.g. from /etc/os-release): centos7
- **Kernel** (e.g. `uname -a`): 3.10
- **Install tools**:
- **Others**: Server version: 8.0.22 MySQL Community Server - GPL
**What happened**:
The scheduler dies when I try to restart it. The logs are as follows:
```
{2021-01-14 13:29:05,424} {{scheduler_job.py:1293}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
cursor.execute(statement, parameters)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 209, in execute
res = self._query(query)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 315, in _query
db.query(q)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/connections.py", line 239, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute
self._run_scheduler_loop()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1349, in _run_scheduler_loop
self.adopt_or_reset_orphaned_tasks()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1758, in adopt_or_reset_orphaned_tasks
session.query(SchedulerJob)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 4063, in update
update_op.exec_()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
self._do_exec()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1895, in _do_exec
self._execute_stmt(update_stmt)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
self.result = self.query._execute_crud(stmt, self.mapper)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3568, in _execute_crud
return conn.execute(stmt, self._params)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
cursor.execute(statement, parameters)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 209, in execute
res = self._query(query)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/cursors.py", line 315, in _query
db.query(q)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/MySQLdb/connections.py", line 239, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE job SET state=%s WHERE job.state = %s AND job.latest_heartbeat < %s]
[parameters: ('failed', 'running', datetime.datetime(2021, 1, 14, 5, 28, 35, 157941))]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
{2021-01-14 13:29:06,435} {{process_utils.py:95}} INFO - Sending Signals.SIGTERM to GPID 6293
{2021-01-14 13:29:06,677} {{process_utils.py:61}} INFO - Process psutil.Process(pid=6318, status='terminated') (6318) terminated with exit code None
{2021-01-14 13:29:06,767} {{process_utils.py:201}} INFO - Waiting up to 5 seconds for processes to exit...
{2021-01-14 13:29:06,850} {{process_utils.py:61}} INFO - Process psutil.Process(pid=6320, status='terminated') (6320) terminated with exit code None
{2021-01-14 13:29:06,850} {{process_utils.py:61}} INFO - Process psutil.Process(pid=6319, status='terminated') (6319) terminated with exit code None
{2021-01-14 13:29:06,858} {{process_utils.py:61}} INFO - Process psutil.Process(pid=6321, status='terminated') (6321) terminated with exit code None
{2021-01-14 13:29:06,864} {{process_utils.py:201}} INFO - Waiting up to 5 seconds for processes to exit...
{2021-01-14 13:29:06,876} {{process_utils.py:61}} INFO - Process psutil.Process(pid=6293, status='terminated') (6293) terminated with exit code 0
{2021-01-14 13:29:06,876} {{scheduler_job.py:1296}} INFO - Exited execute loop
```
**What you expected to happen**:
Scheduler should not die.
**How to reproduce it**:
I don't know how to reproduce it
**Anything else we need to know**:
I just upgraded Airflow from 1.10.14. For now I am trying to fix it temporarily by catching the exception in scheduler_job.py:
```python
for dag_run in dag_runs:
try:
self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
except Exception as e:
self.log.exception(e)
```
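A more generic workaround is to retry the whole DB operation when MySQL reports a deadlock. This is only a sketch (the helper name and retry count are made up, and it is not the project's actual fix):
```python
from sqlalchemy.exc import OperationalError

MYSQL_DEADLOCK_CODE = 1213

def run_with_deadlock_retries(operation, session, max_attempts=3):
    """Retry a DB operation that occasionally hits MySQL error 1213 (deadlock)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation(session)
        except OperationalError as err:
            session.rollback()
            original = getattr(err, "orig", None)
            if attempt == max_attempts or original is None or original.args[0] != MYSQL_DEADLOCK_CODE:
                raise
```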
| https://github.com/apache/airflow/issues/13668 | https://github.com/apache/airflow/pull/14031 | 019389d034700c53d218135ab01128ff8b325b1c | 568327f01a39d6f181dda62ef6a143f5096e6b97 | 2021-01-14T07:05:53Z | python | 2021-02-03T02:55:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,667 | ["airflow/models/dagbag.py"] | scheduler dies with "TypeError: '>' not supported between instances of 'NoneType' and 'datetime.datetime'" | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: tencent cloud
- **OS** (e.g. from /etc/os-release): centos7
- **Kernel** (e.g. `uname -a`): 3.10
- **Install tools**:
- **Others**: Server version: 8.0.22 MySQL Community Server - GPL
**What happened**:
Scheduler dies when I try to restart it. And the logs are as follows:
```
2021-01-14 14:07:44,429} {{scheduler_job.py:1754}} INFO - Resetting orphaned tasks for active dag runs
{2021-01-14 14:08:14,470} {{scheduler_job.py:1754}} INFO - Resetting orphaned tasks for active dag runs
{2021-01-14 14:08:16,968} {{scheduler_job.py:1293}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute
self._run_scheduler_loop()
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1377, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1516, in _do_scheduling
self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1629, in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/.pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/models/dagbag.py", line 187, in get_dag
if sd_last_updated_datetime > self.dags_last_fetched[dag_id]:
TypeError: '>' not supported between instances of 'NoneType' and 'datetime.datetime'
{2021-01-14 14:08:17,975} {{process_utils.py:95}} INFO - Sending Signals.SIGTERM to GPID 53178
{2021-01-14 14:08:18,212} {{process_utils.py:61}} INFO - Process psutil.Process(pid=58676, status='terminated') (58676) terminated with exit code None
{2021-01-14 14:08:18,295} {{process_utils.py:201}} INFO - Waiting up to 5 seconds for processes to exit...
{2021-01-14 14:08:18,345} {{process_utils.py:61}} INFO - Process psutil.Process(pid=53178, status='terminated') (53178) terminated with exit code 0
{2021-01-14 14:08:18,345} {{process_utils.py:61}} INFO - Process psutil.Process(pid=58677, status='terminated') (58677) terminated with exit code None
{2021-01-14 14:08:18,346} {{process_utils.py:61}} INFO - Process psutil.Process(pid=58678, status='terminated') (58678) terminated with exit code None
{2021-01-14 14:08:18,346} {{process_utils.py:61}} INFO - Process psutil.Process(pid=58708, status='terminated') (58708) terminated with exit code None
{2021-01-14 14:08:18,346} {{scheduler_job.py:1296}} INFO - Exited execute loop
```
**What you expected to happen**:
Scheduler should not die.
**How to reproduce it**:
I don't know how to reproduce it
**Anything else we need to know**:
I just upgraded Airflow from 1.10.14. For now I am trying to fix it temporarily by catching the exception in scheduler_job.py:
```python
for dag_run in dag_runs:
try:
self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
except Exception as e:
self.log.exception(e)
```
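The failing comparison suggests the serialized_dag row had disappeared, so `sd_last_updated_datetime` came back as `None`. A defensive guard in `DagBag.get_dag` might look roughly like this (illustrative only; the names follow the traceback above, and this is not the actual upstream patch):
```python
sd_last_updated_datetime = SerializedDagModel.get_last_updated_datetime(
    dag_id=dag_id, session=session
)
if sd_last_updated_datetime is None:
    # The serialized DAG vanished (e.g. its file was removed); drop the stale cache entry.
    self.log.warning("Serialized DAG %s no longer exists, removing it from the DagBag", dag_id)
    self.dags.pop(dag_id, None)
    self.dags_last_fetched.pop(dag_id, None)
    return None
if sd_last_updated_datetime > self.dags_last_fetched[dag_id]:
    ...
```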
| https://github.com/apache/airflow/issues/13667 | https://github.com/apache/airflow/pull/13899 | ffb472cf9e630bd70f51b74b0d0ea4ab98635572 | 8958d125cd4ac9e58d706d75be3eb88d591199cd | 2021-01-14T06:51:40Z | python | 2021-01-26T13:32:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,659 | ["docs/apache-airflow/howto/define_extra_link.rst"] | Operator Extra Links not showing up on UI | <!--
-->
**Apache Airflow version**: 2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Linux
- **Kernel** (e.g. `uname -a`): Linux
- **Install tools**:
- **Others**:
**What happened**:
Followed the example here: https://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html#define-an-operator-extra-link, and was expecting the link to show up in the UI, but it does not :(

```
class GoogleLink(BaseOperatorLink):
name = "Google"
def get_link(self, operator, dttm):
return "https://www.google.com"
class MyFirstOperator(BaseOperator):
operator_extra_links = (
GoogleLink(),
)
@apply_defaults
def __init__(self, **kwargs):
super().__init__(**kwargs)
def execute(self, context):
self.log.info("Hello World!")
print(self.extra_links)
```
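For what it's worth, in Airflow 2.0 extra links on custom operators generally also have to be registered with the webserver through a plugin before they are rendered; a minimal sketch reusing the class above (the plugin name is arbitrary):
```python
from airflow.plugins_manager import AirflowPlugin

class ExtraLinkPlugin(AirflowPlugin):
    name = "extra_link_plugin"
    operator_extra_links = [GoogleLink()]
```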
**What you expected to happen**: I expected a button link to show up in the Task Instance modal.
**How to reproduce it**: Follow Example here on Airflow 2.0 https://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html#define-an-operator-extra-link
**Anything else we need to know**:
<!--
-->
| https://github.com/apache/airflow/issues/13659 | https://github.com/apache/airflow/pull/13683 | 3558538883612a10e9ea3521bf864515b6e560c5 | 3d21082adc3bde63a15dad4db85b448ff695cfc6 | 2021-01-13T20:55:43Z | python | 2021-01-15T12:21:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,656 | ["airflow/www/static/js/connection_form.js"] | Password is unintendedly changed when editing a connection | **Apache Airflow version**: 2.0.0
**What happened**:
When editing a connection - without changing the password - and saving the edited connection, a wrong password is saved.
**What you expected to happen**:
If I do not change the password in the UI, I expect that the password is not changed.
**How to reproduce it**:
- Create a new connection and save it (screenshots 1 + 2)
- Edit the connection without editing the password, and save it again (screenshots 3 + 4)
If you _do_ edit the password, the (new or old) password is saved correctly.
*Screenshot 1*

*Screenshot 2*

*Screenshot 3*

*Screenshot 4*

(I blurred out the full string in the unlikely case that the full string might contain information on my fernet key or something) | https://github.com/apache/airflow/issues/13656 | https://github.com/apache/airflow/pull/15073 | 1627323a197bba2c4fbd71816a9a6bd3f78c1657 | b4374d33b0e5d62c3510f1f5ac4a48e7f48cb203 | 2021-01-13T16:34:22Z | python | 2021-03-29T19:12:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,653 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | API Endpoint for Airflow V1 - DAGs details | **Description**
We need to have the endpoint in Airflow V1 to retrieve details of existing DAG, e.g. `GET /dags/{dag_id}/details `
**Use case / motivation**
We want to be able to retrieve/discover the parameters that a DAG accepts. We can see that you pass parameters when you execute a dag via the conf object. We can also see that you explicitly declare parameters that a DAG accepts via the params argument when creating the DAG.
However, we can't see anywhere via either the REST API or CLI that allows you to retrieve this information from a DAG (note that we are not saying a DAG run).
It doesn't even look like version 2 API supports this although the OpenAPI spec mentions a dags/{dag_id}/details endpoint but this is not documented. We found the related GitHub issue for this new endpoint and it is done but looks like documentation isn't yet updated.
Please can you:
1. Provide the response for the v2 details endpoint
2. Advise when v2 documentation will be updated with the details endpoint.
3. Advise if there is a workaround for us doing this on v1.1
**Related Issues**
#8138
| https://github.com/apache/airflow/issues/13653 | https://github.com/apache/airflow/pull/13790 | 2c6c7fdb2308de98e142618836bdf414df9768c8 | 10b8ecc86f24739a38e56347dcc8dc60e3e43975 | 2021-01-13T14:21:27Z | python | 2021-01-21T15:42:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,638 | ["airflow/utils/log/file_task_handler.py", "tests/utils/test_log_handlers.py"] | Stable API task logs | <!--
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA
**Environment**:
- **Cloud provider or hardware configuration**: PC (docker-compose)
- **OS** (e.g. from /etc/os-release): Linux mint 20 (for PC), Debian Buster in container
- **Kernel** (e.g. `uname -a`): Linux 607a1bfeebd2 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021 x86_64 GNU/Linux
- **Install tools**: Poetry (so pipy)
- **Others**:
Using python 3.8.6, with Celery Executor, one worker
Task did run properly
**What happened**:
I tried to get the logs of a task instance using the stable Rest API through the Swagger UI included in Airflow, and it crashed (got a stack trace)
I got a 500 error.
```
engine-webserver_1 | 2021-01-12T16:45:18.465370280Z [2021-01-12 16:45:18,464] {app.py:1891} ERROR - Exception on /api/v1/dags/insert/dagRuns/manual__2021-01-12T15:05:59.560500+00:00/taskInstances/insert-db/logs/0 [GET]
engine-webserver_1 | 2021-01-12T16:45:18.465391147Z Traceback (most recent call last):
engine-webserver_1 | 2021-01-12T16:45:18.465394643Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
engine-webserver_1 | 2021-01-12T16:45:18.465397709Z response = self.full_dispatch_request()
engine-webserver_1 | 2021-01-12T16:45:18.465400161Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
engine-webserver_1 | 2021-01-12T16:45:18.465402912Z rv = self.handle_user_exception(e)
engine-webserver_1 | 2021-01-12T16:45:18.465405405Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
engine-webserver_1 | 2021-01-12T16:45:18.465407715Z reraise(exc_type, exc_value, tb)
engine-webserver_1 | 2021-01-12T16:45:18.465409739Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
engine-webserver_1 | 2021-01-12T16:45:18.465412258Z raise value
engine-webserver_1 | 2021-01-12T16:45:18.465414560Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
engine-webserver_1 | 2021-01-12T16:45:18.465425555Z rv = self.dispatch_request()
engine-webserver_1 | 2021-01-12T16:45:18.465427999Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
engine-webserver_1 | 2021-01-12T16:45:18.465429697Z return self.view_functions[rule.endpoint](**req.view_args)
engine-webserver_1 | 2021-01-12T16:45:18.465431146Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465433001Z response = function(request)
engine-webserver_1 | 2021-01-12T16:45:18.465434308Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465435841Z response = function(request)
engine-webserver_1 | 2021-01-12T16:45:18.465437122Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/connexion/decorators/validation.py", line 384, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465438620Z return function(request)
engine-webserver_1 | 2021-01-12T16:45:18.465440074Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/connexion/decorators/response.py", line 103, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465441667Z response = function(request)
engine-webserver_1 | 2021-01-12T16:45:18.465443086Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465445345Z return function(**kwargs)
engine-webserver_1 | 2021-01-12T16:45:18.465446713Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/airflow/api_connexion/security.py", line 47, in decorated
engine-webserver_1 | 2021-01-12T16:45:18.465448202Z return func(*args, **kwargs)
engine-webserver_1 | 2021-01-12T16:45:18.465449538Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
engine-webserver_1 | 2021-01-12T16:45:18.465451032Z return func(*args, session=session, **kwargs)
engine-webserver_1 | 2021-01-12T16:45:18.465452504Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/airflow/api_connexion/endpoints/log_endpoint.py", line 81, in get_log
engine-webserver_1 | 2021-01-12T16:45:18.465454135Z logs, metadata = task_log_reader.read_log_chunks(ti, task_try_number, metadata)
engine-webserver_1 | 2021-01-12T16:45:18.465455658Z File "/brain/engine/.cache/poetry/meta-vSi4r4R8-py3.8/lib/python3.8/site-packages/airflow/utils/log/log_reader.py", line 58, in read_log_chunks
engine-webserver_1 | 2021-01-12T16:45:18.465457226Z logs, metadatas = self.log_handler.read(ti, try_number, metadata=metadata)
engine-webserver_1 | 2021-01-12T16:45:18.465458632Z ValueError: not enough values to unpack (expected 2, got 1)
```
**What you expected to happen**:
I expected to get the logs of my task
**How to reproduce it**:
I think it's everytime (at least on my side)
**Anything else we need to know**:
Other stable API calls, such as getting the list of DAG runs and task instances, worked well.
Logs appear fine if I go to
EDIT : Ok, I'm stupid, I put 0 as try number, instead of 1...
So not a big bug, though I think 0 as try number should be a 400 status response, not 500 crash.
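For anyone who lands here with the same symptom, a sketch of the same request with a valid try number (1 instead of 0); the host and credentials are placeholders:
```python
import requests

# The run_id may need URL-encoding in practice.
resp = requests.get(
    "http://localhost:8080/api/v1/dags/insert/dagRuns/"
    "manual__2021-01-12T15:05:59.560500+00:00/taskInstances/insert-db/logs/1",
    auth=("admin", "admin"),
    headers={"Accept": "text/plain"},
)
print(resp.text)
```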
Should I keep it open ? | https://github.com/apache/airflow/issues/13638 | https://github.com/apache/airflow/pull/14001 | 32d2c25e2dd1fd069f51bdfdd79595f12047a867 | 2366f861ee97f50e2cff83d557a1ae97030febf9 | 2021-01-12T17:10:25Z | python | 2021-02-01T13:33:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,637 | ["UPDATING.md", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | Scheduler takes 100% of CPU without task execution | Hi,
Running Airflow 2.0.0 with Python 3.6.9, the scheduler consumes a lot of CPU time without executing any task:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15758 oli 20 0 42252 3660 3124 R 100.0 0.0 0:00.06 top
16764 oli 20 0 590272 90648 15468 R 200.0 0.3 0:00.59 airflow schedul
16769 oli 20 0 588808 77236 13900 R 200.0 0.3 0:00.55 airflow schedul
1 root 20 0 1088 548 516 S 0.0 0.0 0:13.28 init
10 root 20 0 900 80 16 S 0.0 0.0 0:00.00 init | https://github.com/apache/airflow/issues/13637 | https://github.com/apache/airflow/pull/13664 | 9536ad906f1591a5a0f82f69ba3bd214c4516c5b | e4b8ee63b04a25feb21a5766b1cc997aca9951a9 | 2021-01-12T14:16:04Z | python | 2021-01-14T13:08:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,634 | ["airflow/providers/segment/provider.yaml"] | Docs: Segment `external-doc-url` links to Dingtalk API | On file `master:airflow/providers/segment/provider.yaml`
```
integrations:
- integration-name: Segment
external-doc-url: https://oapi.dingtalk.com
tags: [service]
```
That is the API for Dingtalk, which is an unrelated Alibaba owned service. The docs for Twilio Segment can be found at https://segment.com/docs/.
I am not sure if this issue is the result of an issue somewhere else, but I identified this while adding integration logos.
Not sure any of this is relevant, but it appears I must add this information:
**Apache Airflow version**: 2.0.0
**Environment**: GNU/Linux
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`): 20.4
- **Install tools**: PIP
**What happened**: `external-doc-url` is mapped to dingtalk API
**What you expected to happen**: `external-doc-url` to be mapped to Segment docs
**How to reproduce it**: Observe code at `master:airflow/providers/segment/provider.yaml`
```
integrations:
- integration-name: Segment
external-doc-url: https://oapi.dingtalk.com
tags: [service]
```
New to Apache Airflow, but please bear with me 😄 | https://github.com/apache/airflow/issues/13634 | https://github.com/apache/airflow/pull/13645 | 189af54043a6aa6e7557bda6cf7cfca229d0efd2 | 548d082008c0c83f44020937f6ff19ca006b96cc | 2021-01-12T13:06:38Z | python | 2021-01-13T12:07:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,629 | ["airflow/providers/apache/hive/hooks/hive.py"] | HiveCliHook kill method error | https://github.com/apache/airflow/blob/6c458f29c0eeadb1282e524e76fdd379d6436824/airflow/providers/apache/hive/hooks/hive.py#L464
It should be:
```
if hasattr(self, 'sub_process'):
``` | https://github.com/apache/airflow/issues/13629 | https://github.com/apache/airflow/pull/14542 | 45a0ac2e01c174754f4e6612c8e4d3125061d096 | d9e4454c66051a9e8bb5b2f3814d46f29332b89d | 2021-01-12T07:06:21Z | python | 2021-03-01T13:59:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,624 | ["airflow/www/templates/airflow/dags.html"] | Misleading dag pause info tooltip |
**Apache Airflow version**:
2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
N/A
**Environment**:
N/A
**What happened**:
The UI tooltip is misleading and confuses the user.
Tooltip says " use this toggle to pause the dag" which implies that if the toggle is set to **ON** the flow is paused, but in fact it's the reverse of that.
Either the logic should be reversed so that if the toggle is on, the DAG is paused, or the wording should be changed to explicitly state the actual functionality of the "on state" of the toggle.
something like "When this toggle is ON, the DAG will be executed at scheduled times, turn this toggle off to pause executions of this dag ".
**What you expected to happen**:
UI tooltip should be honest and clear about its function.
**How to reproduce it**:
open DAGs window of the airflow webserver in a supported browser, hold mouse over the (i) on the second cell from left on the top row.
<img width="534" alt="Screen Shot 2021-01-11 at 12 27 18 PM" src="https://user-images.githubusercontent.com/14813957/104258476-7bfad200-5434-11eb-8152-443f05071e4b.png">
| https://github.com/apache/airflow/issues/13624 | https://github.com/apache/airflow/pull/13642 | 3d538636984302013969aa82a04d458d24866403 | c4112e2e9deaa2e30e6fd05d43221023d0d7d40b | 2021-01-12T01:46:46Z | python | 2021-01-12T19:14:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,602 | ["airflow/www/utils.py", "tests/www/test_utils.py"] | WebUI returns an error when logs that do not use a DAG list `None` as the DAG ID | <!--
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: docker-compose
- **OS** (e.g. from /etc/os-release): Docker `apache/airflow` `sha256:b4f957bef5a54ca0d781ae1431d8485f125f0b5d18f3bc7e0416c46e617db265`
- **Kernel** (e.g. `uname -a`): Linux c697ae3a0397 5.4.0-58-generic #64~18.04.1-Ubuntu SMP Wed Dec 9 17:11:11 UTC 2020 x86_64 GNU/Linux
- **Install tools**: docker
- **Others**:
**What happened**:
When an event that does not include a DAG is logged in the UI, this event lists the DAG ID as "None". This "None" is treated as an actual DAG ID with a link, which throws an error if clicked.
```
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.6.12
Airflow version: 2.0.0
Node: 9097c882a712
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/www/views.py", line 2028, in graph
dag = current_app.dag_bag.get_dag(dag_id)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/dagbag.py", line 171, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/dagbag.py", line 227, in _add_dag_from_db
raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
airflow.exceptions.SerializedDagNotFound: DAG 'None' not found in serialized_dag table
```
**What you expected to happen**:
I expected `None` to not be a link or have it link to some sort of error page. Instead it throws an error.
**How to reproduce it**:
Run a CLI command such as `airflow dags list`, then go to `/log/list/` in the web UI, and click on the `None` *Dag Id* for the logged event for the command.

**Anything else we need to know**:
This problem appears to occur every time. | https://github.com/apache/airflow/issues/13602 | https://github.com/apache/airflow/pull/13619 | eb40eea81be95ecd0e71807145797b6d82375885 | 8ecdef3e50d3b83901d70a13794ae6afabc4964e | 2021-01-11T01:26:42Z | python | 2021-01-12T10:16:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,597 | ["airflow/www/static/js/connection_form.js", "airflow/www/views.py"] | Extra field widgets of custom connections do not properly save data | **Apache Airflow version**: 2.0.0
**Environment**: Docker image `apache/airflow:2.0.0-python3.8` on Win10 with WSL
**What happened**:
I built a custom provider with a number of custom connections.
This works:
- The connections are properly registered
- The UI does not show hidden fields as per `get_ui_field_behaviour`
- The UI correctly relabels fields as per `get_ui_field_behaviour`
- The UI correctly shows added widgets as per `get_connection_form_widgets` (well, mostly)
What does not work:
- The UI does not save values entered for additional widgets
I used the [JDBC example](https://github.com/apache/airflow/blob/master/airflow/providers/jdbc/hooks/jdbc.py) to string myself along, copying and pasting it as a hook into my custom provider package. (I did not install the JDBC provider package, unless it is already installed in the image I use - but if I don't add it in my own provider package, I don't have the connection type in the UI, so I assume it is not.) Curiously, the JDBC hook works just fine. I then created the following file:
```Python
"""
You find two child classes of DbApiHook in here. One is the exact copy of the JDBC
provider hook, minus some irrelevant logic (I only care about the UI stuff here).
The other is the exact same thing, except I added an "x" after every occurrence
of "jdbc" in strings and names.
"""
from typing import Any, Dict, Optional
from airflow.hooks.dbapi import DbApiHook
class JdbcXHook(DbApiHook):
"""
Copy of JdbcHook below. Added an "x" at various places, including the class name.
"""
conn_name_attr = 'jdbcx_conn_id' # added x
default_conn_name = 'jdbcx_default' # added x
conn_type = 'jdbcx' # added x
hook_name = 'JDBCx Connection' # added x
supports_autocommit = True
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form"""
from flask_appbuilder.fieldwidgets import BS3TextFieldWidget
from flask_babel import lazy_gettext
from wtforms import StringField
# added an x in the keys
return {
"extra__jdbcx__drv_path": StringField(lazy_gettext('Driver Path'), widget=BS3TextFieldWidget()),
"extra__jdbcx__drv_clsname": StringField(
lazy_gettext('Driver Class'), widget=BS3TextFieldWidget()
),
}
@staticmethod
def get_ui_field_behaviour() -> Dict:
"""Returns custom field behaviour"""
return {
"hidden_fields": ['port', 'schema', 'extra'],
"relabeling": {'host': 'Connection URL'},
}
class JdbcHook(DbApiHook):
"""
General hook for jdbc db access.
JDBC URL, username and password will be taken from the predefined connection.
Note that the whole JDBC URL must be specified in the "host" field in the DB.
Raises an airflow error if the given connection id doesn't exist.
"""
conn_name_attr = 'jdbc_conn_id'
default_conn_name = 'jdbc_default'
conn_type = 'jdbc'
hook_name = 'JDBC Connection plain'
supports_autocommit = True
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form"""
from flask_appbuilder.fieldwidgets import BS3TextFieldWidget
from flask_babel import lazy_gettext
from wtforms import StringField
return {
"extra__jdbc__drv_path": StringField(lazy_gettext('Driver Path'), widget=BS3TextFieldWidget()),
"extra__jdbc__drv_clsname": StringField(
lazy_gettext('Driver Class'), widget=BS3TextFieldWidget()
),
}
@staticmethod
def get_ui_field_behaviour() -> Dict:
"""Returns custom field behaviour"""
return {
"hidden_fields": ['port', 'schema', 'extra'],
"relabeling": {'host': 'Connection URL'},
}
```
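For context, hooks like these get picked up through a provider-info entry exposed via the "apache_airflow_provider" entry point; a rough sketch of the relevant bit (package and module names here are made up):
```python
def get_provider_info():
    return {
        "package-name": "my-provider",
        "name": "My Provider",
        "description": "Custom hooks used to reproduce this issue.",
        "hook-class-names": [
            "my_provider.hooks.jdbc_copy.JdbcHook",
            "my_provider.hooks.jdbc_copy.JdbcXHook",
        ],
    }
```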
**What you expected to happen**:
After doing the above, I expected
- Seeing both in the add connection UI
- Being able to use both the same way
**What actually happens**:
- I _do_ see both in the UI (Screenshot 1)
- For some reason, the "normal" hook has BOTH extra fields - not just its own two? (Screenshot 2)
- If I add the connection as in Screenshot 2, they are saved in the four fields (its own two + the two for the "x" hook) properly, as shown in Screenshot 3
- If I seek to edit the connection again, they are also there - all four fields - with the correct values in the UI
- If I add the connection for the "x" type as in Screenshot 4, it ostensibly saves it - with two fields as defined in the code
- You can see in screenshot 5, that the extra is saved as an empty string?!
- When trying to edit the connection in the UI, you also see that there is no data saved for two extra widgets?!
- I added a few more screenshots of airflow providers CLI command results (note that the package `ewah` has a number of other custom hooks, and the issue above occurs for *all* of them)
*Screenshot 1:*

*Screenshot 2:*

*Screenshot 3:*

*Screenshot 4:*

*Screenshot 5:*

*Screenshot 6 - airflow providers behaviours:*

*Screenshot 7 - airflow providers get:*

(Note: This error occurs with pre-installed providers as well)
*Screenshot 8 - airflow providers hooks:*

*Screenshot 9 - aorflow providers list:*

*Screenshot 10 - airflow providers widgets:*

**How to reproduce it**:
- create a custom provider package
- add the code snippet pasted above somewhere
- add the two classes to the `hook-class-names` list in the provider info
- install the provider package
- do what I described above
| https://github.com/apache/airflow/issues/13597 | https://github.com/apache/airflow/pull/13640 | 34eb203c5177bc9be91a9387d6a037f6fec9dba1 | b007fc33d481f0f1341d1e1e4cba719a5fe6580d | 2021-01-10T12:00:44Z | python | 2021-01-12T23:32:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,559 | ["airflow/models/taskinstance.py"] | Nested templated variables do not always render | **Apache Airflow version**:
1.10.14 and 1.10.8.
**Environment**:
Python 3.6 and Airflow 1.10.14 on SQLite.
**What happened**:
Nested Jinja templates do not consistently render when running tasks. Rendering behaviour during a task instance run also differs from the Airflow UI and the `airflow render` CLI.
**What you expected to happen**:
Airflow should render nested Jinja templates consistently and completely across each interface. Coming from Airflow 1.8.2, this used to be the case.
This regression may have been introduced in 1.10.6 with a refactor of BaseOperator templating functionality.
https://github.com/apache/airflow/pull/5461
Whether or not a nested layer renders seems to differ based on which arg is being templated in an operator and perhaps order. Furthermore, it seems like the render cli and airflow ui each apply TI.render_templates() a second time, creating inconsistency in what nested templates get rendered.
There may be a bug in the way BaseOperator.render_template() observes/caches templated fields.
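As an interim workaround for the reproduction DAG below, the still-templated kwarg can be rendered a second time inside the callable. This is only a sketch (untested; on 1.10.x the PythonOperator also needs `provide_context=True` for the context kwargs to arrive):
```python
def print_fields(arg0, kwarg1, **context):
    # Render the remaining template level explicitly using the current task and context.
    kwarg1 = context["task"].render_template(kwarg1, context)
    print(f"level 0 arg0: {arg0}")
    print(f"level 1 kwarg1: {kwarg1}")
```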
**How to reproduce it**:
From the most basic airflow setup
nested_template_bug.py
```
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
with DAG("nested_template_bug", start_date=datetime(2021, 1, 1)) as dag:
arg0 = 'level_0_{{task.task_id}}_{{ds}}'
kwarg1 = 'level_1_{{task.op_args[0]}}'
def print_fields(arg0, kwarg1):
print(f'level 0 arg0: {arg0}')
print(f'level 1 kwarg1: {kwarg1}')
nested_render = PythonOperator(
task_id='nested_render',
python_callable=print_fields,
op_args=[arg0, ],
op_kwargs={
'kwarg1': kwarg1,
},
)
```
```
> airflow test c
level 0 arg0: level_0_nested_render_2021-01-01
level 1 kwarg1: level_1_level_0_{{task.task_id}}_{{ds}}
> airflow render nested_template_bug nested_render 2021-01-01
# ----------------------------------------------------------
# property: op_args
# ----------------------------------------------------------
['level_0_nested_render_2021-01-01']
# ----------------------------------------------------------
# property: op_kwargs
# ----------------------------------------------------------
{'kwarg1': 'level_1_level_0_nested_render_2021-01-01'}
``` | https://github.com/apache/airflow/issues/13559 | https://github.com/apache/airflow/pull/18516 | b0a29776b32cbee657c9a6369d15278a999e927f | 1ac63cd5e2533ce1df1ec1170418a09170998699 | 2021-01-08T04:06:45Z | python | 2021-09-28T15:30:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,535 | ["airflow/providers/docker/CHANGELOG.rst", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | DockerOperator / XCOM : `TypeError: Object of type bytes is not JSON serializable` | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA
**Environment**:
* local Ubuntu 18.04 LTS /
* docker-compose version 1.25.3, build d4d1b42b
* docker 20.10.1, build 831ebea
**What happened**:
When enabling XCom push for a DockerOperator, the following error is thrown after the task finishes successfully:
`TypeError: Object of type bytes is not JSON serializable`
**What you expected to happen**:
* error is not thrown
* if xcom_all is True: xcom contains all log lines
* if xcom_all is False: xcom contains last log line
**How to reproduce it**:
See the docker-compose file and README here:
https://github.com/AlessioM/airflow-xcom-issue
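As a stop-gap, a workaround sketch (untested) that decodes the container log bytes before they reach XCom:
```python
from airflow.providers.docker.operators.docker import DockerOperator

class DecodedDockerOperator(DockerOperator):
    """Return XCom-safe strings instead of raw bytes from the container logs."""

    def execute(self, context):
        result = super().execute(context)
        if isinstance(result, bytes):
            return result.decode("utf-8", errors="replace")
        if isinstance(result, list):
            return [
                line.decode("utf-8", errors="replace") if isinstance(line, bytes) else line
                for line in result
            ]
        return result
```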
| https://github.com/apache/airflow/issues/13535 | https://github.com/apache/airflow/pull/13536 | 2de7793881da0968dd357a54e8b2a99017891915 | cd3307ff2147b170dc3feb5999edf5c8eebed4ba | 2021-01-07T09:22:20Z | python | 2021-07-26T17:55:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,532 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | In DockerOperator the parameter auto_remove doesn't work in | When setting DockerOperator with auto_remove=True in airflow version 2.0.0 the container remain in the container list if it was finished with 'Exited (1)' | https://github.com/apache/airflow/issues/13532 | https://github.com/apache/airflow/pull/13993 | 8eddc8b5019890a712810b8e5b1185997adb9bf4 | ba54afe58b7cbd3711aca23252027fbd034cca41 | 2021-01-07T07:48:37Z | python | 2021-01-31T19:23:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,531 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Airflow v1 REST List task instances api can not get `no_status` task instance | <!--
-->
**Apache Airflow version**:
2.0
**Environment**:
- **OS** ubuntu 18.04
- **Kernel** 5.4.0-47-generic
**What happened**:
When I use the list task instances REST API, I cannot get the instances whose status is `no_status`.
```
### Get Task Instances
POST {{baseUrl}}/dags/~/dagRuns/~/taskInstances/list
Authorization: Basic admin:xxx
Content-Type: application/json
{
"dag_ids": ["stop_dags"]
}
or
{
"dag_ids": ["stop_dags"],
"state": ["null"]
}
```
**What you expected to happen**:
Include task instances in all states when I don't specify particular states.
**How to reproduce it**:
Use a REST testing tool like Postman to call the API.
**Anything else we need to know**:
I cannot find a REST API that returns all the DAG run instances with a specific state; maybe the REST API should be extended.
Thanks!
| https://github.com/apache/airflow/issues/13531 | https://github.com/apache/airflow/pull/19487 | 1e570229533c4bbf5d3c901d5db21261fa4b1137 | f636060fd7b5eb8facd1acb10a731d4e03bc864a | 2021-01-07T07:19:52Z | python | 2021-11-20T16:09:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,515 | ["airflow/providers/slack/ADDITIONAL_INFO.md", "airflow/providers/slack/BACKPORT_PROVIDER_README.md", "airflow/providers/slack/README.md", "airflow/providers/slack/hooks/slack.py", "docs/conf.py", "docs/spelling_wordlist.txt", "scripts/ci/images/ci_verify_prod_image.sh", "setup.py", "tests/providers/slack/hooks/test_slack.py"] | Update slackapiclient / slack_sdk to v3 | Hello,
Slack has released updates to its library, and we can start using them.
We especially like one change.
> slack_sdk has no required dependencies. This means aiohttp is no longer automatically resolved.
I've looked through the documentation and it doesn't look like a difficult task, but I think it's still worth testing.
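The most visible change is the package/import rename; an illustrative snippet (the token is a placeholder):
```python
# slackclient v2 used:  from slack import WebClient
# slack_sdk v3 uses:
from slack_sdk import WebClient

client = WebClient(token="xoxb-placeholder-token")
```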
More info: https://slack.dev/python-slack-sdk/v3-migration/index.html#from-slackclient-2-x
Best regards,
Kamil Breguła | https://github.com/apache/airflow/issues/13515 | https://github.com/apache/airflow/pull/13745 | dbd026227949a74e5995c8aef3c35bd80fc36389 | 283945001363d8f492fbd25f2765d39fa06d757a | 2021-01-06T12:56:13Z | python | 2021-01-25T21:13:48Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,504 | ["airflow/jobs/scheduler_job.py", "airflow/models/dagbag.py", "tests/jobs/test_scheduler_job.py"] | Scheduler is unable to find serialized DAG in the serialized_dag table | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not relevant
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): CentOS Linux 7 (Core)
- **Kernel** (e.g. `uname -a`): Linux us01odcres-jamuaar-0003 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: PostgreSQL 12.2
- **Others**:
**What happened**:
I have 2 dag files say, dag1.py and dag2.py.
dag1.py creates a static DAG i.e. once it's parsed it will create 1 specific DAG.
dag2.py creates dynamic DAGs based on json files kept in an external location.
The static DAG (generated from dag1.py) has a task in the later stage which generates json files and they get picked up by dag2.py which creates dynamic DAGs.
The dynamic DAGs which get created are unpaused by default and get scheduled once.
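For reference, a hypothetical sketch of the dag2.py pattern described above (paths, JSON fields, and the operator used are placeholders):
```python
import json
from datetime import datetime
from pathlib import Path

from airflow import DAG
from airflow.operators.bash import BashOperator

CONFIG_DIR = Path("/path/to/json/configs")  # placeholder location

for config_file in CONFIG_DIR.glob("*.json"):
    conf = json.loads(config_file.read_text())
    dag_id = f"dynamic_dag_{conf['name']}"
    with DAG(
        dag_id,
        start_date=datetime(2021, 1, 1),
        schedule_interval="@once",
        is_paused_upon_creation=False,
    ) as dag:
        BashOperator(task_id="run", bash_command="echo placeholder")
    globals()[dag_id] = dag
```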
This whole process used to work fine with airflow 1.x where DAG serialization was not mandatory and was turned off by default.
But with Airflow 2.0 I am getting the following exception occasionally when the dynamically generated DAGs try to get scheduled by the scheduler.
```
[2021-01-06 10:09:38,742] {scheduler_job.py:1293} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/global/packages/python/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute
self._run_scheduler_loop()
File "/global/packages/python/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1377, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/global/packages/python/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1474, in _do_scheduling
self._create_dag_runs(query.all(), session)
File "/global/packages/python/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1557, in _create_dag_runs
dag = self.dagbag.get_dag(dag_model.dag_id, session=session)
File "/global/packages/python/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/global/packages/python/lib/python3.7/site-packages/airflow/models/dagbag.py", line 171, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/global/packages/python/lib/python3.7/site-packages/airflow/models/dagbag.py", line 227, in _add_dag_from_db
raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
airflow.exceptions.SerializedDagNotFound: DAG 'dynamic_dag_1' not found in serialized_dag table
```
When I checked the serialized_dag table manually, I am able to see the DAG entry there.
I found the last_updated column value to be **2021-01-06 10:09:38.757076+05:30**
Whereas the exception got logged at **[2021-01-06 10:09:38,742]**, which is a little before the last_updated time.
I think this means that the Scheduler tried to look for the DAG entry in the serialized_dag table before DagFileProcessor created the entry.
Is this right or something else can be going on here?
**What you expected to happen**:
Scheduler should start looking for the DAG entry in the serialized_dag table only after DagFileProcessor has added it.
Here it seems that DagFileProcessor added the DAG entry in the **dag** table, scheduler immediately fetched this dag_id from it and tried to find the same in **serialized_dag** table even before DagFileProcessor could add that.
**How to reproduce it**:
It occurs occasionally and there is no well defined way to reproduce it.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/13504 | https://github.com/apache/airflow/pull/13893 | 283945001363d8f492fbd25f2765d39fa06d757a | b9eb51a0fb32cd660a5459d73d7323865b34dd99 | 2021-01-06T07:57:27Z | python | 2021-01-25T21:55:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,494 | ["airflow/providers/google/cloud/log/stackdriver_task_handler.py", "airflow/utils/log/log_reader.py", "tests/cli/commands/test_info_command.py", "tests/providers/google/cloud/log/test_stackdriver_task_handler.py", "tests/providers/google/cloud/log/test_stackdriver_task_handler_system.py"] | Unable to view StackDriver logs in Web UI | <!--
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16.15-gke.4901
**Environment**:
- **Cloud provider or hardware configuration**: GKE
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Using the apache/airflow docker image
- **Others**: Running 1 pod encapsulating 2 containers (1 x webserver and 1x scheduler) running in localexecutor mode
**What happened**:
I have remote logging configured for tasks to send the logs to StackDriver as per the below configuration. The logs get sent to Stackdriver okay and I can view them via the GCP console. However I cannot view them when browsing the UI.
The UI shows a spinning wheel and I see requests in the network tab to
`https://my_airflow_instance/get_logs_with_metadata?dag_id=XXX......`
These requests take about 15 seconds to run before returning with HTTP 200 and something like this in the response body:
```
{"message":"","metadata":{"end_of_log":false,"next_page_token":"xxxxxxxxx"}}
```
So there is no actual log data.
**What you expected to happen**:
I should see the logs in the Web UI
**How to reproduce it**:
Configure remote logging for StackDriver with the below config:
```
AIRFLOW__LOGGING__GOOGLE_KEY_PATH: "/var/run/secrets/airflow/secrets/google-cloud-platform/stackdriver/credentials.json"
AIRFLOW__LOGGING__LOG_FORMAT: "[%(asctime)s] {{%(filename)s:%(lineno)d}} %(levelname)s - %(message)s"
AIRFLOW__LOGGING__REMOTE_LOGGING: "True"
AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER: "stackdriver://airflow-tasks"
```
**Anything else we need to know**:
<!--
-->
| https://github.com/apache/airflow/issues/13494 | https://github.com/apache/airflow/pull/13784 | d65376c377341fa9d6da263e145e06880d4620a8 | 833e3383230e1f6f73f8022ddf439d3d531eff01 | 2021-01-05T17:47:14Z | python | 2021-02-02T17:38:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,464 | ["airflow/models/dagrun.py"] | Scheduler fails if task is removed at runtime | **Apache Airflow version**: 2.0.0, LocalExecutor
**Environment**: Docker on Win10 with WSL, official Python3.8 image
**What happened**:
When a DAG is running and I delete a task from the running DAG, the scheduler fails. When using Docker, upon automatic restart of the scheduler, the scheduler just fails again, perpetually.

Note: I don't _know_ if the task itself was running at the time, but I would guess it was.
**What you expected to happen**:
The scheduler should understand that the task is not part of the DAG anymore and not fail.
**How to reproduce it**:
- Create a DAG with multiple tasks
- Let it run
- While running, delete one of the tasks from the source code
- See the scheduler break | https://github.com/apache/airflow/issues/13464 | https://github.com/apache/airflow/pull/14057 | d45739f7ce0de183329d67fff88a9da3943a9280 | eb78a8b86c6e372bbf4bfacb7628b154c16aa16b | 2021-01-04T16:54:58Z | python | 2021-02-04T10:08:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,451 | ["airflow/providers/http/sensors/http.py", "tests/providers/http/sensors/test_http.py"] | Modify HttpSensor to continue poking if the response is not 404 | **Description**
As documented in the [HttpSensor](https://airflow.apache.org/docs/apache-airflow-providers-http/stable/_modules/airflow/providers/http/sensors/http.html), if the response to the HTTP call is an error other than 404, the task will fail.
>HTTP Error codes other than 404 (like 403) or Connection Refused Error
> would fail the sensor itself directly (no more poking).
The code block that apply this behavior:
```
except AirflowException as exc:
if str(exc).startswith("404"):
return False
raise exc
```
**Use case / motivation**
I am working with an API that returns 500 for any error that happens internally (unauthorized, Not Acceptable, etc.) and need the sensor to be able to continue poking even when the response is different from 404.
Another case is an API that sometimes returns 429 and makes the task fail. (This could be worked around with a large poke interval.)
The first API has a bad design, but since we need to consume services like this, I would like to have more flexibility when working with HttpSensor.
**What do you want to happen**
When creating an HttpSensor task, I would like to be able to pass a list of status codes; if the HTTP status code in the response matches one of those codes, the sensor should return False and continue poking.
If no status codes are set, the default behaviour of returning False (and continuing to poke) only on 404 will stay as it is now.
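A rough subclass sketch of that behaviour (the parameter name is hypothetical and the code is untested):
```python
from airflow.exceptions import AirflowException
from airflow.providers.http.sensors.http import HttpSensor

class TolerantHttpSensor(HttpSensor):
    def __init__(self, *, retriable_status_codes=(404,), **kwargs):
        super().__init__(**kwargs)
        self.retriable_status_codes = retriable_status_codes

    def poke(self, context):
        try:
            return super().poke(context)
        except AirflowException as exc:
            # Keep poking if the error message starts with one of the configured codes.
            if any(str(exc).startswith(str(code)) for code in self.retriable_status_codes):
                return False
            raise
```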
**Are you willing to submit a PR?**
Yep! | https://github.com/apache/airflow/issues/13451 | https://github.com/apache/airflow/pull/13499 | 7a742cb03375a57291242131a27ffd4903bfdbd8 | 1602ec97c8d5bc7a7a8b42e850ac6c7a7030e47d | 2021-01-03T17:10:29Z | python | 2021-01-20T00:02:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,434 | ["airflow/models/dag.py", "tests/jobs/test_scheduler_job.py"] | Airflow 2.0.0 manual run causes scheduled run to skip | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: local/aws
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): 5.4.0-1032-aws
- **Install tools**: pip
- **Others**:
**What happened**:
I did a fresh Airflow 2.0.0 install. With this version, when I manually trigger a DAG, Airflow skips the next scheduled run.
**What you expected to happen**:
Manual runs do not interfere with the scheduled runs prior to Airflow 2.
**How to reproduce it**:
Create a simple hourly DAG. After enabling it and letting the initial run happen, trigger it manually. It will skip the next hour. Below is an example where the manual run with an execution time of 08:17 causes the scheduled run with an execution time of 08:00 to be skipped.

| https://github.com/apache/airflow/issues/13434 | https://github.com/apache/airflow/pull/13963 | 8e0db6eae371856597dce0ccf8a920b0107965cd | de277c69e7909cf0d563bbd542166397523ebbe0 | 2021-01-02T12:59:07Z | python | 2021-01-30T12:02:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,414 | ["airflow/operators/trigger_dagrun.py", "tests/operators/test_trigger_dagrun.py"] | DAG raises error when passing non serializable JSON object via trigger | When passing a non serializable JSON object in a trigger, I get the following error below. The logs become unavailable.
my code:
```py
task_trigger_ad_attribution = TriggerDagRunOperator(
task_id='trigger_ad_attribution',
trigger_dag_id=AD_ATTRIBUTION_DAG_ID,
conf={"message": "Triggered from display trigger",
'trigger_info':
{'dag_id':DAG_ID,
'now':datetime.datetime.now(),
},
'trigger_date' : '{{execution_date}}'
},
)
```
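(One way to avoid the error shown below is to keep every value in `conf` JSON-serializable — a sketch, reusing the names from the snippet above:)
```python
import datetime

from airflow.operators.trigger_dagrun import TriggerDagRunOperator

# Pass an ISO-8601 string instead of a datetime object; conf is persisted
# as JSON on the triggered DagRun. AD_ATTRIBUTION_DAG_ID and DAG_ID are
# assumed to be defined elsewhere, as in the original snippet.
task_trigger_ad_attribution = TriggerDagRunOperator(
    task_id="trigger_ad_attribution",
    trigger_dag_id=AD_ATTRIBUTION_DAG_ID,
    conf={
        "message": "Triggered from display trigger",
        "trigger_info": {
            "dag_id": DAG_ID,
            "now": datetime.datetime.now().isoformat(),
        },
        "trigger_date": "{{ execution_date }}",
    },
)
```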
```
Ooops!
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.6.9
Airflow version: 2.0.0
Node: henry-Inspiron-5566
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/airflow/www/views.py", line 1997, in tree
data = htmlsafe_json_dumps(data, separators=(',', ':'))
File "/home/henry/Envs2/airflow3/lib/python3.6/site-packages/jinja2/utils.py", line 614, in htmlsafe_json_dumps
dumper(obj, **kwargs)
File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'datetime' is not JSON serializable
``` | https://github.com/apache/airflow/issues/13414 | https://github.com/apache/airflow/pull/13964 | 862443f6d3669411abfb83082c29c2fad7fcf12d | b4885b25871ae7ede2028f81b0d88def3e22f23a | 2020-12-31T20:51:45Z | python | 2021-01-29T16:24:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,376 | ["airflow/cli/commands/sync_perm_command.py", "tests/cli/commands/test_sync_perm_command.py"] | airflow sync-perm command does not sync DAG level Access Control | <!--
-->
**Apache Airflow version**: 2.0.0
**What happened**:
Running the `airflow sync-perm` CLI command does not synchronize the permissions granted through the DAG via `access_control`.
This is because of dag serialization. When dag serialization is enabled, the dagbag will exhibit a lazy loading behaviour.
**How to reproduce it**:
1. Add access_control to a DAG where the new role has permission to see the DAG.
```
access_control={
"test": {'can_dag_read'}
},
```
4. Run `airflow sync-perm`.
5. Log in as the new user and you will still not see any DAG.
6. If you refresh the DAG, the new user will be able to see the DAG after they refresh their page.
**Expected behavior**
When I run `airflow sync-perm`, I expect the role who has been granted read permission for the DAG to be able to see that DAG.
This is also an issue with 1.10.x with DAG Serialization enabled, so would be good to backport it too.
| https://github.com/apache/airflow/issues/13376 | https://github.com/apache/airflow/pull/13377 | d5cf993f81ea2c4b5abfcb75ef05a6f3783874f2 | 1b94346fbeca619f3084d05bdc5358836ed02318 | 2020-12-29T23:33:44Z | python | 2020-12-30T11:35:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,360 | ["airflow/providers/amazon/aws/transfers/mongo_to_s3.py", "tests/providers/amazon/aws/transfers/test_mongo_to_s3.py"] | Add 'mongo_collection' to template_fields in MongoToS3Operator | <!--
-->
**Description**
<!-- A short description of your feature -->
Make `MongoToS3Operator` `mongo_collection` parameter templated.
**Use case / motivation**
This would allow passing a templated mongo collection from other tasks, such as a mongo collection used as a data destination via `S3Hook`. For instance, we could use a templated mongo collection to write data for different dates to different collections by using `mycollection.{{ ds_nodash }}`.
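For context, making a constructor argument templated only requires listing it in `template_fields` — a generic sketch of the mechanism, not the actual provider code:
```python
from airflow.models.baseoperator import BaseOperator


class MyMongoExportOperator(BaseOperator):
    # Attributes listed here are rendered with Jinja before execute() runs,
    # so "mycollection.{{ ds_nodash }}" becomes e.g. "mycollection.20201229".
    template_fields = ("mongo_collection", "s3_key")

    def __init__(self, *, mongo_collection: str, s3_key: str, **kwargs):
        super().__init__(**kwargs)
        self.mongo_collection = mongo_collection
        self.s3_key = s3_key

    def execute(self, context):
        self.log.info("Exporting %s to %s", self.mongo_collection, self.s3_key)
```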
**Are you willing to submit a PR?**
<!--- We accept contributions! -->
Yes.
**Related Issues**
<!-- Is there currently another issue associated with this? -->
N/A
| https://github.com/apache/airflow/issues/13360 | https://github.com/apache/airflow/pull/13361 | e43688358320a5f20776c0d346c310a568a55049 | f7a1334abe4417409498daad52c97d3f0eb95137 | 2020-12-29T11:42:55Z | python | 2021-01-02T10:32:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,340 | ["airflow/www/security.py", "tests/www/test_security.py"] | Anonymous users aren't able to view DAGs even with Admin Role | **Apache Airflow version**: 2.0.0 (Current master)
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04.1 LTS
- **Kernel** (e.g. `uname -a`): Linux ubuntu 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
webserver_config.py file config:
```
# Uncomment to setup Public role name, no authentication needed
AUTH_ROLE_PUBLIC = 'Admin'
```
**What happened**:
After disabling authentication, all users are identified as "Anonymous User" and no DAGs are loaded on the screen, because there is a method that returns an empty set of roles when a user is anonymous.
views.py file:
```
# Get all the dag id the user could access
filter_dag_ids = current_app.appbuilder.sm.get_accessible_dag_ids(g.user)
```
security.py file:
```
def get_accessible_dags(self, user_actions, user, session=None):
"""Generic function to get readable or writable DAGs for authenticated user."""
if user.is_anonymous:
return set()
user_query = (
session.query(User)
.options(
joinedload(User.roles)
.subqueryload(Role.permissions)
.options(joinedload(PermissionView.permission), joinedload(PermissionView.view_menu))
)
.filter(User.id == user.id)
.first()
)
resources = set()
for role in user_query.roles:
...
```
**What you expected to happen**:
Since the option to disable login exists, I expect that all anonymous users have the Role specified in the webserver_config.py file in the AUTH_ROLE_PUBLIC entry.
It will make anonymous users able to see/edit dags if the roles specified as the default for anonymous users match the DAG roles.
**How to reproduce it**:
Set the following entry in webserver_config.py file config to disable authentication and make all users anonymous with the 'Admin" role:
```
# Uncomment to setup Public role name, no authentication needed
AUTH_ROLE_PUBLIC = 'Admin'
```
With the current master branch installed, run
`airflow webserver`
No DAGs will appear:

**Anything else we need to know**:
The methods have explicit comments about being used for authenticated user:
```
def get_accessible_dags(self, user_actions, user, session=None):
"""Generic function to get readable or writable DAGs for authenticated user."""
```
But there is no way for anonymous users to be able to see DAGs on the screen without modifying the behavior of this method. | https://github.com/apache/airflow/issues/13340 | https://github.com/apache/airflow/pull/14042 | 88bdcfa0df5bcb4c489486e05826544b428c8f43 | 78aa921a715c69d0095ab28dd48793824f0b0a0d | 2020-12-28T13:40:23Z | python | 2021-02-04T00:48:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,331 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/scheduler_command.py", "chart/templates/scheduler/scheduler-deployment.yaml", "tests/cli/commands/test_scheduler_command.py"] | Helm Chart uses unsupported commands for Airflow 2.0 - serve_logs | Hello,
Our Helm Chart uses the command that is deleted in Airflow 2.0. We should consider what we want to do with it - add this command again or delete the reference to this command in the Helm Chart.
https://github.com/apache/airflow/blob/d41c6a46b176a80e1cdb0bcc592f5a8baec21c41/chart/templates/scheduler/scheduler-deployment.yaml#L177-L197
Related PR:
https://github.com/apache/airflow/pull/6843
Best regards,
Kamil Breguła
CC: @dstandish | https://github.com/apache/airflow/issues/13331 | https://github.com/apache/airflow/pull/15557 | 053d903816464f699876109b50390636bf617eff | 414bb20fad6c6a50c5a209f6d81f5ca3d679b083 | 2020-12-27T23:00:54Z | python | 2021-04-29T15:06:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,325 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | max_tis_per_query=0 leads to nothing being scheduled in 2.0.0 | After upgrading to airflow 2.0.0 it seems as if the scheduler isn't working anymore. Tasks hang on scheduled state, but no tasks get executed. I've tested this with sequential and celery executor. When using the celery executor no messages seem to arrive in RabbiyMq
This is on local docker. Everything was working fine before upgrading. There don't seem to be any error messages, so I'm not completely sure if this is a bug or a misconfiguration on my end.
Using python:3.7-slim-stretch Docker image. Regular setup that we're using is CeleryExecutor. Mysql version is 5.7
Any help would be greatly appreciated.
**Python packages**
alembic==1.4.3
altair==4.1.0
amazon-kclpy==1.5.0
amqp==2.6.1
apache-airflow==2.0.0
apache-airflow-providers-amazon==1.0.0
apache-airflow-providers-celery==1.0.0
apache-airflow-providers-ftp==1.0.0
apache-airflow-providers-http==1.0.0
apache-airflow-providers-imap==1.0.0
apache-airflow-providers-jdbc==1.0.0
apache-airflow-providers-mysql==1.0.0
apache-airflow-providers-sqlite==1.0.0
apache-airflow-upgrade-check==1.1.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.2
argon2-cffi==20.1.0
asn1crypto==1.4.0
async-generator==1.10
attrs==20.3.0
azure-common==1.1.26
azure-core==1.9.0
azure-storage-blob==12.6.0
Babel==2.9.0
backcall==0.2.0
bcrypt==3.2.0
billiard==3.6.3.0
black==20.8b1
bleach==3.2.1
boa-str==1.1.0
boto==2.49.0
boto3==1.7.3
botocore==1.10.84
cached-property==1.5.2
cattrs==1.1.2
cbsodata==1.3.3
celery==4.4.2
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
click==7.1.2
clickclick==20.10.2
cmdstanpy==0.9.5
colorama==0.4.4
colorlog==4.0.2
commonmark==0.9.1
connexion==2.7.0
convertdate==2.3.0
coverage==4.2
croniter==0.3.36
cryptography==3.3.1
cycler==0.10.0
Cython==0.29.21
decorator==4.4.2
defusedxml==0.6.0
dill==0.3.3
dnspython==2.0.0
docutils==0.14
email-validator==1.1.2
entrypoints==0.3
ephem==3.7.7.1
et-xmlfile==1.0.1
fbprophet==0.7.1
fire==0.3.1
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-Bcrypt==0.7.1
Flask-Caching==1.9.0
Flask-JWT-Extended==3.25.0
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
flask-swagger==0.2.13
Flask-WTF==0.14.3
flatten-json==0.1.7
flower==0.9.5
funcsigs==1.0.2
future==0.18.2
graphviz==0.15
great-expectations==0.13.2
gunicorn==19.10.0
holidays==0.10.4
humanize==3.2.0
idna==2.10
importlib-metadata==1.7.0
importlib-resources==1.5.0
inflection==0.5.1
ipykernel==5.4.2
ipython==7.19.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
iso8601==0.1.13
isodate==0.6.0
itsdangerous==1.1.0
JayDeBeApi==1.2.3
jdcal==1.4.1
jedi==0.17.2
jellyfish==0.8.2
Jinja2==2.11.2
jmespath==0.10.0
joblib==1.0.0
JPype1==1.2.0
json-merge-patch==0.2
jsonpatch==1.28
jsonpointer==2.0
jsonschema==3.2.0
jupyter-client==6.1.7
jupyter-core==4.7.0
jupyterlab-pygments==0.1.2
kinesis-events==0.1.0
kiwisolver==1.3.1
kombu==4.6.11
korean-lunar-calendar==0.2.1
lazy-object-proxy==1.4.3
lockfile==0.12.2
LunarCalendar==0.0.9
Mako==1.1.3
Markdown==3.3.3
MarkupSafe==1.1.1
marshmallow==3.10.0
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.0.1
marshmallow-sqlalchemy==0.23.1
matplotlib==3.3.3
mistune==0.8.4
mock==1.0.1
mockito==1.2.2
msrest==0.6.19
mypy-extensions==0.4.3
mysql-connector-python==8.0.18
mysqlclient==2.0.2
natsort==7.1.0
nbclient==0.5.1
nbconvert==6.0.7
nbformat==5.0.8
nest-asyncio==1.4.3
nose==1.3.7
notebook==6.1.5
numpy==1.19.4
oauthlib==3.1.0
openapi-spec-validator==0.2.9
openpyxl==3.0.5
oscrypto==1.2.1
packaging==20.8
pandas==1.1.5
pandocfilters==1.4.3
parso==0.7.1
pathspec==0.8.1
pendulum==2.1.2
pexpect==4.8.0
phonenumbers==8.12.15
pickleshare==0.7.5
Pillow==8.0.1
prison==0.1.3
prometheus-client==0.8.0
prompt-toolkit==3.0.8
protobuf==3.14.0
psutil==5.8.0
ptyprocess==0.6.0
pyarrow==2.0.0
pycodestyle==2.6.0
pycparser==2.20
pycryptodomex==3.9.9
pydevd-pycharm==193.5233.109
Pygments==2.7.3
PyJWT==1.7.1
PyMeeus==0.3.7
pyodbc==4.0.30
pyOpenSSL==19.1.0
pyparsing==2.4.7
pyrsistent==0.17.3
pystan==2.19.1.1
python-crontab==2.5.1
python-daemon==2.2.4
python-dateutil==2.8.1
python-editor==1.0.4
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2019.3
pytzdata==2020.1
PyYAML==5.3.1
pyzmq==20.0.0
recordlinkage==0.14
regex==2020.11.13
requests==2.23.0
requests-oauthlib==1.3.0
rich==9.2.0
ruamel.yaml==0.16.12
ruamel.yaml.clib==0.2.2
s3transfer==0.1.13
scikit-learn==0.23.2
scipy==1.5.4
scriptinep3==0.3.1
Send2Trash==1.5.0
setproctitle==1.2.1
setuptools-git==1.2
shelljob==0.5.6
six==1.15.0
sklearn==0.0
snowflake-connector-python==2.3.7
snowflake-sqlalchemy==1.2.4
SQLAlchemy==1.3.22
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.36.8
swagger-ui-bundle==0.0.8
tabulate==0.8.7
TagValidator==0.0.8
tenacity==6.2.0
termcolor==1.1.0
terminado==0.9.1
testpath==0.4.4
text-unidecode==1.3
threadpoolctl==2.1.0
thrift==0.13.0
toml==0.10.2
toolz==0.11.1
tornado==6.1
tqdm==4.54.1
traitlets==5.0.5
typed-ast==1.4.1
typing-extensions==3.7.4.3
tzlocal==1.5.1
unicodecsv==0.14.1
urllib3==1.24.2
validate-email==1.3
vine==1.3.0
watchtower==0.7.3
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
wrapt==1.12.1
WTForms==2.3.1
xlrd==2.0.1
XlsxWriter==1.3.7
zipp==3.4.0
**Relevant config**
```
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repositories
# This path must be absolute
dags_folder = /usr/local/airflow/dags
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
executor = CeleryExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
sql_alchemy_conn = db+mysql://airflow:airflow@postgres/airflow
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool.
sql_alchemy_pool_size = 5
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite.
sql_alchemy_pool_recycle = 3600
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 32
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 128
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 60
# The class to use for running task instances in a subprocess
task_runner = StandardTaskRunner
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
# This flag decides whether to serialise DAGs and persist them in DB. If set to True, Webserver reads from DB instead of parsing DAG files
store_dag_code = True
# You can also update the following default configurations based on your needs
min_serialized_dag_update_interval = 30
min_serialized_dag_fetch_interval = 10
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
worker_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
broker_url = amqp://amqp:5672/1
# Another key Celery setting
result_backend = db+mysql://airflow:airflow@postgres/airflow
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = airflow
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
# No SSL
ssl_active = False
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 5
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 60
use_row_level_locking=False
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = /usr/local/airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = True
# This changes the batch size of queries in the scheduling main loop.
# This depends on query length limits and how long you are willing to hold locks.
# 0 for no limit
max_tis_per_query = 0
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
parsing_processes = 4
authenticate = False
``` | https://github.com/apache/airflow/issues/13325 | https://github.com/apache/airflow/pull/13512 | b103a1dd0e22b67dcc8cb2a28a5afcdfb7554412 | 31d31adb58750d473593a9b13c23afcc9a0adf97 | 2020-12-27T10:25:52Z | python | 2021-01-18T21:24:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,306 | ["BREEZE.rst", "Dockerfile", "Dockerfile.ci", "scripts/ci/images/ci_verify_prod_image.sh", "scripts/ci/libraries/_initialization.sh", "setup.py"] | The "ldap" extra misses libldap dependency | The 'ldap' provider misses 'ldap' extra dep (which adds ldap3 pip dependency). | https://github.com/apache/airflow/issues/13306 | https://github.com/apache/airflow/pull/13308 | 13a9747bf1d92020caa5d4dc825e096ce583f2df | d23ac9b235c5b30a5d2d3a3a7edf60e0085d68de | 2020-12-24T18:21:48Z | python | 2020-12-28T16:07:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,295 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | In triggered SubDag (schedule_interval=None), when clearing a successful Subdag, child tasks aren't run | **Apache Airflow version**:
Airflow 2.0
**Environment**:
Ubuntu 20.04 (WSL on Windows 10)
- **OS** (e.g. from /etc/os-release):
VERSION="20.04.1 LTS (Focal Fossa)"
- **Kernel** (e.g. `uname -a`):
Linux XXX 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
**What happened**:
After successfully running a SUBDAG, clearing it (including downstream+recursive) doesn't trigger the inner tasks. Instead, the subdag is marked successful and the inner tasks all stay cleared and aren't re-run.
**What you expected to happen**:
Expected Clear with DownStream + Recursive to re-run all subdag tasks.
<!-- What do you think went wrong? -->
**How to reproduce it**:
1. Using a slightly modified version of https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#subdags:
```python
from airflow import DAG
from airflow.example_dags.subdags.subdag import subdag
from airflow.operators.dummy import DummyOperator
from airflow.operators.subdag import SubDagOperator
from airflow.utils.dates import days_ago
def subdag(parent_dag_name, child_dag_name, args):
dag_subdag = DAG(
dag_id=f'{parent_dag_name}.{child_dag_name}',
default_args=args,
start_date=days_ago(2),
schedule_interval=None,
)
for i in range(5):
DummyOperator(
task_id='{}-task-{}'.format(child_dag_name, i + 1),
default_args=args,
dag=dag_subdag,
)
return dag_subdag
DAG_NAME = 'example_subdag_operator'
args = {
'owner': 'airflow',
}
dag = DAG(
dag_id=DAG_NAME, default_args=args, start_date=days_ago(2), schedule_interval=None, tags=['example']
)
start = DummyOperator(
task_id='start',
dag=dag,
)
section_1 = SubDagOperator(
task_id='section-1',
subdag=subdag(DAG_NAME, 'section-1', args),
dag=dag,
)
some_other_task = DummyOperator(
task_id='some-other-task',
dag=dag,
)
section_2 = SubDagOperator(
task_id='section-2',
subdag=subdag(DAG_NAME, 'section-2', args),
dag=dag,
)
end = DummyOperator(
task_id='end',
dag=dag,
)
start >> section_1 >> some_other_task >> section_2 >> end
```
2. Run the subdag fully.
3. Clear (with recursive/downstream) any of the SubDags.
4. The Subdag will be marked successful, but if you zoom into the subdag, you'll see all the child tasks were not run.
| https://github.com/apache/airflow/issues/13295 | https://github.com/apache/airflow/pull/14776 | 0b50e3228519138c9826bc8e98f0ab5dc40a268d | 052163516bf91ab7bb53f4ec3c7b5621df515820 | 2020-12-24T01:51:24Z | python | 2021-03-18T10:38:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,262 | ["airflow/providers/google/cloud/hooks/dataflow.py", "tests/providers/google/cloud/hooks/test_dataflow.py"] | Dataflow Flex Template Operator |
**Apache Airflow version**:
1. 1.10.9 Composer Airflow Image
**Environment**:
- **Cloud provider or hardware configuration**: Cloud Composer
**What happened**:
Error logs indicate that the operator does not recognize the job as a batch job.
[2020-12-22 16:28:53,445] {taskinstance.py:1135} ERROR - 'type'
Traceback (most recent call last):
  File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 972, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/dataflow.py", line 647, in execute
    on_new_job_id_callback=set_current_job_id,
  File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 383, in inner_wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataflow.py", line 804, in start_flex_template
    jobs_controller.wait_for_done()
  File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataflow.py", line 348, in wait_for_done
    while self._jobs and not all(self._check_dataflow_job_state(job) for job in self._jobs):
  File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataflow.py", line 348, in <genexpr>
    while self._jobs and not all(self._check_dataflow_job_state(job) for job in self._jobs):
  File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataflow.py", line 321, in _check_dataflow_job_state
    wait_for_running = job['type'] == DataflowJobType.JOB_TYPE_STREAMING
KeyError: 'type'
I have specified:
```
with models.DAG(
dag_id="pdc-test",
start_date=days_ago(1),
schedule_interval=None,
) as dag_flex_template:
start_flex_template = DataflowStartFlexTemplateOperator(
task_id="pdc-test",
body={
"launchParameter": {
"containerSpecGcsPath": GCS_FLEX_TEMPLATE_TEMPLATE_PATH,
"jobName": DATAFLOW_FLEX_TEMPLATE_JOB_NAME,
"parameters": {
"stage": STAGE,
"target": TARGET,
"path": PATH,
"filename": FILENAME,
"column": "geometry"
},
"environment": {
"network": NETWORK,
"subnetwork": SUBNETWORK,
"machineType": "n1-standard-1",
"numWorkers": "1",
"maxWorkers": "1",
"tempLocation": "gs://test-pipelines-work/batch",
"workerZone": "northamerica-northeast1",
"enableStreamingEngine": "false",
"serviceAccountEmail": "<number>[email protected]",
"ipConfiguration": "WORKER_IP_PRIVATE"
},
}
},
location=LOCATION,
project_id=GCP_PROJECT_ID
    )
```
**What you expected to happen**:
Expecting the dag to run.
<!-- What do you think went wrong? -->
It appears the operator is not handling the input as a batch-type Flex Template; the job type should be treated as BATCH, not STREAMING.
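A defensive variant of the failing line (a sketch only, not necessarily the fix merged upstream) would be to tolerate the missing key:
```python
# Treat a job payload without a "type" key as "not streaming" instead of
# raising KeyError, so batch Flex Template jobs can be monitored.
wait_for_running = job.get("type") == DataflowJobType.JOB_TYPE_STREAMING
```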
**How to reproduce it**:
1. Create a Batch Flex Template as of https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates
2. Point code above to your registered template and invoke.
| https://github.com/apache/airflow/issues/13262 | https://github.com/apache/airflow/pull/14914 | 7c2ed5394e12aa02ff280431b8d35af80d37b1f0 | a7e144bec855f6ccf0fa5ae8447894195ffe170f | 2020-12-22T19:32:24Z | python | 2021-03-23T18:48:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,254 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Import error when using custom backend and sql_alchemy_conn_secret | **Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: N/A
- **OS** (e.g. from /etc/os-release): custom Docker image (`FROM python:3.6`) and macOS Big Sur (11.0.1)
- **Kernel** (e.g. `uname -a`):
- `Linux xxx 4.14.174+ #1 SMP x86_64 GNU/Linux`
- `Darwin xxx 20.1.0 Darwin Kernel Version 20.1.0 rRELEASE_X86_64 x86_64`
- **Install tools**:
- **Others**:
**What happened**:
I may have mixed 2 different issues here, but this is what happened to me.
I'm trying to use Airflow with the `airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend` and a `sql_alchemy_conn_secret` too, however, I have a `NameError` exception when attempting to run either `airflow scheduler` or `airflow webserver`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/usr/local/lib/python3.6/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 786, in <module>
conf.read(AIRFLOW_CONFIG)
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 447, in read
self._validate()
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 196, in _validate
self._validate_config_dependencies()
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 224, in _validate_config_dependencies
is_sqlite = "sqlite" in self.get('core', 'sql_alchemy_conn')
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 324, in get
option = self._get_option_from_secrets(deprecated_key, deprecated_section, key, section)
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 342, in _get_option_from_secrets
option = self._get_secret_option(section, key)
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 303, in _get_secret_option
return _get_config_value_from_secret_backend(secrets_path)
NameError: name '_get_config_value_from_secret_backend' is not defined
```
**What you expected to happen**:
A proper import and configuration creation.
**How to reproduce it**:
`airflow.cfg`:
```ini
[core]
# ...
sql_alchemy_conn_secret = some-key
# ...
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
backend_kwargs = { ... }
# ...
```
**Anything else we need to know**:
Here is the workaround I have for the moment; I'm not sure it works all the way and it probably doesn't cover all edge cases, though it works for my setup:
Move `get_custom_secret_backend` before (for me it's actually below `_get_config_value_from_secret_backend`): https://github.com/apache/airflow/blob/cc87caa0ce0b31aa29df7bbe90bdcc2426d80ff1/airflow/configuration.py#L794
Then comment: https://github.com/apache/airflow/blob/cc87caa0ce0b31aa29df7bbe90bdcc2426d80ff1/airflow/configuration.py#L232-L236
| https://github.com/apache/airflow/issues/13254 | https://github.com/apache/airflow/pull/13260 | 7a560ab6de7243e736b66599842b241ae60d1cda | 69d6d0239f470ac75e23160bac63408350c1835a | 2020-12-22T14:08:30Z | python | 2020-12-24T17:09:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,226 | ["UPDATING.md"] | Use of SQLAInterface in custom models in Plugins | We might need to add to Airflow 2.0 upgrade documentation the need to use `CustomSQLAInterface` instead of `SQLAInterface`.
If you want to define your own appbuilder models you need to change the interface to a Custom one:
Non-RBAC replace:
```
from flask_appbuilder.models.sqla.interface import SQLAInterface
datamodel = SQLAInterface(your_data_model)
```
with RBAC (in 1.10):
```
from airflow.www_rbac.utils import CustomSQLAInterface
datamodel = CustomSQLAInterface(your_data_model)
```
and in 2.0:
```
from airflow.www.utils import CustomSQLAInterface
datamodel = CustomSQLAInterface(your_data_model)
```
| https://github.com/apache/airflow/issues/13226 | https://github.com/apache/airflow/pull/14478 | 0a969db2b025709505f8043721c83218a73bb84d | 714a07542c2560b50d013d66f71ad9a209dd70b6 | 2020-12-21T17:40:47Z | python | 2021-03-03T00:29:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,225 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/models/dag.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py", "tests/api_connexion/schemas/test_task_instance_schema.py"] | Clear Tasks via the stable REST API with task_id filter | **Description**
I have noticed that the stable REST API doesn't have the ability to run a task (which is possible from the Airflow web interface).
I think it would be nice to have either of these available for integrations:
- Run a task
- Run all failing tasks (rerun from the point of failure)
**Use case / motivation**
I would like the ability to identify the failing tasks on a specific DAG Run and rerun only them.
I would like to do it remotely (non-interactive) using the REST API.
I could write a script that runs only the failing tasks, but I couldn't find a way to "run" a task when I have the task instance ID.
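For context, re-running failed tasks can be approximated today by clearing them over the stable API — a sketch, with endpoint and field names as I understand the 2.0 OpenAPI spec; host, credentials and dag_id are placeholders:
```python
import requests

# Clear (and thereby re-schedule) the failed task instances of one DAG
# within a date window.
resp = requests.post(
    "http://localhost:8080/api/v1/dags/my_dag/clearTaskInstances",
    auth=("admin", "admin"),
    json={
        "dry_run": False,
        "start_date": "2020-12-20T00:00:00Z",
        "end_date": "2020-12-21T00:00:00Z",
        "only_failed": True,
    },
)
resp.raise_for_status()
```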
**Are you willing to submit a PR?**
Not at this stage
**Related Issues**
| https://github.com/apache/airflow/issues/13225 | https://github.com/apache/airflow/pull/14500 | a265fd54792bb7638188eaf4f6332ae95d24899e | e150bbfe0a7474308ba7df9c89e699b77c45bb5c | 2020-12-21T17:38:56Z | python | 2021-04-07T06:54:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,214 | ["airflow/migrations/versions/2c6edca13270_resource_based_permissions.py"] | Make migration logging consistent | **Apache Airflow version**:
2.0.0.dev
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
When I run `airflow db reset -y` I got
```
INFO [alembic.runtime.migration] Running upgrade bef4f3d11e8b -> 98271e7606e2, Add scheduling_decision to DagRun and DAG
INFO [alembic.runtime.migration] Running upgrade 98271e7606e2 -> 52d53670a240, fix_mssql_exec_date_rendered_task_instance_fields_for_MSSQL
INFO [alembic.runtime.migration] Running upgrade 52d53670a240 -> 364159666cbd, Add creating_job_id to DagRun table
INFO [alembic.runtime.migration] Running upgrade 364159666cbd -> 45ba3f1493b9, add-k8s-yaml-to-rendered-templates
INFO [alembic.runtime.migration] Running upgrade 45ba3f1493b9 -> 849da589634d, Prefix DAG permissions.
INFO [alembic.runtime.migration] Running upgrade 849da589634d -> 2c6edca13270, Resource based permissions.
[2020-12-21 10:15:40,510] {manager.py:727} WARNING - No user yet created, use flask fab command to do it.
[2020-12-21 10:15:41,964] {providers_manager.py:291} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package: No module named 'google.cloud.oslogin_v1'
[2020-12-21 10:15:42,791] {providers_manager.py:291} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package: No module named 'google.cloud.oslogin_v1'
[2020-12-21 10:15:47,157] {migration.py:515} INFO - Running upgrade 2c6edca13270 -> 61ec73d9401f, Add description field to connection
[2020-12-21 10:15:47,160] {migration.py:515} INFO - Running upgrade 61ec73d9401f -> 64a7d6477aae, fix description field in connection to be text
[2020-12-21 10:15:47,164] {migration.py:515} INFO - Running upgrade 64a7d6477aae -> e959f08ac86c, Change field in DagCode to MEDIUMTEXT for MySql
[2020-12-21 10:15:47,381] {dagbag.py:440} INFO - Filling up the DagBag from /root/airflow/dags
[2020-12-21 10:15:47,857] {dag.py:1813} INFO - Sync 29 DAGs
[2020-12-21 10:15:47,870] {dag.py:1832} INFO - Creating ORM DAG for example_bash_operator
[2020-12-21 10:15:47,871] {dag.py:1832} INFO - Creating ORM DAG for example_kubernetes_executor
[2020-12-21 10:15:47,872] {dag.py:1832} INFO - Creating ORM DAG for example_xcom_args
[2020-12-21 10:15:47,873] {dag.py:1832} INFO - Creating ORM DAG for tutorial
[2020-12-21 10:15:47,873] {dag.py:1832} INFO - Creating ORM DAG for example_python_operator
[2020-12-21 10:15:47,874] {dag.py:1832} INFO - Creating ORM DAG for example_xcom
```
**What you expected to happen**:
I expect to see all migration logging to be formatted in the same style. I would also love to see no unrelated logs - this will make `db reset` easier to digest.
**How to reproduce it**:
Run `airflow db reset -y`
**Anything else we need to know**:
N/A
| https://github.com/apache/airflow/issues/13214 | https://github.com/apache/airflow/pull/13458 | feb84057d34b2f64e3b5dcbaae2d3b18f5f564e4 | 43b2d3392224d8e0d6fb8ce8cdc6b0f0b0cc727b | 2020-12-21T10:21:14Z | python | 2021-01-04T17:25:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,200 | ["airflow/utils/cli.py", "tests/utils/test_cli_util.py"] | CLI `airflow scheduler -D --pid <PIDFile>` fails silently if PIDFile given is a relative path |
**Apache Airflow version**: 2.0.0
**Environment**: Linux & MacOS, venv
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.3 LTS / MacOS 10.15.7
- **Kernel** (e.g. `uname -a`):
- Linux *** 5.4.0-1029-aws #30~18.04.1-Ubuntu SMP Tue Oct 20 11:09:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Darwin *** 19.6.0 Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6153.141.2.2~1/RELEASE_X86_64 x86_64
**What happened**:
Say I'm in my home dir, running command `airflow scheduler -D --pid test.pid` (`test.pid` is a relative path) is supposed to start the scheduler in daemon mode, and the PID will be stored in the file `test.pid` (if it doesn't exist, it should be created).
However, the scheduler is NOT started. This can be validated by running `ps aux | grep airflow | grep scheduler` (no process is shown). In the whole process, I don't see any error message.
However, if I change the PID file path to an absolute path, i.e. `airflow scheduler -D --pid ${PWD}/test.pid`, it successfully starts the scheduler in daemon mode (this can be validated via the method above).
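The likely mechanism — an assumption on my part — is that the daemon context switches its working directory before the PID file path is resolved, so a relative name no longer points where the user expects. A tiny illustration:
```python
import os
from pathlib import Path

pid_file = "test.pid"

# Roughly what daemonisation does before the PID file is opened:
os.chdir("/")

# The relative name now resolves under "/", not under the original cwd.
print(Path(pid_file).resolve())  # -> /test.pid
```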
**What you expected to happen**:
Even if the PID file path provided is a relative path, the scheduler should be started properly as well.
<!-- What do you think went wrong? -->
**How to reproduce it**:
Described above
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/13200 | https://github.com/apache/airflow/pull/13232 | aa00e9bcd4ec16f42338b30d29e87ccda8eecf82 | 93e4787b70a85cc5f13db5e55ef0c06629b45e6e | 2020-12-20T22:16:54Z | python | 2020-12-22T22:18:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,192 | ["airflow/providers/google/cloud/operators/mlengine.py", "tests/providers/google/cloud/operators/test_mlengine.py"] | Generalize MLEngineStartTrainingJobOperator to custom images | **Description**
The operator is arguably unnecessarily limited to AI Platform’s standard images. The only change that is required to lift this constraint is making `package_uris` and `training_python_module` optional with default values `[]` and `None`, respectively. Then, using `master_config`, one can supply `imageUri` and run any image of choice.
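Hypothetically, once those two arguments are optional, a custom-container training job could be declared roughly like this (values are placeholders; the exact signature should be checked against the provider):
```python
from airflow.providers.google.cloud.operators.mlengine import MLEngineStartTrainingJobOperator

# Sketch only: relies on package_uris / training_python_module becoming optional
# and on master_config carrying the custom image, as described above.
train = MLEngineStartTrainingJobOperator(
    task_id="train_custom_image",
    project_id="my-project",
    job_id="train_{{ ds_nodash }}",
    region="europe-west1",
    scale_tier="CUSTOM",
    master_type="n1-standard-4",
    master_config={"imageUri": "gcr.io/my-project/trainer:latest"},
)
```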
**Use case / motivation**
This will open up for running arbitrary images on AI Platform.
**Are you willing to submit a PR?**
If the above sounds reasonable, I can open pull requests. | https://github.com/apache/airflow/issues/13192 | https://github.com/apache/airflow/pull/13318 | 6e1a6ff3c8a4f8f9bcf8b7601362359bfb2be6bf | f6518dd6a1217d906d863fe13dc37916efd78b3e | 2020-12-20T10:26:37Z | python | 2021-01-02T10:34:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,181 | ["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/tests/helm_template_generator.py", "chart/tests/test_keda.py"] | keda scaledobject not created even though keda enabled in helm config | In brand new cluster using k3d locally, I first installed keda:
```bash
helm install keda \
--namespace keda kedacore/keda \
--version "v1.5.0"
```
Next, I installed airflow using this config:
```yaml
executor: CeleryExecutor
defaultAirflowTag: 2.0.0-python3.7
airflowVersion: 2.0.0
workers:
keda:
enabled: true
persistence:
enabled: false
pgbouncer:
enabled: true
```
I think this should create a scaled object `airflow-worker`.
But it does not.
@turbaszek and @dimberman you may have insight ...
| https://github.com/apache/airflow/issues/13181 | https://github.com/apache/airflow/pull/13183 | 4aba9c5a8b89d2827683fb4c84ac481c89ebc2b3 | a9d562e1c3c16c98750c9e3be74347f882acb97a | 2020-12-19T08:30:36Z | python | 2020-12-21T10:19:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,151 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Task Instances in the "removed" state prevent the scheduler from scheduling new tasks when max_active_runs is set | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux 6ae65b86e112 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 GNU/Linux
- **Others**: Python 3.8
**What happened**:
After migrating one of our development Airflow instances from 1.10.14 to 2.0.0, the scheduler started to refuse to schedule tasks for a DAG that did not actually exceed its `max_active_runs`.
When it did this the following error would be logged:
```
DAG <dag_name> already has 577 active runs, not queuing any tasks for run 2020-12-17 08:05:00+00:00
```
A bit of digging revealed that this DAG had task instances associated with it that are in the `removed` state. As soon as I forced the task instances that are in the `removed` state into the `failed` state, the tasks would be scheduled.
I believe the root cause of the issue is that [this filter](https://github.com/apache/airflow/blob/master/airflow/jobs/scheduler_job.py#L1506) does not filter out tasks that are in the `removed` state.
**What you expected to happen**:
I expected the task instances in the DAG to be scheduled, because the DAG did not actually exceed the number of `max_active_runs`.
**How to reproduce it**:
I think the best approach to reproduce it is as follows:
1. Create a DAG and set `max_active_runs` to 1.
2. Ensure the DAG has run successfully a number of times, such that it has some history associated with it.
3. Set one historical task instance to the `removed` state (either by directly updating it in the DB, or deleting a task from a DAG before it has been able to execute).
**Anything else we need to know**:
The Airflow instance that I ran into this issue with contains about 3 years of task history, which means that we actually had quite a few task instances that are in the `removed` state, but there is no easy way to delete those from the Web UI.
A work around is to set the tasks to `failed`, which will allow the scheduler to proceed. | https://github.com/apache/airflow/issues/13151 | https://github.com/apache/airflow/pull/13165 | 5cf2fbf12462de0a684ec4f631783850f7449059 | ef8f414c20b3cd64e2226ec5c022e799a6e0af86 | 2020-12-18T13:14:51Z | python | 2020-12-19T12:09:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,142 | ["airflow/www/security.py", "docs/apache-airflow/security/webserver.rst", "tests/www/test_security.py"] | Error while attempting to disable login (setting AUTH_ROLE_PUBLIC = 'Admin') | # Error while attempting to disable login
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: mac-pro
- **OS** (e.g. from /etc/os-release): osx
- **Kernel** (e.g. `uname -a`): Darwin C02CW1JLMD6R 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64
- **Install tools**:
- **Others**:
**What happened**:
When setting in `webserver_config.py`,
```python
AUTH_ROLE_PUBLIC = 'Admin'
```
I got an error on the webserver when going to localhost:8080/home:
```log
[2020-12-17 16:29:09,993] {app.py:1891} ERROR - Exception on /home [GET]
Traceback (most recent call last):
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/airflow/www/views.py", line 540, in index
user_permissions = current_app.appbuilder.sm.get_all_permissions_views()
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/airflow/www/security.py", line 226, in get_all_permissions_views
for role in self.get_user_roles():
File "/Users/Zshot0831/.local/share/virtualenvs/airflow_2-DimIlKMl/lib/python3.8/site-packages/airflow/www/security.py", line 219, in get_user_roles
public_role = current_app.appbuilder.config.get('AUTH_ROLE_PUBLIC')
AttributeError: 'AirflowAppBuilder' object has no attribute 'config'
```
**What you expected to happen**:
Reached homepage without the need for authentication as admin.
**How to reproduce it**:
1. Install airflow in a new environment (or to a new directory, set env AIRFLOW_HOME=[my new dir])
2. Uncomment and change in webserver_config.py
```python
AUTH_ROLE_PUBLIC = 'Admin'
```
3. Start `airflow webserver`
4. Look in localhost:8080/home or localhost:8080
**webserver_config.py file**
```python
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Default configuration for the Airflow webserver"""
import os
from flask_appbuilder.security.manager import AUTH_DB
# from flask_appbuilder.security.manager import AUTH_LDAP
# from flask_appbuilder.security.manager import AUTH_OAUTH
# from flask_appbuilder.security.manager import AUTH_OID
# from flask_appbuilder.security.manager import AUTH_REMOTE_USER
basedir = os.path.abspath(os.path.dirname(__file__))
# Flask-WTF flag for CSRF
WTF_CSRF_ENABLED = True
# ----------------------------------------------------
# AUTHENTICATION CONFIG
# ----------------------------------------------------
# For details on how to set up each of the following authentication, see
# http://flask-appbuilder.readthedocs.io/en/latest/security.html# authentication-methods
# for details.
# The authentication type
# AUTH_OID : Is for OpenID
# AUTH_DB : Is for database
# AUTH_LDAP : Is for LDAP
# AUTH_REMOTE_USER : Is for using REMOTE_USER from web server
# AUTH_OAUTH : Is for OAuth
AUTH_TYPE = AUTH_DB
# Uncomment to setup Full admin role name
# AUTH_ROLE_ADMIN = 'Admin'
# Uncomment to setup Public role name, no authentication needed
AUTH_ROLE_PUBLIC = 'Admin'
# Will allow user self registration
# AUTH_USER_REGISTRATION = True
# The default user self registration role
# AUTH_USER_REGISTRATION_ROLE = "Public"
# When using OAuth Auth, uncomment to setup provider(s) info
# Google OAuth example:
# OAUTH_PROVIDERS = [{
# 'name':'google',
# 'token_key':'access_token',
# 'icon':'fa-google',
# 'remote_app': {
# 'api_base_url':'https://www.googleapis.com/oauth2/v2/',
# 'client_kwargs':{
# 'scope': 'email profile'
# },
# 'access_token_url':'https://accounts.google.com/o/oauth2/token',
# 'authorize_url':'https://accounts.google.com/o/oauth2/auth',
# 'request_token_url': None,
# 'client_id': GOOGLE_KEY,
# 'client_secret': GOOGLE_SECRET_KEY,
# }
# }]
# When using LDAP Auth, setup the ldap server
# AUTH_LDAP_SERVER = "ldap://ldapserver.new"
# When using OpenID Auth, uncomment to setup OpenID providers.
# example for OpenID authentication
# OPENID_PROVIDERS = [
# { 'name': 'Yahoo', 'url': 'https://me.yahoo.com' },
# { 'name': 'AOL', 'url': 'http://openid.aol.com/<username>' },
# { 'name': 'Flickr', 'url': 'http://www.flickr.com/<username>' },
# { 'name': 'MyOpenID', 'url': 'https://www.myopenid.com' }]
# ----------------------------------------------------
# Theme CONFIG
# ----------------------------------------------------
# Flask App Builder comes up with a number of predefined themes
# that you can use for Apache Airflow.
# http://flask-appbuilder.readthedocs.io/en/latest/customizing.html#changing-themes
# Please make sure to remove "navbar_color" configuration from airflow.cfg
# in order to fully utilize the theme. (or use that property in conjunction with theme)
# APP_THEME = "bootstrap-theme.css" # default bootstrap
# APP_THEME = "amelia.css"
# APP_THEME = "cerulean.css"
# APP_THEME = "cosmo.css"
# APP_THEME = "cyborg.css"
# APP_THEME = "darkly.css"
# APP_THEME = "flatly.css"
# APP_THEME = "journal.css"
# APP_THEME = "lumen.css"
# APP_THEME = "paper.css"
# APP_THEME = "readable.css"
# APP_THEME = "sandstone.css"
# APP_THEME = "simplex.css"
# APP_THEME = "slate.css"
# APP_THEME = "solar.css"
# APP_THEME = "spacelab.css"
# APP_THEME = "superhero.css"
# APP_THEME = "united.css"
# APP_THEME = "yeti.css"
```

| https://github.com/apache/airflow/issues/13142 | https://github.com/apache/airflow/pull/13191 | 641f63c2c4d38094cb85389fb50f25345d622e23 | 4be27af04df047a9d1b95fca09eb25e88385f0a8 | 2020-12-17T21:42:09Z | python | 2020-12-28T06:37:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,132 | ["airflow/providers/microsoft/winrm/operators/winrm.py", "docs/spelling_wordlist.txt"] | Let user specify the decode encoding of stdout/stderr of WinRMOperator | **Description**
Let user specify the decode encoding used in WinRMOperator.
**Use case / motivation**
I'm trying to use WinRM, but the task failed. After checking, I found https://github.com/apache/airflow/blob/master/airflow/providers/microsoft/winrm/operators/winrm.py#L117
```python
for line in stdout.decode('utf-8').splitlines():
self.log.info(line)
for line in stderr.decode('utf-8').splitlines():
self.log.warning(line)
```
But my remote host's PowerShell default encoding is 'gb2312'. I tried the solution from https://stackoverflow.com/questions/40098771/changing-powershells-default-output-encoding-to-utf-8, i.e., putting `$PSDefaultParameterValues['Out-File:Encoding'] = 'utf8'` in `$PROFILE`, but it doesn't work in the WinRMOperator case.
An alternative would be to let the user set the decode encoding in the operator, so the error can be avoided.
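A rough sketch of the change this would imply in the snippet above — `output_encoding` is only an illustrative name, not an existing parameter:
```python
# Hypothetical: decode with a user-supplied encoding instead of hard-coding utf-8.
output_encoding = getattr(self, "output_encoding", "utf-8")
for line in stdout.decode(output_encoding).splitlines():
    self.log.info(line)
for line in stderr.decode(output_encoding).splitlines():
    self.log.warning(line)
```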
| https://github.com/apache/airflow/issues/13132 | https://github.com/apache/airflow/pull/13153 | d9e4454c66051a9e8bb5b2f3814d46f29332b89d | a1d060c7f4e09c617f39e2b8df2a043bfeac9d82 | 2020-12-17T11:24:41Z | python | 2021-03-01T14:00:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,099 | ["airflow/jobs/scheduler_job.py", "airflow/models/dagbag.py", "airflow/models/serialized_dag.py", "airflow/serialization/serialized_objects.py", "tests/jobs/test_scheduler_job.py"] | Unable to start scheduler after stopped | **Apache Airflow version**: 2.0.0rc3
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: Linux
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
After shutting down the scheduler while tasks were in the running state, trying to restart the scheduler results in primary key violations:
```
[2020-12-15 22:43:29,673] {scheduler_job.py:1293} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/jcoder/git/airflow_2.0/pyenv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/home/jcoder/git/airflow_2.0/pyenv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "dag_run_dag_id_run_id_key"
DETAIL: Key (dag_id, run_id)=(example_task_group, scheduled__2020-12-14T04:31:00+00:00) already exists.
```
**What you expected to happen**:
Scheduler restarts and picks up where it left off.
**How to reproduce it**:
1. Set an example DAG (I used `task_group`) to `schedule_interval` `* * * * *`, start the scheduler, and let it run for a few minutes.
2. Shut down the scheduler.
3. Attempt to restart the scheduler.
**Anything else we need to know**:
I came across this while testing with the LocalExecutor in a virtual env. If no one else is able to reproduce it, I'll try again in a clean virtual env.
| https://github.com/apache/airflow/issues/13099 | https://github.com/apache/airflow/pull/13932 | 25d68a7a9e0b4481486552ece9e77bcaabfa4de2 | 70345293031b56a6ce4019efe66ea9762d96c316 | 2020-12-16T04:08:32Z | python | 2021-01-30T20:32:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,086 | ["airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | max_retry_delay should be a timedelta for type hinting |
**Apache Airflow version**:
master
**What happened**:
https://github.com/apache/airflow/blob/master/airflow/models/baseoperator.py#L356 --> the `max_retry_delay` annotation should be `timedelta`, not `datetime`.
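A short usage sketch (the dag_id and task are made up) showing why the annotation matters: the values callers pass for this argument are durations, not points in time:
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="retry_delay_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    # max_retry_delay caps the exponential backoff between retries, so it is a
    # duration (timedelta), not a point in time (datetime).
    BashOperator(
        task_id="flaky_call",
        bash_command="exit 1",
        retries=3,
        retry_delay=timedelta(minutes=1),
        retry_exponential_backoff=True,
        max_retry_delay=timedelta(minutes=10),
    )
```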
| https://github.com/apache/airflow/issues/13086 | https://github.com/apache/airflow/pull/14436 | b16b9ee6894711a8af7143286189c4a3cc31d1c4 | 59c459fa2a6aafc133db4a89980fb3d3d0d25589 | 2020-12-15T15:31:00Z | python | 2021-02-26T11:42:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 13,081 | ["docs/apache-airflow/upgrading-to-2.rst"] | OAuth2 login process is not stateless | **Apache Airflow version**: 1.10.14
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:27:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **Cloud provider or hardware configuration**: AWS / EKS
- **OS** (e.g. from /etc/os-release): N/A
- **Kernel** (e.g. `uname -a`): N/A
- **Install tools**: N/A
- **Others**: N/A
**What happened**:
Cognito login does not work if second request is not handled by first pod receiving access_token headers.
**What you expected to happen**:
Logging in via Cognito OAuth2 mode / Code should work via any pod.
**How to reproduce it**:
Override `webserver_config.py` with the following code:
```python
"""Default configuration for the Airflow webserver"""
import logging
import os
import json
from airflow.configuration import conf
from airflow.www_rbac.security import AirflowSecurityManager
from flask_appbuilder.security.manager import AUTH_OAUTH
log = logging.getLogger(__name__)
basedir = os.path.abspath(os.path.dirname(__file__))
# The SQLAlchemy connection string.
SQLALCHEMY_DATABASE_URI = conf.get('core', 'SQL_ALCHEMY_CONN')
# Flask-WTF flag for CSRF
WTF_CSRF_ENABLED = True
CSRF_ENABLED = True
# ----------------------------------------------------
# AUTHENTICATION CONFIG
# ----------------------------------------------------
# For details on how to set up each of the following authentication, see
# http://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-methods
# for details.
# The authentication type
AUTH_TYPE = AUTH_OAUTH
SECRET_KEY = os.environ.get("FLASK_SECRET_KEY")
OAUTH_PROVIDERS = [{
    'name': 'aws_cognito',
    'whitelist': ['@ga.gov.au'],
    'token_key': 'access_token',
    'icon': 'fa-amazon',
    'remote_app': {
        'api_base_url': os.environ.get("OAUTH2_BASE_URL") + "/",
        'client_kwargs': {
            'scope': 'openid email aws.cognito.signin.user.admin'
        },
        'authorize_url': os.environ.get("OAUTH2_BASE_URL") + "/authorize",
        'access_token_url': os.environ.get("OAUTH2_BASE_URL") + "/token",
        'request_token_url': None,
        'client_id': os.environ.get("COGNITO_CLIENT_ID"),
        'client_secret': os.environ.get("COGNITO_CLIENT_SECRET"),
    }
}]
class CognitoAirflowSecurityManager(AirflowSecurityManager):
    def oauth_user_info(self, provider, resp):
        # log.info("Requesting user info from AWS Cognito: {0}".format(resp))
        assert provider == "aws_cognito"
        me = self.appbuilder.sm.oauth_remotes[provider].get("userInfo")
        return {
            "username": me.json().get("username"),
            "email": me.json().get("email"),
            "first_name": me.json().get("given_name", ""),
            "last_name": me.json().get("family_name", ""),
            "id": me.json().get("sub", ""),
        }
SECURITY_MANAGER_CLASS = CognitoAirflowSecurityManager
```
- Set up an airflow-app linked to a Cognito user pool and run multiple replicas of the airflow-web pod.
- Login will start failing, succeeding only on roughly 1 in 9 attempts.
**Anything else we need to know**:
There are 3 possible workarounds using infrastructure changes instead of airflow-web code changes:
- Use a single pod for airflow-web to avoid session issues
- Make the ALB sticky via the ingress so users consistently reach the same pod
- Share the same secret key across all airflow-web pods via the environment (a sketch of this is shown below)
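A minimal sketch of the third workaround, assuming the key is distributed to every replica as the FLASK_SECRET_KEY environment variable that the config above already reads; the generation step is illustrative, not part of the report:
```python
import secrets

# Generate one Flask secret key and give the same value to every airflow-web
# replica (for example via a Kubernetes Secret mapped to FLASK_SECRET_KEY),
# so session cookies signed by one pod validate on any other pod.
print(secrets.token_hex(32))
```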
| https://github.com/apache/airflow/issues/13081 | https://github.com/apache/airflow/pull/13094 | 484f95f55cda4ca4fd3157135199623c9e37cc8a | 872350bac5bebea09bd52d50734a3b7517af712c | 2020-12-15T06:41:18Z | python | 2020-12-21T23:26:06Z |