status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 15,365 | ["docs/docker-stack/docker-images-recipes/hadoop.Dockerfile"] | Hadoop Dockerfile missing `mkdir` | [hadoop.Dockerfile](https://github.com/apache/airflow/blob/master/docs/docker-stack/docker-images-recipes/hadoop.Dockerfile) missing `mkdir -p "${HADOOP_HOME}"` after [Line 50](https://github.com/apache/airflow/blob/master/docs/docker-stack/docker-images-recipes/hadoop.Dockerfile#L50-L51).
| https://github.com/apache/airflow/issues/15365 | https://github.com/apache/airflow/pull/15871 | 512f3969e2c207d96a6b50cb4a022165b5dbcccf | 96e2915a009cdb2d4979ca4e411589864c078ec6 | 2021-04-14T06:42:41Z | python | 2021-05-15T08:54:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,363 | ["airflow/www/package.json", "airflow/www/yarn.lock"] | There is a vulnerability in lodash 4.17.20 ,upgrade recommended | https://github.com/apache/airflow/blob/7490c6b8109adcc5aec517fc8d39cfc31912d92f/airflow/www/yarn.lock#L4490-L4493
CVE-2021-23337 CVE-2020-28500
Recommended upgrade version:4.17.21 | https://github.com/apache/airflow/issues/15363 | https://github.com/apache/airflow/pull/15383 | 15e044c7e412a85946a8831dd7eb68424d96c164 | f69bb8255d2ed60be275d1466255c874aef600f0 | 2021-04-14T02:52:24Z | python | 2021-04-15T14:34:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,353 | ["docs/apache-airflow/howto/custom-view-plugin.rst", "docs/apache-airflow/howto/index.rst", "docs/spelling_wordlist.txt", "metastore_browser/hive_metastore.py"] | Some more information regarding custom view plugins would be really nice! |
**Description**
Some more information regarding custom view plugins would be really nice
**Use case / motivation**
I have created a custom view for Airflow, which was a little tricky since the Airflow docs are quite short and most of the information on the web is out of date.
Additionally, the only example cannot simply be copied and pasted.
Maybe one example view would be nice, or at least some more information (especially how to implement the standard Airflow layout or where to find it).
Maybe some additional documentation would be nice, or even a quick-start guide?
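For illustration, a minimal sketch of the kind of example that would help (class and plugin names here are made up; to pick up the standard Airflow layout, the view would typically return `self.render_template(...)` with a template extending Airflow's base template):
```python
from airflow.plugins_manager import AirflowPlugin
from flask_appbuilder import BaseView, expose


class MyCustomView(BaseView):
    default_view = "index"

    @expose("/")
    def index(self):
        # kept as plain text to stay self-contained; a real view would render a template
        return "Hello from a custom Airflow view"


class MyViewPlugin(AirflowPlugin):
    name = "my_view_plugin"
    appbuilder_views = [
        {"name": "My Custom View", "category": "Custom", "view": MyCustomView()}
    ]
```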
**Are you willing to submit a PR?**
Would be a pleasure after some discussion!
**Related Issues**
I haven't found any related issues
Some feedback would be nice since this is my first issue:)
| https://github.com/apache/airflow/issues/15353 | https://github.com/apache/airflow/pull/27244 | 1447158e690f3d63981b3d8ec065665ec91ca54e | 544c93f0a4d2673c8de64d97a7a8128387899474 | 2021-04-13T19:08:56Z | python | 2022-10-31T04:33:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,332 | ["airflow/providers/sftp/hooks/sftp.py", "airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/hooks/test_sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | SftpSensor w/ possibility to use RegEx or fnmatch | **Description**
SmartSftpSensor with possibility to search for patterns (RegEx or UNIX fnmatch) in filenames or folders
**Use case / motivation**
I would like to have the possibility to use wildcards and/or regular expressions to look for certain files when using an SftpSensor.
At the moment I tried to do something like this:
```python
from airflow.providers.sftp.sensors.sftp import SFTPSensor
from airflow.plugins_manager import AirflowPlugin
from airflow.utils.decorators import apply_defaults
from typing import Any
import os
import fnmatch


class SmartSftpSensor(SFTPSensor):
    poke_context_fields = ('path', 'filepattern', 'sftp_conn_id', )  # <- Required fields
    template_fields = ['filepattern', 'path']

    @apply_defaults
    def __init__(
            self,
            filepattern="",
            **kwargs: Any):
        super().__init__(**kwargs)
        self.filepath = self.path
        self.filepattern = filepattern

    def poke(self, context):
        full_path = self.filepath
        directory = os.listdir(full_path)
        for file in directory:
            if not fnmatch.fnmatch(file, self.filepattern):
                pass
            else:
                context['task_instance'].xcom_push(key='file_name', value=file)
                return True
        return False

    def is_smart_sensor_compatible(self):  # <- Required
        result = (
            not self.soft_fail
            and super().is_smart_sensor_compatible()
        )
        return result


class MyPlugin(AirflowPlugin):
    name = "my_plugin"
    operators = [SmartSftpSensor]
```
And I call it by doing
```python
sense_file = SmartSftpSensor(
    task_id='sense_file',
    sftp_conn_id='my_sftp_connection',
    path=templ_remote_filepath,
    filepattern=filename,
    timeout=3
)
```
where path is the folder containing the files and filepattern is a rendered filename with wildcards: `filename = """{{ execution_date.strftime("%y%m%d_%H00??_P??_???") }}.LV1"""`, which is rendered to e.g. `210412_1600??_P??_???.LV1`
but I am still not getting the expected result, as it's not capturing anything.
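A quick way to sanity-check the rendered pattern outside Airflow (the file names below are made up):
```python
import fnmatch

pattern = "210412_1600??_P??_???.LV1"  # an already-rendered filepattern
for name in ["210412_160012_P01_123.LV1", "210412_1600_P01_123.LV1"]:
    print(name, fnmatch.fnmatch(name, pattern))  # prints True, then False
```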
**Are you willing to submit a PR?**
Yes!
**Related Issues**
I didn't find any | https://github.com/apache/airflow/issues/15332 | https://github.com/apache/airflow/pull/24084 | ec84ffe71cfa8246155b9b4cb10bf2167e75adcf | e656e1de55094e8369cab80b9b1669b1d1225f54 | 2021-04-12T17:01:24Z | python | 2022-06-06T12:54:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,326 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator unclear error message when name is missing | in KubernetesPodOperator the name parameter is mandatory.
However, if you forget to add it, you will get a broken DAG message with:

This is because
https://github.com/apache/airflow/blob/1dfbb8d2031cb8a3e95e4bf91aa478857c5c3a85/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L277
it calls `validate_key`, which produces this error:
https://github.com/apache/airflow/blob/1dfbb8d2031cb8a3e95e4bf91aa478857c5c3a85/airflow/utils/helpers.py#L40
The error is not useful. It doesn't explain what the issue is or where the problem is. It also says there is a problem with `key`, which isn't a parameter of anything.
I couldn't even tell that the issue came from the KubernetesPodOperator.
Suggestions:
make a better error message for `KubernetesPodOperator` when the name parameter is missing OR invalid.
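For example, something along these lines - a rough sketch only, with a hypothetical helper, not the actual fix:
```python
from airflow.exceptions import AirflowException
from airflow.utils.helpers import validate_key


def _validate_pod_name(name, task_id):
    # hypothetical helper: fail early with context instead of a bare "key" error
    if not name:
        raise AirflowException(
            f"KubernetesPodOperator task '{task_id}' is missing the required 'name' parameter."
        )
    try:
        validate_key(name)
    except AirflowException as exc:
        raise AirflowException(
            f"Invalid pod 'name' {name!r} in KubernetesPodOperator task '{task_id}': {exc}"
        ) from exc
```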
I would also suggest to make validate_key produce a better error message specifying what object produce the error | https://github.com/apache/airflow/issues/15326 | https://github.com/apache/airflow/pull/15373 | a835bec0878258cc22ef11eddb9b78520284d46e | 44480d3673e8349fe784c10d38e4915f08b82b94 | 2021-04-12T09:35:22Z | python | 2021-04-14T20:21:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,325 | ["airflow/hooks/base.py"] | Custom XCom backends circular import when using cli command like airflow connections list |
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
While running `airflow connections list` with `xcom_backend = xcom.MyXComBackend` set in `airflow.cfg`, I get an
`ImportError: cannot import name 'BaseHook' from partially initialized module 'airflow.hooks.base' (most likely due to a circular import) (/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/hooks/base.py)`
(see below for the complete traceback).
The issue seems to be that `airflow/cli/commands/connection_command.py` imports `BaseHook` directly, and the import chain is `BaseHook -> Connection -> BaseOperator -> TaskInstance -> XCom -> MyXComBackend -> S3Hook -> BaseHook` (again, see the complete traceback below).
```
airflow connections list
[2021-04-12 10:51:34,020] {configuration.py:459} ERROR - cannot import name 'BaseHook' from partially initialized module 'airflow.hooks.base' (most likely due to a circular import) (/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/hooks/base.py)
Traceback (most recent call last):
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/configuration.py", line 457, in getimport
return import_string(full_qualified_path)
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/Users/rubelagu/.pyenv/versions/3.8.8/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/rubelagu/tmp/airflow-xcom-backend/plugins/xcom.py", line 4, in <module>
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 37, in <module>
from airflow.providers.amazon.aws.hooks.base_aws import AwsBaseHook
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 41, in <module>
from airflow.hooks.base import BaseHook
ImportError: cannot import name 'BaseHook' from partially initialized module 'airflow.hooks.base' (most likely due to a circular import) (/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/hooks/base.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 47, in command
func = import_string(import_path)
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/Users/rubelagu/.pyenv/versions/3.8.8/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/cli/commands/connection_command.py", line 30, in <module>
from airflow.hooks.base import BaseHook
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/hooks/base.py", line 23, in <module>
from airflow.models.connection import Connection
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/models/__init__.py", line 20, in <module>
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 55, in <module>
from airflow.models.taskinstance import Context, TaskInstance, clear_task_instances
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 58, in <module>
from airflow.models.xcom import XCOM_RETURN_KEY, XCom
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/models/xcom.py", line 289, in <module>
XCom = resolve_xcom_backend()
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/models/xcom.py", line 279, in resolve_xcom_backend
clazz = conf.getimport("core", "xcom_backend", fallback=f"airflow.models.xcom.{BaseXCom.__name__}")
File "/Users/rubelagu/tmp/airflow-xcom-backend/venv/lib/python3.8/site-packages/airflow/configuration.py", line 460, in getimport
raise AirflowConfigException(
airflow.exceptions.AirflowConfigException: The object could not be loaded. Please check "xcom_backend" key in "core" section. Current value: "xcom.MyXComBackend".
```
**What you expected to happen**:
I expected to be able to use S3Hook or GCSHook from a custom XCom backend, following what I thought was the canonical example at https://medium.com/apache-airflow/airflow-2-0-dag-authoring-redesigned-651edc397178
**How to reproduce it**:
* In `airflow.cfg` set `xcom_backend = xcom.MyXComBackend`
* Create file `plugins/xcom.py` with the following contents
```
from airflow.models.xcom import BaseXCom
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
class MyXComBackend(BaseXCom):
    pass
```
**Anything else we need to know**:
This can be worked around by not importing the hooks at the module level but inside the methods, but that is
* really ugly
* not evident for anybody trying to create a custom XCom backend for the first time.
```
from airflow.models.xcom import BaseXCom
class MyXComBackend(BaseXCom):
    @staticmethod
    def serialize_value(value):
        from airflow.providers.amazon.aws.hooks.s3 import S3Hook
        hook = S3Hook()
        pass

    @staticmethod
    def deserialize_value(value):
        from airflow.providers.amazon.aws.hooks.s3 import S3Hook
        hook = S3Hook()
        pass
```
| https://github.com/apache/airflow/issues/15325 | https://github.com/apache/airflow/pull/15361 | a0b217ae3de0a180e746e1e2238ede795b47fb23 | 75603160848e4199ed368809dfd441dcc5ddbd82 | 2021-04-12T09:06:15Z | python | 2021-04-14T13:33:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,318 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/role_command.py", "tests/cli/commands/test_role_command.py"] | Add CLI to delete roles | Currently there is no option to delete a role from CLI which is very limiting.
I think it would be good if the CLI allowed deleting a role (assuming no users are assigned to it).
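For example, the command function could look roughly like this (a sketch only - the argument name is hypothetical, the real wiring would go through `cli_parser.py`, and it assumes the security manager exposes a role-deletion method):
```python
from airflow.www.app import cached_app


def roles_delete(args):
    appbuilder = cached_app().appbuilder
    for name in args.roles:  # hypothetical CLI argument holding role names
        role = appbuilder.sm.find_role(name)
        if role is None:
            raise SystemExit(f"Role {name!r} does not exist")
        # assumes the FAB security manager provides delete_role(); adjust to the available API
        appbuilder.sm.delete_role(name)
```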
| https://github.com/apache/airflow/issues/15318 | https://github.com/apache/airflow/pull/25854 | 3c806ff32d48e5b7a40b92500969a0597106d7db | 799b2695bb09495fc419d3ea2a8d29ff27fc3037 | 2021-04-11T06:39:05Z | python | 2022-08-27T00:37:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,315 | ["chart/templates/NOTES.txt"] | Improve message after Airflow is installed with Helm Chart | Currently we get the following message when Airflow is installed with the Helm Chart.
```bash
$ helm install airflow --namespace airflow .
NAME: airflow
LAST DEPLOYED: Sat Apr 10 23:03:35 2021
NAMESPACE: airflow
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Airflow!
Your release is named airflow.
You can now access your dashboard(s) by executing the following command(s) and visiting the corresponding port at localhost in your browser:
Airflow dashboard: kubectl port-forward svc/airflow-webserver 8080:8080 --namespace airflow
```
This is defined in https://github.com/apache/airflow/blob/master/chart/templates/NOTES.txt
We should include the following instructions too:
- Get Username and Password to access Webserver if a default `.Values.webserver.defaultUser.enabled` is `true`
- Provide kubectl command to get values for
- Fernet Key
- Postgres password if postgres is created by Airflow chart
We could get some inspiration from the following charts:
- https://github.com/bitnami/charts/blob/f4dde434ef7c1fdb1949fb3a796e5a40b5f1dc03/bitnami/airflow/templates/NOTES.txt
- https://github.com/helm/charts/blob/7e45e678e39b88590fe877f159516f85f3fd3f38/stable/wordpress/templates/NOTES.txt
### References:
- https://helm.sh/docs/chart_template_guide/notes_files/ | https://github.com/apache/airflow/issues/15315 | https://github.com/apache/airflow/pull/15820 | 8ab9c0c969559318417b9e66454f7a95a34aeeeb | 717dcfd60d4faab39e3ee6fcb4d815a73d92ed5a | 2021-04-10T23:10:33Z | python | 2021-05-13T18:48:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,289 | ["setup.py"] | reporting deprecated warnings saw in breeze | when working with breeze I saw these warnings many times in the logs:
```
=============================== warnings summary ===============================
tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py::TestCloudwatchTaskHandler::test_close_prevents_duplicate_calls
/usr/local/lib/python3.6/site-packages/jose/backends/cryptography_backend.py:18: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes, int_to_bytes
tests/always/test_example_dags.py::TestExampleDags::test_should_be_importable
/usr/local/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
return f(*args, **kwds)
tests/always/test_example_dags.py::TestExampleDags::test_should_be_importable
/usr/local/lib/python3.6/site-packages/scrapbook/__init__.py:8: FutureWarning: 'nteract-scrapbook' package has been renamed to `scrapbook`. No new releases are going out for this old package name.
warnings.warn("'nteract-scrapbook' package has been renamed to `scrapbook`. No new releases are going out for this old package name.", FutureWarning)
```
| https://github.com/apache/airflow/issues/15289 | https://github.com/apache/airflow/pull/15290 | 594d93d3b0882132615ec26770ea77ff6aac5dff | 9ba467b388148f4217b263d2518e8a24407b9d5c | 2021-04-08T19:07:53Z | python | 2021-04-09T15:28:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,280 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "tests/providers/apache/spark/hooks/test_spark_submit.py"] | Incorrect handle stop DAG when use spark-submit in cluster mode on yarn cluster. on yarn | **Apache Airflow version**: v2.0.1
**Environment**:
- **Cloud provider or hardware configuration**:
- bare metal
- **OS** (e.g. from /etc/os-release):
- Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-65-generic x86_64)
- **Kernel** (e.g. `uname -a`):
- Linux 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- miniconda
- **Others**:
- python 3.7
- hadoop 2.9.2
- spark 2.4.7
**What happened**:
I have two problems:
1. When a DAG starts spark-submit on YARN with deploy="cluster", Airflow doesn't track the driver's state.
Therefore, when the YARN job fails, the DAG's state remains "running".
2. When I manually stop a DAG's job, for example by marking it as "failed", the same job is still running on the YARN cluster.
This happens because an empty environment is passed to subprocess.Popen, so the hadoop bin directory is not in PATH:
ERROR - [Errno 2] No such file or directory: 'yarn': 'yarn'
**What you expected to happen**:
In the first case, the task should be moved to the "failed" state.
In the second case, the task should be stopped on the YARN cluster.
<!-- What do you think went wrong? -->
**How to reproduce it**:
Reproduce the first issue: start spark-submit on the YARN cluster with deploy="cluster" and master="yarn", then kill the task in the YARN UI. In Airflow, the task state remains "running".
Reproduce the second issue: start spark-submit on the YARN cluster with deploy="cluster" and master="yarn", then manually change the running task's state to "failed"; on the Hadoop cluster, the same job remains in the running state.
**Anything else we need to know**:
I propose the following changes to `airflow/providers/apache/spark/hooks/spark_submit.py` (`SparkSubmitHook`):
1. line 201:
```python
return 'spark://' in self._connection['master'] and self._connection['deploy_mode'] == 'cluster'
```
change to
```python
return ('spark://' in self._connection['master'] or self._connection['master'] == "yarn") and \
(self._connection['deploy_mode'] == 'cluster')
```
2. line 659
```python
env = None
```
change to
``` python
env = {**os.environ.copy(), **(self._env if self._env else {})}
```
Applying this patch solved the above issues. | https://github.com/apache/airflow/issues/15280 | https://github.com/apache/airflow/pull/15304 | 9dd14aae40f4c2164ce1010cd5ee67d2317ea3ea | 9015beb316a7614616c9d8c5108f5b54e1b47843 | 2021-04-08T15:15:44Z | python | 2021-04-09T23:04:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,279 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "setup.py"] | Error on logging empty line to Cloudwatch | **Apache Airflow version**: 2.0.1
**Environment**:
- **Cloud provider or hardware configuration**: AWS
**What happened**:
I have Airflow running with CloudWatch-based remote logging. I also have a `BashOperator` that runs, for example, `rsync` with invalid parameters, e.g. `rsync -av test test`. The output of the `rsync` error is formatted and contains an empty line. Once that empty line is logged to CloudWatch, I receive an error:
```
2021-04-06 19:29:22,318] /home/airflow/.local/lib/python3.6/site-packages/watchtower/__init__.py:154 WatchtowerWarning: Failed to deliver logs: Parameter validation failed:
Invalid length for parameter logEvents[5].message, value: 0, valid range: 1-inf
[2021-04-06 19:29:22,320] /home/airflow/.local/lib/python3.6/site-packages/watchtower/__init__.py:158 WatchtowerWarning: Failed to deliver logs: None
```
So basically empty lines can't be submitted to CloudWatch, and as a result the whole output of the process doesn't appear in the logs.
**What you expected to happen**:
I expect to have an output of the bash command in logs. Empty lines can be skipped or replaced with something.
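One possible direction, sketched as a small logging-handler tweak (the class name is made up and this is not the actual fix):
```python
import watchtower


class NonEmptyCloudWatchHandler(watchtower.CloudWatchLogHandler):
    """Drop zero-length messages, which the CloudWatch PutLogEvents API rejects."""

    def emit(self, record):
        if not record.getMessage().strip():
            return
        super().emit(record)
```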
**How to reproduce it**:
For example: run `BashOperator` with `rsync` command that fails on Airflow with Cloudwatch-based remote logging. It could be any other command that produces empty line in the output. | https://github.com/apache/airflow/issues/15279 | https://github.com/apache/airflow/pull/19907 | 5b50d610d4f1288347392fac4a6eaaed78d1bc41 | 2539cb44b47d78e81a88fde51087f4cc77c924c5 | 2021-04-08T15:10:23Z | python | 2021-12-01T17:53:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,261 | ["airflow/www/static/js/task_instance.js", "airflow/www/templates/airflow/task_instance.html"] | Changing datetime will never show task instance logs | This is an extension of #15103
**Apache Airflow version**: 2.x.x
**What happened**:
Once you get to the task instance logs page, the date will successfully load at first. But if you change the time of the `execution_date` from the datetimepicker in any way the logs will be blank.
The logs seem to require an exact datetime match, which can go down to 6 decimal places beyond a second.
**What you expected to happen**:
Logs should interpret the date and allow matching at least to the nearest whole second, which the UI can then handle, although a better UX would allow flexibility beyond that too.
or
Remove the datetimepicker, because logs only exist at the exact point in time a task instance occurs.
**How to reproduce it**:

**Anything else we need to know**:
Occurs every time
| https://github.com/apache/airflow/issues/15261 | https://github.com/apache/airflow/pull/15284 | de9567f3f5dc212cee4e83f41de75c1bbe43bfe6 | 56a03710a607376a01cb201ec81eb9d87d7418fe | 2021-04-07T20:52:49Z | python | 2021-04-09T00:51:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,260 | ["docs/apache-airflow-providers-mysql/connections/mysql.rst"] | Documentation - MySQL Connection - example contains a typo | **What happened**:
There is an extra single quote after /tmp/server-ca.pem in the example.
[MySQL Connections](https://airflow.apache.org/docs/apache-airflow-providers-mysql/stable/connections/mysql.html)
Example “extras” field:
{
"charset": "utf8",
"cursor": "sscursor",
"local_infile": true,
"unix_socket": "/var/socket",
"ssl": {
"cert": "/tmp/client-cert.pem",
**"ca": "/tmp/server-ca.pem'",**
"key": "/tmp/client-key.pem"
}
}
| https://github.com/apache/airflow/issues/15260 | https://github.com/apache/airflow/pull/15265 | c594d9cfb32bbcfe30af3f5dcb452c6053cacc95 | 7ab4b2707669498d7278113439a13f58bd12ea1a | 2021-04-07T20:31:20Z | python | 2021-04-08T11:09:55Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,259 | ["chart/templates/scheduler/scheduler-deployment.yaml", "chart/tests/test_scheduler.py", "chart/values.schema.json", "chart/values.yaml"] | Scheduler livenessprobe and k8s v1.20+ | Pre Kubernetes v1.20, exec livenessprobes `timeoutSeconds` wasn't functional, and defaults to 1 second. The livenessprobe for the scheduler, however, takes longer than 1 second to finish so the scheduler will have consistent livenessprobe failures when running on v1.20.
> Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes: probes continued running indefinitely, even past their configured deadline, until a result was returned.
```
...
Warning Unhealthy 23s kubelet Liveness probe failed:
```
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
**Kubernetes version**: v1.20.2
**What happened**:
Livenessprobe failures keeps restarting the scheduler due to timing out.
**What you expected to happen**:
Livenessprobe succeeds.
**How to reproduce it**:
Deploy the helm chart on v1.20+ with the default livenessprobe `timeoutSeconds` of 1 and watch the scheduler livenessprobe fail.
| https://github.com/apache/airflow/issues/15259 | https://github.com/apache/airflow/pull/15333 | 18c5b8af1020a86a82c459b8a26615ba6f1d8df6 | 8b56629ecd44d346e35c146779e2bb5422af1b5d | 2021-04-07T20:04:27Z | python | 2021-04-12T22:46:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,255 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [QUARANTINED] TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full is flaky |
For example here:
https://github.com/apache/airflow/runs/2288380184?check_suite_focus=true#step:6:8759
```
=================================== FAILURES ===================================
__________ TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full __________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_keeps_scheduling_pool_full>
def test_scheduler_keeps_scheduling_pool_full(self):
"""
Test task instances in a pool that isn't full keep getting scheduled even when a pool is full.
"""
dag_d1 = DAG(dag_id='test_scheduler_keeps_scheduling_pool_full_d1', start_date=DEFAULT_DATE)
BashOperator(
task_id='test_scheduler_keeps_scheduling_pool_full_t1',
dag=dag_d1,
owner='airflow',
pool='test_scheduler_keeps_scheduling_pool_full_p1',
bash_command='echo hi',
)
dag_d2 = DAG(dag_id='test_scheduler_keeps_scheduling_pool_full_d2', start_date=DEFAULT_DATE)
BashOperator(
task_id='test_scheduler_keeps_scheduling_pool_full_t2',
dag=dag_d2,
owner='airflow',
pool='test_scheduler_keeps_scheduling_pool_full_p2',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag_d1, root_dag=dag_d1)
dagbag.bag_dag(dag=dag_d2, root_dag=dag_d2)
dagbag.sync_to_db()
session = settings.Session()
pool_p1 = Pool(pool='test_scheduler_keeps_scheduling_pool_full_p1', slots=1)
pool_p2 = Pool(pool='test_scheduler_keeps_scheduling_pool_full_p2', slots=10)
session.add(pool_p1)
session.add(pool_p2)
session.commit()
dag_d1 = SerializedDAG.from_dict(SerializedDAG.to_dict(dag_d1))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
# Create 5 dagruns for each DAG.
# To increase the chances the TIs from the "full" pool will get retrieved first, we schedule all
# TIs from the first dag first.
date = DEFAULT_DATE
for _ in range(5):
dr = dag_d1.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=date,
state=State.RUNNING,
)
scheduler._schedule_dag_run(dr, {}, session)
date = dag_d1.following_schedule(date)
date = DEFAULT_DATE
for _ in range(5):
dr = dag_d2.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=date,
state=State.RUNNING,
)
scheduler._schedule_dag_run(dr, {}, session)
date = dag_d2.following_schedule(date)
scheduler._executable_task_instances_to_queued(max_tis=2, session=session)
task_instances_list2 = scheduler._executable_task_instances_to_queued(max_tis=2, session=session)
# Make sure we get TIs from a non-full pool in the 2nd list
assert len(task_instances_list2) > 0
> assert all(
task_instance.pool != 'test_scheduler_keeps_scheduling_pool_full_p1'
for task_instance in task_instances_list2
)
E AssertionError: assert False
E + where False = all(<generator object TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full.<locals>.<genexpr> at 0x7fb6ecb90c10>)
``` | https://github.com/apache/airflow/issues/15255 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | 2021-04-07T18:12:23Z | python | 2021-12-13T17:55:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,248 | ["airflow/example_dags/tutorial_taskflow_api_etl_virtualenv.py", "airflow/exceptions.py", "airflow/models/dagbag.py", "airflow/providers/papermill/example_dags/example_papermill.py", "tests/api_connexion/endpoints/test_log_endpoint.py", "tests/core/test_impersonation_tests.py", "tests/dags/test_backfill_pooled_tasks.py", "tests/dags/test_dag_with_no_tags.py", "tests/models/test_dagbag.py", "tests/www/test_views.py"] | Clear notification in UI when duplicate dag names are present |
**Description**
When using decorators to define dags, e.g. dag_1.py:
```python
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago

DEFAULT_ARGS = {
    "owner": "airflow",
}


@task
def some_task():
    pass


@dag(
    default_args=DEFAULT_ARGS,
    schedule_interval=None,
    start_date=days_ago(2),
)
def my_dag():
    some_task()


DAG_1 = my_dag()
```
and
dag_2.py:
```python
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago

DEFAULT_ARGS = {
    "owner": "airflow",
}


@task
def some_other_task():
    pass


@dag(
    default_args=DEFAULT_ARGS,
    schedule_interval=None,
    start_date=days_ago(2),
)
def my_dag():
    some_other_task()


DAG_2 = my_dag()
```
We have two different dags which have been written in isolation, but by sheer bad luck both define `my_dag()`. This seems fine for each file in isolation, but on the airflow UI, we only end up seeing one entry for `my_dag`, where it has picked up `dag_1.py` and ignored `dag_2.py`.
**Use case / motivation**
We currently end up with only one DAG showing up on the UI, and no indication as to why the other one hasn't appeared.
Suggestion: popup similar to 'DAG import error' to highlight what needs changing in one of the DAG files in order for both to show up ("DAG import error: duplicate dag names found - please review {duplicate files} and ensure all dag definitions are unique"?)
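To sketch the kind of check that could surface the clash (hypothetical standalone code, not the real `DagBag` logic):
```python
seen = {}  # dag_id -> file that first defined it


def register(dag_id, fileloc):
    if dag_id in seen and seen[dag_id] != fileloc:
        raise RuntimeError(
            f"Duplicate DAG id {dag_id!r}: defined in both {seen[dag_id]} and {fileloc}"
        )
    seen[dag_id] = fileloc
```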
**Are you willing to submit a PR?**
No time to spare on this at present
**Related Issues**
I haven't found any related issues with the search function. | https://github.com/apache/airflow/issues/15248 | https://github.com/apache/airflow/pull/15302 | faa4a527440fb1a8f47bf066bb89bbff380b914d | 09674537cb12f46ca53054314aea4d8eec9c2e43 | 2021-04-07T10:04:57Z | python | 2021-05-06T11:59:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,245 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Passing Custom Image Family Name to the DataprocClusterCreateOperator() | **Description**
Currently, we can only pass custom Image name to **DataprocClusterCreateOperator(),**
as the custom image expires after 60 days, we either need to update the image or we need to pass the expiration token.
Functionality is already available in **gcloud** and **REST**.
`gcloud dataproc clusters test_cluster ......
--image projects/<custom_image_project_id>/global/images/family/<family_name>
.......
`
**Use case / motivation**
The user should be able to pass either Custom Image or Custom Image family name,
so we don't have to update the image up on expiration or use expiration token.
**Are you willing to submit a PR?**
Yes
**Related Issues**
None | https://github.com/apache/airflow/issues/15245 | https://github.com/apache/airflow/pull/15250 | 99ec208024933d790272a09a6f20b241410a7df7 | 6da36bad2c5c86628284d91ad6de418bae7cd029 | 2021-04-07T06:17:45Z | python | 2021-04-18T17:26:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,234 | ["airflow/models/taskinstance.py", "airflow/utils/strings.py", "airflow/www/templates/airflow/confirm.html", "airflow/www/templates/airflow/dag.html", "airflow/www/views.py", "tests/www/views/test_views.py", "tests/www/views/test_views_acl.py", "tests/www/views/test_views_tasks.py"] | DAG Failure Mark Success link produces a 404 | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Kubernetes 1.20.2
**Environment**: Debian Buster image running in Kubernetes 1.20.2
- **Cloud provider or hardware configuration**: Ubuntu on VMware on premise
- **OS** (e.g. from /etc/os-release): Buster
- **Kernel** (e.g. `uname -a`):4.15.0-129-generic
- **Install tools**: Docker with a helm chart
**What happened**: After enabling an SMTP back end and setting my DAG to notify on failure with
```
'email_on_failure': True,
'email_on_retry': True,
'email': '<my email>'
```
The email is sent and the Mark success portion fails with a 404
**What you expected to happen**: When clicking the Mark Success link, I expect the link to mark the DAG as a success and forward on to somewhere in the Airflow UI
**How to reproduce it**: Enable email on your server and have a DAG that fails. Click the link in the email
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
Enable an SMTP connection for alerts. Use this DAG to always fail
Failure Dag:
```
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.version import version
from datetime import datetime, timedelta


def error_function():
    raise Exception('Something wrong')


default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': True,
    'email_on_retry': True,
    'email': '<your email>',
    'retries': 0,
    'retry_delay': timedelta(minutes=5)
}

with DAG('failure_dag',
         start_date=datetime(2019, 1, 1),
         max_active_runs=3,
         schedule_interval=timedelta(minutes=30),  # https://airflow.apache.org/docs/stable/scheduler.html#dag-runs
         default_args=default_args,
         catchup=False) as dag:

    t0 = PythonOperator(
        task_id='failing_task',
        python_callable=error_function
    )
```
Receive the email and then click the Mark Success link generated from the method [here](https://github.com/apache/airflow/blob/bc5ced3e54b3cf855808e04f09543159fd3fa79f/airflow/models/taskinstance.py#L531-L543)
**Anything else we need to know**:
How often does this problem occur: Every time the link is created in email
Possible Causes: The mark success view requires a POST request but the link produces a GET request.
@ephraimbuddy | https://github.com/apache/airflow/issues/15234 | https://github.com/apache/airflow/pull/16233 | 70bf1b12821e5ac3869cee27ef54b3ee5cc66f47 | 7432c4d7ea17ad95cc47c6e772c221d5d141f5e0 | 2021-04-06T18:20:15Z | python | 2021-06-11T08:42:55Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,218 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/executors/kubernetes_executor.py", "airflow/jobs/scheduler_job.py", "airflow/kubernetes/kube_config.py", "airflow/utils/event_scheduler.py", "tests/executors/test_kubernetes_executor.py", "tests/utils/test_event_scheduler.py"] | Task stuck in queued state with pending pod | **Apache Airflow version**: 2.0.1
**Kubernetes version**: v1.19.7
**Executor**: KubernetesExecutor
**What happened**:
If you have a worker that gets stuck in pending forever, say with a missing volume mount, the task will stay in the queued state forever. Nothing is applying a timeout on it actually being able to start.
**What you expected to happen**:
Eventually the scheduler will notice that the worker hasn't progressed past pending after a given amount of time and will mark it as a failure.
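Roughly the kind of check I have in mind, as a standalone sketch (the timeout value, namespace, and wiring are hypothetical; a real implementation would live in the executor):
```python
from datetime import datetime, timezone

from kubernetes import client, config

PENDING_TIMEOUT_SECONDS = 300  # hypothetical threshold

config.load_incluster_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("airflow", field_selector="status.phase=Pending").items:
    age = (datetime.now(timezone.utc) - pod.metadata.creation_timestamp).total_seconds()
    if age > PENDING_TIMEOUT_SECONDS:
        # delete the stuck worker so the task can be failed/retried instead of sitting in "queued"
        v1.delete_namespaced_pod(pod.metadata.name, "airflow")
```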
**How to reproduce it**:
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago
from kubernetes.client import models as k8s

default_args = {
    "owner": "airflow",
}

with DAG(
    dag_id="pending",
    default_args=default_args,
    schedule_interval=None,
    start_date=days_ago(2),
) as dag:
    BashOperator(
        task_id="forever_pending",
        bash_command="date; sleep 30; date",
        executor_config={
            "pod_override": k8s.V1Pod(
                spec=k8s.V1PodSpec(
                    containers=[
                        k8s.V1Container(
                            name="base",
                            volume_mounts=[
                                k8s.V1VolumeMount(mount_path="/foo/", name="vol")
                            ],
                        )
                    ],
                    volumes=[
                        k8s.V1Volume(
                            name="vol",
                            persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(
                                claim_name="missing"
                            ),
                        )
                    ],
                )
            ),
        },
    )
```
**Anything else we need to know**:
Related to:
* #15149 (This is reporting that these pending pods don't get deleted via "Mark Failed")
* #14556 (This handles when these pending pods get deleted and is already fixed)
| https://github.com/apache/airflow/issues/15218 | https://github.com/apache/airflow/pull/15263 | 1e425fe6459a39d93a9ada64278c35f7cf0eab06 | dd7ff4621e003421521960289a323eb1139d1d91 | 2021-04-05T22:04:15Z | python | 2021-04-20T18:24:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,193 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/kubernetes_command.py", "tests/cli/commands/test_kubernetes_command.py"] | CLI 'kubernetes cleanup-pods' should only clean up Airflow-created Pods | **Update**
We discussed and decide this CLI should always only clean up pods created by Airflow.
<hr>
### Description
Currently command `kubernetes cleanup-pods` cleans up Pods that meet specific conditions in a given namespace.
Underlying code: https://github.com/apache/airflow/blob/2.0.1/airflow/cli/commands/kubernetes_command.py#L70
### Use case / motivation
The problem to me is: users may have other non-Airflow stuff running in this specific namespace, and they may want a different cleanup strategy/logic for these non-Airflow pods.
We may want to have an additional boolean flag for this command, like `--only-airflow-pods`, so that users can decide if they only want to clean up Airflow-created Pods.
It should not be hard to identify Airflow-created pods, given the specific labels Airflow adds to the Pods it creates (https://github.com/apache/airflow/blob/2.0.1/airflow/kubernetes/pod_generator.py#L385)
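For example, filtering to Airflow-created pods with the Kubernetes client could look roughly like this (the exact label key should be checked against `pod_generator` for your version, and the namespace is just an example):
```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
# keep only pods carrying the label the Kubernetes executor puts on its workers
airflow_pods = v1.list_namespaced_pod(
    namespace="airflow",
    label_selector="kubernetes_executor=True",
).items
```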
| https://github.com/apache/airflow/issues/15193 | https://github.com/apache/airflow/pull/15204 | 076eaeaa2bec38960fb4d9a24c28c03321e9a00c | c594d9cfb32bbcfe30af3f5dcb452c6053cacc95 | 2021-04-04T20:12:28Z | python | 2021-04-08T10:11:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,188 | [".github/actions/cancel-workflow-runs"] | In some cases 'cancel-workflow-run' action crashes | In some cases, when the PR is `weird` - when there is an empty PR that gets closed quickly I guess such as:
https://github.com/apache/airflow/runs/2265280777?check_suite_focus=true,
the 'cancel-workflow-runs' action crashes:
```
Adding the run: 717090708 to candidates with :1029499:astronomer/airflow:bump-openapi:pull_request key.
Checking run number: 21885 RunId: 717087278 Url: https://api.github.com/repos/apache/airflow/actions/runs/717087278 Status in_progress Created at 2021-04-04T17:25:26Z
Adding the run: 717087278 to candidates with :1029499:apache/airflow:master:push key.
Checking run number: 21884 RunId: 717051432 Url: https://api.github.com/repos/apache/airflow/actions/runs/717051432 Status in_progress Created at 2021-04-04T17:02:39Z
Error: Cannot read property 'full_name' of null
```
This results in cancelling all other workflows and is an easy way to DDoS Airflow. It can be handled by cancelling the offending workflow, but it should be handled better.
UPDATE: this happens when "head_repository" is `null` - which is arguably an error on GitHub's side. But we should still handle it better.
| https://github.com/apache/airflow/issues/15188 | https://github.com/apache/airflow/pull/15189 | 53dafa593fd7ce0be2a48dc9a9e993bb42b6abc5 | 041a09f3ee6bc447c3457b108bd5431a2fd70ad9 | 2021-04-04T17:55:53Z | python | 2021-04-04T18:30:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,179 | ["chart/templates/NOTES.txt"] | Kubernetes does not show logs for task instances if remote logging is not configured | Without configuring remote logging, logs from Kubernetes for task instances are not complete.
Without remote logging configured, the log output for a task instance (with logging_level: INFO) is only:
```log
BACKEND=postgresql
DB_HOST=airflow-postgresql.airflow.svc.cluster.local
DB_PORT=5432
[2021-04-03 12:35:52,047] {dagbag.py:448} INFO - Filling up the DagBag from /opt/airflow/dags/k8pod.py
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:26 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1Volume`.
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:27 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1VolumeMount`.
Running <TaskInstance: k8_pod_operator_xcom.task322 2021-04-03T12:25:49.515523+00:00 [queued]> on host k8podoperatorxcomtask322.7f2ee45d4d6443c5ad26bd8fbefb8292
```
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04
**What happened**:
The logs for task instances run are not shown without remote logging configured
**What you expected to happen**:
I expected to see complete logs for tasks
**How to reproduce it**:
Start airflow using the helm chart without configuring remote logging.
Run a task and check the logs.
It's necessary to set `delete_worker_pods` to False so you can view the logs after the task has ended
| https://github.com/apache/airflow/issues/15179 | https://github.com/apache/airflow/pull/16784 | 1eed6b4f37ddf2086bf06fb5c4475c68fadac0f9 | 8885fc1d9516b30b316487f21e37d34bdd21e40e | 2021-04-03T21:20:52Z | python | 2021-07-06T18:37:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,178 | ["airflow/example_dags/tutorial.py", "airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/www/utils.py", "airflow/www/views.py", "docs/apache-airflow/concepts.rst", "tests/serialization/test_dag_serialization.py", "tests/www/test_utils.py"] | Task doc is not shown on Airflow 2.0 Task Instance Detail view |
**Apache Airflow version**: 932f8c2e9360de6371031d4d71df00867a2776e6
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**: locally run `airflow server`
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): mac
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Task doc is shown on Airflow v1 Task Instance Detail view but not shown on v2.
**What you expected to happen**:
Task doc is shown.
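For reference, the task doc in question is attached roughly like this in the tutorial DAG (a trimmed sketch, not the exact example code):
```python
from textwrap import dedent

from airflow.operators.python import PythonOperator


def extract():
    pass


extract_task = PythonOperator(task_id="extract", python_callable=extract)
extract_task.doc_md = dedent(
    """\
    #### Extract task
    This documentation should appear on the Task Instance Details view.
    """
)
```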
**How to reproduce it**:
- install airflow latest master
- `airflow server`
- open `tutorial_etl_dag` in `example_dags`
- run the dag (I don't know why, but the task instance detail page fails with an error if there are no dag runs) and open the task instance detail
| https://github.com/apache/airflow/issues/15178 | https://github.com/apache/airflow/pull/15191 | 7c17bf0d1e828b454a6b2c7245ded275b313c792 | e86f5ca8fa5ff22c1e1f48addc012919034c672f | 2021-04-03T20:48:59Z | python | 2021-04-05T02:46:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,171 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | scheduler does not apply ordering when querying which task instances to queue | Issue type:
Bug
Airflow version:
2.0.1 (although bug may have existed earlier, and master still has the bug)
Issue:
The scheduler sometimes queues tasks in alphabetical order instead of in priority weight and execution date order. This causes priorities to not work at all, and causes some tasks with name later in the alphabet to never run as long as new tasks with names earlier in the alphabet are ready.
Where the issue is in code (I think):
The scheduler will query the DB to get a set of task instances that are ready to run: https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L915-L924
And it will simply get the first `max_tis` task instances from the result (with the `limit` call in the last line of the query), where `max_tis` is computed earlier in the code as the cumulative pool slots available. The code in master improved the query to filter out tasks from starved pools, but it still takes only the first `max_tis` tasks, with no ordering or reasoning about which `max_tis` to take.
Later, the scheduler is smart and will queue tasks based on priority and execution order:
https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L978-L980
However, the correct sorting (second code link here) will only happen on the subset picked by the query (first code link here), but the query will not pick tasks following correct sorting.
This causes tasks with lower priority and / or later execution date to be queued BEFORE tasks with higher priority and / or earlier execution date, just because the former come earlier in the alphabet than the latter, and therefore only the former are returned by the unsorted, limited SQL query.
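For reference, pushing that same ordering into the candidate query itself could look roughly like this (a sketch only; the real query is built inside `_executable_task_instances_to_queued`):
```python
from airflow.models.taskinstance import TaskInstance as TI


def order_candidates(query, max_tis):
    # priority first (descending), then execution date (ascending), before applying the limit
    return query.order_by(-TI.priority_weight, TI.execution_date).limit(max_tis)
```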
Proposed fix:
Add a "sort by" in the query that gets the tasks to examine (first code link here), so that tasks are sorted by priority weight and execution time (meaning, same logic as the list sorting done later). I am willing to submit a PR if at least I get some feedback on the proposed fix here. | https://github.com/apache/airflow/issues/15171 | https://github.com/apache/airflow/pull/15210 | 4752fb3eb8ac8827e6af6022fbcf751829ecb17a | 943292b4e0c494f023c86d648289b1f23ccb0ee9 | 2021-04-03T06:35:46Z | python | 2021-06-14T11:34:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,150 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | "duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key" when triggering a DAG |
**Apache Airflow version**:
2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
1.14
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
[2021-04-02 07:23:30,513] [ERROR] app.py:1892 - Exception on /api/v1/dags/auto_test/dagRuns [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(auto_test, 1967-12-13 20:57:42.043+01) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 184, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 384, in wrapper
return function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/response.py", line 103, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
return function(**kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/security.py", line 47, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/endpoints/dag_run_endpoint.py", line 231, in post_dag_run
session.commit()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1046, in commit
self.transaction.commit()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 504, in commit
self._prepare_impl()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2540, in flush
self._flush(objects)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2682, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2642, in _flush
flush_context.execute()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1136, in _emit_insert_statements
statement, params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(auto_test, 1967-12-13 20:57:42.043+01) already exists.
[SQL: INSERT INTO dag_run (dag_id, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, last_scheduling_decision, dag_hash) VALUES (%(dag_id)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(state)s, %(run_id)s, %(creating_job_id)s, %(external_trigger)s, %(run_type)s, %(conf)s, %(last_scheduling_decision)s, %(dag_hash)s) RETURNING dag_run.id]
[parameters: {'dag_id': 'auto_test', 'execution_date': datetime.datetime(1967, 12, 13, 19, 57, 42, 43000, tzinfo=Timezone('UTC')), 'start_date': datetime.datetime(2021, 4, 2, 7, 23, 30, 511735, tzinfo=Timezone('UTC')), 'end_date': None, 'state': 'running', 'run_id': 'dag_run_id_zstp_4435_postman11', 'creating_job_id': None, 'external_trigger': True, 'run_type': <DagRunType.MANUAL: 'manual'>, 'conf': <psycopg2.extensions.Binary object at 0x7f07b30b71e8>, 'last_scheduling_decision': None, 'dag_hash': None}]
```
**What you expected to happen**:
The second trigger succeeds.
**How to reproduce it**:
Trigger a dag with the following conf:
```json
{
  "dag_run_id": "dag_run_id_zstp_4435_postman",
  "execution_date": "1967-12-13T19:57:42.043Z",
  "conf": {}
}
```
Then trigger the same dag again, changing only the dag_run_id:
```json
{
  "dag_run_id": "dag_run_id_zstp_4435_postman111",
  "execution_date": "1967-12-13T19:57:42.043Z",
  "conf": {}
}
```
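For reference, a minimal way to send these payloads to the stable REST API (a sketch using the `requests` library; the host and credentials are placeholders, and the endpoint path is the one visible in the stack trace above):
```python
import requests

payload = {
    "dag_run_id": "dag_run_id_zstp_4435_postman111",
    "execution_date": "1967-12-13T19:57:42.043Z",
    "conf": {},
}
# POST to the dagRuns endpoint seen in the traceback; adjust host/auth for your setup.
response = requests.post(
    "http://localhost:8080/api/v1/dags/auto_test/dagRuns",
    json=payload,
    auth=("admin", "admin"),
)
print(response.status_code, response.text)
```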
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15150 | https://github.com/apache/airflow/pull/15174 | 36d9274f4ea87f28e2dcbab393b21e34a04eec30 | d89bcad26445c8926093680aac84d969ac34b54c | 2021-04-02T07:33:21Z | python | 2021-04-06T14:05:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,145 | ["airflow/providers/google/cloud/example_dags/example_bigquery_to_mssql.py", "airflow/providers/google/cloud/transfers/bigquery_to_mssql.py", "airflow/providers/google/provider.yaml", "tests/providers/google/cloud/transfers/test_bigquery_to_mssql.py"] | Big Query to MS SQL operator | <!--
-->
**Description**
A new transfer operator for transferring records from Big Query to MSSQL.
**Use case / motivation**
Very similar to the existing BigQuery-to-MySQL transfer, this will be an operator for transferring rows from BigQuery to MSSQL.
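A rough sketch of how such a transfer could look in a DAG; the operator name, import path, and arguments below are illustrative guesses, not a finalized API:
```python
from airflow import DAG
from airflow.utils.dates import days_ago

# Hypothetical import path and operator name for the proposed transfer.
from airflow.providers.google.cloud.transfers.bigquery_to_mssql import BigQueryToMsSqlOperator

with DAG("example_bigquery_to_mssql", start_date=days_ago(1), schedule_interval=None) as dag:
    bq_to_mssql = BigQueryToMsSqlOperator(
        task_id="bq_to_mssql",
        source_project_dataset_table="my-project.my_dataset.my_table",  # placeholder
        mssql_table="dbo.my_table",  # placeholder
        replace=False,
    )
```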
**Are you willing to submit a PR?**
Yes
**Related Issues**
No
| https://github.com/apache/airflow/issues/15145 | https://github.com/apache/airflow/pull/15422 | 70cfe0135373d1f0400e7d9b275ebb017429794b | 7f8f75eb80790d4be3167f5e1ffccc669a281d55 | 2021-04-01T20:36:55Z | python | 2021-06-12T21:07:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,131 | ["airflow/utils/cli.py", "tests/utils/test_cli_util.py"] | airflow scheduler -p command not working in airflow 2.0.1 | **Apache Airflow version**:
2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: 4GB RAM, Processor - Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
- **OS** (e.g. from /etc/os-release): 18.04.5 LTS (Bionic Beaver)
- **Kernel** (e.g. `uname -a`): Linux 4.15.0-136-generic
- **Install tools**: bare metal installation as per commands given [here](https://airflow.apache.org/docs/apache-airflow/stable/installation.html)
**What happened**:
Running `airflow scheduler -p` produced the following error:
```
Traceback (most recent call last):
File "/home/vineet/Documents/projects/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/utils/cli.py", line 86, in wrapper
metrics = _build_metrics(f.__name__, args[0])
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/utils/cli.py", line 118, in _build_metrics
full_command[idx + 1] = "*" * 8
IndexError: list assignment index out of range
```
As per the docs, `-p` is a valid flag (the short form of `--do-pickle`), and running `airflow scheduler --do-pickle` gives the correct result.
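From the traceback, the failure is in the CLI metrics helper that masks the value following sensitive-looking flags; when such a flag is the last element of the command, the `idx + 1` assignment falls outside the list. A guarded version of that loop might look like this (a sketch, not the exact Airflow code; the set of sensitive flags is an assumption):
```python
sensitive_fields = {"-p", "--password", "--conn-password"}  # assumed set of masked flags

full_command = ["airflow", "scheduler", "-p"]
for idx, arg in enumerate(full_command):
    # Only mask when a value actually follows the flag.
    if arg in sensitive_fields and idx + 1 < len(full_command):
        full_command[idx + 1] = "*" * 8
```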
**How to reproduce it**:
Install `airflow` 2.0.1 and run `airflow scheduler -p` | https://github.com/apache/airflow/issues/15131 | https://github.com/apache/airflow/pull/15143 | 6822665102c973d6e4d5892564294489ca094580 | 486b76438c0679682cf98cb88ed39c4b161cbcc8 | 2021-04-01T11:44:03Z | python | 2021-04-01T21:02:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,113 | ["setup.py"] | ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq' | **What happened**:
`pandas-gbq` released version [0.15.0](https://github.com/pydata/pandas-gbq/releases/tag/0.15.0) which broke `apache-airflow-backport-providers-google==2020.11.23`
```
../lib/python3.7/site-packages/airflow/providers/google/cloud/hooks/bigquery.py:49: in <module>
from pandas_gbq.gbq import (
E ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq' (/usr/local/lib/python3.7/site-packages/pandas_gbq/gbq.py)
```
The fix is to pin `pandas-gpq==0.14.1`. | https://github.com/apache/airflow/issues/15113 | https://github.com/apache/airflow/pull/15114 | 64b00896d905abcf1fbae195a29b81f393319c5f | b3b412523c8029b1ffbc600952668dc233589302 | 2021-03-31T14:39:00Z | python | 2021-04-04T17:25:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,107 | ["Dockerfile", "chart/values.yaml", "docs/docker-stack/build-arg-ref.rst", "docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile", "docs/docker-stack/entrypoint.rst", "scripts/in_container/prod/entrypoint_prod.sh"] | Make the entrypoint in Prod image fail in case the user/group is not properly set | Airflow Production image accepts two types of uid/gid setting:
* airflow user (50000) with any GID
* any other user with GID = 0 (this is to accommodate the OpenShift Guidelines https://docs.openshift.com/enterprise/3.0/creating_images/guidelines.html)
We should check the uid/gid at the entrypoint and fail with a clear error message if the uid/gid are set incorrectly.
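The production image entrypoint is a shell script, but the proposed check boils down to something like the following (an illustrative Python sketch of the logic, not the actual entrypoint code):
```python
import os
import sys

AIRFLOW_UID = 50000  # the dedicated "airflow" user baked into the image

uid, gid = os.getuid(), os.getgid()
if uid != AIRFLOW_UID and gid != 0:
    sys.exit(
        "The container must be run either as the 'airflow' user (UID 50000) "
        f"or as an arbitrary user with GID 0, but got UID={uid}, GID={gid}."
    )
```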
| https://github.com/apache/airflow/issues/15107 | https://github.com/apache/airflow/pull/15162 | 1d635ef0aefe995553059ee5cf6847cf2db65b8c | ce91872eccceb8fb6277012a909ad6b529a071d2 | 2021-03-31T10:30:38Z | python | 2021-04-08T17:28:36Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,103 | ["airflow/www/static/js/task_instance.js"] | Airflow web server redirects to a non-existing log folder - v2.1.0.dev0 | **Apache Airflow version**: v2.1.0.dev0
**Environment**:
- **Others**: Docker + docker compose
```
docker pull apache/airflow:master-python3.8
```
**What happened**:
Once the tasks finish successfully, I click on the Logs button in the web server and get redirected to this URL:
`http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30T22%3A50%3A17.075509%2B00%3A00`
Everything looks fine just for 0.5-ish seconds (the screenshots below were taken by disabling the page refreshing):


Then, it instantly gets redirected to the following URL:
`http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30+22%3A50%3A17%2B00%3A00#`
In which I cannot see any info:


The problem lies in the execution_date format specified in the URL:
```
2021-03-30T22%3A50%3A17.075509%2B00%3A00
2021-03-30+22%3A50%3A17%2B00%3A00#
```
This is my python code to run the DAG:
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def test_python(test, **kwargs):
    # Stub callable used by the PythonOperator below.
    print(test)


args = {
'owner': 'airflow',
}
dag = DAG(
dag_id='testing',
default_args=args,
schedule_interval=None,
start_date=datetime(2019,1,1),
catchup=False,
tags=['example'],
)
task = PythonOperator(
task_id="testing2",
python_callable=test_python,
depends_on_past=False,
op_kwargs={'test': 'hello'},
dag=dag,
)
```
**Configuration details**
Environment variables from docker-compose.yml file:
```
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__DEFAULT_TIMEZONE: Europe/Madrid
AIRFLOW__WEBSERVER__DEFAULT_UI_TIMEZONE: Europe/Madrid
AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'
```
| https://github.com/apache/airflow/issues/15103 | https://github.com/apache/airflow/pull/15258 | 019241be0c839ba32361679ffecd178c0506d10d | 523fb5c3f421129aea10045081dc5e519859c1ae | 2021-03-30T23:29:48Z | python | 2021-04-07T20:38:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,097 | ["airflow/providers/cncf/kubernetes/utils/pod_launcher.py", "tests/providers/cncf/kubernetes/utils/test_pod_launcher.py"] | Errors when launching many pods simultaneously on GKE | <!--
-->
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.15-gke.1500
**Environment**:
- **Cloud provider or hardware configuration**: Google Cloud
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
When many pods are launched at the same time (typically through the kubernetesPodOperator), some will fail due to a 409 error encountered when modifying a resourceQuota object.
Full stack trace:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 339, in execute
final_state, _, result = self.create_new_pod_for_operator(labels, launcher)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 485, in create_new_pod_for_operator
launcher.start_pod(self.pod, startup_timeout=self.startup_timeout_seconds)
File "/usr/local/lib/python3.8/site-packages/airflow/kubernetes/pod_launcher.py", line 109, in start_pod
resp = self.run_pod_async(pod)
File "/usr/local/lib/python3.8/site-packages/airflow/kubernetes/pod_launcher.py", line 87, in run_pod_async
raise e
File "/usr/local/lib/python3.8/site-packages/airflow/kubernetes/pod_launcher.py", line 81, in run_pod_async
resp = self._client.create_namespaced_pod(
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 6115, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs)
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 6193, in create_namespaced_pod_with_http_info
return self.api_client.call_api('/api/v1/namespaces/{namespace}/pods', 'POST',
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 330, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 163, in __call_api
response_data = self.request(method, url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 371, in request
return self.rest_client.POST(url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 260, in POST
return self.request("POST", url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (409)
Reason: Conflict
HTTP response headers: HTTPHeaderDict({'Audit-Id': '9e2e6081-4e52-41fc-8caa-6db9d546990c', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Tue, 30 Mar 2021 15:41:33 GMT', 'Content-Length': '342'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on resourcequotas \"gke-resource-quotas\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"gke-resource-quotas","kind":"resourcequotas"},"code":409}
```
This is a known issue in kubernetes, as outlined in this issue (in which other users specifically mention airflow): https://github.com/kubernetes/kubernetes/issues/67761
While this can be handled by task retries, I would like to discuss whether it's worth handling this error within the KubernetesPodOperator itself. We could probably check for the error in the pod launcher and automatically retry a few times in this case.
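As an illustration of that idea, the pod creation call could be retried a few times when the API server answers with a 409 conflict (a sketch only; attempt counts and delays are arbitrary):
```python
import time

from kubernetes.client.rest import ApiException


def create_pod_with_retries(api_client, pod, namespace, attempts=5, delay=1.0):
    """Retry create_namespaced_pod when the GKE resource-quota update races (HTTP 409)."""
    for attempt in range(attempts):
        try:
            return api_client.create_namespaced_pod(body=pod, namespace=namespace)
        except ApiException as exc:
            if exc.status != 409 or attempt == attempts - 1:
                raise
            time.sleep(delay)
```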
Let me know if you think this is something worth fixing on our end. If so, please assign this issue to me and I can put up a PR in the next week or so.
If you think that this issue is best handled via task retries or fixed upstream in kubernetes, feel free to close this.
**What you expected to happen**:
I would expect that Airflow could launch many pods at the same time.
**How to reproduce it**:
Create a DAG which runs 30+ kubernetespodoperator tasks at the same time. Likely a few will fail.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15097 | https://github.com/apache/airflow/pull/15137 | 8567420678d2a3e320bce3f3381ede43c7747a27 | 18066703832319968ee3d6122907746fdfda5d4c | 2021-03-30T17:40:00Z | python | 2021-04-06T23:20:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,088 | ["airflow/providers/google/cloud/hooks/bigquery_dts.py", "airflow/providers/google/cloud/operators/bigquery_dts.py"] | GCP BigQuery Data Transfer Run Issue | **Apache Airflow version**: composer-1.15.1-airflow-1.10.14
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: Google Composer
**What happened**:
After a `TransferConfig` is created successfully by operator `BigQueryCreateDataTransferOperator`, and also confirmed in GCP console, the created Resource name is:
```
projects/<project-id>/locations/europe/transferConfigs/<transfer-config-id>
```
Then I use operator `BigQueryDataTransferServiceStartTransferRunsOperator` to run the transfer, I got this error:
```
Traceback (most recent call last)
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_tas
result = task_copy.execute(context=context
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/bigquery_dts.py", line 290, in execut
metadata=self.metadata
File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 425, in inner_wrappe
return func(self, *args, **kwargs
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/bigquery_dts.py", line 235, in start_manual_transfer_run
metadata=metadata or ()
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery_datatransfer_v1/services/data_transfer_service/client.py", line 1110, in start_manual_transfer_run
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,
File "/opt/python3.6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call_
return wrapped_func(*args, **kwargs
File "/opt/python3.6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 75, in error_remapped_callabl
six.raise_from(exceptions.from_grpc_error(exc), exc
File "<string>", line 3, in raise_fro
google.api_core.exceptions.NotFound: 404 Requested entity was not found
```
**What you expected to happen**:
`BigQueryDataTransferServiceStartTransferRunsOperator` should run the data transfer job.
**How to reproduce it**:
1. Create a `cross_region_copy` TransferConfig with operator`BigQueryCreateDataTransferOperator`
2. Run the job with operator `BigQueryDataTransferServiceStartTransferRunsOperator`
```python
create_transfer = BigQueryCreateDataTransferOperator(
task_id=f'create_{ds}_transfer',
transfer_config={
'destination_dataset_id': ds,
'display_name': f'Copy {ds}',
'data_source_id': 'cross_region_copy',
'schedule_options': {'disable_auto_scheduling': True},
'params': {
'source_project_id': source_project,
'source_dataset_id': ds,
'overwrite_destination_table': True
},
},
project_id=target_project,
)
transfer_config_id = f"{{{{ task_instance.xcom_pull('create_{ds}_transfer', key='transfer_config_id') }}}}"
start_transfer = BigQueryDataTransferServiceStartTransferRunsOperator(
task_id=f'start_{ds}_transfer',
transfer_config_id=transfer_config_id,
requested_run_time={"seconds": int(time.time() + 60)},
project_id=target_project,
)
run_id = f"{{{{ task_instance.xcom_pull('start_{ds}_transfer', key='run_id') }}}}"
```
**Anything else we need to know**:
So I went to [Google's API reference page](https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/startManualRuns) to run some tests. When I use the parent parameter `projects/{projectId}/transferConfigs/{configId}`, it threw the same error. But it works when I use `projects/{projectId}/locations/{locationId}/transferConfigs/{configId}`.
I guess the piece of code that causes this issue is here in the hook. Why does it use `projects/{projectId}/transferConfigs/{configId}` instead of `projects/{projectId}/locations/{locationId}/transferConfigs/{configId}`?
https://github.com/apache/airflow/blob/def961512904443db90e0a980c43dc4d8f8328d5/airflow/providers/google/cloud/hooks/bigquery_dts.py#L226-L232
| https://github.com/apache/airflow/issues/15088 | https://github.com/apache/airflow/pull/20221 | bc76126a9f6172a360fd4301eeb82372d000f70a | 98514cc1599751d7611b3180c60887da0a25ff5e | 2021-03-30T13:18:34Z | python | 2021-12-12T23:40:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,071 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/scheduler_command.py", "chart/templates/scheduler/scheduler-deployment.yaml", "tests/cli/commands/test_scheduler_command.py"] | Run serve_logs process as part of scheduler command | <!--
-->
**Description**
- The `airflow serve_logs` command has been removed from the CLI as of 2.0.0
- When using `CeleryExecutor`, the `airflow celery worker` command runs the `serve_logs` process in the background.
- We should do the same with `airflow scheduler` command when using `LocalExecutor` or `SequentialExecutor`
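A minimal sketch of that idea, assuming the same `serve_logs` helper that the Celery worker command already uses (names here are illustrative):
```python
from multiprocessing import Process

from airflow.utils.serve_logs import serve_logs  # assumption: same helper the celery worker runs


def run_scheduler_job_with_serve_logs(job):
    """Start the log-serving process alongside the scheduler job and clean it up afterwards."""
    sub_proc = Process(target=serve_logs)
    sub_proc.start()
    try:
        job.run()
    finally:
        sub_proc.terminate()
```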
**Use case / motivation**
- This will allow for viewing task logs in the UI when using `LocalExecutor` or `SequentialExecutor` without elasticsearch configured.
**Are you willing to submit a PR?**
Yes. Working with @dimberman .
**Related Issues**
- https://github.com/apache/airflow/issues/14222
- https://github.com/apache/airflow/issues/13331
| https://github.com/apache/airflow/issues/15071 | https://github.com/apache/airflow/pull/15557 | 053d903816464f699876109b50390636bf617eff | 414bb20fad6c6a50c5a209f6d81f5ca3d679b083 | 2021-03-29T17:46:46Z | python | 2021-04-29T15:06:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,059 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/user_schema.py", "tests/api_connexion/endpoints/test_user_endpoint.py", "tests/api_connexion/schemas/test_user_schema.py"] | Remove 'user_id', 'role_id' from User and Role in OpenAPI schema | Would be good to remove the 'id' of both User and Role schemas from what is dumped in REST API endpoints. ID of User and Role table are sensitive data that would be fine to hide from the endpoints
| https://github.com/apache/airflow/issues/15059 | https://github.com/apache/airflow/pull/15117 | b62ca0ad5d8550a72257ce59c8946e7f134ed70b | 7087541a56faafd7aa4b9bf9f94eb6b75eed6851 | 2021-03-28T15:40:00Z | python | 2021-04-07T13:54:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,023 | ["airflow/www/api/experimental/endpoints.py", "airflow/www/templates/airflow/trigger.html", "airflow/www/views.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/www/api/experimental/test_endpoints.py", "tests/www/views/test_views_trigger_dag.py"] | DAG task execution and API fails if dag_run.conf is provided with an array or string (instead of dict) | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Tried both pip install and k8s image
**Environment**: Dev Workstation of K8s execution - both the same
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04 LTS
- **Others**: Python 3.6
**What happened**:
We use Airflow 1.10.14 currently in production and have a couple of DAGs defined today which digest a batch call. We implemented the batch (currently) in a way that the jobs are provided as dag_run.conf as an array of dicts, e.g. "[ { "job": "1" }, { "job": "2" } ]".
Trying to upgrade to Airflow 2.0.1 we see that such calls are still possible to submit but all further actions are failing:
- It is not possible to query status via REST API, generates a HTTP 500
- DAG starts but all tasks fail.
- Logs can not be displayed (actually there are none produced on the file system)
- Error logging is a bit complex, Celery worker does not provide meaningful logs on console nor produces log files, running a scheduler as SequentialExecutor reveals at least one meaningful sack trace as below
- (probably a couple of other internal logic is also failing
- Note that the dag_run.conf can be seen as submitted (so is correctly received) in Browse--> DAG Runs menu
As a regression check, the same DAG works when passing a dag_run.conf of "{ "batch": [ { "job": "1" }, { "job": "2" } ] }" as well as "{}".
Example (simple) DAG to reproduce:
```
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago
from datetime import timedelta
dag = DAG(
'test1',
description='My first DAG',
default_args={
'owner': 'jscheffl',
'email': ['***@***.de'],
'email_on_failure': True,
'email_on_retry': True,
'retries': 5,
'retry_delay': timedelta(minutes=5),
},
start_date=days_ago(2)
)
hello_world = BashOperator(
task_id='hello_world',
bash_command='echo hello world',
dag=dag,
)
```
Stack trace from SequentialExecutor:
```
Traceback (most recent call last):
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 225, in task_run
ti.init_run_context(raw=args.raw)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1987, in init_run_context
self._set_context(self)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 54, in _set_context
set_context(self.log, context)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 174, in set_context
handler.set_context(value)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 56, in set_context
local_loc = self._init_file(ti)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 245, in _init_file
relative_path = self._render_filename(ti, ti.try_number)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 77, in _render_filename
jinja_context = ti.get_template_context()
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1606, in get_template_context
self.overwrite_params_with_dag_run_conf(params=params, dag_run=dag_run)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1743, in overwrite_params_with_dag_run_conf
params.update(dag_run.conf)
ValueError: dictionary update sequence element #0 has length 4; 2 is required
{sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'test1', 'hello_world', '2021-03-25T22:22:36.732899+00:00', '--local', '--pool', 'default_pool', '--subdir', '/home/jscheffl/Programmieren/Python/Airflow/airflow-home/dags/test1.py']' returned non-zero exit status 1..
[2021-03-25 23:42:47,209] {scheduler_job.py:1199} INFO - Executor reports execution of test1.hello_world execution_date=2021-03-25 22:22:36.732899+00:00 exited with status failed for try_number 5
```
**What you expected to happen**:
- EITHER the submission of arrays as dag_run.conf is supported like in 1.10.14
- OR I would expect the submission to be rejected by validation if array values are not supported by Airflow (it was at least working in 1.10); a sketch of such a check is shown below.
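For illustration, the kind of validation asked for in the second bullet could be as simple as a guard before the run is created (a sketch, not the actual Airflow code):
```python
def validate_run_conf(run_conf):
    """Reject dag_run.conf payloads that are not JSON objects (dicts)."""
    if run_conf is not None and not isinstance(run_conf, dict):
        raise ValueError(
            f"dag_run.conf must be a JSON object (dict), got {type(run_conf).__name__}"
        )
```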
**How to reproduce it**: See DAG code above, reproduce the error e.g. by triggering with "[ "test" ]" as dag_run.conf
**Anything else we need to know**: I assume not :-) | https://github.com/apache/airflow/issues/15023 | https://github.com/apache/airflow/pull/15057 | eeb97cff9c2cef46f2eb9a603ccf7e1ccf804863 | 01c9818405107271ee8341c72b3d2d1e48574e08 | 2021-03-25T22:50:15Z | python | 2021-06-22T12:31:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,019 | ["airflow/ui/src/api/index.ts", "airflow/ui/src/components/TriggerRunModal.tsx", "airflow/ui/src/interfaces/api.ts", "airflow/ui/src/utils/memo.ts", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/Pipelines.test.tsx"] | Establish mutation patterns via the API | https://github.com/apache/airflow/issues/15019 | https://github.com/apache/airflow/pull/15068 | 794922649982b2a6c095f7fa6be4e5d6a6d9f496 | 9ca49b69113bb2a1eaa0f8cec2b5f8598efc19ea | 2021-03-25T21:24:01Z | python | 2021-03-30T00:32:11Z |
|
closed | apache/airflow | https://github.com/apache/airflow | 15,018 | ["airflow/ui/package.json", "airflow/ui/src/api/index.ts", "airflow/ui/src/components/Table.tsx", "airflow/ui/src/interfaces/react-table-config.d.ts", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/test/Pipelines.test.tsx", "airflow/ui/yarn.lock"] | Build out custom Table components | https://github.com/apache/airflow/issues/15018 | https://github.com/apache/airflow/pull/15805 | 65519ab83ddf4bd6fc30c435b5bfccefcb14d596 | 2c6b003fbe619d5d736cf97f20a94a3451e1a14a | 2021-03-25T21:22:50Z | python | 2021-05-27T20:23:02Z |
|
closed | apache/airflow | https://github.com/apache/airflow | 15,005 | ["airflow/providers/google/cloud/transfers/gcs_to_local.py", "tests/providers/google/cloud/transfers/test_gcs_to_local.py"] | `GCSToLocalFilesystemOperator` unnecessarily downloads objects when it checks object size | `GCSToLocalFilesystemOperator` in `airflow/providers/google/cloud/transfers/gcs_to_local.py` checks the file size if `store_to_xcom_key` is `True`.
https://github.com/apache/airflow/blob/b40dffa08547b610162f8cacfa75847f3c4ca364/airflow/providers/google/cloud/transfers/gcs_to_local.py#L137-L142
The way it checks the size is to download the object as `bytes` and then measure it, which downloads the object unnecessarily. `google.cloud.storage.blob.Blob` itself already has a `size` property ([documentation reference](https://googleapis.dev/python/storage/1.30.0/blobs.html#google.cloud.storage.blob.Blob.size)), and it should be used instead.
In extreme cases, if the object is large, this adds an unnecessary burden on the instance's resources.
A new method, `object_size()`, can be added to `GCSHook`, then this can be addressed in `GCSToLocalFilesystemOperator`. | https://github.com/apache/airflow/issues/15005 | https://github.com/apache/airflow/pull/16171 | 19eb7ef95741e10d712845bc737b86615cbb8e7a | e1137523d4e9cb5d5cfe8584963620677a4ad789 | 2021-03-25T13:07:02Z | python | 2021-05-30T22:48:38Z |
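A sketch of what such a helper could look like, written here as a standalone function against the `google-cloud-storage` client (illustrative only; it relies on the `Blob.size` metadata rather than downloading the payload):
```python
from google.cloud import storage


def gcs_object_size(client: storage.Client, bucket_name: str, object_name: str) -> int:
    """Return an object's size in bytes from its metadata, without downloading it."""
    blob = client.bucket(bucket_name).get_blob(object_name)  # metadata-only request
    if blob is None:
        raise ValueError(f"Object {object_name!r} not found in bucket {bucket_name!r}")
    return blob.size
```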
closed | apache/airflow | https://github.com/apache/airflow | 15,001 | ["airflow/providers/amazon/aws/sensors/s3_prefix.py", "tests/providers/amazon/aws/sensors/test_s3_prefix.py"] | S3MultipleKeysSensor operator | **Description**
Currently we have the S3KeySensor, which polls for a given prefix in a bucket. At times there is a need to poll for multiple prefixes in a given bucket in one go. To support that, I propose an S3MultipleKeysSensor, which would poll for multiple prefixes in the given bucket in one go.
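A rough sketch of how such a sensor could be built on top of the existing `S3Hook`; the class name and arguments below are illustrative, not a final API:
```python
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.sensors.base import BaseSensorOperator


class S3MultiplePrefixSensor(BaseSensorOperator):  # hypothetical name
    def __init__(self, *, bucket_name, prefixes, delimiter="/", aws_conn_id="aws_default", **kwargs):
        super().__init__(**kwargs)
        self.bucket_name = bucket_name
        self.prefixes = prefixes
        self.delimiter = delimiter
        self.aws_conn_id = aws_conn_id

    def poke(self, context):
        hook = S3Hook(aws_conn_id=self.aws_conn_id)
        # Succeed only when every requested prefix exists in the bucket.
        return all(
            hook.check_for_prefix(prefix=p, delimiter=self.delimiter, bucket_name=self.bucket_name)
            for p in self.prefixes
        )
```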
**Use case / motivation**
To make it easier for users to poll multiple S3 prefixes in a given bucket.
**Are you willing to submit a PR?**
Yes, I have an implementation ready for that.
**Related Issues**
NA
| https://github.com/apache/airflow/issues/15001 | https://github.com/apache/airflow/pull/18807 | ec31b2049e7c3b9f9694913031553f2d7eb66265 | 176165de3b297c0ed7d2b60cf6b4c37fc7a2337f | 2021-03-25T07:24:52Z | python | 2021-10-11T21:15:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,000 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | When an ECS Task fails to start, ECS Operator raises a CloudWatch exception | <!--
-->
**Apache Airflow version**: 1.10.13
**Environment**:
- **Cloud provider or hardware configuration**:AWS
- **OS** (e.g. from /etc/os-release): Amazon Linux 2
- **Kernel** (e.g. `uname -a`): 4.14.209-160.339.amzn2.x86_64
- **Install tools**: pip
- **Others**:
**What happened**:
When an ECS Task exits with `stopCode: TaskFailedToStart`, the ECS Operator will exit with a ResourceNotFoundException for the GetLogEvents operation. This is because the task has failed to start, so no log is created.
```
[2021-03-14 02:32:49,792] {ecs_operator.py:147} INFO - ECS Task started: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'PENDING', 'networkInterfaces': [], 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'RUNNING', 'group': 'family:task', 'lastStatus': 'PENDING', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'startedBy': 'airflow', 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 1}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1471', 'date': 'Sun, 14 Mar 2021 02:32:48 GMT'}, 'RetryAttempts': 0}}
[2021-03-14 02:34:15,022] {ecs_operator.py:168} INFO - ECS Task stopped, check status: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'connectivity': 'CONNECTED', 'connectivityAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'STOPPED', 'reason': 'CannotPullContainerError: failed to register layer: Error processing tar file(exit status 1): write /var/lib/xxxx: no space left on device', 'networkInterfaces': [], 'healthStatus': 'UNKNOWN', 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'STOPPED', 'executionStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 810000, tzinfo=tzlocal()), 'group': 'family:task', 'healthStatus': 'UNKNOWN', 'lastStatus': 'STOPPED', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'pullStartedAt': datetime.datetime(2021, 3, 14, 2, 32, 51, 68000, tzinfo=tzlocal()), 'pullStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 13, 584000, tzinfo=tzlocal()), 'startedBy': 'airflow', 'stopCode': 'TaskFailedToStart', 'stoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'stoppedReason': 'Task failed to start', 'stoppingAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 2}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1988', 'date': 'Sun, 14 Mar 2021 02:34:14 GMT'}, 'RetryAttempts': 0}}
[2021-03-14 02:34:15,024] {ecs_operator.py:172} INFO - ECS Task logs output:
[2021-03-14 02:34:15,111] {credentials.py:1094} INFO - Found credentials in environment variables.
[2021-03-14 02:34:15,416] {taskinstance.py:1150} ERROR - An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 152, in execute
self._check_success_task()
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 175, in _check_success_task
for event in self.get_logs_hook().get_log_events(self.awslogs_group, stream_name):
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/aws_logs_hook.py", line 85, in get_log_events
**token_arg)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist.
```
**What you expected to happen**:
ResourceNotFoundException is misleading because it feels like a problem with CloudWatchLogs. Expect AirflowException to indicate that the task has failed.
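For illustration, the operator could inspect the stopped-task description before trying to read CloudWatch logs and raise a clearer error; the field names below follow the `describe_tasks` response shown above (a sketch, not the actual operator code):
```python
from airflow.exceptions import AirflowException


def check_task_started(task: dict) -> None:
    """Raise a clear error when the ECS task never started (so no log stream exists)."""
    if task.get("stopCode") == "TaskFailedToStart":
        reasons = "; ".join(c.get("reason", "unknown") for c in task.get("containers", []))
        raise AirflowException(f"ECS task failed to start: {reasons}")
```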
**How to reproduce it**:
This can be reproduced by running an ECS Task that fails to start, for example by specifying a non-existent entry_point.
**Anything else we need to know**:
I suspect Issue #11663 has the same problem, i.e. it's not a CloudWatch issue, but a failure to start an ECS Task.
| https://github.com/apache/airflow/issues/15000 | https://github.com/apache/airflow/pull/18733 | a192b4afbd497fdff508b2a06ec68cd5ca97c998 | 767a4f5207f8fc6c3d8072fa780d84460d41fc7a | 2021-03-25T05:55:31Z | python | 2021-10-05T21:34:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,991 | ["scripts/ci/libraries/_md5sum.sh", "scripts/ci/libraries/_verify_image.sh", "scripts/docker/compile_www_assets.sh"] | Static file not being loaded in web server in docker-compose | Apache Airflow version: apache/airflow:master-python3.8
Environment:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release): Mac OS 10.16.5
Kernel (e.g. uname -a): Darwin Kernel Version 19.6.0
Browser:
Google Chrome Version 89.0.4389.90
What happened:
I am having an issue with running `apache/airflow:master-python3.8` with docker-compose.
The log of the webserver says `Please make sure to build the frontend in static/ directory and restart the server` when it is running. Due to static files not being loaded, login and DAGs are not working.
What you expected to happen:
static files being loaded correctly.
How to reproduce it:
My docker-compose is based on the official example.
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Anything else we need to know:
It used to work until 2 days ago when the new docker image was released. Login prompt looks like this.

| https://github.com/apache/airflow/issues/14991 | https://github.com/apache/airflow/pull/14995 | 775ee51d0e58aeab5d29683dd2ff21b8c9057095 | 5dc634bf74bbec68bbe1c7b6944d0a9efd85181d | 2021-03-24T20:54:58Z | python | 2021-03-25T13:04:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,989 | [".github/workflows/ci.yml", "docs/exts/docs_build/fetch_inventories.py", "scripts/ci/docs/ci_docs.sh", "scripts/ci/docs/ci_docs_prepare.sh"] | Make Docs builds fallback in case external docs sources are missing | Every now and then our docs builds start to fail because of external dependency (latest example here #14985). And while we are doing caching now of that information, it does not help when the initial retrieval fails. This information does not change often but with the number of dependencies we have it will continue to fail regularly simply because many of those depenencies are not very reliable - they are just a web page hosted somewhere. They are nowhere near the stabilty of even PyPI or Apt sources and we have no mirroring in case of problem.
Maybe we could
a) see if we can use some kind of mirroring scheme (do those sites have mirrors?)
b) if not, simply write a simple script that will dump the cached content for those to S3, refresh it in the CI scheduled (nightly) master builds ad have a fallback mechanism to download that from there in case of any problems in CI?
| https://github.com/apache/airflow/issues/14989 | https://github.com/apache/airflow/pull/15109 | 2ac4638b7e93d5144dd46f2c09fb982c374db79e | 8cc8d11fb87d0ad5b3b80907874f695a77533bfa | 2021-03-24T18:15:48Z | python | 2021-04-02T22:11:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,985 | ["docs/exts/docs_build/fetch_inventories.py", "docs/exts/docs_build/third_party_inventories.py"] | Docs build are failing with `requests.exceptions.TooManyRedirects` error | Our docs build is failing with the following error on PRs and locally, example https://github.com/apache/airflow/pull/14983/checks?check_run_id=2185523525#step:4:282:
```
Fetched inventory: https://googleapis.dev/python/videointelligence/latest/objects.inv
Traceback (most recent call last):
File "/opt/airflow/docs/build_docs.py", line 278, in <module>
main()
File "/opt/airflow/docs/build_docs.py", line 218, in main
priority_packages = fetch_inventories()
File "/opt/airflow/docs/exts/docs_build/fetch_inventories.py", line 126, in fetch_inventories
failed, success = list(failed), list(failed)
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 586, in result_iterator
yield fs.pop().result()
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/airflow/docs/exts/docs_build/fetch_inventories.py", line 53, in _fetch_file
response = session.get(url, allow_redirects=True, stream=True)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 677, in send
history = [resp for resp in gen]
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 677, in <listcomp>
history = [resp for resp in gen]
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 166, in resolve_redirects
raise TooManyRedirects('Exceeded {} redirects.'.format(self.max_redirects), response=resp)
requests.exceptions.TooManyRedirects: Exceeded 30 redirects.
###########################################################################################
```
To reproduce locally run:
```
./breeze build-docs -- --package-filter apache-airflow
``` | https://github.com/apache/airflow/issues/14985 | https://github.com/apache/airflow/pull/14986 | a2b285825323da5a72dc0201ad6dc7d258771d0d | f6a1774555341f6a82c7cae1ce65903676bde61a | 2021-03-24T16:31:53Z | python | 2021-03-24T16:57:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,959 | ["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"] | Support all terminus task states for Docker Swarm Operator | **Apache Airflow version**: latest
**What happened**:
There are more terminal task states than the ones we currently check in the Docker Swarm Operator. This makes the operator run indefinitely when the service goes into one of these states.
**What you expected to happen**:
The operator should terminate.
**How to reproduce it**:
Run an Airflow task via the Docker Swarm operator and have it return a failed status code.
**Anything else we need to know**:
So as a fix I have added the complete list of task states from the Docker reference (https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/).
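For reference, the terminal states from the linked Docker documentation can be captured as a simple set the operator checks against (a sketch; the attribute names used in the real operator may differ):
```python
# Terminal task states from the Docker swarm task lifecycle documentation.
TERMINAL_TASK_STATES = {"complete", "failed", "shutdown", "rejected", "orphaned", "remove"}


def service_terminated(task_states):
    """Return True when every task of the service has reached a terminal state."""
    return all(state in TERMINAL_TASK_STATES for state in task_states)
```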
We would like to send this patch back upstream to Apache Airflow.
| https://github.com/apache/airflow/issues/14959 | https://github.com/apache/airflow/pull/14960 | 6b78394617c7e699dda1acf42e36161d2fc29925 | ab477176998090e8fb94d6f0e6bf056bad2da441 | 2021-03-23T15:44:21Z | python | 2021-04-07T12:39:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,957 | [".github/workflows/ci.yml", ".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "breeze-complete", "scripts/ci/selective_ci_checks.sh", "scripts/ci/static_checks/eslint.sh"] | Run selective CI pipeline for UI-only PRs | For PRs that only touch files in `airflow/ui/` we'd like to run a selective set of CI actions. We only need linting and UI tests.
Additionally, this update should pull the test runs out of the pre-commit. | https://github.com/apache/airflow/issues/14957 | https://github.com/apache/airflow/pull/15009 | a2d99293c9f5bdf1777fed91f1c48230111f53ac | 7417f81d36ad02c2a9d7feb9b9f881610f50ceba | 2021-03-23T14:32:41Z | python | 2021-03-31T22:10:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,945 | ["docs/apache-airflow/concepts.rst", "tests/cluster_policies/__init__.py"] | Circular import issues with airflow_local_settings.py | Our docs on dag policy demonstrate usage with type annotations:
https://airflow.apache.org/docs/apache-airflow/stable/concepts.html?highlight=dag_policy#dag-level-cluster-policy
```python
def dag_policy(dag: DAG):
    """Ensure that DAG has at least one tag"""
    if not dag.tags:
        raise AirflowClusterPolicyViolation(
            f"DAG {dag.dag_id} has no tags. At least one tag required. File path: {dag.filepath}"
        )
```
The problem is, by the time you import DAG with `from airflow import DAG`, airflow will have already loaded up the `settings.py` file (where processing of airflow_local_settings.py is done), and it seems nothing in your airflow local settings file gets imported.
So none of these examples would actually work.
To test this you can add a local settings file containing this:
```
from airflow import DAG
raise Exception('hello')
```
Now run `airflow dags list` and observe that no error will be raised.
Remove the `DAG` import. Now you'll see the error.
Any suggestions how to handle this?
There are a couple conf settings imported from settings. We could move these to `airflow/__init__.py`. But less straightforward would be what to do about `settings.initialize()`. Perhaps we could make it so that initialized is called within settings?
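One way to keep the type annotation without importing `DAG` at module load time is the standard `TYPE_CHECKING` guard, with any runtime Airflow imports deferred into the function; this is only an illustration of a possible workaround, not a statement of how Airflow resolves the issue:
```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:  # evaluated by type checkers only, never at runtime
    from airflow.models.dag import DAG


def dag_policy(dag: DAG):
    """Ensure that DAG has at least one tag."""
    # Import lazily so this module does not pull in the airflow package at load time.
    from airflow.exceptions import AirflowClusterPolicyViolation

    if not dag.tags:
        raise AirflowClusterPolicyViolation(f"DAG {dag.dag_id} has no tags.")
```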
| https://github.com/apache/airflow/issues/14945 | https://github.com/apache/airflow/pull/14973 | f6a1774555341f6a82c7cae1ce65903676bde61a | eb91bdc1bcd90519e9ae16607f9b0e82b33590f8 | 2021-03-23T03:53:21Z | python | 2021-03-24T17:05:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,933 | ["airflow/task/task_runner/__init__.py"] | A Small Exception Message Typo | I guess, line 59 in file "[airflow/airflow/task/task_runner/__init__.py](https://github.com/apache/airflow/blob/5a864f0e456348e0a871cf4678e1ffeec541c52d/airflow/task/task_runner/__init__.py#L59)" should change like:
**OLD**: f'The task runner could not be loaded. Please check "executor" key in "core" section.'
**NEW**: f'The task runner could not be loaded. Please check "task_runner" key in "core" section.'
So, "executor" should be replaced with "task_runner". | https://github.com/apache/airflow/issues/14933 | https://github.com/apache/airflow/pull/15067 | 6415489390c5ec3679f8d6684c88c1dd74414951 | 794922649982b2a6c095f7fa6be4e5d6a6d9f496 | 2021-03-22T12:22:52Z | python | 2021-03-29T23:20:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,924 | ["airflow/utils/cli.py", "airflow/utils/log/file_processor_handler.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/non_caching_file_handler.py"] | Scheduler Memory Leak in Airflow 2.0.1 | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.4
**Environment**: Dev
- **OS** (e.g. from /etc/os-release): RHEL7
**What happened**:
After running fine for some time my airflow tasks got stuck in scheduled state with below error in Task Instance Details:
"All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless: - The scheduler is down or under heavy load If this task instance does not start soon please contact your Airflow administrator for assistance."
**What you expected to happen**:
I restarted the scheduler then it started working fine. When i checked my metrics i realized the scheduler has a memory leak and over past 4 days it has reached up to 6GB of memory utilization
In version >2.0 we don't even have the run_duration config option to restart scheduler periodically to avoid this issue until it is resolved.
**How to reproduce it**:
I saw this issue in multiple dev instances of mine all running Airflow 2.0.1 on kubernetes with KubernetesExecutor.
Below are the configs that I changed from the default config.
max_active_dag_runs_per_dag=32
parallelism=64
dag_concurrency=32
sql_Alchemy_pool_size=50
sql_Alchemy_max_overflow=30
**Anything else we need to know**:
The scheduler memory leak occurs consistently in all instances I have been running. The memory utilization keeps growing for the scheduler.
| https://github.com/apache/airflow/issues/14924 | https://github.com/apache/airflow/pull/18054 | 6acb9e1ac1dd7705d9bfcfd9810451dbb549af97 | 43f595fe1b8cd6f325d8535c03ee219edbf4a559 | 2021-03-21T15:35:14Z | python | 2021-09-09T10:50:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,897 | ["airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "airflow/providers/cncf/kubernetes/sensors/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py", "tests/providers/cncf/kubernetes/sensors/test_spark_kubernetes.py"] | Add ability to specify API group and version for Spark operators | **Description**
In Spark Kubernetes operator we have hard-coded API group and version for spark application k8s object:
https://github.com/apache/airflow/blob/b23a4cd6631ba3346787b285a7bdafd5e71b43b0/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py#L63-L65
I think this is not ideal, because custom or closed-source Spark Kubernetes operators can use a different API group and version (like `sparkoperator.mydomain.com` instead of `sparkoperator.k8s.io`).
**Use case / motivation**
A user has a custom or commercial Spark Kubernetes operator installed and wants to launch a DAG with `SparkKubernetesOperator` from the providers package. The user cannot do that without this feature being implemented.
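For illustration only, the requested behaviour could look like this from the DAG author's side; `api_group` and `api_version` are the proposed (not yet existing) arguments:

```python
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator

submit = SparkKubernetesOperator(
    task_id="submit_spark_app",
    namespace="spark-jobs",                   # hypothetical namespace
    application_file="spark_app.yaml",        # hypothetical application manifest
    api_group="sparkoperator.mydomain.com",   # proposed argument: custom API group
    api_version="v1beta2",                    # proposed argument: CRD version used by that operator
)
```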
**Are you willing to submit a PR?**
Yes
**Related Issues**
No
| https://github.com/apache/airflow/issues/14897 | https://github.com/apache/airflow/pull/14898 | 5539069ea5f70482fed3735640a1408e91fef4f2 | 00453dc4a2d41da6c46e73cd66cac88e7556de71 | 2021-03-19T15:33:45Z | python | 2021-03-20T19:43:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,888 | ["airflow/providers/amazon/aws/transfers/s3_to_redshift.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift.py"] | S3ToRedshiftOperator is not transaction safe for truncate | **Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Amazon Linux 2
**What happened**:
The TRUNCATE operation comes with fine print in Redshift: it commits the transaction in which it is run.
See https://docs.aws.amazon.com/redshift/latest/dg/r_TRUNCATE.html
> However, be aware that TRUNCATE commits the transaction in which it is run.
and
> The TRUNCATE command commits the transaction in which it is run; therefore, you can't roll back a TRUNCATE operation, and a TRUNCATE command may commit other operations when it commits itself.
Currently with truncate=True, the operator would generate a statement like:
```sql
BEGIN;
TRUNCATE TABLE schema.table; -- this commits the transaction
--- the table is now empty for any readers until the end of the copy
COPY ....
COMMIT;
```
**What you expected to happen**:
Replacing TRUNCATE with a DELETE operation would solve the problem. In a typical database DELETE is not considered a fast operation, but with Redshift a 1B+ row table is deleted in less than 5 seconds on a 2-node ra3.xlplus (not counting vacuum or analyze), and a vacuum of the emptied table takes less than 3 minutes.
```sql
BEGIN;
DELETE FROM schema.table;
COPY ....
COMMIT;
```
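Until the operator is changed, one possible workaround (a sketch, assuming a Redshift connection reachable through `PostgresHook` and a hand-written COPY statement) is to run DELETE and COPY through the hook in a single transaction:

```python
from airflow.providers.postgres.hooks.postgres import PostgresHook

def load_table_atomically(schema: str, table: str, copy_sql: str) -> None:
    """Run DELETE + COPY in one transaction so readers never see an empty table."""
    hook = PostgresHook(postgres_conn_id="redshift_default")  # hypothetical connection id
    hook.run(
        [
            f"DELETE FROM {schema}.{table};",
            copy_sql,  # e.g. a full "COPY schema.table FROM 's3://...' ..." statement
        ],
        autocommit=False,  # with autocommit off, the hook commits once after both statements
    )
```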
It should be mentioned in the documentation that a DELETE is done for the reason above, and that VACUUM and ANALYZE operations are left to the user to manage.
**How often does this problem occur? Once? Every time etc?**
Always.
| https://github.com/apache/airflow/issues/14888 | https://github.com/apache/airflow/pull/17117 | 32582b5bf1432e7c7603b959a675cf7edd76c9e6 | f44d7bd9cfe00b1409db78c2a644516b0ab003e9 | 2021-03-19T00:33:07Z | python | 2021-07-21T16:33:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,880 | ["airflow/providers/slack/operators/slack.py", "tests/providers/slack/operators/test_slack.py"] | SlackAPIFileOperator is broken | **Apache Airflow version**: 2.0.1
**Environment**: Docker
- **Cloud provider or hardware configuration**: Local file system
- **OS** (e.g. from /etc/os-release): Arch Linux
- **Kernel** (e.g. `uname -a`): 5.11.5-arch1-1
**What happened**:
I tried to post a file from a long Python string to a Slack channel through the SlackAPIFileOperator.
I defined the operator this way:
```
SlackAPIFileOperator(
task_id="{}-notifier".format(self.task_id),
channel="#alerts-metrics",
token=MY_TOKEN,
initial_comment=":warning: alert",
filename="{{ ds }}.csv",
filetype="csv",
content=df.to_csv()
)
```
Task failed with the following error:
```
DEBUG - Sending a request - url: https://www.slack.com/api/files.upload, query_params: {}, body_params: {}, files: {}, json_body: {'channels': '#alerts-metrics', 'content': '<a long pandas.DataFrame.to_csv output>', 'filename': '{{ ds }}.csv', 'filetype': 'csv', 'initial_comment': ':warning: alert'}, headers: {'Content-Type': 'application/json;charset=utf-8', 'Authorization': '(redacted)', 'User-Agent': 'Python/3.6.12 slackclient/3.3.2 Linux/5.11.5-arch1-1'}
DEBUG - Received the following response - status: 200, headers: {'date': 'Thu, 18 Mar 2021 13:28:44 GMT', 'server': 'Apache', 'x-xss-protection': '0', 'pragma': 'no-cache', 'cache-control': 'private, no-cache, no-store, must-revalidate', 'access-control-allow-origin': '*', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-slack-req-id': '0ff5fd17ca7e2e8397559b6347b34820', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'access-control-expose-headers': 'x-slack-req-id, retry-after', 'x-slack-backend': 'r', 'x-oauth-scopes': 'incoming-webhook,files:write,chat:write', 'x-accepted-oauth-scopes': 'files:write', 'expires': 'Mon, 26 Jul 1997 05:00:00 GMT', 'vary': 'Accept-Encoding', 'access-control-allow-headers': 'slack-route, x-slack-version-ts, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags', 'content-type': 'application/json; charset=utf-8', 'x-envoy-upstream-service-time': '37', 'x-backend': 'files_normal files_bedrock_normal_with_overflow files_canary_with_overflow files_bedrock_canary_with_overflow files_control_with_overflow files_bedrock_control_with_overflow', 'x-server': 'slack-www-hhvm-files-iad-xg4a', 'x-via': 'envoy-www-iad-xvw3, haproxy-edge-lhr-u1ge', 'x-slack-shared-secret-outcome': 'shared-secret', 'via': 'envoy-www-iad-xvw3', 'connection': 'close', 'transfer-encoding': 'chunked'}, body: {'ok': False, 'error': 'no_file_data'}
[2021-03-18 13:28:43,601] {taskinstance.py:1455} ERROR - The request to the Slack API failed.
The server responded with: {'ok': False, 'error': 'no_file_data'}
```
**What you expected to happen**:
I expect the operator to succeed and see a new message in Slack with a snippet of a downloadable CSV file.
**How to reproduce it**:
Just declare a DAG this way:
```
from airflow import DAG
from airflow.providers.slack.operators.slack import SlackAPIFileOperator
from pendulum import datetime
with DAG(dag_id="SlackFile",
default_args=dict(start_date=datetime(2021, 1, 1), owner='airflow', catchup=False)) as dag:
SlackAPIFileOperator(
task_id="Slack",
token=YOUR_TOKEN,
content="test-content"
)
```
And try to run it.
**Anything else we need to know**:
This seems to be a known issue: https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1616079965083200
I worked around it with the following re-implementation:
```
from typing import Optional, Any
from airflow import AirflowException
from airflow.providers.slack.hooks.slack import SlackHook
from airflow.providers.slack.operators.slack import SlackAPIOperator
from airflow.utils.decorators import apply_defaults
class SlackAPIFileOperator(SlackAPIOperator):
"""
Send a file to a slack channel
Examples:
.. code-block:: python
slack = SlackAPIFileOperator(
task_id="slack_file_upload",
dag=dag,
slack_conn_id="slack",
channel="#general",
initial_comment="Hello World!",
file="hello_world.csv",
filename="hello_world.csv",
filetype="csv",
content="hello,world,csv,file",
)
:param channel: channel in which to sent file on slack name (templated)
:type channel: str
:param initial_comment: message to send to slack. (templated)
:type initial_comment: str
:param file: the file (templated)
:type file: str
:param filename: name of the file (templated)
:type filename: str
:param filetype: slack filetype. (templated)
- see https://api.slack.com/types/file
:type filetype: str
:param content: file content. (templated)
:type content: str
"""
template_fields = ('channel', 'initial_comment', 'file', 'filename', 'filetype', 'content')
ui_color = '#44BEDF'
@apply_defaults
def __init__(
self,
channel: str = '#general',
initial_comment: str = 'No message has been set!',
file: Optional[str] = None,
filename: str = 'default_name.csv',
filetype: str = 'csv',
content: Optional[str] = None,
**kwargs,
) -> None:
if (content is None) and (file is None):
raise AirflowException('At least one of "content" or "file" should be defined.')
self.method = 'files.upload'
self.channel = channel
self.initial_comment = initial_comment
self.file = file
self.filename = filename
self.filetype = filetype
self.content = content
super().__init__(method=self.method, **kwargs)
def execute(self, **kwargs):
slack = SlackHook(token=self.token, slack_conn_id=self.slack_conn_id)
args = dict(
channels=self.channel,
filename=self.filename,
filetype=self.filetype,
initial_comment=self.initial_comment
)
if self.content is not None:
args['content'] = self.content
elif self.file is not None:
args['file'] = self.file  # the file to upload (self.content is None in this branch)
slack.call(self.method, data=args)
def construct_api_call_params(self) -> Any:
pass
```
Maybe it is not the best solution as it does not leverage work from `SlackAPIOperator`.
But at least, it fullfill my use case.
| https://github.com/apache/airflow/issues/14880 | https://github.com/apache/airflow/pull/17247 | 797b515a23136d1f00c6bd938960882772c1c6bd | 07c8ee01512b0cc1c4602e356b7179cfb50a27f4 | 2021-03-18T16:07:03Z | python | 2021-08-01T23:08:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,879 | [".github/boring-cyborg.yml"] | Add area:providers for general providers to boring-cyborg | **Description**
Add label **area-providers** to general providers.
For example the **provider:Google**
https://github.com/apache/airflow/blob/16f43605f3370f20611ba9e08b568ff8a7cd433d/.github/boring-cyborg.yml#L21-L25
**Use case / motivation**
This helps with better issue/pull request monitoring.
**Are you willing to submit a PR?**
Yes
**Related Issues**
| https://github.com/apache/airflow/issues/14879 | https://github.com/apache/airflow/pull/14941 | 01a5d36e6bbc1d9e7afd4e984376301ea378a94a | 3bbf9aea0b54a7cb577eb03f805e0b0566b759c3 | 2021-03-18T14:19:39Z | python | 2021-03-22T23:03:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,864 | ["airflow/exceptions.py", "airflow/utils/task_group.py", "tests/utils/test_task_group.py"] | Using TaskGroup without context manager (Graph view visual bug) | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**What happened**:
When I do not use the context manager for the task group and instead call the add function to add the tasks, those tasks show up on the Graph view.

However, when I click on the task group item on the Graph UI, it will fix the issue. When I close the task group item, the tasks will not be displayed as expected.

**What you expected to happen**:
I expected the tasks inside the task group to not display on the Graph view.

**How to reproduce it**:
Render this DAG in Airflow
```python
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup
from datetime import datetime
with DAG(dag_id="example_task_group", start_date=datetime(2021, 1, 1), tags=["example"], catchup=False) as dag:
start = BashOperator(task_id="start", bash_command='echo 1; sleep 10; echo 2;')
tg = TaskGroup("section_1", tooltip="Tasks for section_1")
task_1 = DummyOperator(task_id="task_1")
task_2 = BashOperator(task_id="task_2", bash_command='echo 1')
task_3 = DummyOperator(task_id="task_3")
tg.add(task_1)
tg.add(task_2)
tg.add(task_3)
``` | https://github.com/apache/airflow/issues/14864 | https://github.com/apache/airflow/pull/23071 | 9caa511387f92c51ab4fc42df06e0a9ba777e115 | 337863fa35bba8463d62e5cf0859f2bb73cf053a | 2021-03-17T22:25:05Z | python | 2022-06-05T13:52:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,857 | ["airflow/providers/mysql/hooks/mysql.py", "tests/providers/mysql/hooks/test_mysql.py"] | MySQL hook uses wrong autocommit calls for mysql-connector-python | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**:
* **Cloud provider or hardware configuration**: WSL2/Docker running `apache/airflow:2.0.1-python3.7` image
* **OS** (e.g. from /etc/os-release): Host: Ubuntu 20.04 LTS, Docker Image: Debian GNU/Linux 10 (buster)
* **Kernel** (e.g. `uname -a`): 5.4.72-microsoft-standard-WSL2 x86_64
* **Others**: Docker version 19.03.8, build afacb8b7f0
**What happened**:
Received a `'bool' object is not callable` error when attempting to use the mysql-connector-python client for a task:
```
[2021-03-17 10:20:13,247] {{taskinstance.py:1455}} ERROR - 'bool' object is not callable
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/mysql/operators/mysql.py", line 74, in execute
hook.run(self.sql, autocommit=self.autocommit, parameters=self.parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/hooks/dbapi.py", line 175, in run
self.set_autocommit(conn, autocommit)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/mysql/hooks/mysql.py", line 55, in set_autocommit
conn.autocommit(autocommit)
```
**What you expected to happen**:
The task to run without complaints.
**How to reproduce it**:
Create and use a MySQL connection with `{"client": "mysql-connector-python"}` specified in the Extra field.
**Anything else we need to know**:
The MySQL hook uses `conn.get_autocommit()` and `conn.autocommit()` to get/set the autocommit flag for both mysqlclient and mysql-connector-python. These methods don't actually exist in mysql-connector-python, as it exposes autocommit as a property rather than a method.
I was able to work around it by adding an `if not callable(conn.autocommit)` condition to detect when mysql-connector-python is being used, but there's probably a more elegant way of detecting which client is in use.
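The workaround described above might look roughly like this (a sketch, not the actual patch that was merged):

```python
def set_autocommit(conn, autocommit: bool) -> None:
    # mysqlclient exposes autocommit() as a method; mysql-connector-python as a property
    if callable(conn.autocommit):
        conn.autocommit(autocommit)
    else:
        conn.autocommit = autocommit

def get_autocommit(conn) -> bool:
    # mysqlclient has get_autocommit(); mysql-connector-python only has the property
    getter = getattr(conn, "get_autocommit", None)
    return getter() if callable(getter) else conn.autocommit
```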
mysql-connector-python documentation:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-autocommit.html
Autocommit calls:
https://github.com/apache/airflow/blob/2a2adb3f94cc165014d746102e12f9620f271391/airflow/providers/mysql/hooks/mysql.py#L55
https://github.com/apache/airflow/blob/2a2adb3f94cc165014d746102e12f9620f271391/airflow/providers/mysql/hooks/mysql.py#L66 | https://github.com/apache/airflow/issues/14857 | https://github.com/apache/airflow/pull/14869 | b8cf46a12fba5701d9ffc0b31aac8375fbca37f9 | 9b428bbbdf4c56f302a1ce84f7c2caf34b81ffa0 | 2021-03-17T17:39:28Z | python | 2021-03-29T03:33:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,839 | ["airflow/stats.py", "tests/core/test_stats.py"] | Enabling Datadog to tag metrics results in AttributeError | **Apache Airflow version**: 2.0.1
**Python version**: 3.8
**Cloud provider or hardware configuration**: AWS
**What happened**:
In order to add tags to [Airflow metrics,](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html), it's required to set `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` and add tags in the `AIRFLOW__METRICS__STATSD_DATADOG_TAGS` variable. We were routing our statsd metrics to Datadog anyway, so this should theoretically have not changed anything other than the addition of any specified tags.
Setting the environment variable `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` (along with the other required statsd connection variables) results in the following error, which causes the process to terminate. This is from the scheduler, but this would apply anywhere that `Stats.timer()` is being called.
```
AttributeError: 'DogStatsd' object has no attribute 'timer'
return Timer(self.dogstatsd.timer(stat, *args, tags=tags, **kwargs))
File "/usr/local/lib/python3.8/site-packages/airflow/stats.py", line 345, in timer
return fn(_self, stat, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/stats.py", line 233, in wrapper
timer = Stats.timer('scheduler.critical_section_duration')
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1538, in _do_scheduling
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1382, in _run_scheduler_loop
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
Traceback (most recent call last):
```
**What you expected to happen**:
The same default Airflow metrics get sent by connecting to Datadog, tagged with the tags specified in `AIRFLOW__METRICS__STATSD_DATADOG_TAGS`.
**What do you think went wrong?**:
There is a bug in the implementation of the `timer` method of `SafeDogStatsdLogger`: https://github.com/apache/airflow/blob/master/airflow/stats.py#L341-L347
`DogStatsd` has no method called `timer`. Instead it should be `timed`: https://datadogpy.readthedocs.io/en/latest/#datadog.dogstatsd.base.DogStatsd.timed
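For reference, `timed` works both as a decorator and as a context manager, so the wrapper presumably just needs to delegate to it. A small standalone illustration (`do_work` is a made-up placeholder):

```python
import time

from datadog.dogstatsd import DogStatsd

statsd = DogStatsd(host="localhost", port=8125)

def do_work():
    time.sleep(0.1)  # placeholder for the real critical section

# `timed` is the real API on DogStatsd; `timer` does not exist
with statsd.timed("scheduler.critical_section_duration", tags=["team:data-platform"]):
    do_work()
```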
**How to reproduce it**:
Set the environment variables (or their respective config values) `AIRFLOW__METRICS__STATSD_ON`, `AIRFLOW__METRICS__STATSD_HOST`, `AIRFLOW__METRICS__STATSD_PORT`, and then set `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` and start up Airflow.
**Anything else we need to know**:
How often does this problem occur? Every time
| https://github.com/apache/airflow/issues/14839 | https://github.com/apache/airflow/pull/15132 | 3a80b7076da8fbee759d9d996bed6e9832718e55 | b7cd2df056ac3ab113d77c5f6b65f02a77337907 | 2021-03-16T18:48:23Z | python | 2021-04-01T13:46:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,830 | ["airflow/api_connexion/endpoints/role_and_permission_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/security/permissions.py", "tests/api_connexion/endpoints/test_role_and_permission_endpoint.py"] | Add Create/Update/Delete API endpoints for Roles | To be able to fully manage the permissions in the UI we will need to be able to modify the roles and the permissions they have.
It probably makes sense to have one PR that adds CUD (Read is already done) endpoints for Roles.
Permissions are not createable via anything but code, so we only need these endpoints for Roles, but not Permissions. | https://github.com/apache/airflow/issues/14830 | https://github.com/apache/airflow/pull/14840 | 266384a63f4693b667f308d49fcbed9a10a41fce | 6706b67fecc00a22c1e1d6658616ed9dd96bbc7b | 2021-03-16T10:58:54Z | python | 2021-04-05T09:22:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,811 | ["setup.cfg"] | Latest SQLAlchemy (1.4) Incompatible with latest sqlalchemy_utils | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Mac OS Big Sur
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip 20.1.1
- **Others**:
**What happened**:
Our CI environment broke due to the release of SQLAlchemy 1.4, which is incompatible with the latest version of sqlalchemy-utils. ([Related issue](https://github.com/kvesteri/sqlalchemy-utils/issues/505))
Partial stacktrace:
```
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/airflow/www/utils.py", line 27, in <module>
from flask_appbuilder.models.sqla.interface import SQLAInterface
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 16, in <module>
from sqlalchemy_utils.types.uuid import UUIDType
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/__init__.py", line 1, in <module>
from .aggregates import aggregated # noqa
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/aggregates.py", line 372, in <module>
from .functions.orm import get_column_key
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/__init__.py", line 1, in <module>
from .database import ( # noqa
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/database.py", line 11, in <module>
from .orm import quote
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/orm.py", line 14, in <module>
from sqlalchemy.orm.query import _ColumnEntity
ImportError: cannot import name '_ColumnEntity' from 'sqlalchemy.orm.query' (/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py)
```
I'm not sure what the typical procedure is in the case of breaking changes to dependencies, but seeing as there's an upcoming release I thought it might be worth pinning sqlalchemy to 1.3.x? (Or pin the version of sqlalchemy-utils to a compatible version if one is released before Airflow 2.0.2)
**What you expected to happen**:
`airflow db init` to run successfully.
**How to reproduce it**:
1) Create a new virtualenv
2) `pip install apache-airflow`
3) `airflow db init`
| https://github.com/apache/airflow/issues/14811 | https://github.com/apache/airflow/pull/14812 | 251eb7d170db3f677e0c2759a10ac1e31ac786eb | c29f6fb76b9d87c50713ae94fda805b9f789a01d | 2021-03-15T19:39:29Z | python | 2021-03-15T20:28:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,807 | ["airflow/ui/package.json", "airflow/ui/src/components/AppContainer/AppHeader.tsx", "airflow/ui/src/components/AppContainer/TimezoneDropdown.tsx", "airflow/ui/src/components/MultiSelect.tsx", "airflow/ui/src/providers/TimezoneProvider.tsx", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/TimezoneDropdown.test.tsx", "airflow/ui/test/utils.tsx", "airflow/ui/yarn.lock"] | Design/build timezone switcher modal | - Once we have the current user's preference set and available in Context, add a modal that allows the preferred display timezone to be changed.
- Modal will be triggered by clicking the time/TZ in the global navigation. | https://github.com/apache/airflow/issues/14807 | https://github.com/apache/airflow/pull/15674 | 46d62782e85ff54dd9dc96e1071d794309497983 | 3614910b4fd32c90858cd9731fc0421078ca94be | 2021-03-15T15:14:24Z | python | 2021-05-07T17:49:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,778 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINED] The test_scheduler_verify_pool_full occasionally fails | Probably the same root cause as in #14773 and #14772, but there is another test that fails occasionally:
https://github.com/apache/airflow/runs/2106723579?check_suite_focus=true#step:6:10811
```
_______________ TestSchedulerJob.test_scheduler_verify_pool_full _______________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_verify_pool_full>
def test_scheduler_verify_pool_full(self):
"""
Test task instances not queued when pool is full
"""
dag = DAG(dag_id='test_scheduler_verify_pool_full', start_date=DEFAULT_DATE)
BashOperator(
task_id='dummy',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_pool_full',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_pool_full', slots=1)
session.add(pool)
session.flush()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
# Create 2 dagruns, which will create 2 task instances.
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=DEFAULT_DATE,
state=State.RUNNING,
)
> scheduler._schedule_dag_run(dr, {}, session)
tests/jobs/test_scheduler_job.py:2586:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/jobs/scheduler_job.py:1688: in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
airflow/utils/session.py:62: in wrapper
return func(*args, **kwargs)
airflow/models/dagbag.py:178: in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <airflow.models.dagbag.DagBag object at 0x7f45f1959a90>
dag_id = 'test_scheduler_verify_pool_full'
session = <sqlalchemy.orm.session.Session object at 0x7f45f19eef70>
def _add_dag_from_db(self, dag_id: str, session: Session):
"""Add DAG to DagBag from DB"""
from airflow.models.serialized_dag import SerializedDagModel
row = SerializedDagModel.get(dag_id, session)
if not row:
> raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
E airflow.exceptions.SerializedDagNotFound: DAG 'test_scheduler_verify_pool_full' not found in serialized_dag table
airflow/models/dagbag.py:234: SerializedDagNotFound
```
| https://github.com/apache/airflow/issues/14778 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | 2021-03-14T14:39:35Z | python | 2021-03-18T13:01:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,773 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] The test_verify_integrity_if_dag_changed occasionally fails | https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170690#step:6:10298
Looks like it is connected with #14772
```
____________ TestSchedulerJob.test_verify_integrity_if_dag_changed _____________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_verify_integrity_if_dag_changed>
def test_verify_integrity_if_dag_changed(self):
# CleanUp
with create_session() as session:
session.query(SerializedDagModel).filter(
SerializedDagModel.dag_id == 'test_verify_integrity_if_dag_changed'
).delete(synchronize_session=False)
dag = DAG(dag_id='test_verify_integrity_if_dag_changed', start_date=DEFAULT_DATE)
BashOperator(task_id='dummy', dag=dag, owner='airflow', bash_command='echo hi')
scheduler = SchedulerJob(subdir=os.devnull)
scheduler.dagbag.bag_dag(dag, root_dag=dag)
scheduler.dagbag.sync_to_db()
session = settings.Session()
orm_dag = session.query(DagModel).get(dag.dag_id)
assert orm_dag is not None
scheduler = SchedulerJob(subdir=os.devnull)
scheduler.processor_agent = mock.MagicMock()
dag = scheduler.dagbag.get_dag('test_verify_integrity_if_dag_changed', session=session)
scheduler._create_dag_runs([orm_dag], session)
drs = DagRun.find(dag_id=dag.dag_id, session=session)
assert len(drs) == 1
dr = drs[0]
dag_version_1 = SerializedDagModel.get_latest_version_hash(dr.dag_id, session=session)
assert dr.dag_hash == dag_version_1
assert scheduler.dagbag.dags == {'test_verify_integrity_if_dag_changed': dag}
assert len(scheduler.dagbag.dags.get("test_verify_integrity_if_dag_changed").tasks) == 1
# Now let's say the DAG got updated (new task got added)
BashOperator(task_id='bash_task_1', dag=dag, bash_command='echo hi')
> SerializedDagModel.write_dag(dag=dag)
tests/jobs/test_scheduler_job.py:2827:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/session.py:65: in wrapper
return func(*args, session=session, **kwargs)
/usr/local/lib/python3.6/contextlib.py:88: in __exit__
next(self.gen)
airflow/utils/session.py:32: in create_session
session.commit()
/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:1046: in commit
self.transaction.commit()
/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:504: in commit
self._prepare_impl()
/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:483: in _prepare_impl
value_params,
True,
)
if check_rowcount:
if rows != len(records):
raise orm_exc.StaleDataError(
"UPDATE statement on table '%s' expected to "
"update %d row(s); %d were matched."
> % (table.description, len(records), rows)
)
E sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'serialized_dag' expected to update 1 row(s); 0 were matched.
/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py:1028: StaleDataError
```
| https://github.com/apache/airflow/issues/14773 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | 2021-03-14T11:55:00Z | python | 2021-03-18T13:01:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,772 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Occasional failures from test_scheduler_verify_pool_full_2_slots_per_task | The test occasionally fails with:
DAG 'test_scheduler_verify_pool_full_2_slots_per_task' not found in serialized_dag table
https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170551#step:6:10314
```
______ TestSchedulerJob.test_scheduler_verify_pool_full_2_slots_per_task _______
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_verify_pool_full_2_slots_per_task>
def test_scheduler_verify_pool_full_2_slots_per_task(self):
"""
Test task instances not queued when pool is full.
Variation with non-default pool_slots
"""
dag = DAG(dag_id='test_scheduler_verify_pool_full_2_slots_per_task', start_date=DEFAULT_DATE)
BashOperator(
task_id='dummy',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_pool_full_2_slots_per_task',
pool_slots=2,
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_pool_full_2_slots_per_task', slots=6)
session.add(pool)
session.commit()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
# Create 5 dagruns, which will create 5 task instances.
date = DEFAULT_DATE
for _ in range(5):
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=date,
state=State.RUNNING,
)
> scheduler._schedule_dag_run(dr, {}, session)
tests/jobs/test_scheduler_job.py:2641:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/jobs/scheduler_job.py:1688: in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
airflow/utils/session.py:62: in wrapper
return func(*args, **kwargs)
airflow/models/dagbag.py:178: in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <airflow.models.dagbag.DagBag object at 0x7f05c934f5b0>
dag_id = 'test_scheduler_verify_pool_full_2_slots_per_task'
session = <sqlalchemy.orm.session.Session object at 0x7f05c838cb50>
def _add_dag_from_db(self, dag_id: str, session: Session):
"""Add DAG to DagBag from DB"""
from airflow.models.serialized_dag import SerializedDagModel
row = SerializedDagModel.get(dag_id, session)
if not row:
> raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
E airflow.exceptions.SerializedDagNotFound: DAG 'test_scheduler_verify_pool_full_2_slots_per_task' not found in serialized_dag table
```
| https://github.com/apache/airflow/issues/14772 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | 2021-03-14T11:51:36Z | python | 2021-03-18T13:01:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,771 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Test_retry_still_in_executor sometimes fail | Occasional failures:
https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170532#step:6:10454
```
________________ TestSchedulerJob.test_retry_still_in_executor _________________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_retry_still_in_executor>
def test_retry_still_in_executor(self):
"""
Checks if the scheduler does not put a task in limbo, when a task is retried
but is still present in the executor.
"""
executor = MockExecutor(do_update=False)
dagbag = DagBag(dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"), include_examples=False)
dagbag.dags.clear()
dag = DAG(dag_id='test_retry_still_in_executor', start_date=DEFAULT_DATE, schedule_interval="@once")
dag_task1 = BashOperator(
task_id='test_retry_handling_op', bash_command='exit 1', retries=1, dag=dag, owner='airflow'
)
dag.clear()
dag.is_subdag = False
with create_session() as session:
orm_dag = DagModel(dag_id=dag.dag_id)
orm_dag.is_paused = False
session.merge(orm_dag)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
@mock.patch('airflow.jobs.scheduler_job.DagBag', return_value=dagbag)
def do_schedule(mock_dagbag):
# Use a empty file since the above mock will return the
# expected DAGs. Also specify only a single file so that it doesn't
# try to schedule the above DAG repeatedly.
scheduler = SchedulerJob(
num_runs=1, executor=executor, subdir=os.path.join(settings.DAGS_FOLDER, "no_dags.py")
)
scheduler.heartrate = 0
scheduler.run()
do_schedule() # pylint: disable=no-value-for-parameter
with create_session() as session:
ti = (
session.query(TaskInstance)
.filter(
TaskInstance.dag_id == 'test_retry_still_in_executor',
TaskInstance.task_id == 'test_retry_handling_op',
)
.first()
)
ti.task = dag_task1
def run_with_error(ti, ignore_ti_state=False):
try:
ti.run(ignore_ti_state=ignore_ti_state)
except AirflowException:
pass
assert ti.try_number == 1
# At this point, scheduler has tried to schedule the task once and
# heartbeated the executor once, which moved the state of the task from
# SCHEDULED to QUEUED and then to SCHEDULED, to fail the task execution
# we need to ignore the TaskInstance state as SCHEDULED is not a valid state to start
# executing task.
run_with_error(ti, ignore_ti_state=True)
assert ti.state == State.UP_FOR_RETRY
assert ti.try_number == 2
with create_session() as session:
ti.refresh_from_db(lock_for_update=True, session=session)
ti.state = State.SCHEDULED
session.merge(ti)
# To verify that task does get re-queued.
executor.do_update = True
do_schedule() # pylint: disable=no-value-for-parameter
ti.refresh_from_db()
> assert ti.state == State.SUCCESS
E AssertionError: assert None == 'success'
E + where None = <TaskInstance: test_retry_still_in_executor.test_retry_handling_op 2016-01-01 00:00:00+00:00 [None]>.state
E + and 'success' = State.SUCCESS
tests/jobs/test_scheduler_job.py:2934: AssertionError
```
| https://github.com/apache/airflow/issues/14771 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | 2021-03-14T11:49:38Z | python | 2021-03-18T13:01:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,770 | ["airflow/sensors/smart_sensor.py"] | [Smart sensor] Runtime error: dictionary changed size during iteration | **What happened**:
Smart Sensor TI crashes with a Runtime error. Here's the logs:
```
RuntimeError: dictionary changed size during iteration
File "airflow/sentry.py", line 159, in wrapper
return func(task_instance, *args, session=session, **kwargs)
File "airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "airflow/models/taskinstance.py", line 1315, in _execute_task
result = task_copy.execute(context=context)
File "airflow/sensors/smart_sensor.py", line 736, in execute
self.flush_cached_sensor_poke_results()
File "airflow/sensors/smart_sensor.py", line 681, in flush_cached_sensor_poke_results
for ti_key, sensor_exception in self.cached_sensor_exceptions.items():
```
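The error itself is the classic case of a dict being mutated while it is iterated over. The generic mitigation is to iterate over a snapshot; a minimal, Airflow-independent sketch of the pattern (not the actual smart-sensor patch):

```python
cached_exceptions = {}  # stands in for self.cached_sensor_exceptions

def flush(cache: dict) -> None:
    # list(...) takes a snapshot, so keys added or removed concurrently
    # no longer break the iteration
    for key, exc in list(cache.items()):
        print(f"handling {key}: {exc}")
        cache.pop(key, None)
```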
**What you expected to happen**:
Smart sensor should always execute without any runtime error.
**How to reproduce it**:
I haven't been able to reproduce it consistently since it sometimes works and sometimes errors.
**Anything else we need to know**:
It's a really noisy error in Sentry. In just 4 days, 3.8k events were reported in Sentry.
| https://github.com/apache/airflow/issues/14770 | https://github.com/apache/airflow/pull/14774 | 2ab2cbf93df9eddfb527fcfd9d7b442678a57662 | 4aec25a80e3803238cf658c416c8e6d3975a30f6 | 2021-03-14T11:46:11Z | python | 2021-06-23T22:22:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,755 | ["tests/jobs/test_backfill_job.py"] | [QUARANTINE] Backfill depends on past test is flaky | Test backfill_depends_on_past is flaky. The whole Backfill class was in Heisentest but I believe this is the only one that is problematic now so I remove the class from heisentests and move the depends_on_past to quarantine.
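Moving the test to quarantine presumably means tagging it with the project's `quarantined` pytest marker, roughly like this (a sketch; the exact marker name is an assumption based on Airflow's test setup):

```python
import pytest

@pytest.mark.quarantined
def test_backfill_depends_on_past():
    ...  # the flaky assertions stay here until the root cause is fixed
```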
| https://github.com/apache/airflow/issues/14755 | https://github.com/apache/airflow/pull/19862 | 5ebd63a31b5bc1974fc8974f137b9fdf0a5f58aa | a804666347b50b026a8d3a1a14c0b2e27a369201 | 2021-03-13T13:00:28Z | python | 2021-11-30T12:59:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,754 | ["breeze"] | Breeze fails to run on macOS with the old version of bash | **Apache Airflow version**:
master/HEAD (commit 99c74968180ab7bc6d7152ec4233440b62a07969)
**Environment**:
- **Cloud provider or hardware configuration**: MacBook Air (13-inch, Early 2015)
- **OS** (e.g. from /etc/os-release): macOS Big Sur version 11.2.2
- **Install tools**: git
- **Others**: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
Copyright (C) 2007 Free Software Foundation, Inc.
**What happened**:
When I executed `./breeze` initially following this section https://github.com/apache/airflow/blob/master/BREEZE.rst#installation, I got the following error.
```
$ ./breeze
./breeze: line 28: @: unbound variable
```
**What you expected to happen**:
> The First time you run Breeze, it pulls and builds a local version of Docker images.
It should start to pull and build a local version of Docker images.
**How to reproduce it**:
```
git clone [email protected]:apache/airflow.git
cd airflow
./breeze
```
**Anything else we need to know**:
The old version of bash reports the error when there's no argument with `${@}`.
https://github.com/apache/airflow/blob/master/breeze#L28 | https://github.com/apache/airflow/issues/14754 | https://github.com/apache/airflow/pull/14787 | feb6b8107e1e01aa1dae152f7b3861fe668b3008 | c613384feb52db39341a8d3a52b7f47695232369 | 2021-03-13T08:50:25Z | python | 2021-03-15T09:58:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,752 | ["BREEZE.rst"] | Less confusable description about Cleaning the environment for new comers | **Description**
I think the Cleaning the environment section of this document is confusing for newcomers.
https://github.com/apache/airflow/blob/master/BREEZE.rst#cleaning-the-environment
**Actual**
Suddenly, `./breeze stop` appears in the document. It's not necessary for those running Breeze for the first time.
**Expected**
It's better to state the condition, like below:
Stop Breeze with `./breeze stop` (if Breeze is already running).
**Use case / motivation**
Because `./breeze stop` is not required for newcomers, it's better to state the condition explicitly.
**Are you willing to submit a PR?**
Yes, I'm ready. | https://github.com/apache/airflow/issues/14752 | https://github.com/apache/airflow/pull/14753 | b9e8ca48e61fdb8d80960981de0ee5409e3a6df9 | 3326babd02d02c87ec80bf29439614de4e636e10 | 2021-03-13T08:16:31Z | python | 2021-03-13T09:38:55Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,750 | ["BREEZE.rst"] | Better description about Getopt and gstat when runnin Breeze | **Description**
For me, the section of Getopt and gstat in this document doesn't have enough description to run Breeze.
https://github.com/apache/airflow/blob/master/BREEZE.rst#getopt-and-gstat
**Actual**
After executing the following commands, quoted from https://github.com/apache/airflow/blob/master/BREEZE.rst#getopt-and-gstat, I cannot tell whether the commands updated the PATH properly.
```
echo 'export PATH="/usr/local/opt/gnu-getopt/bin:$PATH"' >> ~/.zprofile
. ~/.zprofile
```
**Expected**
It's better to document commands for checking that `getopt` and `gstat` are successfully installed, for example:
```
$ getopt --version
getopt from util-linux 2.36.2
$ gstat --version
stat (GNU coreutils) 8.32
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Michael Meskes.
```
**Use case / motivation**
Because I'm not familiar with the Unix shell, the existing document did not let me tell whether those commands were properly installed or not.
**Are you willing to submit a PR?**
Yes, I'm ready.
| https://github.com/apache/airflow/issues/14750 | https://github.com/apache/airflow/pull/14751 | 99c74968180ab7bc6d7152ec4233440b62a07969 | b9e8ca48e61fdb8d80960981de0ee5409e3a6df9 | 2021-03-13T07:32:36Z | python | 2021-03-13T09:37:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,726 | [".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "airflow/ui/package.json", "airflow/ui/yarn.lock", "breeze-complete"] | Add precommit linting and testing to the new /ui | **Description**
We just initialized the new UI for AIP-38 under `/airflow/ui`. To continue development, it would be best to add a pre-commit hook to run the linting and testing commands for the new project.
**Use case / motivation**
The new UI already has linting and testing setup with `yarn lint` and `yarn test`. We just need a pre-commit hook for them.
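A rough sketch of what such a hook could look like in `.pre-commit-config.yaml`, added under the existing `repos:` list — the hook id, name and `files` pattern here are assumptions, not the final implementation:
```yaml
- repo: local
  hooks:
    - id: ui-lint
      name: Run yarn lint for airflow/ui
      entry: bash -c 'cd airflow/ui && yarn install --frozen-lockfile && yarn lint'
      language: system
      files: ^airflow/ui/
      pass_filenames: false
```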
**Are you willing to submit a PR?**
Yes
**Related Issues**
no
| https://github.com/apache/airflow/issues/14726 | https://github.com/apache/airflow/pull/14836 | 5f774fae530577e302c153cc8726c93040ebbde0 | e395fcd247b8aa14dbff2ee979c1a0a17c42adf4 | 2021-03-11T16:18:27Z | python | 2021-03-16T23:06:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,696 | ["UPDATING.md", "airflow/cli/cli_parser.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "airflow/config_templates/default_test.cfg", "airflow/configuration.py", "airflow/models/baseoperator.py", "chart/values.yaml", "docs/apache-airflow/executor/celery.rst"] | Decouple default_queue from celery config section |
**Description**
We are using a 3rd party executor which has the ability to use multiple queues, however the `default_queue` is defined under the `celery` heading and is used regardless of the executor that you are using.
See: https://github.com/apache/airflow/blob/2.0.1/airflow/models/baseoperator.py#L366
It would be nice to decouple the default_queue configuration argument away from the celery section to allow other executors to utilise this functionality with less confusion.
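For illustration only, the kind of move being proposed — the target section name below is an assumption, not a decision:
```ini
# current layout: the setting is tied to one executor
[celery]
default_queue = default

# proposed: an executor-agnostic home, e.g.
[operators]
default_queue = default
```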
**Are you willing to submit a PR?**
Yep! Will open a pull request shortly
**Related Issues**
Not that I'm aware of.
| https://github.com/apache/airflow/issues/14696 | https://github.com/apache/airflow/pull/14699 | 7757fe32e0aa627cb849f2d69fbbb01f1d180a64 | 1d0c1684836fb0c3d1adf86a8f93f1b501474417 | 2021-03-10T13:06:20Z | python | 2021-03-31T09:59:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,682 | ["airflow/providers/amazon/aws/transfers/local_to_s3.py", "airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py", "airflow/providers/google/cloud/transfers/s3_to_gcs.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"] | The S3ToGCSOperator fails on templated `dest_gcs` URL | <!--
-->
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Docker
**What happened**:
When passing a templatized `dest_gcs` argument to the `S3ToGCSOperator` operator, the DAG fails to import because the constructor attempts to test the validity of the URL before the template has been populated in `execute`.
The error is:
```
Broken DAG: [/opt/airflow/dags/bad_gs_dag.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1051, in gcs_object_is_directory
_, blob = _parse_gcs_url(bucket)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1063, in _parse_gcs_url
raise AirflowException('Please provide a bucket name')
airflow.exceptions.AirflowException: Please provide a bucket name
```
**What you expected to happen**:
The DAG should successfully parse when using a templatized `dest_gcs` value.
**How to reproduce it**:
Instantiating a `S3ToGCSOperator` task with `dest_gcs="{{ var.gcs_url }}"` fails.
<details>
```python
from airflow.decorators import dag
from airflow.utils.dates import days_ago
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator
@dag(
schedule_interval=None,
description="Demo S3-to-GS Bug",
catchup=False,
start_date=days_ago(1),
)
def demo_bug():
S3ToGCSOperator(
task_id="transfer_task",
bucket="example_bucket",
prefix="fake/prefix",
dest_gcs="{{ var.gcs_url }}",
)
demo_dag = demo_bug()
```
</details>
**Anything else we need to know**:
Should be fixable by moving the code that evaluates whether the URL is a folder to `execute()`.
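One way to picture that (a sketch of a user-side workaround, not the upstream patch): defer the `gcs_object_is_directory` check — the helper seen in the traceback above — until `execute()`, once the template has been rendered.
```python
from airflow.exceptions import AirflowException
from airflow.providers.google.cloud.hooks.gcs import gcs_object_is_directory
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator


class LazyDestS3ToGCSOperator(S3ToGCSOperator):
    """S3ToGCSOperator variant that validates dest_gcs after template rendering."""

    def __init__(self, *, dest_gcs, **kwargs):
        # Hand the parent a harmless placeholder so its constructor-time check
        # passes, then restore the (possibly templated) value afterwards.
        super().__init__(dest_gcs="gs://placeholder/", **kwargs)
        self.dest_gcs = dest_gcs

    def execute(self, context):
        # By now any Jinja template in dest_gcs has been rendered.
        if self.dest_gcs and not gcs_object_is_directory(self.dest_gcs):
            raise AirflowException("dest_gcs must be empty or end with a slash '/'")
        return super().execute(context)
```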
| https://github.com/apache/airflow/issues/14682 | https://github.com/apache/airflow/pull/19048 | efdfd15477f92da059fa86b4fa18b6f29cb97feb | 3c08c025c5445ffc0533ac28d07ccf2e69a19ca8 | 2021-03-09T14:44:14Z | python | 2021-10-27T06:15:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,675 | ["airflow/utils/helpers.py", "tests/utils/test_helpers.py"] | TriggerDagRunOperator OperatorLink doesn't work when HTML base url doesn't match the Airflow base url | <!--
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**What happened**:
When I click on the "Triggered DAG" Operator Link in TriggerDagRunOperator, I am redirected with a relative link.

The redirect uses the HTML base URL and not the airflow base URL. This is only an issue if the URLs do not match.
**What you expected to happen**:
I expect the link to take me to the Triggered DAG tree view (default view) instead of the base url of the service hosting the webserver.
**How to reproduce it**:
Create an airflow deployment where the HTML base url doesn't match the airflow URL.
| https://github.com/apache/airflow/issues/14675 | https://github.com/apache/airflow/pull/14990 | 62aa7965a32f1f8dde83cb9c763deef5b234092b | aaa3bf6b44238241bd61178426b692df53770c22 | 2021-03-09T01:03:33Z | python | 2021-04-11T11:51:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,597 | ["airflow/models/taskinstance.py", "docs/apache-airflow/concepts/connections.rst", "docs/apache-airflow/macros-ref.rst", "tests/models/test_taskinstance.py"] | Provide jinja template syntax to access connections | **Description**
Expose the connection into the jinja template context via `conn.value.<connectionname>.{host,port,login,password,extra_config,etc}`
Today is possible to conveniently access [airflow's variables](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#variables) in jinja templates using `{{ var.value.<variable_name> }}`.
There is no equivalent (to my knowledge for [connections](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#connections)), I understand that most of the time connection are used programmatically in Operators and Hooks source code, but there are use cases where the connection info has to be pass as parameters to the operators and then it becomes cumbersome to do it without jinja template syntax.
I seen workarounds like using [user defined macros to provide get_login(my_conn_id)](https://stackoverflow.com/questions/65826404/use-airflow-connection-from-a-jinja-template/65873023#65873023
), but I'm after a consistent interface for accessing both variables and connections in the same way
**Workaround**
The following `user_defined_macro` (from my [stackoverflow answer](https://stackoverflow.com/a/66471911/90580)) provides the suggested syntax `connection.mssql.host` where `mssql` is the connection name:
```
class ConnectionGrabber:
def __getattr__(self, name):
return Connection.get_connection_from_secrets(name)
dag = DAG( user_defined_macros={'connection': ConnectionGrabber()}, ...)
task = BashOperator(task_id='read_connection', bash_command='echo {{connection.mssql.host }}', dag=dag)
```
This macro can be added to each DAG individually or to all DAGs via an Airflow's Plugin. What I suggest is to make this macro part of the default.
**Use case / motivation**
For example, passing credentials to a [KubernetesPodOperator](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#howto-operator-kubernetespodoperator) via [env_vars](https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator) today has to be done like this:
```
connection = Connection.get_connection_from_secrets('somecredentials')
k = KubernetesPodOperator(
task_id='task1',
env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': conn.password,},
)
```
where I would prefer to use consistent syntax for both variables and connections like this:
```
# not needed anymore: connection = Connection.get_connection_from_secrets('somecredentials')
k = KubernetesPodOperator(
task_id='task1',
env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': '{{ conn.somecredentials.password }}',},
)
```
The same applies to `BashOperator` where I sometimes feel the need to pass connection information to the templated script.
**Are you willing to submit a PR?**
yes, I can write the PR.
**Related Issues**
| https://github.com/apache/airflow/issues/14597 | https://github.com/apache/airflow/pull/16686 | 5034414208f85a8be61fe51d6a3091936fe402ba | d3ba80a4aa766d5eaa756f1fa097189978086dac | 2021-03-04T07:51:09Z | python | 2021-06-29T10:50:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,592 | ["airflow/configuration.py", "airflow/models/connection.py", "airflow/models/variable.py", "tests/core/test_configuration.py"] | Unreachable Secrets Backend Causes Web Server Crash | **Apache Airflow version**:
1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
n/a
**Environment**:
- **Cloud provider or hardware configuration**:
Amazon MWAA
- **OS** (e.g. from /etc/os-release):
Amazon Linux (latest)
- **Kernel** (e.g. `uname -a`):
n/a
- **Install tools**:
n/a
**What happened**:
If an unreachable secrets.backend is specified in airflow.cfg the web server crashes
**What you expected to happen**:
An invalid secrets backend should be ignored with a warning, and the system should default back to the metadatabase secrets
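A hedged sketch of that fallback behaviour — the helper and method names here are illustrative, not Airflow's actual internals:
```python
import logging

log = logging.getLogger(__name__)


def get_connection_with_fallback(secrets_backends, conn_id):
    """Try each configured secrets backend in order, skipping unreachable ones."""
    for backend in secrets_backends:
        try:
            conn = backend.get_connection(conn_id)
        except Exception:
            log.warning(
                "Secrets backend %s is unreachable, falling back to the next source",
                type(backend).__name__,
                exc_info=True,
            )
            continue
        if conn is not None:
            return conn
    return None  # caller then falls back to the metadata database
```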
**How to reproduce it**:
In an environment without access to AWS Secrets Manager, add the following to your airflow.cfg:
```
[secrets]
backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend
```
**or**, in an environment without access to SSM, specify:
```
[secrets]
backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
```
Reference: https://airflow.apache.org/docs/apache-airflow/1.10.12/howto/use-alternative-secrets-backend.html#aws-ssm-parameter-store-secrets-backend | https://github.com/apache/airflow/issues/14592 | https://github.com/apache/airflow/pull/16404 | 4d4830599578ae93bb904a255fb16b81bd471ef1 | 0abbd2d918ad9027948fd8a33ebb42487e4aa000 | 2021-03-03T23:17:03Z | python | 2021-08-27T20:59:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,586 | ["airflow/executors/celery_executor.py"] | AttributeError: 'DatabaseBackend' object has no attribute 'task_cls' | <!--
-->
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.15
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): ubuntu 18.04
- **Kernel** (e.g. `uname -a`): Linux mlb-airflow-infra-workers-75f589bcd9-wtls8 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
Scheduler is constantly restarting due to this error:
```
[2021-03-03 17:31:19,393] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1384, in _run_scheduler_loop
self.executor.heartbeat()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/base_executor.py", line 162, in heartbeat
self.sync()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 340, in sync
self.update_all_task_states()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 399, in update_all_task_states
state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 552, in get_many
result = self._get_many_from_db_backend(async_results)
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 570, in _get_many_from_db_backend
task_cls = app.backend.task_cls
AttributeError: 'DatabaseBackend' object has no attribute 'task_cls'
[2021-03-03 17:31:20,396] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 3852
[2021-03-03 17:31:20,529] {process_utils.py:66} INFO - Process psutil.Process(pid=3996, status='terminated', started='17:31:19') (3996) terminated with exit code None
[2021-03-03 17:31:20,533] {process_utils.py:66} INFO - Process psutil.Process(pid=3997, status='terminated', started='17:31:19') (3997) terminated with exit code None
[2021-03-03 17:31:20,533] {process_utils.py:206} INFO - Waiting up to 5 seconds for processes to exit...
[2021-03-03 17:31:20,540] {process_utils.py:66} INFO - Process psutil.Process(pid=3852, status='terminated', exitcode=0, started='17:31:13') (3852) terminated with exit code 0
[2021-03-03 17:31:20,540] {scheduler_job.py:1301} INFO - Exited execute loop
```
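The failing call is the `task_cls = app.backend.task_cls` frame above. A defensive sketch (not necessarily the fix that shipped) would fall back when the configured result backend does not expose that attribute:
```python
from celery.backends.database.models import Task as TaskDb

from airflow.executors.celery_executor import app

# Not every Celery result backend defines `task_cls` (the DatabaseBackend in the
# traceback does not), so fall back to the default Task model instead of raising.
task_cls = getattr(app.backend, "task_cls", TaskDb)
```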
**What you expected to happen**: Scheduler running
**How to reproduce it**:
Install airflow 2.0.1 in an ubuntu 18.04 instance:
`pip install apache-airflow[celery,postgres,s3,crypto,jdbc,google_auth,redis,slack,ssh,sentry,kubernetes,statsd]==2.0.1`
Use the following variables:
```
AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:<pwd>@<DB>:<port>/airflow
AIRFLOW__CORE__DEFAULT_TIMEZONE=<TZ>
AIRFLOW__CORE__LOAD_DEFAULTS=false
AIRFLOW__CELERY__BROKER_URL=sqs://
AIRFLOW__CELERY__DEFAULT_QUEUE=<queue>
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER=s3://<bucket>/<prefix>/
AIRFLOW__CORE__LOAD_EXAMPLES=false
AIRFLOW__CORE__REMOTE_LOGGING=True
AIRFLOW__CORE__FERNET_KEY=<fernet_key>
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
AIRFLOW__CELERY__BROKER_TRANSPORT_OPTIONS__REGION=<region>
AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql://airflow:<pwd>@<DB>:<port>/airflow
```
Start airflow components (webserver, celery workers, scheduler). With no DAGs running, everything is stable and nothing fails, but once a DAG starts running the scheduler starts getting the error, and from then on it constantly restarts until it crashes.
**Anything else we need to know**:
This happens every time and the scheduler keeps restarting
| https://github.com/apache/airflow/issues/14586 | https://github.com/apache/airflow/pull/14612 | 511f0426530bfabd9d93f4737df7add1080b4e8d | 33910d6c699b5528db4be40d31199626dafed912 | 2021-03-03T17:51:45Z | python | 2021-03-05T19:34:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,563 | ["airflow/example_dags/example_external_task_marker_dag.py", "airflow/models/dag.py", "airflow/sensors/external_task.py", "docs/apache-airflow/howto/operator/external_task_sensor.rst", "tests/sensors/test_external_task_sensor.py"] | TaskGroup Sensor | <!--
-->
**Description**
Enable the ability for a task in a DAG to wait upon the successful completion of an entire TaskGroup.
**Use case / motivation**
TaskGroups provide a great mechanism for authoring DAGs, however there are situations where it might be necessary for a task in an external DAG to wait upon the completion of the TaskGroup as a whole.
At the moment this is only possible with one of the following workarounds:
1. Add an external task sensor for each task in the group.
2. Add a Dummy task after the TaskGroup which the external task sensor waits on.
I would envisage either adapting `ExternalTaskSensor` to also work with TaskGroups or creating a new `ExternalTaskGroupSensor`.
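For illustration, a sketch of workaround 2 above — the DAG and task ids are made up:
```python
import pendulum
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.task_group import TaskGroup

start = pendulum.datetime(2021, 1, 1, tz="UTC")

# Upstream DAG: a dummy "join" task after the group marks it as finished.
with DAG("upstream_dag", start_date=start, schedule_interval="@daily") as upstream:
    with TaskGroup(group_id="load_tables") as load_tables:
        DummyOperator(task_id="load_table_a")
        DummyOperator(task_id="load_table_b")
    done = DummyOperator(task_id="load_tables_done")
    load_tables >> done

# Downstream DAG: wait on that single join task instead of every task in the group.
with DAG("downstream_dag", start_date=start, schedule_interval="@daily") as downstream:
    ExternalTaskSensor(
        task_id="wait_for_load_tables",
        external_dag_id="upstream_dag",
        external_task_id="load_tables_done",
    )
```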
**Are you willing to submit a PR?**
Time permitting, yes!
**Related Issues**
| https://github.com/apache/airflow/issues/14563 | https://github.com/apache/airflow/pull/24902 | 0eb0b543a9751f3d458beb2f03d4c6ff22fcd1c7 | bc04c5ff0fa56e80d3d5def38b798170f6575ee8 | 2021-03-02T14:22:22Z | python | 2022-08-22T18:13:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,556 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Task stuck in queued state after pod fails to start | **Apache Airflow version**: `2.0.1`
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
`Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-10T21:53:58Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}`
`Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.7800", GitCommit:"cef3156c566a1d1a4b23ee360a760f45bfbaaac1", GitTreeState:"clean", BuildDate:"2020-12-14T09:12:37Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}`
**Environment**:
- **Cloud provider or hardware configuration**: `GKE`
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**: We use scheduler HA with 2 instances.
**What happened**:
- We are running Airflow 2.0.1 using KubernetesExecutor and PostgreSQL 9.6.2.
- Task is `up_for_retry` after its worker pod fails to start
- It gets stuck in `queued` state
- It runs only after a scheduler restart.
**What you expected to happen**:
- Task gets rescheduled and runs successfully.
**How to reproduce it**:
**Anything else we need to know**:
Logs below.
<details>
Its upstream task succeeds and schedules it via fast follow.
`[2021-03-01 17:08:51,306] {taskinstance.py:1166} INFO - Marking task as SUCCESS. dag_id=datalake_dag_id, task_id=processidlogs, execution_date=20210301T163000, start_date=20210301T170546, end_date=20210301T170851`
`[2021-03-01 17:08:51,339] {taskinstance.py:1220} INFO - 1 downstream tasks scheduled from follow-on schedule check`
`[2021-03-01 17:08:51,357] {local_task_job.py:146} INFO - Task exited with return code 0`
Task is attempted to be run.
`[2021-03-01 17:08:52,229] {scheduler_job.py:1105} INFO - Sending TaskInstanceKey(dag_id='datalake_dag_id', task_id='delta_id_logs', execution_date=datetime.datetime(2021, 3, 1, 16, 30, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 1 and queue default`
`[2021-03-01 17:08:52,308] {kubernetes_executor.py:306} DEBUG - Kubernetes running for command ['airflow', 'tasks', 'run', 'datalake_dag_id', 'delta_id_logs', '2021-03-01T16:30:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/data_lake/some_tasks/some_tasks.py']`
`[2021-03-01 17:08:52,332] {scheduler_job.py:1206} INFO - Executor reports execution of datalake_dag_id.delta_id_logs execution_date=2021-03-01 16:30:00+00:00 exited with status queued for try_number 1`
Pod fails to start.
`[2021-03-01 17:12:17,319] {kubernetes_executor.py:197} INFO - Event: Failed to start pod datalakedagiddeltaidlogs.5fa98ae3856f4cb4b6c8810ac13e5c6a, will reschedule`
It is put as up_for_reschedule.
`[2021-03-01 17:12:23,912] {kubernetes_executor.py:343} DEBUG - Processing task ('datalakedagiddeltaidlogs.5fa98ae3856f4cb4b6c8810ac13e5c6a', 'prod', 'up_for_reschedule', {'dag_id': 'datalake_dag_id', 'task_id': 'delta_id_logs', 'execution_date': '2021-03-01T16:30:00+00:00', 'try_number': '1'}, '1172208829')`
`[2021-03-01 17:12:23,930] {kubernetes_executor.py:528} INFO - Changing state of (TaskInstanceKey(dag_id='datalake_dag_id', task_id='delta_id_logs', execution_date=datetime.datetime(2021, 3, 1, 16, 30, tzinfo=tzlocal()), try_number=1), 'up_for_reschedule', 'datalakedagiddeltaidlogs.5fa98ae3856f4cb4b6c8810ac13e5c6a', 'prod', '1172208829') to up_for_reschedule`
`[2021-03-01 17:12:23,941] {scheduler_job.py:1206} INFO - Executor reports execution of datalake_dag_id.delta_id_logs execution_date=2021-03-01 16:30:00+00:00 exited with status up_for_reschedule for try_number 1`
A few minutes later, another scheduler finds it in queued state.
`[2021-03-01 17:15:39,177] {taskinstance.py:851} DEBUG - Dependencies all met for <TaskInstance: datalake_dag_id.delta_id_logs 2021-03-01 16:30:00+00:00 [queued]>`
`[2021-03-01 17:15:40,477] {taskinstance.py:866} DEBUG - <TaskInstance: datalake_dag_id.delta_id_logs 2021-03-01 16:30:00+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The context specified that being in a retry period was permitted.`
`[2021-03-01 17:15:40,478] {taskinstance.py:866} DEBUG - <TaskInstance: datalake_dag_id.delta_id_logs 2021-03-01 16:30:00+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.`
It stays in that state for another hour and a half until the scheduler is restarted.
Finally, it is rescheduled.
`[2021-03-01 18:58:10,475] {kubernetes_executor.py:463} INFO - TaskInstance: <TaskInstance: datalake_dag_id.delta_id_logs 2021-03-01 16:30:00+00:00 [queued]> found in queued state but was not launched, rescheduling`
</details>
Other tasks run fine while that one is stuck in queued state. We have a cron job to restart the scheduler as a hack to recover from when such cases happen, but we would like to avoid it as much as possible.
We run 60 DAGs with 50-100 tasks each every 30 minutes. We have been seeing this issue at least once daily since we upgraded to Airflow 2.0.1.
I understand there are some open issues about the scheduler or tasks getting stuck. But I could not tell if this is related, since hundreds of other tasks run as expected. Apologies if this turns out to be a duplicate of an existing issue. Thank you. | https://github.com/apache/airflow/issues/14556 | https://github.com/apache/airflow/pull/14810 | 1efb17b7aa404013cd490ba3dad4f7d2a70d4cb2 | a639dd364865da7367f342d5721a5f46a7188a29 | 2021-03-02T08:49:22Z | python | 2021-03-15T21:16:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,518 | ["airflow/cli/commands/cheat_sheet_command.py", "airflow/cli/commands/info_command.py", "airflow/cli/simple_table.py"] | Airflow info command doesn't work properly with pbcopy on Mac OS | Hello,
Mac OS has a command for copying data to the clipboard - `pbcopy`. Unfortunately, with the [introduction of more fancy tables](https://github.com/apache/airflow/pull/12689) to this command, we can no longer use the two together.
For example:
```bash
airflow info | pbcopy
```
<details>
<summary>Clipboard content</summary>
```
Apache Airflow: 2.1.0.dev0
System info
| Mac OS
| x86_64
| uname_result(system='Darwin', node='Kamils-MacBook-Pro.local',
| release='20.3.0', version='Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06
| PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64', machine='x86_64',
| processor='i386')
| (None, 'UTF-8')
| 3.8.7 (default, Feb 14 2021, 09:58:39) [Clang 12.0.0 (clang-1200.0.32.29)]
| /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin/python3.8
Tools info
| git version 2.24.3 (Apple Git-128)
| OpenSSH_8.1p1, LibreSSL 2.7.3
| Client Version: v1.19.3
| Google Cloud SDK 326.0.0
| NOT AVAILABLE
| NOT AVAILABLE
| 3.32.3 2020-06-18 14:16:19
| 02c344aceaea0d177dd42e62c8541e3cab4a26c757ba33b3a31a43ccc7d4aapl
| psql (PostgreSQL) 13.2
Paths info
| /Users/kamilbregula/airflow
| /Users/kamilbregula/.pyenv/versions/airflow-py38/bin:/Users/kamilbregula/.pye
| v/libexec:/Users/kamilbregula/.pyenv/plugins/python-build/bin:/Users/kamilbre
| ula/.pyenv/plugins/pyenv-virtualenv/bin:/Users/kamilbregula/.pyenv/plugins/py
| nv-update/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-installer/bin:/Users/k
| milbregula/.pyenv/plugins/pyenv-doctor/bin:/Users/kamilbregula/.pyenv/plugins
| python-build/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-virtualenv/bin:/Use
| s/kamilbregula/.pyenv/plugins/pyenv-update/bin:/Users/kamilbregula/.pyenv/plu
| ins/pyenv-installer/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-doctor/bin:/
| sers/kamilbregula/.pyenv/plugins/pyenv-virtualenv/shims:/Users/kamilbregula/.
| yenv/shims:/Users/kamilbregula/.pyenv/bin:/usr/local/opt/gnu-getopt/bin:/usr/
| ocal/opt/[email protected]/bin:/usr/local/opt/[email protected]/bin:/usr/local/opt/ope
| ssl/bin:/Users/kamilbregula/Library/Python/2.7/bin/:/Users/kamilbregula/bin:/
| sers/kamilbregula/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin
| /sbin:/Users/kamilbregula/.cargo/bin
| /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin:/Users/kamilb
| egula/.pyenv/versions/3.8.7/lib/python38.zip:/Users/kamilbregula/.pyenv/versi
| ns/3.8.7/lib/python3.8:/Users/kamilbregula/.pyenv/versions/3.8.7/lib/python3.
| /lib-dynload:/Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/lib/
| ython3.8/site-packages:/Users/kamilbregula/devel/airflow/airflow:/Users/kamil
| regula/airflow/dags:/Users/kamilbregula/airflow/config:/Users/kamilbregula/ai
| flow/plugins
| True
Config info
| SequentialExecutor
| airflow.utils.log.file_task_handler.FileTaskHandler
| sqlite:////Users/kamilbregula/airflow/airflow.db
| /Users/kamilbregula/airflow/dags
| /Users/kamilbregula/airflow/plugins
| /Users/kamilbregula/airflow/logs
Providers info
| 1.2.0
| 1.0.1
| 1.0.1
| 1.1.0
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.2
| 1.0.2
| 1.1.1
| 1.0.1
| 1.0.1
| 2.1.0
| 1.0.1
| 1.0.1
| 1.1.1
| 1.0.1
| 1.0.1
| 1.1.0
| 1.0.1
| 1.2.0
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.1.1
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.2
| 1.0.1
| 2.0.0
| 1.0.1
| 1.0.1
| 1.0.2
| 1.1.1
| 1.0.1
| 3.0.0
| 1.0.2
| 1.2.0
| 1.0.0
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
```
</details>
CC: @turbaszek
| https://github.com/apache/airflow/issues/14518 | https://github.com/apache/airflow/pull/14528 | 1b0851c9b75f0d0a15427898ae49a2f67d076f81 | a1097f6f29796bd11f8ed7b3651dfeb3e40eec09 | 2021-02-27T21:07:49Z | python | 2021-02-28T15:42:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,517 | ["airflow/cli/cli_parser.py", "airflow/cli/simple_table.py", "docs/apache-airflow/usage-cli.rst", "docs/spelling_wordlist.txt"] | The tables are not parsable by standard linux utilities. | Hello,
I changed the format of the tables a long time ago so that they could be parsed in standard Linux tools such as AWK.
https://github.com/apache/airflow/pull/8409
For example, to list the files that contain the DAG, I could run the command below.
```
airflow dags list | grep -v "dag_id" | awk '{print $2}' | sort | uniq
```
To pause all dags:
```bash
airflow dags list | awk '{print $1}' | grep -v "dag_id"| xargs airflow dags pause
```
Unfortunately [that has changed](https://github.com/apache/airflow/pull/12704) and we now have more fancy tables, which are harder to use with standard Linux tools.
Alternatively, we can use JSON output, but I don't always have `jq` installed in the production environment, so performing administrative tasks is difficult for me.
```bash
$ docker run apache/airflow:2.0.1 bash -c "jq"
/bin/bash: jq: command not found
```
Best regards,
Kamil Breguła
CC: @turbaszek | https://github.com/apache/airflow/issues/14517 | https://github.com/apache/airflow/pull/14546 | 8801a0cc3b39cf3d2a3e5ef6af004d763bdb0b93 | 0ef084c3b70089b9b061090f7d88ce86e3651ed4 | 2021-02-27T20:56:59Z | python | 2021-03-02T19:12:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,515 | ["airflow/models/pool.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_pool.py"] | Tasks in an infinite slots pool are never scheduled | **Apache Airflow version**: v2.0.0 and up
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): not tested with K8
**Environment**:
all
**What happened**:
Execute the unit test included below, or create an infinite pool (`-1` slots) and tasks that should be executed in that pool.
```
INFO airflow.jobs.scheduler_job.SchedulerJob:scheduler_job.py:991 Not scheduling since there are -1 open slots in pool test_scheduler_verify_infinite_pool
```
**What you expected to happen**:
To schedule tasks, or to drop support for infinite slots pools?
**How to reproduce it**:
easiest one is this unit test:
```
def test_scheduler_verify_infinite_pool(self):
"""
Test that TIs are still scheduled if we only have one infinite pool.
"""
dag = DAG(dag_id='test_scheduler_verify_infinite_pool', start_date=DEFAULT_DATE)
BashOperator(
task_id='test_scheduler_verify_infinite_pool_t0',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_infinite_pool',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_infinite_pool', slots=-1)
session.add(pool)
session.commit()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=DEFAULT_DATE,
state=State.RUNNING,
)
scheduler._schedule_dag_run(dr, {}, session)
task_instances_list = scheduler._executable_task_instances_to_queued(max_tis=32, session=session)
# Let's make sure we don't end up with a `max_tis` == 0
assert len(task_instances_list) >= 1
```
**Anything else we need to know**:
Overall I'm not sure whether it's worth fixing in those various spots:
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L908
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L971
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L988
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L1041
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L1056
Or whether to restrict `-1` ( infinite ) slots in pools:
https://github.com/bperson/airflow/blob/master/airflow/models/pool.py#L49 | https://github.com/apache/airflow/issues/14515 | https://github.com/apache/airflow/pull/15247 | 90f0088c5752b56177597725cc716f707f2f8456 | 96f764389eded9f1ea908e899b54bf00635ec787 | 2021-02-27T17:42:33Z | python | 2021-06-22T08:31:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,489 | ["airflow/providers/ssh/CHANGELOG.rst", "airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/provider.yaml"] | Add a retry with wait interval for SSH operator | <!--
-->
**Description**
currently the SSH operator fails if authentication fails, without retrying. We can add the two parameters below, which will retry in case of an authentication failure after waiting for a configured time (see the sketch after this list).
- max_retry - maximum time SSH operator should retry in case of exception
- wait - how many seconds it should wait before next retry
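A sketch of what that could look like as a thin wrapper around the existing operator — the `max_retry`/`wait` names simply mirror the proposal, this is not the provider's actual API:
```python
import time

from airflow.exceptions import AirflowException
from airflow.providers.ssh.operators.ssh import SSHOperator


class RetryingSSHOperator(SSHOperator):
    """SSHOperator that retries transient failures such as authentication errors."""

    def __init__(self, *, max_retry: int = 3, wait: int = 30, **kwargs):
        super().__init__(**kwargs)
        self.max_retry = max_retry
        self.wait = wait

    def execute(self, context):
        last_error = None
        for attempt in range(1, self.max_retry + 1):
            try:
                return super().execute(context)
            except Exception as err:  # e.g. a paramiko authentication failure
                last_error = err
                self.log.warning("SSH attempt %s/%s failed: %s", attempt, self.max_retry, err)
                if attempt < self.max_retry:
                    time.sleep(self.wait)
        raise AirflowException(f"SSH task failed after {self.max_retry} attempts") from last_error
```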
**Use case / motivation**
We are using the SSH operator heavily in our production jobs, and what I have noticed is that sometimes the SSH operator fails to authenticate; however, upon re-running the job it runs successfully, and this happens often. We have ended up writing our own custom operator for this. However, if we can implement this, it could help others as well.
Implement the suggested feature for ssh operator.
**Are you willing to submit a PR?**
I will submit the PR if feature gets approval.
**Related Issues**
N/A
No | https://github.com/apache/airflow/issues/14489 | https://github.com/apache/airflow/pull/19981 | 4a73d8f3d1f0c2cb52707901f9e9a34198573d5e | b6edc3bfa1ed46bed2ae23bb2baeefde3f9a59d3 | 2021-02-26T21:22:34Z | python | 2022-02-01T09:30:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,486 | ["airflow/www/static/js/tree.js"] | tree view task instances have too much left padding in webserver UI | **Apache Airflow version**: 2.0.1
Here is tree view of a dag with one task:

For some reason the task instances render partially off the page, and there's a large amount of empty space that could have been used instead.
**Environment**
MacOS
Chrome
| https://github.com/apache/airflow/issues/14486 | https://github.com/apache/airflow/pull/14566 | 8ef862eee6443cc2f34f4cc46425357861e8b96c | 3f7ebfdfe2a1fa90b0854028a5db057adacd46c1 | 2021-02-26T19:02:06Z | python | 2021-03-04T00:00:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,481 | ["airflow/api_connexion/schemas/dag_schema.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | DAG /details endpoint returning empty array objects | When testing the following two endpoints, I get different results for the array of owners and tags. The former should be identical to the response of the latter endpoint.
`/api/v1/dags/{dag_id}/details`:
```json
{
"owners": [],
"tags": [
{},
{}
],
}
```
`/api/v1/dags/{dag_id}`:
```json
{
"owners": [
"airflow"
],
"tags": [
{
"name": "example"
},
{
"name": "example2"
}
]
}
``` | https://github.com/apache/airflow/issues/14481 | https://github.com/apache/airflow/pull/14490 | 9c773bbf0174a8153720d594041f886b2323d52f | 4424d10f05fa268b54c81ef8b96a0745643690b6 | 2021-02-26T14:59:56Z | python | 2021-03-03T14:39:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,473 | ["airflow/www/static/js/tree.js"] | DagRun duration not visible in tree view tooltip if not currently running | On airflow 2.0.1
On tree view if dag run is running, duration shows as expected:

But if dag run is complete, duration is null:

| https://github.com/apache/airflow/issues/14473 | https://github.com/apache/airflow/pull/14566 | 8ef862eee6443cc2f34f4cc46425357861e8b96c | 3f7ebfdfe2a1fa90b0854028a5db057adacd46c1 | 2021-02-26T02:58:19Z | python | 2021-03-04T00:00:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,469 | ["setup.cfg"] | Upgrade Flask-AppBuilder to 3.2.0 for improved OAUTH/LDAP | Version `3.2.0` of Flask-AppBuilder added support for LDAP group binding (see PR: https://github.com/dpgaspar/Flask-AppBuilder/pull/1374), we should update mainly for the `AUTH_ROLES_MAPPING` feature, which lets users bind to RBAC roles based on their LDAP/OAUTH group membership.
Here are the docs about Flask-AppBuilder security:
https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap
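For context, an abridged sketch of the kind of `webserver_config.py` this unlocks — the server, group DNs and role names are placeholders, and a real setup needs the remaining `AUTH_LDAP_*` options as well:
```python
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldap://ldap.example.com"

# Flask-AppBuilder >= 3.2.0: map LDAP/OAUTH groups to Airflow RBAC roles.
AUTH_ROLES_MAPPING = {
    "cn=airflow-admins,ou=groups,dc=example,dc=com": ["Admin"],
    "cn=data-engineers,ou=groups,dc=example,dc=com": ["User"],
}
# Re-evaluate the user's roles from their groups on every login.
AUTH_ROLES_SYNC_AT_LOGIN = True
```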
This will resolve https://github.com/apache/airflow/issues/8179 | https://github.com/apache/airflow/issues/14469 | https://github.com/apache/airflow/pull/14665 | b718495e4caecb753742c3eb22919411a715f24a | 97b5e4cd6c001ec1a1597606f4e9f1c0fbea20d2 | 2021-02-25T23:00:08Z | python | 2021-03-08T17:12:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,460 | ["BREEZE.rst", "breeze", "breeze-complete", "scripts/ci/libraries/_initialization.sh", "scripts/ci/libraries/_verbosity.sh"] | breeze: build-image output docker build parameters to be used in scripts | <!--
-->
**Description**
I would like to see a `--print-docker-args` option for `breeze build-image`
**Use case / motivation**
`breeze build-image --production-image --additional-extras microsoft.mssql --print-docker-args` should print something like
```--build-arg XXXX=YYYY --build-arg WWW=ZZZZ```
I would like to use that output in a script so that I can use `kaniko-executor` to build the image instead of `docker build` .
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/14460 | https://github.com/apache/airflow/pull/14468 | 4a54292b69bb9a68a354c34246f019331270df3d | aa28e4ed77d8be6558dbeb8161a5af82c4395e99 | 2021-02-25T14:27:09Z | python | 2021-02-26T20:50:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,423 | ["docs/apache-airflow/start/airflow.sh", "docs/apache-airflow/start/docker-compose.yaml"] | Quick Start for Docker - REST API returns Unauthorized | <!--
-->
**Apache Airflow version**: 2.0.1
**Environment**:
- **Cloud provider or hardware configuration**: NA
- **OS** (e.g. from /etc/os-release): Mac
- **Kernel** (e.g. `uname -a`): NA
- **Install tools**: docker-compose
- **Others**:
**What happened**:
When following [this Quickstart for Docker](https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html), the section under "Sending requests to the web api" do not work. They return Unauthorized. It looks like it's because in the docker config access defaults to `deny_all`.
**What you expected to happen**:
The request in the guide should return a 200 response.
`ENDPOINT_URL="http://localhost:8080/" curl -X GET -u "airflow:airflow" "${ENDPOINT_URL}/api/v1/pools" `
**How to reproduce it**:
Walk through the Quick Start for Docker, follow each command. Do not change any of the configuration. Then try the curl example.
```
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.0.1/docker-compose.yaml'
mkdir -p ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
docker-compose up airflow-init
docker-compose up
```
in a separate terminal window
```
ENDPOINT_URL="http://localhost:8080/" \
curl -X GET \
--user "airflow:airflow" \
"${ENDPOINT_URL}/api/v1/pools"
```
returns
```
{
"detail": null,
"status": 401,
"title": "Unauthorized",
"type": "https://airflow.apache.org/docs/2.0.1/stable-rest-api-ref.html#section/Errors/Unauthenticated"
}
```
Trying the same via the Swagger endpoint also fails, even when entering the credentials into the "Authorize" dialog.
**Anything else we need to know**:
checking the permissions via `./airflow.sh airflow config get-value api auth_backend` returns `airflow.api.auth.backend.deny_all`
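A hedged note: in the compose-based quick start the usual way to change this is an environment override rather than a mounted `airflow.cfg`, roughly like the sketch below (the `x-airflow-common` anchor name is an assumption about the quick-start file):
```yaml
# docker-compose.yaml (excerpt)
x-airflow-common:
  &airflow-common
  environment:
    &airflow-common-env
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
```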
Creating a new `airflow.cfg` file and adding the below does not change the configuration upon restart, with either `default` or `basic_auth`
```
[api]
auth_backend = airflow.api.auth.backend.default
``` | https://github.com/apache/airflow/issues/14423 | https://github.com/apache/airflow/pull/14516 | 7979b7581cc21f9b946ca66f1f243731f4a39d74 | 7d181508ef5383d36eae584ceedb9845b7467776 | 2021-02-24T17:52:57Z | python | 2021-02-27T23:32:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,422 | ["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"] | on_failure_callback does not seem to fire on pod deletion/eviction | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16.x
**Environment**: KubernetesExecutor with single scheduler pod
**What happened**: On all previous versions we used (from 1.10.x to 2.0.0), evicting or deleting a running task pod triggered the `on_failure_callback` from `BaseOperator`. We use this functionality quite a lot to detect eviction and provide work carry-over and automatic task clear.
We've recently updated our dev environment to 2.0.1 and it seems that now `on_failure_callback` is only fired when pod completes naturally, i.e. not evicted / deleted with kubectl
Everything looks the same on task log level when pod is removed with `kubectl delete pod...`:
```
Received SIGTERM. Terminating subprocesses
Sending Signals.SIGTERM to GPID 16
Received SIGTERM. Terminating subprocesses.
Task received SIGTERM signal
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1315, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/li_operator.py", line 357, in execute
self.operator_task_code(context)
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_jar_operator.py", line 62, in operator_task_code
ssh_connection=_ssh_con
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 469, in watch_application
existing_apps=_associated_applications.keys()
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 376, in get_associated_application_info
logger=self.log
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_api/yarn_api_ssh_client.py", line 26, in send_request
_response = requests.get(request)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.7/http/client.py", line 1277, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1323, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1272, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 972, in send
self.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 187, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1241, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
Marking task as FAILED. dag_id=mock_dag_limr, task_id=SetupMockScaldingDWHJob, execution_date=20190910T000000, start_date=20210224T162811, end_date=20210224T163044
Process psutil.Process(pid=16, status='terminated', exitcode=1, started='16:28:10') (16) terminated with exit code 1
```
But `on_failure_callback` is not triggered. For simplicity, let's assume the callback does this:
```
def act_on_failure(context):
send_slack_message(
message=f"{context['task_instance_key_str']} fired failure callback",
channel=get_stored_variable('slack_log_channel')
)
def get_stored_variable(variable_name, deserialize=False):
try:
return Variable.get(variable_name, deserialize_json=deserialize)
except KeyError:
if os.getenv('PYTEST_CURRENT_TEST'):
_root_dir = str(Path(__file__).parent)
_vars_path = os.path.join(_root_dir, "vars.json")
_vars_json = json.loads(open(_vars_path, 'r').read())
if deserialize:
return _vars_json.get(variable_name, {})
else:
return _vars_json.get(variable_name, "")
else:
raise
def send_slack_message(message, channel):
_web_hook_url = get_stored_variable('slack_web_hook')
post = {
"text": message,
"channel": channel
}
try:
json_data = json.dumps(post)
req = request.Request(
_web_hook_url,
data=json_data.encode('ascii'),
headers={'Content-Type': 'application/json'}
)
request.urlopen(req)
except request.HTTPError as em:
print('Failed to send slack messsage to the hook {hook}: {msg}, request: {req}'.format(
hook=_web_hook_url,
msg=str(em),
req=str(post)
))
```
Scheduler logs related to this event:
```
21-02-24 16:33:04,968] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type MODIFIED
[2021-02-24 16:33:04,968] {kubernetes_executor.py:202} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 Pending
[2021-02-24 16:33:04,979] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type DELETED
[2021-02-24 16:33:04,979] {kubernetes_executor.py:197} INFO - Event: Failed to start pod mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3, will reschedule
[2021-02-24 16:33:05,406] {kubernetes_executor.py:354} INFO - Attempting to finish pod; pod_id: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3; state: up_for_reschedule; annotations: {'dag_id': 'mock_dag_limr', 'task_id': 'SetupMockSparkDwhJob', 'execution_date': '2019-09-10T00:00:00+00:00', 'try_number': '9'}
[2021-02-24 16:33:05,419] {kubernetes_executor.py:528} INFO - Changing state of (TaskInstanceKey(dag_id='mock_dag_limr', task_id='SetupMockSparkDwhJob', execution_date=datetime.datetime(2019, 9, 10, 0, 0, tzinfo=tzlocal()), try_number=9), 'up_for_reschedule', 'mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3', 'airflow', '173647183') to up_for_reschedule
[2021-02-24 16:33:05,422] {scheduler_job.py:1206} INFO - Executor reports execution of mock_dag_limr.SetupMockSparkDwhJob execution_date=2019-09-10 00:00:00+00:00 exited with status up_for_reschedule for try_number 9
```
However task stays in failed state (not what scheduler says)
When pod completes on its own (fails, exits with 0), callbacks are triggered correctly
**What you expected to happen**: `on_failure_callback` is called regardless of how the pod exits, including SIGTERM-based interruptions: pod eviction, pod deletion
<!-- What do you think went wrong? --> Not sure really. We believe this code is executed since we get full stack trace
https://github.com/apache/airflow/blob/2.0.1/airflow/models/taskinstance.py#L1149
But then it is unclear why `finally` clause here does not run:
https://github.com/apache/airflow/blob/master/airflow/models/taskinstance.py#L1422
**How to reproduce it**:
With Airflow 2.0.1 running KubernetesExecutor, execute `kubectl delete ...` on any running task pod. Task operator should define `on_failure_callback`. In order to check that it is/not called, send data from it to any external logging system
**Anything else we need to know**:
Problem is persistent and only exists in 2.0.1 version
| https://github.com/apache/airflow/issues/14422 | https://github.com/apache/airflow/pull/15172 | e5d69ad6f2d25e652bb34b6bcf2ce738944de407 | def1e7c5841d89a60f8972a84b83fe362a6a878d | 2021-02-24T16:55:21Z | python | 2021-04-23T22:47:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,421 | ["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | NULL values in the operator column of task_instance table cause API validation failures | **Apache Airflow version**: 2.0.1
**Environment**: Docker on Linux Mint 20.1, image based on apache/airflow:2.0.1-python3.8
**What happened**:
I'm using the airflow API and the following exception occurred:
```python
>>> import json
>>> import requests
>>> from requests.auth import HTTPBasicAuth
>>> payload = {"dag_ids": ["{my_dag_id}"]}
>>> r = requests.post("https://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list", auth=HTTPBasicAuth('username', 'password'), data=json.dumps(payload), headers={'Content-Type': 'application/json'})
>>> r.status_code
500
>>> print(r.text)
{
"detail": "None is not of type 'string'\n\nFailed validating 'type' in schema['allOf'][0]['properties'][
'task_instances']['items']['properties']['operator']:\n {'type': 'string'}\n\nOn instance['task_instanc
es'][5]['operator']:\n None",
"status": 500,
"title": "Response body does not conform to specification",
"type": "https://airflow.apache.org/docs/2.0.1/stable-rest-api-ref.html#section/Errors/Unknown"
}
None is not of type 'string'
Failed validating 'type' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['ope
rator']:
{'type': 'string'}
On instance['task_instances'][5]['operator']:
None
```
This happens on all the "old" task instances before upgrading to 2.0.0
There is no issue with new task instances created after the upgrade.
**What do you think went wrong?**:
The `operator` column was introduced in 2.0.0. But during migration, all the existing database entries are filled with `NULL` values. So I had to execute this manually in my database
```sql
UPDATE task_instance SET operator = 'NoOperator' WHERE operator IS NULL;
```
**How to reproduce it**:
* Run airflow 1.10.14
* Create a DAG with multiple tasks and run them
* Upgrade airflow to 2.0.0 or 2.0.1
* Make the API call as above
**Anything else we need to know**:
Similar to https://github.com/apache/airflow/issues/13799 but not exactly the same
| https://github.com/apache/airflow/issues/14421 | https://github.com/apache/airflow/pull/16516 | 60925453b1da9fe54ca82ed59889cd65a0171516 | 087556f0c210e345ac1749933ff4de38e40478f6 | 2021-02-24T15:24:05Z | python | 2021-06-18T07:56:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,417 | ["airflow/providers/apache/druid/operators/druid.py", "tests/providers/apache/druid/operators/test_druid.py"] | DruidOperator failing to submit ingestion tasks : Getting 500 error code from Druid | Issue : When trying to submit an ingestion task using DruidOperator, getting 500 error code in response from Druid. And can see no task submitted in Druid console.
In Airflow 1.10.x everything works fine, but after upgrading to 2.0.1 it fails to submit the task. There is absolutely no change in the code/files except the import statements.
Resolution : I compared DruidOperator code for both Airflow 1.10.x & 2.0.1 and found one line causing the issue.
In Airflow 2.0.x, the JSON string is converted to a Python object before the indexing job is submitted, but it should be passed as a JSON string only.
In Airflow 1.10.x no such conversion happens, hence it works fine (please see the code snippets below).
I have already tried this change in my setup and re-ran the ingestion tasks. It is all working fine.
~~hook.submit_indexing_job(json.loads(self.json_index_file))~~
**hook.submit_indexing_job(self.json_index_file)**
Airflow 1.10.x - airflow/contrib/operators/druid_operator.py
```
def execute(self, context):
hook = DruidHook(
druid_ingest_conn_id=self.conn_id,
max_ingestion_time=self.max_ingestion_time
)
self.log.info("Submitting %s", self.index_spec_str)
hook.submit_indexing_job(self.index_spec_str)
```
Airflow 2.0.1 - airflow/providers/apache/druid/operators/druid.py
```
def execute(self, context: Dict[Any, Any]) -> None:
hook = DruidHook(druid_ingest_conn_id=self.conn_id, max_ingestion_time=self.max_ingestion_time)
self.log.info("Submitting %s", self.json_index_file)
hook.submit_indexing_job(json.loads(self.json_index_file))
```
**Apache Airflow version**: 2.0.x
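For illustration only (not the hook's actual code): assuming the hook ultimately hands the spec to `requests` via its `data=` parameter, as the snippets above suggest, passing the parsed Python object instead of the raw JSON string changes the request body that Druid receives:
```python
import json
import requests

url = "http://druid-master:8081/druid/indexer/v1/task"  # endpoint from the logs below
spec = '{"type": "index_hadoop", "spec": {"dataSchema": {"dataSource": "example"}}}'  # made-up spec

# data=<str> keeps the JSON text as the body (what Druid expects)
as_string = requests.Request("POST", url, data=spec,
                             headers={"Content-Type": "application/json"}).prepare()
# data=<dict> gets form-encoded, despite the JSON content type
as_dict = requests.Request("POST", url, data=json.loads(spec),
                           headers={"Content-Type": "application/json"}).prepare()

print(as_string.body)
print(as_dict.body)
```
That would be consistent with the 500 responses going away once the raw string is passed through unchanged.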
**Error Logs**:
```
[2021-02-24 06:42:24,287] {{connectionpool.py:452}} DEBUG - http://druid-master:8081 "POST /druid/indexer/v1/task HTTP/1.1" 500 15714
[2021-02-24 06:42:24,287] {{taskinstance.py:570}} DEBUG - Refreshing TaskInstance <TaskInstance: druid_compact_daily 2021-02-23T01:20:00+00:00 [running]> from DB
[2021-02-24 06:42:24,296] {{taskinstance.py:605}} DEBUG - Refreshed TaskInstance <TaskInstance: druid_compact_daily 2021-02-23T01:20:00+00:00 [running]>
[2021-02-24 06:42:24,298] {{taskinstance.py:1455}} ERROR - Did not get 200 when submitting the Druid job to http://druid-master:8081/druid/indexer/v1/task
```
| https://github.com/apache/airflow/issues/14417 | https://github.com/apache/airflow/pull/14418 | c2a0cb958835d0cecd90f82311e2aa8b1bbd22a0 | 59065400ff6333e3ff085f3d9fe9005a0a849aef | 2021-02-24T11:31:24Z | python | 2021-03-05T22:48:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,393 | ["scripts/ci/libraries/_build_images.sh"] | breeze: it relies on GNU date which is not the default on macOS | <!--
-->
**Apache Airflow version**:
2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **OS** macOS Catalina
**What happened**:
`breeze build-image` gives
```
Pulling the image apache/airflow:master-python3.6-ci
master-python3.6-ci: Pulling from apache/airflow
Digest: sha256:92351723f04bec6e6ef27c0be2b5fbbe7bddc832911988b2d18b88914bcc1256
Status: Downloaded newer image for apache/airflow:master-python3.6-ci
docker.io/apache/airflow:master-python3.6-ci
date: illegal option -- -
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
[-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
```
As I explained in [Slack](https://apache-airflow.slack.com/archives/CQ9QHSFQX/p1614092075007500) this is caused by
https://github.com/apache/airflow/blob/b9951279a0007db99a6f4c52197907ebfa1bf325/scripts/ci/libraries/_build_images.sh#L770
```
--build-arg AIRFLOW_IMAGE_DATE_CREATED="$(date --rfc-3339=seconds | sed 's/ /T/')" \
```
`--rfc-3339` is an option supported by GNU `date` but not by the regular `date` command present on macOS.
**What you expected to happen**:
`--rfc-3339` is an option supported by GNU `date` but not by the regular `date` command present on macOS.
**How to reproduce it**:
on macOS: `breeze build-image`
**Anything else we need to know**:
It happens every time
I think this can be solved either by checking for the presence of `gdate` and using it if present, or by adhering to POSIX `date` options (I'm not 100% sure, but I do believe the regular POSIX options are available in macOS's `date`).
| https://github.com/apache/airflow/issues/14393 | https://github.com/apache/airflow/pull/14458 | 997a009715fb82c241a47405cc8647d23580af25 | 64cf2aedd94d27be3ab7829b7c92bd6b1866295b | 2021-02-23T15:09:59Z | python | 2021-02-25T14:57:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,390 | ["BREEZE.rst", "breeze"] | breeze: capture output to a file | Reference: [Slack conversation with Jarek Potiuk](https://apache-airflow.slack.com/archives/CQ9QHSFQX/p1614073317003900) @potiuk
**Description**
Suggestion: it would be great if breeze captured the output into a log file by default (for example when running `breeze build-image`), so that it is easier to review the build process. I have seen at least one error invoking the `date` utility, and now I would need to run the whole thing again to capture it.
**Use case / motivation**
I want to be able to review what happened during breeze commands that output lots of text and take a long time, like `breeze build-image`.
Ideally I want this to happen automatically, as it is very time-consuming to rerun the `breeze` command to get the error.
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/14390 | https://github.com/apache/airflow/pull/14470 | 8ad2f9c64e9ce89c252cc61f450947d53935e0f2 | 4a54292b69bb9a68a354c34246f019331270df3d | 2021-02-23T14:46:09Z | python | 2021-02-26T20:49:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,384 | ["airflow/models/dagrun.py", "airflow/www/views.py"] | Scheduler ocassionally crashes with a TypeError when updating DagRun state | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**: Python 3.8
**What happened**:
Occasionally, the Airflow scheduler crashes with the following exception:
```
Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1382, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1521, in _do_scheduling
self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1760, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/dagrun.py", line 478, in update_state
self._emit_duration_stats_for_finished_state()
File "/usr/local/lib/python3.8/site-packages/airflow/models/dagrun.py", line 615, in _emit_duration_stats_for_finished_state
duration = self.end_date - self.start_date
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
Just before this error surfaces, Airflow typically logs either one of the following messages:
```
Marking run <DagRun sessionization @ 2021-02-15 08:05:00+00:00: scheduled__2021-02-15T08:05:00+00:00, externally triggered: False> failed
```
or
```
Marking run <DagRun sessionization @ 2021-02-16 08:05:00+00:00: scheduled__2021-02-16T08:05:00+00:00, externally triggered: False> successful
```
The cause of this issue appears to be that the scheduler is attempting to update the state of a `DagRun` instance that is in a _running_ state, but does **not** have a `start_date` set. This will eventually cause a `TypeError` to be raised at L615 in `_emit_duration_stats_for_finished_state()` because `None` is subtracted from a `datetime` object.
During my testing I was able to resolve the issue by manually updating any records in the `DagRun` table which are missing a `start_date`.
However, it is a bit unclear to me _how_ it is possible for a DagRun instance to be transitioned into a `running` state, without having a `start_date` set. I spent some time digging through the code, and I believe the only code path that would allow a `DagRun` to end up in such a scenario is the state transition that occurs at L475 in [DagRun](https://github.com/apache/airflow/blob/2.0.1/airflow/models/dagrun.py#L475) where `DagRun.set_state(State.RUNNING)` is invoked without verifying that a `start_date` is set.
**What you expected to happen**:
I expect the Airflow scheduler not to crash, and to handle this scenario gracefully.
I have the impression that this is an edge case, and even treating a missing `start_date` as equal to the set `end_date` in `_emit_duration_stats_for_finished_state()` seems like a more favorable solution to me than raising a `TypeError` in the scheduler.
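For illustration, a minimal sketch of the kind of guard I mean; this is not the actual Airflow code, and the real fix may instead be to guarantee `start_date` is always set when a run is put into the running state:
```python
from datetime import datetime, timedelta
from typing import Optional

def safe_duration(start_date: Optional[datetime], end_date: Optional[datetime]) -> Optional[timedelta]:
    """Guarded version of the `end_date - start_date` subtraction at dagrun.py L615."""
    if start_date is None or end_date is None:
        # Skip the duration stat instead of raising TypeError in the scheduler loop.
        return None
    return end_date - start_date

print(safe_duration(None, datetime(2021, 2, 23)))                     # None instead of TypeError
print(safe_duration(datetime(2021, 2, 22), datetime(2021, 2, 23)))    # 1 day, 0:00:00
```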
**How to reproduce it**:
I haven't been able to figure out a scenario which allows me to reproduce this issue reliably. We've had this issue surface for a fairly complex DAG twice in a time span of 5 days. We run over 25 DAGs on our Airflow instance, and so far the issue seems to be isolated to a single DAG.
**Anything else we need to know**:
While I'm unclear on what exactly causes a DagRun instance to not have a `start_date` set, the problem seems to be isolated to a single DAG on our Airflow instance. This DAG is fairly complex in the sense that it contains a number of root tasks, that have SubDagOperators set as downstream (leaf) dependencies. These SubDagOperators each represent a SubDag containing between 2 and 12 tasks. The SubDagOperator's have `depends_on_past` set to True, and `catchup` is enabled for the parent DAG. The parent DAG also has `max_active_runs` set to limit concurrency.
I also have the impression, that this issue mainly surfaces when there are multiple DagRuns running concurrently for this DAG, but I don't have hard evidence for this. I did at some point clear task instance states, and transition the DagRun's state from `failed` back to `running` through the Web UI around the period that some of these issues arose.
I've also been suspecting that this [block of code](https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L1498-L1511) in `_do_scheduling` may be related to this issue, in the sense that I've been suspecting that there exists an edge case in which Task Instances may be considered active for a particular `execution_date`, but for which the DagRun object itself is not "active". My hypothesis is that this would eventually cause the "inactive" DagRun to be transitioned to `running` in `DagRun.set_state()` without ensuring that a `start_date` was set for the DagRun. I haven't been able to gather strong evidence for this hypothesis yet, though, and I'm hoping that someone more familiar with the implementation will be able to provide some guidance as to whether that hypothesis makes sense or not.
| https://github.com/apache/airflow/issues/14384 | https://github.com/apache/airflow/pull/14452 | 258ec5d95e98eac09ecc7658dcd5226c9afe14c6 | 997a009715fb82c241a47405cc8647d23580af25 | 2021-02-23T11:38:10Z | python | 2021-02-25T14:40:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,374 | ["airflow/www/views.py"] | Search in "DAG Runs" crashed with "Conf" filter | <!--
-->
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): "v1.14.2"
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
- **Kernel** (e.g. `uname -a`): 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
1. Go to "Browse-> DAG Runs"
2. Add Filter, select "Conf", "Contains", Search
Error messages:
```
Python version: 3.6.12
Airflow version: 2.0.0
Node: airflow-webserver-6646b76f6d-kp9vr
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedFunction: operator does not exist: bytea ~~* bytea
LINE 4: WHERE dag_run.conf ILIKE '\x80049506000000000000008c02252594...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/views.py", line 551, in list
widgets = self._list()
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/baseviews.py", line 1127, in _list
page_size=page_size,
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/baseviews.py", line 1026, in _get_list_widget
page_size=page_size,
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/models/sqla/interface.py", line 425, in query
count = self.query_count(query, filters, select_columns)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/models/sqla/interface.py", line 347, in query_count
query, filters, select_columns=select_columns, aliases_mapping={}
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3803, in count
return self.from_self(col).scalar()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3523, in scalar
ret = self.one()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3490, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3459, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: bytea ~~* bytea
LINE 4: WHERE dag_run.conf ILIKE '\x80049506000000000000008c02252594...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
[SQL: SELECT count(*) AS count_1
FROM (SELECT dag_run.state AS dag_run_state, dag_run.id AS dag_run_id, dag_run.dag_id AS dag_run_dag_id, dag_run.execution_date AS dag_run_execution_date, dag_run.start_date AS dag_run_start_date, dag_run.end_date AS dag_run_end_date, dag_run.run_id AS dag_run_run_id, dag_run.creating_job_id AS dag_run_creating_job_id, dag_run.external_trigger AS dag_run_external_trigger, dag_run.run_type AS dag_run_run_type, dag_run.conf AS dag_run_conf, dag_run.last_scheduling_decision AS dag_run_last_scheduling_decision, dag_run.dag_hash AS dag_run_dag_hash
FROM dag_run
WHERE dag_run.conf ILIKE %(conf_1)s) AS anon_1]
[parameters: {'conf_1': <psycopg2.extensions.Binary object at 0x7f5dad9c2c60>}]
(Background on this error at: http://sqlalche.me/e/13/f405)
```
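My reading of the traceback (treat it as an assumption): `DagRun.conf` is stored as a `PickleType` column, so the filter value gets pickled into `bytea` before the comparison, and Postgres has no `ILIKE` operator for `bytea`. A quick illustration of where the `'\x8004...'` literal comes from:
```python
import pickle

# The "Contains" filter wraps the term in %...%; pickling it is what produces the
# bytea literal in the SQL above (protocol 4 matches Python 3.6's highest protocol).
print(pickle.dumps("%my search term%", protocol=4))
```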
**What you expected to happen**: Not to crash.
**How to reproduce it**: See above
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14374 | https://github.com/apache/airflow/pull/15099 | 6b9b0675c5ece22a1c382ebb9b904fb18f486211 | 3585b3c54ce930d2ce2eaeddc238201a0a867018 | 2021-02-23T05:57:16Z | python | 2021-03-30T19:10:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,364 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Missing schedule_delay metric | <!--
-->
**Apache Airflow version**: 2.0.0 but applicable to master
**Environment**: Running on ECS but not relevant to question
**What happened**: I am not seeing the metric dagrun.schedule_delay.<dag_id> being reported. A search in the codebase seems to reveal that it no longer exists. It was originally added in https://github.com/apache/airflow/pull/5050.
I suspect either:
1. This metric was intentionally removed, in which case the docs should be updated to remove it.
2. It was unintentionally removed during a refactor, in which case we should add it back.
3. I am bad at searching through code, and someone could hopefully point me to where it is reported from now.
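For reference, roughly the shape of the emission I would expect somewhere in the DagRun/scheduler code; this is reconstructed from the metrics docs, not quoted from #5050, so the call site and the exact delta definition are assumptions:
```python
from datetime import datetime, timezone
from airflow.stats import Stats

# dag_id and timestamps are made up for illustration; the delta is assumed to be
# "first task start - execution date" for scheduled runs.
execution_date = datetime(2021, 2, 21, 8, 0, tzinfo=timezone.utc)
first_task_start = datetime(2021, 2, 22, 8, 0, 42, tzinfo=timezone.utc)

Stats.timing("dagrun.schedule_delay.example_dag", first_task_start - execution_date)
```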
**How to reproduce it**:
https://github.com/apache/airflow/search?q=schedule_delay
| https://github.com/apache/airflow/issues/14364 | https://github.com/apache/airflow/pull/15105 | 441b4ef19f07d8c72cd38a8565804e56e63b543c | ca4c4f3d343dea0a034546a896072b9c87244e71 | 2021-02-22T18:18:44Z | python | 2021-03-31T12:38:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,363 | ["airflow/providers/google/cloud/hooks/gcs.py"] | Argument order change in airflow.providers.google.cloud.hooks.gcs.GCSHook:download appears to be a mistake. | ### Description
https://github.com/apache/airflow/blob/6019c78cb475800f58714a9dabb747b9415599c8/airflow/providers/google/cloud/hooks/gcs.py#L262-L265
Was this order swap of the `object_name` and `bucket_name` arguments in the `GCSHook.download` a mistake? The `upload` and `delete` methods still use `bucket_name` first and the commit where this change was made `1845cd11b77f302777ab854e84bef9c212c604a0` was supposed to just add strict type checking. The docstring also appears to reference the old order. | https://github.com/apache/airflow/issues/14363 | https://github.com/apache/airflow/pull/14497 | 77f5629a80cfec643bd3811bc92c48ef4ec39ceb | bfef559cf6138eec3ac77c64289fb1d45133d8be | 2021-02-22T17:21:20Z | python | 2021-02-27T09:03:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,331 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Airflow stable API taskInstance call fails if a task is removed from running DAG | **Apache Airflow version**: 2.0.1
**Environment**: Docker on Win 10 with WSL, image based on `apache/airflow:2.0.1-python3.8`
**What happened**:
I'm using the airflow API and the following (what I believe to be a) bug popped up:
```Python
>>> import requests
>>> r = requests.get("http://localhost:8084/api/v1/dags/~/dagRuns/~/taskInstances", auth=HTTPBasicAuth('username', 'password'))
>>> r.status_code
500
>>> print(r.text)
{
"detail": "'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled']\n\nFailed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']:\n {'description': 'Task state.',\n 'enum': ['success',\n 'running',\n 'failed',\n 'upstream_failed',\n 'skipped',\n 'up_for_retry',\n 'up_for_reschedule',\n 'queued',\n 'none',\n 'scheduled'],\n 'nullable': True,\n 'type': 'string',\n 'x-scope': ['',\n '#/components/schemas/TaskInstanceCollection',\n '#/components/schemas/TaskInstance']}\n\nOn instance['task_instances'][16]['state']:\n 'removed'",
"status": 500,
"title": "Response body does not conform to specification",
"type": "https://airflow.apache.org/docs/2.0.1rc2/stable-rest-api-ref.html#section/Errors/Unknown"
}
>>> print(r.json()["detail"])
'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled']
Failed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']:
{'description': 'Task state.',
'enum': ['success',
'running',
'failed',
'upstream_failed',
'skipped',
'up_for_retry',
'up_for_reschedule',
'queued',
'none',
'scheduled'],
'nullable': True,
'type': 'string',
'x-scope': ['',
'#/components/schemas/TaskInstanceCollection',
'#/components/schemas/TaskInstance']}
On instance['task_instances'][16]['state']:
'removed'
```
This happened after I changed a DAG in the corresponding instance, thus a task was removed from a DAG while the DAG was running.
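As a quick sanity check, 'removed' is a real task instance state on the server side, which is presumably why it can appear in responses even though the OpenAPI enum does not list it:
```python
from airflow.utils.state import State

print(State.REMOVED)  # 'removed'
```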
**What you expected to happen**:
Give me all task instances, whether including the removed ones or not is up to the airflow team to decide (no preferences from my side, though I'd guess it makes more sense to supply all data as it is available).
**How to reproduce it**:
- Run airflow
- Create a DAG with multiple tasks
- While the DAG is running, remove one of the tasks (ideally one that did not yet run)
- Make the API call as above | https://github.com/apache/airflow/issues/14331 | https://github.com/apache/airflow/pull/14381 | ea7118316660df43dd0ac0a5e72283fbdf5f2396 | 7418679591e5df4ceaab6c471bc6d4a975201871 | 2021-02-20T13:15:11Z | python | 2021-03-08T21:24:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,327 | ["airflow/utils/json.py"] | Kubernetes Objects are not serializable and break Graph View in UI | <!--
-->
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.12
**Environment**:
- **Cloud provider or hardware configuration**: AWS EKS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
When you click on Graph view for some DAGs in 2.0.1 the UI errors out (logs below).
**What you expected to happen**:
The Graph view to display
**How to reproduce it**:
It is not clear to me. This only happens in a handful of our DAGs. Also, the tree view displays fine.
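The serialization failure itself can be reproduced outside the webserver (assuming the `kubernetes` client package is installed); this only shows where the `TypeError` comes from and that the k8s client can flatten its own models:
```python
import json
from kubernetes.client import ApiClient, V1ResourceRequirements

resources = V1ResourceRequirements(requests={"cpu": "100m", "memory": "128Mi"})

try:
    json.dumps(resources)
except TypeError as err:
    print(err)  # Object of type V1ResourceRequirements is not JSON serializable

# The client can sanitize its own models into plain dicts:
print(json.dumps(ApiClient().sanitize_for_serialization(resources)))
```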
**Anything else we need to know**:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 2080, in graph
return self.render_template(
File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 396, in render_template
return super().render_template(
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 280, in render_template
return render_template(
File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 137, in render_template
return _render(
File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/usr/local/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 21, in top-level template code
{% from 'appbuilder/loading_dots.html' import loading_dots %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/dag.html", line 21, in top-level template code
{% from 'appbuilder/dag_docs.html' import dag_docs %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/main.html", line 20, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 60, in top-level template code
{% block tail %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 145, in block "tail"
var task_instances = {{ task_instances|tojson }};
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 376, in tojson_filter
return Markup(htmlsafe_dumps(obj, **kwargs))
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 290, in htmlsafe_dumps
dumps(obj, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 211, in dumps
rv = _json.dumps(obj, **kwargs)
File "/usr/local/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/local/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/json.py", line 74, in _default
raise TypeError(f"Object of type '{obj.__class__.__name__}' is not JSON serializable")
TypeError: Object of type 'V1ResourceRequirements' is not JSON serializable
``` | https://github.com/apache/airflow/issues/14327 | https://github.com/apache/airflow/pull/15199 | 6706b67fecc00a22c1e1d6658616ed9dd96bbc7b | 7b577c35e241182f3f3473ca02da197f1b5f7437 | 2021-02-20T00:17:59Z | python | 2021-04-05T11:41:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,326 | ["airflow/kubernetes/pod_generator.py", "tests/kubernetes/test_pod_generator.py"] | Task Instances stuck in "scheduled" state | <!--
-->
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.12
**Environment**:
- **Cloud provider or hardware configuration**: AWS EKS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Several Task Instances get `scheduled` but never move to `queued` or `running`. They then become orphan tasks.
**What you expected to happen**:
Tasks to get scheduled and run :)
**How to reproduce it**:
**Anything else we need to know**:
I believe the issue is caused by this [limit](https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L923). If we have more Task Instances than Pool Slots Free then some Task Instances may never show up in this query.
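A toy illustration of the concern, in plain Python with made-up numbers: if the scheduler only ever examines the first `limit` rows and none of those can be queued, the rows past the limit are never even looked at:
```python
scheduled = [("busy_pool", i) for i in range(10)] + [("free_pool", i) for i in range(5)]
open_slots = {"busy_pool": 0, "free_pool": 5}
limit = 8  # stand-in for the max_tis value used in the scheduler query

examined = scheduled[:limit]                              # only the first `limit` rows are considered
queued = [ti for ti in examined if open_slots[ti[0]] > 0]
print(queued)  # [] -- the free_pool tasks sit beyond the limit and never leave "scheduled"
```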
| https://github.com/apache/airflow/issues/14326 | https://github.com/apache/airflow/pull/14703 | b1ce429fee450aef69a813774bf5d3404d50f4a5 | b5e7ada34536259e21fca5032ef67b5e33722c05 | 2021-02-20T00:11:56Z | python | 2021-03-26T14:41:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,299 | ["airflow/www/templates/airflow/dag_details.html"] | UI: Start Date is incorrect in "DAG Details" view | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
The start date in the "DAG Details" view `{AIRFLOW_URL}/dag_details?dag_id={dag_id}` is incorrect if there's a schedule for the DAG.
**What you expected to happen**:
Start date should be the same as the specified date in the DAG.
**How to reproduce it**:
For example, I created a DAG with a start date of `2019-07-09` but the DAG details view shows:

Minimal code block to reproduce:
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
START_DATE = datetime(2019, 7, 9)
DAG_ID = '*redacted*'
dag = DAG(
dag_id=DAG_ID,
description='*redacted*',
catchup=False,
start_date=START_DATE,
schedule_interval=timedelta(weeks=1),
)
start = DummyOperator(task_id='start', dag=dag)
```
| https://github.com/apache/airflow/issues/14299 | https://github.com/apache/airflow/pull/16206 | 78c4f1a46ce74f13a99447207f8cdf0fcfc7df95 | ebc03c63af7282c9d826054b17fe7ed50e09fe4e | 2021-02-18T20:14:50Z | python | 2021-06-08T14:13:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 14,286 | ["airflow/providers_manager.py"] | from 'apache-airflow-providers-google' package: No module named 'airflow.providers.postgres' | <!--
-->
**Apache Airflow version**: 2.0.1
**Environment**:
- **Cloud provider or hardware configuration**: MacBook Pro
- **OS** (e.g. from /etc/os-release): Catalina 10.15.7
- **Kernel** (e.g. `uname -a`): Darwin CSchillebeeckx-0589.local 19.6.0 Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6153.141.2.2~1/RELEASE_X86_64 x86_64
- **Install tools**: pip
requirements.txt
```
apache-airflow[crypto,celery,amazon,mysql,jdbc,password,redis,slack,snowflake,ssh,google,databricks,mongo,zendesk,papermill,salesforce]==2.0.1
pyarrow==0.17.1
iniconfig==1.1.1
sqlparse==0.4.1
google-api-python-client===1.12.8
google-auth==1.27.0
google-api-core==1.26.0
avro-python3==1.10.0
databricks-connect==7.3.8
matplotlib==3.3.4
scikit-learn==0.24.1
ipykernel==5.4.3
flower==0.9.7
```
**What happened**:
I'm using MySQL as a backend and am using the `GoogleBaseHook`; when I run the webserver I see:
```
webserver_1 | [2021-02-16 23:39:13,008] {providers_manager.py:299} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.postgres'
webserver_1 | [2021-02-16 23:39:13,009] {providers_manager.py:299} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLDatabaseHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.postgres'
webserver_1 | [2021-02-16 23:39:13,066] {providers_manager.py:299} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.postgres'
webserver_1 | [2021-02-16 23:39:13,067] {providers_manager.py:299} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLDatabaseHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.postgres'
```
If I add the extra package `postgres` the warnings are not present anymore.
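A quick way to confirm it really is the missing `postgres` provider import, run in the same environment as the webserver:
```python
import importlib

for mod in (
    "airflow.providers.google.cloud.hooks.cloud_sql",
    "airflow.providers.postgres.hooks.postgres",
):
    try:
        importlib.import_module(mod)
        print(mod, "-> OK")
    except ImportError as err:
        print(mod, "->", err)
```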
**What you expected to happen**:
No warnings
**How to reproduce it**:
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14286 | https://github.com/apache/airflow/pull/14903 | 178dee9a5ed0cde3d7a7d4a47daeae85408fcd67 | 2f32df7b711cf63d18efd5e0023b22a79040cc86 | 2021-02-17T22:44:55Z | python | 2021-03-20T01:04:52Z |