status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 16,460 | ["airflow/cli/commands/dag_command.py"] | Typos in Backfill's `task-regex` param | Example:
DAG structure:
```
from datetime import datetime

from airflow import DAG
from airflow.models.dag import DagContext
from airflow.operators.bash import BashOperator

default_args = {
'owner': 'dimon',
'depends_on_past': False,
'start_date': datetime(2021, 1, 10)
}
dag = DAG(
'dummy-dag',
schedule_interval='21 2 * * *',
catchup=False,
default_args=default_args
)
DagContext.push_context_managed_dag(dag)
task1 = BashOperator(task_id='task1', bash_command='echo 1')
task2 = BashOperator(task_id='task2', bash_command='echo 2')
task2 << task1
task3 = BashOperator(task_id='task3', bash_command='echo 3')
```
Let's say you made a typo and passed `--task-regex task4`.
When the backfill starts, it first creates a new empty DagRun and puts it in the DB. Then the backfill job tries to find tasks that match the regex you entered, obviously finds none, and gets stuck in the "running" state together with the newly created DagRun forever.
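For illustration, the failure boils down to the mistyped pattern matching none of the DAG's task ids (task list reused from the example above; this is only a sketch of the mechanism, not Airflow's internal code):
```python
import re

task_ids = ["task1", "task2", "task3"]
pattern = re.compile("task4")  # the mistyped --task-regex value

matching = [task_id for task_id in task_ids if pattern.search(task_id)]
print(matching)  # [] -> nothing to schedule, but the DagRun was already created
```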
| https://github.com/apache/airflow/issues/16460 | https://github.com/apache/airflow/pull/16461 | bf238aa21da8c0716b251575216434bb549e64f0 | f2c79b238f4ea3ee801038a6305b925f2f4e753b | 2021-06-15T14:11:59Z | python | 2021-06-16T20:07:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,435 | ["airflow/www/static/css/main.css", "airflow/www/utils.py", "setup.cfg", "tests/www/test_utils.py"] | Switch Markdown engine to markdown-it-py | Copying from #16414:
The current Markdown engine does not support [fenced code blocks](https://python-markdown.github.io/extensions/fenced_code_blocks/), so it still won’t work after this change. Python-Markdown’s fenced code support is pretty spotty, and if we want to fix that for good IMO we should switch to another Markdown parser. [markdown-it-py](https://github.com/executablebooks/markdown-it-py) (the parser backing [MyST](https://myst-parser.readthedocs.io/en/latest/using/intro.html)) is a popular choice for [CommonMark](https://commonmark.org/) support, which is much closer to [GitHub-Flavored Markdown](https://github.github.com/gfm/) which almost everyone thinks is the standard Markdown (which is unfortunately because GFM is not standard, but that’s how the world works). | https://github.com/apache/airflow/issues/16435 | https://github.com/apache/airflow/pull/19702 | 904cc121b83ecfaacba25433a7911a2541b2c312 | 88363b543f6f963247c332e9d7830bc782ed6e2d | 2021-06-14T15:09:17Z | python | 2022-06-21T09:24:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,434 | ["airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | Properly handle HTTP header 'Referrer-Policy' | **Description**
Properly set [HTTP Security Header `Referrer-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy) instead of relying on browser or environment defaults.
**Use case / motivation**
I'm not sure if this is a feature request or a bug, but I first wanted to start a discussion.
1. Airflow in some places has a hard requirement that the `referrer` header is required for navigation/functionality, e.g. [here](https://github.com/apache/airflow/blob/69a1a732a034406967e82a59be6d3c019e94a07b/airflow/www/views.py#L2653).
There are numerous places where this header is _not_ needed.
2. I'm deferring to an [external source](https://web.dev/referrer-best-practices/#why-%22explicitly%22) which gives a good overview and makes good arguments for setting it explicitly:
> Why "explicitly"?
If no referrer policy is set, the browser's default policy will be used - in fact, websites often defer to the browser's default. But this is not ideal, because:
> * Browser default policies are either `no-referrer-when-downgrade`, `strict-origin-when-cross-origin`, or stricter - depending on the browser and mode (private/incognito). **So your website won't behave predictably across browsers**.
> * Browsers are adopting stricter defaults such as `strict-origin-when-cross-origin` and mechanisms such as referrer trimming for cross-origin requests. Explicitly opting into a privacy-enhancing policy before browser defaults change gives you control and helps you run tests as you see fit.
Therefore, we have an implicit coupling to the browser's default behaviour.
3. There are (suggested) best-practices like injecting "secure" headers yourself **in case the application does not provide explicit values**. [This example](https://blogs.sap.com/2019/02/11/kubernetes-security-secure-by-default-headers-with-envoy-and-istio/) uses service mesh functionality to set `Referrer-Policy: no-referrer` if the service/pod app does not set something itself.
---
→ There are three obvious ways to tackle this:
1. Document the "minimum requirement", e.g. explicitly stipulate the lack of support for policies like `Referrer-Policy: no-referrer`.
2. Explicitly set a sane (configurable?) global value, e.g. `strict-origin-when-cross-origin` (a sketch follows after this list).
3. Explicitly set specific values, depending on which page the user is on (and might go to).
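As an illustration of option 2, a global default could be injected at the Flask layer roughly like this (a sketch only; how this would hook into Airflow's webserver app factory is an assumption):
```python
from flask import Flask

app = Flask(__name__)  # stand-in for Airflow's Flask app object

@app.after_request
def apply_referrer_policy(response):
    # Only set a default if a view has not already chosen a more specific policy
    response.headers.setdefault("Referrer-Policy", "strict-origin-when-cross-origin")
    return response
```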
**Are you willing to submit a PR?**
That depends on the preferred solution 😬.
I'm quite new in this area but _might_ be able to tackle solutions 1/2 with some guidance/help.
At the same time I feel like 3 is the "proper" solution, and for that I lack a **lot** of in-depth Airflow knowledge.
**Related Issues**
| https://github.com/apache/airflow/issues/16434 | https://github.com/apache/airflow/pull/21751 | 900bad1c67654252196bb095a2a150a23ae5fc9a | df31902533e94b428e1fa19e5014047f0bae6fcc | 2021-06-14T13:11:42Z | python | 2022-02-27T00:12:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,379 | ["airflow/utils/json.py", "tests/utils/test_json.py"] | Airflow Stable REST API [GET api/v1/pools] issue | **Apache Airflow version**: v2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: AWS
- **Cloud provider or hardware configuration**: AWS EC2 Instance
- **OS** (e.g. from /etc/os-release): Ubuntu Server 20.04 LTS
- **Kernel** (e.g. `uname -a`): Linux ip-172-31-23-31 5.4.0-1048-aws #50-Ubuntu SMP Mon May 3 21:44:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**: Python version: 3.8.5
**What happened**: Using the Airflow Stable REST API [GET api/v1/pools] results in an "Ooops!" error page. This only occurs when the pools have "Running Slots". If no tasks are running and the slots are zero, then it works just fine.
```
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.8.5
Airflow version: 2.0.2
Node: ip-172-31-23-31.ec2.internal
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/decorators/validation.py", line 384, in wrapper
return function(request)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/decorators/response.py", line 104, in wrapper
return _wrapper(request, response)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/decorators/response.py", line 89, in _wrapper
self.operation.api.get_connexion_response(response, self.mimetype)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/apis/abstract.py", line 351, in get_connexion_response
response = cls._response_from_handler(response, mimetype)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/apis/abstract.py", line 331, in _response_from_handler
return cls._build_response(mimetype=mimetype, data=response, extra_context=extra_context)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/apis/flask_api.py", line 173, in _build_response
data, status_code, serialized_mimetype = cls._prepare_body_and_status_code(data=data, mimetype=mimetype, status_code=status_code, extra_context=extra_context)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/apis/abstract.py", line 403, in _prepare_body_and_status_code
body, mimetype = cls._serialize_data(data, mimetype)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/apis/flask_api.py", line 190, in _serialize_data
body = cls.jsonifier.dumps(data)
File "/home/tool/gto_env/lib/python3.8/site-packages/connexion/jsonifier.py", line 44, in dumps
return self.json.dumps(data, **kwargs) + '\n'
File "/home/tool/gto_env/lib/python3.8/site-packages/flask/json/__init__.py", line 211, in dumps
rv = _json.dumps(obj, **kwargs)
File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.8/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.8/json/encoder.py", line 325, in _iterencode_list
yield from chunks
File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/tool/gto_env/lib/python3.8/site-packages/airflow/utils/json.py", line 74, in _default
raise TypeError(f"Object of type '{obj.__class__.__name__}' is not JSON serializable")
TypeError: Object of type 'Decimal' is not JSON serializable
```
**What you expected to happen**: I expect the appropriate JSON response
**How to reproduce it**:
On an Airflow instance, run some tasks and, while the tasks are running, query the pools via the API. Note that you have to query the specific pool that has tasks running; if you avoid that pool using limit and/or offset, the issue will not occur. You must return a pool with running_slots > 0.
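The underlying serialization failure can be reproduced without Airflow at all; a minimal sketch (the slot value is made up, but SQLAlchemy aggregates can return `Decimal` in exactly this way):
```python
import decimal
import json

payload = {"pool": "default_pool", "running_slots": decimal.Decimal("3")}

# Raises TypeError: Object of type Decimal is not JSON serializable
json.dumps(payload)
```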
**Anything else we need to know**:
Not really
| https://github.com/apache/airflow/issues/16379 | https://github.com/apache/airflow/pull/16383 | d42970124733c5dff94a1d3a2a71b9988c547aab | df8a87779524a713971e8cf75ddef927dc045cee | 2021-06-10T22:06:12Z | python | 2021-06-21T08:29:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,367 | ["airflow/www/static/js/tree.js", "airflow/www/templates/airflow/tree.html"] | Tree view shown incorrect dag runs | Apache Airflow version: 2.1.0
On Tree view, switch to 50 Runs, and the view is broken:

| https://github.com/apache/airflow/issues/16367 | https://github.com/apache/airflow/pull/16437 | 5c86e3d50970e61d0eabd0965ebdc7b5ecf3bf14 | 6087a09f89c7da4aac47eab3756a7fe24e3b602b | 2021-06-10T11:53:47Z | python | 2021-06-14T20:02:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,364 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "docs/apache-airflow-providers-ssh/connections/ssh.rst", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | Timeout is ambiguous in SSHHook and SSHOperator | In SSHHook the timeout argument of the constructor is used to set a connection timeout. This is fine.
But in SSHOperator the timeout argument of the constructor is used for *both* the timeout of the SSHHook *and* the timeout of the command itself (see paramiko's ssh client exec_command use of the timeout parameter). This ambiguous use of the same parameter is very dirty.
I see two ways to clean the behaviour:
1. Let the SSHHook constructor be the only way to handle the connection timeout (thus, if one wants a specific timeout they should explicitly build a hook to be passed to the operator using the operator's constructor; see the sketch below).
2. Split the timeout argument in SSHOperator into two arguments conn_timeout and cmd_timeout for example.
The choice between 1 and 2 depends on how frequently people are expected to want to change the connection timeout. If it is something very frequent, then go for 2; if not, go for 1.
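A rough sketch of what option 1 looks like from a DAG author's point of view (connection id and command are placeholders, not part of the original report):
```python
from airflow.providers.ssh.hooks.ssh import SSHHook
from airflow.providers.ssh.operators.ssh import SSHOperator

# The connection timeout is owned by the hook only
hook = SSHHook(ssh_conn_id="my_ssh_conn", timeout=10)

run_cmd = SSHOperator(
    task_id="run_cmd",
    ssh_hook=hook,
    command="echo hello",
)
```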
BR and thanks for the code! | https://github.com/apache/airflow/issues/16364 | https://github.com/apache/airflow/pull/17236 | 0e3b06ba2f3898c938c3d191d0c2bc8d85c318c7 | 68d99bc5582b52106f876ccc22cc1e115a42b252 | 2021-06-10T09:32:15Z | python | 2021-09-10T13:16:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,363 | ["scripts/in_container/prod/entrypoint_prod.sh"] | _PIP_ADDITIONAL_REQUIREMENTS environment variable of the container image cannot install more than one package | **Apache Airflow version**: 2.1.0
**Environment**: Docker
**What happened**: I tried to install more than one pip package using the _PIP_ADDITIONAL_REQUIREMENTS environment variable when running the Airflow image built using the latest Dockerfile. My _PIP_ADDITIONAL_REQUIREMENTS was set to "pandas scipy". The result was `ERROR: Invalid requirement: 'pandas scipy'`
**What you expected to happen**: I expected both pandas and scipy to install without errors. I believe that the image is now trying to install one package called "pandas scipy", which doesn't exist. I believe that removing the double quotation marks surrounding `${_PIP_ADDITIONAL_REQUIREMENTS=}` in this line of code would solve the issue: https://github.com/apache/airflow/blob/main/scripts/in_container/prod/entrypoint_prod.sh#L327
**How to reproduce it**: Using image built using the latest Dockerfile, try to run the image with `docker run --rm -it --env _PIP_ADDITIONAL_REQUIREMENTS="pandas scipy" image:tag`
| https://github.com/apache/airflow/issues/16363 | https://github.com/apache/airflow/pull/16382 | 4ef804ffa2c3042ca49a3beeaa745e068325d51b | 01e546b33c7ada1956de018474d0b311cada8676 | 2021-06-10T07:41:13Z | python | 2021-06-11T13:31:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,359 | ["airflow/www/static/js/graph.js"] | Dag graph aligned at bottom when expanding a TaskGroup | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** : v1.17.5
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-scheduler-5c6fcfbf9d-mh57k 4.14.138-rancher #1 SMP Sat Aug 10 11:25:46 UTC 2019 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
When expanding a TaskGroup, the graph is placed at the bottom (it disappears from the current display).
Graph collapsed placed at top:

Graph at bottom when clicking on a TaskGroup:

**What you expected to happen**:
Maintain the graph at the top, to avoid having to scroll down.
| https://github.com/apache/airflow/issues/16359 | https://github.com/apache/airflow/pull/16484 | c158d4c5c4e2fa9eb476fd49b6db4781550986a5 | f1675853a5ed9b779ee2fc13bb9aa97185472bc7 | 2021-06-10T01:20:28Z | python | 2021-06-16T18:20:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,356 | ["airflow/models/serialized_dag.py", "tests/models/test_serialized_dag.py"] | exception when root account goes to http://airflow.ordercapital.com/dag-dependencies | Happens every time
```
Python version: 3.8.10
Airflow version: 2.1.0
Node: airflow-web-55974db849-5bdxq
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/www/views.py", line 4004, in list
self._calculate_graph()
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/www/views.py", line 4023, in _calculate_graph
for dag, dependencies in SerializedDagModel.get_dag_dependencies().items():
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/serialized_dag.py", line 321, in get_dag_dependencies
dependencies[row[0]] = [DagDependency(**d) for d in row[1]]
TypeError: 'NoneType' object is not iterable
```
| https://github.com/apache/airflow/issues/16356 | https://github.com/apache/airflow/pull/16393 | 3f674bd6bdb281cd4c911f8b1bc7ec489a24c49d | 0fa4d833f72a77f30bafa7c32f12b27c0ace4381 | 2021-06-09T21:11:23Z | python | 2021-06-15T19:21:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,328 | ["airflow/models/serialized_dag.py", "airflow/www/views.py"] | error on click in dag-dependencies - airflow 2.1 | Python version: 3.7.9
Airflow version: 2.1.0
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/www/views.py", line 4003, in list
if SerializedDagModel.get_max_last_updated_datetime() > self.last_refresh:
TypeError: '>' not supported between instances of 'NoneType' and 'datetime.datetime'
```
**What you expected to happen**:
See the DAG dependencies.
**What do you think went wrong?**
It happens only if I don't have any DAG yet.
**How to reproduce it**:
Without any DAG created, click in
Menu -> Browse -> DAG Dependencies
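A self-contained illustration of the failing comparison from the traceback, plus a defensive guard (this is an assumption about the shape of a fix, not the merged patch):
```python
from datetime import datetime, timezone

# Stand-in for SerializedDagModel.get_max_last_updated_datetime(), which returns
# None when the serialized_dag table is empty (no DAGs yet).
max_last_updated = None
last_refresh = datetime.now(timezone.utc)

if max_last_updated is not None and max_last_updated > last_refresh:
    print("recalculate the dependency graph")
```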
| https://github.com/apache/airflow/issues/16328 | https://github.com/apache/airflow/pull/16345 | 0fa4d833f72a77f30bafa7c32f12b27c0ace4381 | 147bcecc4902793e0b913dfdad1bd799621971c7 | 2021-06-08T15:26:46Z | python | 2021-06-15T19:23:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,326 | ["airflow/jobs/base_job.py", "tests/jobs/test_base_job.py"] | CeleryKubernetesExecutor is broken in 2.1.0 | Tested with both chart 1.1.0rc1 (i.e. main branch r.n.) and 1.0.0
In Airflow 2.1.0, the scheduler does not exit immediately (this was an issue < 2.1.0), but all tasks fail like this:
```
2021-06-08 15:30:17,167] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: sqoop_acquisition.terminate_job_flow 2021-06-08 13:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,170] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.state_mapping.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,171] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.app_event_taxonomy.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,172] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.strain_flavors.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:19,053] {scheduler_job.py:1205} INFO - Executor reports execution of reporting_8hr.dev.cannalytics.feature_duration.sql execution_date=2021-06-08 07:00:00+00:00 exited with status failed for try_number 1
[2021-06-08 15:30:19,125] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: reporting_8hr.dev.cannalytics.feature_duration.sql 2021-06-08 07:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:23,842] {dagrun.py:429} ERROR - Marking run <DagRun gsheets @ 2021-06-08 14:00:00+00:00: scheduled__2021-06-08T14:00:00+00:00, externally triggered: False> failed
```
@kaxil @jedcunningham do you see this when you run CKE? Any suggestions? | https://github.com/apache/airflow/issues/16326 | https://github.com/apache/airflow/pull/16700 | 42b74a7891bc17fed0cf19e1c7f354fdcb3455c9 | 7857a9bde2e189881f87fe4dc0cdce7503895c03 | 2021-06-08T14:36:18Z | python | 2021-06-29T22:39:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,310 | ["airflow/utils/db.py", "airflow/utils/session.py"] | Enable running airflow db init in parallel | **Apache Airflow version**:
2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Not applicable
**Environment**:
- **Cloud provider or hardware configuration**: None
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
1. Ran airflow db init on mysql in parallel in 2 command lines. Only one command did the migrations, the other one waited. But connections were inserted twice. I would like them not to be added twice.
2. Ran airflow db init on postgres in parallel in 2 command lines. Both command lines started doing migrations on the same db in parallel. I would like one command to run, the other to wait.
**What you expected to happen**:
1. For MySQL. Connections and other config objects should be inserted only once.
2. For Postgres. Only one migration should be performed at the same time for the same db.
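For what it's worth, the Postgres behaviour described in point 2 can be approximated with a session-level advisory lock around the migration; a rough sketch (DSN and lock key are made up, and this is not what Airflow currently does):
```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/airflow")  # placeholder DSN

with engine.connect() as conn:
    conn.execute(text("SELECT pg_advisory_lock(72707369)"))  # arbitrary app-level lock key
    try:
        pass  # run the schema migrations / default-connection inserts here
    finally:
        conn.execute(text("SELECT pg_advisory_unlock(72707369)"))
```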
**How to reproduce it**:
Scenario 1:
Setup Airflow so that it uses MySQL.
Run `airflow db init` in 2 command lines, side by side.
Scenario 2:
Setup Airflow so that it uses Postgres.
Run `airflow db init` in 2 command lines, side by side.
**Anything else we need to know**:
This problem occurs every time. | https://github.com/apache/airflow/issues/16310 | https://github.com/apache/airflow/pull/17078 | 24d02bfa840ae2a315af4280b2c185122e3c30e1 | fbc945d2a2046feda18e7a1a902a318dab9e6fd2 | 2021-06-07T15:05:41Z | python | 2021-07-19T09:51:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,306 | ["airflow/providers/tableau/hooks/tableau.py", "docs/apache-airflow-providers-tableau/connections/tableau.rst", "tests/providers/tableau/hooks/test_tableau.py"] | Tableau connection - Flag to disable SSL | **Description**
To add a new flag to be able to disable SSL in the Tableau connection (`{"verify": "False"}`?), as it is not present in the current version, apache-airflow-providers-tableau 1.0.0.
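For illustration, the requested flag would presumably live in the connection's Extra field; a hypothetical sketch (conn id, host and the exact extra key are assumptions, since the provider does not support this yet):
```python
from airflow.models.connection import Connection

conn = Connection(
    conn_id="tableau_default",
    conn_type="tableau",
    host="https://tableau.example.com",
    extra='{"verify": "False"}',  # proposed flag to disable SSL verification
)
```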
**Use case / motivation**
Unable to disable SSL in Tableau connection and therefore unable to use the TableauRefreshWorkbook operator
**Are you willing to submit a PR?**
NO
**Related Issues**
NO
| https://github.com/apache/airflow/issues/16306 | https://github.com/apache/airflow/pull/16365 | fc917af8b49a914d4404faebbec807679f0626af | df0746e133ca0f54adb93257c119dd550846bb89 | 2021-06-07T14:02:34Z | python | 2021-07-10T11:34:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,303 | ["airflow/cli/commands/task_command.py", "airflow/models/taskinstance.py", "tests/jobs/test_local_task_job.py", "tests/models/test_taskinstance.py"] | Replace `execution_date` in `TI.generate_command` to send run_id instead | Currently we use execution-date when generating the command to send via executor in https://github.com/apache/airflow/blob/9c94b72d440b18a9e42123d20d48b951712038f9/airflow/models/taskinstance.py#L420-L436
We should replace the execution_date in their to run_id instead and handle the corresponding changes needs on the executor and worker. | https://github.com/apache/airflow/issues/16303 | https://github.com/apache/airflow/pull/16666 | 9b2e593fd4c79366681162a1da43595584bd1abd | 9922287a4f9f70b57635b04436ddc4cfca0e84d2 | 2021-06-07T13:46:12Z | python | 2021-08-18T20:46:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,299 | ["airflow/providers/amazon/aws/operators/sagemaker_training.py", "tests/providers/amazon/aws/operators/test_sagemaker_training.py"] | SageMakerTrainingOperator gets ThrottlingException when listing training jobs | **Apache Airflow version**: 1.10.15 (also applies to 2.X)
**Environment**:
- **Cloud provider or hardware configuration**: Astronomer deployment
**What happened**:
I am currently upgrading an Airflow deployment from 1.10.15 to 2.1.0. While doing so, I switched over from `airflow.contrib.operators.sagemaker_training_operator.SageMakerTrainingOperator` to `airflow.providers.amazon.aws.operators.sagemaker_training.SageMakerTrainingOperator`, and found that some DAGs started failing at every run after that.
I dug into the issue a bit, and found that the problem comes from the operator listing existing training jobs ([here](https://github.com/apache/airflow/blob/db63de626f53c9e0242f0752bb996d0e32ebf6ea/airflow/providers/amazon/aws/operators/sagemaker_training.py#L95)). This method calls `boto3`'s `list_training_jobs` over and over again, enough times to get rate limited with a single operator if the number of existing training jobs is high enough - AWS does not allow to delete existing jobs, so these can easily be in the hundreds if not more. Since the operator does not allow to pass `max_results` to the hook's method (although the method can take it), the default page size is used (10) and the number of requests can explode. With a single operator, I was able to mitigate the issue by using the standard retry handler (instead of the legacy handler) - see [doc](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html). However, even the standard retry handler does not help in our case, where we have a dozen operators firing at the same time. All of them got rate limited again, and I was unable to make the job succeed.
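For reference, the "standard" retry handler mentioned above is configured through botocore; a sketch (how the config object is wired into the SageMaker hook is an assumption):
```python
import boto3
from botocore.config import Config

# Standard retry mode backs off on ThrottlingException instead of failing fast
boto_config = Config(retries={"mode": "standard", "max_attempts": 10})

client = boto3.client("sagemaker", config=boto_config)
```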
One technical fix would be to use a dedicated pool with 1 slot, thereby effectively running all training jobs sequentially. However that won't do in the real world: SageMaker jobs are often long-running, and we cannot afford to go from 1-2 hours when executing them in parallel, to 10-20 hours sequentially.
I believe (and this is somewhat opinionated) that the `SageMakerTrainingOperator` should **not** be responsible for renaming jobs, for two reasons: (1) single responsibility principle (my operator should trigger a SageMaker job, not figure out the correct name + trigger it); (2) alignment between operators and the systems they interact with: running this list operation is, until AWS allows to somehow delete old jobs and/or dramatically increases rate limits, not aligned with the way AWS works.
**What you expected to happen**:
The `SageMakerTrainingOperator` should not be limited in parallelism by the number of existing training jobs in AWS. This limitation is a side-effect of listing existing training jobs. Therefore, the `SageMakerTrainingOperator` should not list existing training jobs.
**Anything else we need to know**:
<details><summary>x.log</summary>Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/sagemaker_training.py", line 97, in execute
training_jobs = self.hook.list_training_jobs(name_contains=training_job_name)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/sagemaker.py", line 888, in list_training_jobs
list_training_jobs_request, "TrainingJobSummaries", max_results=max_results
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/sagemaker.py", line 945, in _list_request
response = partial_func(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 337, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ThrottlingException) when calling the ListTrainingJobs operation (reached max retries: 9): Rate exceeded</details> | https://github.com/apache/airflow/issues/16299 | https://github.com/apache/airflow/pull/16327 | a68075f7262cbac4c18ef2f14cbf3f0c10d68186 | 36dc6a8100c0261270f7f6fa20928508f90bac96 | 2021-06-07T12:17:27Z | python | 2021-06-16T21:29:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,295 | ["airflow/utils/log/secrets_masker.py"] | JDBC operator not logging errors | Hi,
Since Airflow 2.0, we are having issues with logging for the JDBC operator. When such a task fails, we only see
`INFO - Task exited with return code 1`
The actual error and stack trace are not present.
It also does not seem to retry: it only tries once even though my max_tries is 3.
I am using a Local Executor, and logs are also stored locally.
This issue occurs for both local installations and Docker.
full log:
```
*** Reading local file: /home/stijn/airflow/logs/airflow_incr/fmc_mtd/2021-06-01T15:00:00+00:00/1.log
[2021-06-01 18:00:13,389] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [queued]>
[2021-06-01 18:00:13,592] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [queued]>
[2021-06-01 18:00:13,592] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-06-01 18:00:13,592] {taskinstance.py:1068} INFO - Starting attempt 1 of 4
[2021-06-01 18:00:13,593] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-06-01 18:00:13,975] {taskinstance.py:1087} INFO - Executing <Task(JdbcOperator): fmc_mtd> on 2021-06-01T15:00:00+00:00
[2021-06-01 18:00:13,980] {standard_task_runner.py:52} INFO - Started process 957 to run task
[2021-06-01 18:00:13,983] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'airflow_incr', 'fmc_mtd', '2021-06-01T15:00:00+00:00', '--job-id', '2841', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/100_FL_DAG_airflow_incr_20210531_122511.py', '--cfg-path', '/tmp/tmp67h9tgso', '--error-file', '/tmp/tmp4w35rr0g']
[2021-06-01 18:00:13,990] {standard_task_runner.py:77} INFO - Job 2841: Subtask fmc_mtd
[2021-06-01 18:00:15,336] {logging_mixin.py:104} INFO - Running <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [running]> on host DESKTOP-VNC70B9.localdomain
[2021-06-01 18:00:17,757] {taskinstance.py:1282} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=Vaultspeed
AIRFLOW_CTX_DAG_ID=airflow_incr
AIRFLOW_CTX_TASK_ID=fmc_mtd
AIRFLOW_CTX_EXECUTION_DATE=2021-06-01T15:00:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=scheduled__2021-06-01T15:00:00+00:00
[2021-06-01 18:00:17,757] {jdbc.py:70} INFO - Executing: ['INSERT INTO "moto_fmc"."fmc_loading_history" \n\t\tSELECT \n\t\t\t\'airflow_incr\',\n\t\t\t\'airflow\',\n\t\t\t35,\n\t\t\tTO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\t"fmc_begin_lw_timestamp" + -15 * interval\'1 minute\',\n\t\t\tTO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\tTO_TIMESTAMP(\'2021-06-01 15:59:59.210732\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\tnull,\n\t\t\tnull\n\t\tFROM (\n\t\t\tSELECT MAX("fmc_end_lw_timestamp") as "fmc_begin_lw_timestamp" \n\t\t\tFROM "moto_fmc"."fmc_loading_history" \n\t\t\tWHERE "src_bk" = \'airflow\' \n\t\t\tAND "success_flag" = 1\n\t\t\tAND "load_cycle_id" < 35\n\t\t) SRC_WINDOW\n\t\tWHERE NOT EXISTS(SELECT 1 FROM "moto_fmc"."fmc_loading_history" WHERE "load_cycle_id" = 35)', 'TRUNCATE TABLE "airflow_mtd"."load_cycle_info" ', 'INSERT INTO "airflow_mtd"."load_cycle_info"("load_cycle_id","load_date") \n\t\t\tSELECT 35,TO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar)', 'TRUNCATE TABLE "airflow_mtd"."fmc_loading_window_table" ', 'INSERT INTO "airflow_mtd"."fmc_loading_window_table"("fmc_begin_lw_timestamp","fmc_end_lw_timestamp") \n\t\t\tSELECT "fmc_begin_lw_timestamp" + -15 * interval\'1 minute\', TO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar)\n\t\t\tFROM (\n\t\t\t\tSELECT MAX("fmc_end_lw_timestamp") as "fmc_begin_lw_timestamp" \n\t\t\t\tFROM "moto_fmc"."fmc_loading_history" \n\t\t\t\tWHERE "src_bk" = \'airflow\' \n\t\t\t\tAND "success_flag" = 1\n\t\t\t\tAND "load_cycle_id" < 35\n\t\t\t) SRC_WINDOW']
[2021-06-01 18:00:18,097] {base.py:78} INFO - Using connection to: id: test_dv. Host: jdbc:postgresql://localhost:5432/test_dv_stijn, Port: None, Schema: , Login: postgres, Password: ***, extra: {'extra__jdbc__drv_path': '/home/stijn/airflow/jdbc/postgresql-9.4.1212.jar', 'extra__jdbc__drv_clsname': 'org.postgresql.Driver', 'extra__google_cloud_platform__project': '', 'extra__google_cloud_platform__key_path': '', 'extra__google_cloud_platform__keyfile_dict': '', 'extra__google_cloud_platform__scope': '', 'extra__google_cloud_platform__num_retries': 5, 'extra__grpc__auth_type': '', 'extra__grpc__credential_pem_file': '', 'extra__grpc__scopes': '', 'extra__yandexcloud__service_account_json': '', 'extra__yandexcloud__service_account_json_path': '', 'extra__yandexcloud__oauth': '', 'extra__yandexcloud__public_ssh_key': '', 'extra__yandexcloud__folder_id': '', 'extra__kubernetes__in_cluster': False, 'extra__kubernetes__kube_config': '', 'extra__kubernetes__namespace': ''}
[2021-06-01 18:00:18,530] {local_task_job.py:151} INFO - Task exited with return code 1
```
| https://github.com/apache/airflow/issues/16295 | https://github.com/apache/airflow/pull/21540 | cb24ee9414afcdc1a2b0fe1ec0b9f0ba5e1bd7b7 | bc1b422e1ce3a5b170618a7a6589f8ae2fc33ad6 | 2021-06-07T08:52:12Z | python | 2022-02-27T13:07:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,290 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | Allow deleting existing spark application before creating new one via SparkKubernetesOperator in Kubernetes | airflow version: v2.0.2
**Description**
Calling SparkKubernetesOperator within a DAG should delete the Spark application if one already exists before submitting a new one.
**Use case / motivation**
```
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator

t1 = SparkKubernetesOperator(
task_id='spark_pi_submit',
namespace="dummy",
application_file="spark.yaml",
kubernetes_conn_id="kubernetes",
do_xcom_push=True,
dag=dag,
)
```
After the first successful run, subsequent runs fail to submit the Spark application:
> airflow.exceptions.AirflowException: Exception when calling -> create_custom_object: (409)
> Reason: Conflict
>
> {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"sparkapplications.sparkoperator.k8s.io "xxx" already exists","reason":"AlreadyExists","details":{"name":"xxx","group":"sparkoperator.k8s.io","kind":"sparkapplications"},"code":409}
>
**Expected Result**
Delete the existing Spark application if one exists before submitting a new one.
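A sketch of the desired pre-submit cleanup using the raw Kubernetes client (group/version/name are illustrative and would come from the application file in practice):
```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
api = client.CustomObjectsApi()

try:
    api.delete_namespaced_custom_object(
        group="sparkoperator.k8s.io",
        version="v1beta2",
        namespace="dummy",
        plural="sparkapplications",
        name="spark-pi",
    )
except ApiException as exc:
    if exc.status != 404:  # a missing application is fine; anything else is not
        raise
```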
| https://github.com/apache/airflow/issues/16290 | https://github.com/apache/airflow/pull/21092 | 2bb69508d8d0248621ada682d1bdedef729bbcf0 | 3c5bc73579080248b0583d74152f57548aef53a2 | 2021-06-06T17:46:11Z | python | 2022-04-12T13:32:13Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,285 | ["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py", "tests/models/test_taskinstance.py"] | Airflow 2.1.0 doesn't retry a task if it externally killed |
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): Linux 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: pip
- **Others**:
**What happened**:
When a task gets externally killed, it is marked as Failed even though it could be retried.
**What you expected to happen**:
When a task gets externally killed (kill -9 pid), it should be put back for retry if its `retries` have not run out yet.
**How to reproduce it**:
I'm using Celery as the executor and I have a cluster of ~250 machines.
I have a task defined as follows. When the task starts executing and it gets killed externally by sending SIGKILL to it (or to the executor process and its children), it gets marked as FAILED and is not put up for retry (even though retries is set to 10).
```
import time

from airflow.operators.python import PythonOperator
def _task1(ts_nodash, dag_run, ti, **context):
time.sleep(300)
tasks = PythonOperator(
task_id='task1',
python_callable=_task1,
retries=10,
dag=dag1
)
```
**Anything else we need to know**: This bug is introduced by [15537](https://github.com/apache/airflow/pull/15537/files#diff-d80fa918cc75c4d6aa582d5e29eeb812ba21371d6977fde45a4749668b79a515R159) as far as I know.
<img width="1069" alt="image" src="https://user-images.githubusercontent.com/2614168/120921545-7bedbe00-c6c4-11eb-8e7b-b4f5a7fc2292.png">
Next is the task log after sending SIGKILL to it.
```
[2021-06-06 18:50:07,897] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: convert_manager.download_rtv_file 2021-06-06T11:26:37+00:00 [queued]>
[2021-06-06 18:50:07,916] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: convert_manager.download_rtv_file 2021-06-06T11:26:37+00:00 [queued]>
[2021-06-06 18:50:07,918] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-06-06 18:50:07,919] {taskinstance.py:1068} INFO - Starting attempt 6 of 16
[2021-06-06 18:50:07,921] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-06-06 18:50:07,930] {taskinstance.py:1087} INFO - Executing <Task(PythonOperator): download_rtv_file> on 2021-06-06T11:26:37+00:00
[2021-06-06 18:50:07,937] {standard_task_runner.py:52} INFO - Started process 267 to run task
[2021-06-06 18:50:07,942] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'convert_manager', 'download_rtv_file', '2021-06-06T11:26:37+00:00', '--job-id', '75', '--pool', 'lane_xs', '--raw', '--subdir', 'DAGS_FOLDER/convert_manager.py', '--cfg-path', '/tmp/tmp35oxqliw', '--error-file', '/tmp/tmp3eme_cq7']
[2021-06-06 18:50:07,948] {standard_task_runner.py:77} INFO - Job 75: Subtask download_rtv_file
[2021-06-06 18:50:07,999] {logging_mixin.py:104} INFO - Running <TaskInstance: convert_manager.download_rtv_file 2021-06-06T11:26:37+00:00 [running]> on host 172.29.29.11
[2021-06-06 18:50:08,052] {taskinstance.py:1282} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=traffics
AIRFLOW_CTX_DAG_ID=convert_manager
AIRFLOW_CTX_TASK_ID=download_rtv_file
AIRFLOW_CTX_EXECUTION_DATE=2021-06-06T11:26:37+00:00
AIRFLOW_CTX_DAG_RUN_ID=dev_triggered_lane_31_itf-30_201208213_run_2021-06-06T13:26:37.135821+02:00
[2021-06-06 18:50:08,087] {convert_manager.py:377} INFO - downloading to /var/spool/central/airflow/data/ftp/***/ITF_RTV.xml.zip/rtv/ITF_RTV.xml.zip_20210606184921
[2021-06-06 18:50:08,094] {ftp.py:187} INFO - Retrieving file from FTP: /rtv/ITF_RTV.xml.zip
[2021-06-06 18:50:38,699] {local_task_job.py:151} INFO - Task exited with return code Negsignal.SIGKILL
```
| https://github.com/apache/airflow/issues/16285 | https://github.com/apache/airflow/pull/16301 | 2c190029e81cbfd77a858b5fd0779c7fbc9af373 | 4e2a94c6d1bde5ddf2aa0251190c318ac22f3b17 | 2021-06-06T10:35:16Z | python | 2021-07-28T14:57:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,263 | ["airflow/www/utils.py", "tests/www/test_utils.py"] | Unable to use nested lists in DAG markdown documentation | **Apache Airflow version**: 2.0.2
**What happened**:
Tried to use the following markdown as a `doc_md` string passed to a DAG
```markdown
- Example
    - Nested List
```
It was rendered in the web UI as a single list with no nesting or indentation.
**What you expected to happen**:
I expected the list to display as a nested list with visible indentation.
**How to reproduce it**:
Try and pass a DAG a `doc_md` string of the above nested list.
I think the bug will affect any markdown that relies on meaningful indentation (tabs or spaces)
| https://github.com/apache/airflow/issues/16263 | https://github.com/apache/airflow/pull/16414 | 15ff2388e8a52348afcc923653f85ce15a3c5f71 | 6f9c0ceeb40947c226d35587097529d04c3e3e59 | 2021-06-04T05:36:05Z | python | 2021-06-13T00:30:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,256 | ["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml"] | Helm chart: Keda add minReplicaCount | **Description**
Keda supports [minReplicaCount](https://keda.sh/docs/1.4/concepts/scaling-deployments/) (default value is 0). It would be great if the users would have the option in the helm chart to overwrite the default value.
**Use case / motivation**
Keda scales the workers to zero if there is no running DAG. The scaling is possible between 0-`maxReplicaCount` however we want the scaling between `minReplicaCount`-`maxReplicaCount `
**Are you willing to submit a PR?**
Yes
| https://github.com/apache/airflow/issues/16256 | https://github.com/apache/airflow/pull/16262 | 7744f05997c1622678a8a7c65a2959c9aef07141 | ef83f730f5953eff1e9c63056e32f633afe7d3e2 | 2021-06-03T19:15:43Z | python | 2021-06-05T23:35:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,252 | ["Dockerfile", "scripts/ci/libraries/_verify_image.sh", "scripts/in_container/prod/entrypoint_prod.sh"] | Unbound variable in entrypoint_prod.sh | When I execute the following command, I got the error:
```
$ docker run -ti apache/airflow:2.0.1 airflow
/entrypoint: line 250: 2: unbound variable
```
It would be great if I could see a list of commands I can execute. | https://github.com/apache/airflow/issues/16252 | https://github.com/apache/airflow/pull/16258 | 363477fe0e375e8581c0976616be943eb56a09bd | 7744f05997c1622678a8a7c65a2959c9aef07141 | 2021-06-03T18:11:35Z | python | 2021-06-05T17:47:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,238 | ["airflow/www/static/js/tree.js"] | Airflow Tree View for larger dags | Hi, Airflow Web UI shows nothing on tree view for larger dags (more than 100 tasks), although it's working fine for smaller dags. Anything that is needed to be configured in `airflow.cfg` to support larger dags in the UI?

Smaller Dag:

**Apache Airflow version**: 2.1.0 (Celery)
**Environment**:
- **Cloud provider or hardware configuration**: `AWS EC2`
- **OS** (e.g. from /etc/os-release): `Ubuntu 18.04`
- **Kernel** (e.g. `uname -a`): `5.4.0-1045-aws`
- **Install tools**: `pip`
**What you expected to happen**:
It is rendering correctly on `Airflow 1.10.13 (Sequential)`

**How to reproduce it**:
Create a sample dag with `>=` 100 tasks
**Anything else we need to know**:
The cli command for viewing dag tree is working correctly `airflow tasks list services_data_sync --tree`
| https://github.com/apache/airflow/issues/16238 | https://github.com/apache/airflow/pull/16522 | 6b0dfec01fd9fca7ab3be741d25528a303424edc | f9786d42f1f861c7a40745c00cd4d3feaf6254a7 | 2021-06-03T10:57:07Z | python | 2021-06-21T15:25:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,227 | ["airflow/jobs/local_task_job.py", "airflow/models/taskinstance.py", "tests/cli/commands/test_task_command.py", "tests/jobs/test_local_task_job.py", "tests/models/test_taskinstance.py"] | LocalTaskJob heartbeat race condition with finishing task causing SIGTERM |
**Apache Airflow version**: 2.0.2
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.2 LTS
- **Kernel** (e.g. `uname -a`): Linux datadumpprod2 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: Docker
**What happened**:
After task execution is done but the process isn't finished yet, the heartbeat callback kills the process because it falsely detects an external change of state.
```
[2021-06-02 20:40:55,273] {{taskinstance.py:1532}} INFO - Marking task as FAILED. dag_id=<dag_id>, task_id=<task_id>, execution_date=20210602T104000, start_date=20210602T184050, end_date=20210602T184055
[2021-06-02 20:40:55,768] {{local_task_job.py:188}} WARNING - State of this instance has been externally set to failed. Terminating instance.
[2021-06-02 20:40:55,770] {{process_utils.py:100}} INFO - Sending Signals.SIGTERM to GPID 2055
[2021-06-02 20:40:55,770] {{taskinstance.py:1265}} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-06-02 20:40:56,104] {{process_utils.py:66}} INFO - Process psutil.Process(pid=2055, status='terminated', exitcode=1, started='20:40:49') (2055) terminated with exit code 1
```
This happens more often when the mini scheduler is enabled, because in that case the window for the race condition is bigger (the time it takes to run the mini scheduler).
**What you expected to happen**:
The heartbeat should allow the task to finish and shouldn't kill it.
**How to reproduce it**:
As it's a race condition it happens randomly, but to make it happen more often you should have the mini scheduler enabled and a big enough database that executing the mini scheduler takes as long as possible. You can also reduce the heartbeat interval to the minimum.
| https://github.com/apache/airflow/issues/16227 | https://github.com/apache/airflow/pull/16289 | 59c67203a76709fffa9a314d77501d877055ca39 | 408bd26c22913af93d05aa70abc3c66c52cd4588 | 2021-06-02T20:08:13Z | python | 2021-06-10T13:29:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,224 | ["airflow/providers/microsoft/azure/hooks/wasb.py"] | WASB remote logging too verbose in task logger | **Apache Airflow version**: 2.1.0
**Environment**:
apache-airflow-providers-microsoft-azure == 2.0.0
**What happened**:
When the wasb remote logger wants to create the container (which it always attempts), the resulting log output becomes part of the task log.
```
[2021-06-02 13:55:51,619] {wasb.py:385} INFO - Attempting to create container: xxxxx
[2021-06-02 13:55:51,479] {_universal.py:419} INFO - Request URL: 'XXXXXXX'
[2021-06-02 13:55:51,483] {_universal.py:420} INFO - Request method: 'HEAD'
[2021-06-02 13:55:51,490] {_universal.py:421} INFO - Request headers:
[2021-06-02 13:55:51,495] {_universal.py:424} INFO - 'x-ms-version': 'REDACTED'
[2021-06-02 13:55:51,499] {_universal.py:424} INFO - 'Accept': 'application/xml'
[2021-06-02 13:55:51,507] {_universal.py:424} INFO - 'User-Agent': 'azsdk-python-storage-blob/12.8.1 Python/3.7.10 (Linux-5.4.0-1046-azure-x86_64-with-debian-10.9)'
[2021-06-02 13:55:51,511] {_universal.py:424} INFO - 'x-ms-date': 'REDACTED'
[2021-06-02 13:55:51,517] {_universal.py:424} INFO - 'x-ms-client-request-id': ''
[2021-06-02 13:55:51,523] {_universal.py:424} INFO - 'Authorization': 'REDACTED'
[2021-06-02 13:55:51,529] {_universal.py:437} INFO - No body was attached to the request
[2021-06-02 13:55:51,541] {_universal.py:452} INFO - Response status: 404
[2021-06-02 13:55:51,550] {_universal.py:453} INFO - Response headers:
[2021-06-02 13:55:51,556] {_universal.py:456} INFO - 'Transfer-Encoding': 'chunked'
[2021-06-02 13:55:51,561] {_universal.py:456} INFO - 'Server': 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0'
[2021-06-02 13:55:51,567] {_universal.py:456} INFO - 'x-ms-request-id': 'xxxx'
[2021-06-02 13:55:51,573] {_universal.py:456} INFO - 'x-ms-client-request-id': 'xxxx'
[2021-06-02 13:55:51,578] {_universal.py:456} INFO - 'x-ms-version': 'REDACTED'
[2021-06-02 13:55:51,607] {_universal.py:456} INFO - 'x-ms-error-code': 'REDACTED'
[2021-06-02 13:55:51,613] {_universal.py:456} INFO - 'Date': 'Wed, 02 Jun 2021 13:55:50 GMT'
```
**What you expected to happen**:
HTTP verbose logging should not show up, and "Attempting to create container" is probably not useful in the task log either.
ref: https://github.com/Azure/azure-sdk-for-python/issues/9422
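A common mitigation, based on the Azure SDK guidance linked above (treat the logger name as an assumption for your SDK version), is to raise the level of the HTTP logging policy logger:
```python
import logging

# Silence per-request/response header logging from azure-core's HttpLoggingPolicy
logging.getLogger("azure.core.pipeline.policies.http_logging_policy").setLevel(logging.WARNING)
```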
| https://github.com/apache/airflow/issues/16224 | https://github.com/apache/airflow/pull/18896 | 85137f376373876267675f606cffdb788caa4818 | d5f40d739fc583c50ae3b3f4b4bde29e61c8d81b | 2021-06-02T17:45:39Z | python | 2022-08-09T19:48:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,204 | ["airflow/sensors/external_task.py", "newsfragments/27190.significant.rst", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskSensor does not fail when failed_states is set along with a execution_date_fn | **Apache Airflow version**: 2.x including main
**What happened**:
I am using an `execution_date_fn` in an `ExternalTaskSensor` that also sets `allowed_states=['success']` and `failed_states=['failed']`. When one of the N upstream tasks fails, the sensor will hang forever in the `poke` method because there is a bug in checking for failed_states.
**What you expected to happen**:
I would expect the `ExternalTaskSensor` to fail.
I think this is due to a bug in the `poke` method where it should check if `count_failed > 0` as opposed to checking `count_failed == len(dttm_filter)`. I've created a fix locally that works for my case and have submitted a PR #16205 for it as reference.
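A tiny illustration of why the equality check misses partial failures (numbers are made up):
```python
dttm_filter = ["2021-06-01", "2021-06-02"]  # execution dates being poked
count_failed = 1                            # only one of them failed

assert not count_failed == len(dttm_filter)  # current check: sensor keeps poking forever
assert count_failed > 0                      # proposed check: sensor fails as expected
```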
**How to reproduce it**:
Create any `ExternalTaskSensor` that checks for `failed_states` and have one of the external DAG's tasks fail while others succeed.
E.g.
```
from airflow.sensors.external_task import ExternalTaskSensor

ExternalTaskSensor(
task_id='check_external_dag',
external_dag_id='external_dag',
external_task_id=None,
execution_date_fn=dependent_date_fn,
allowed_states=['success'],
failed_states=['failed'],
check_existence=True)
```
| https://github.com/apache/airflow/issues/16204 | https://github.com/apache/airflow/pull/27190 | a504a8267dd5530923bbe2c8ec4d1b409f909d83 | 34e21ea3e49f1720652eefc290fc2972a9292d29 | 2021-06-01T19:10:02Z | python | 2022-11-10T09:20:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,202 | ["airflow/www/views.py", "tests/www/views/test_views_custom_user_views.py"] | Missing Show/Edit/Delete under Security -> Users in 2.1.0 | **Apache Airflow version**: 2.1.0
**Browsers**: Chrome and Firefox
**What happened**:
Before upgrading to 2.1.0

After upgrading to 2.1.0

**What you expected to happen**:
Show/Edit/Delete under Security -> Users are available
**How to reproduce it**:
Go to Security -> Users (as an admin of course) | https://github.com/apache/airflow/issues/16202 | https://github.com/apache/airflow/pull/17431 | 7dd11abbb43a3240c2291f8ea3981d393668886b | c1e2af4dd2bf868307caae9f2fa825562319a4f8 | 2021-06-01T16:35:51Z | python | 2021-08-09T14:46:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,148 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Downloading files from S3 broken in 2.1.0 | **Apache Airflow version**: 2.0.2 and 2.1.0
**Environment**:
- **Cloud provider or hardware configuration**: running locally
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`): Darwin CSchillebeeckx-0589.local 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64 x86_64
- **Install tools**: pip
- **Others**: Running everything in Docker including Redis and Celery
**What happened**:
I'm seeing issues with downloading files from S3 on 2.1.0; a file is created after the download, but its content is empty!
**What you expected to happen**:
Non-empty files :)
**How to reproduce it**:
The DAG I'm running:
```python
# -*- coding: utf-8 -*-
import os
import logging
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
def download_file_from_s3():
# authed with ENVIRONMENT variables
s3_hook = S3Hook()
bucket = 'some-secret-bucket'
key = 'tmp.txt'
with open('/tmp/s3_hook.txt', 'w') as f:
s3_hook.get_resource_type("s3").Bucket(bucket).Object(key).download_file(f.name)
logging.info(f"File downloaded: {f.name}")
with open(f.name, 'r') as f_in:
logging.info(f"FILE CONTENT {f_in.read()}")
dag = DAG(
"tmp",
catchup=False,
default_args={
"start_date": days_ago(1),
},
schedule_interval=None,
)
download_file_from_s3 = PythonOperator(
task_id="download_file_from_s3", python_callable=download_file_from_s3, dag=dag
)
```
The logged output from 2.0.2
```
*** Fetching from: http://ba1b92003f54:8793/log/tmp/download_file_from_s3ile/2021-05-28T17:25:58.851532+00:00/1.log
[2021-05-28 10:26:04,227] {executor_loader.py:82} DEBUG - Loading core executor: CeleryExecutor
[2021-05-28 10:26:04,239] {__init__.py:51} DEBUG - Loading core task runner: StandardTaskRunner
[2021-05-28 10:26:04,252] {base_task_runner.py:62} DEBUG - Planning to run as the user
[2021-05-28 10:26:04,255] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> from DB
[2021-05-28 10:26:04,264] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]>
[2021-05-28 10:26:04,264] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2021-05-28 10:26:04,265] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Instance Not Running' PASSED: True, Task is not in running state.
[2021-05-28 10:26:04,279] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2021-05-28 10:26:04,280] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2021-05-28 10:26:04,280] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Instance State' PASSED: True, Task state queued was valid.
[2021-05-28 10:26:04,280] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]>
[2021-05-28 10:26:04,281] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2021-05-28 10:26:04,291] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Pool Slots Available' PASSED: True, ('There are enough open slots in %s to execute the task', 'default_pool')
[2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Concurrency' PASSED: True, Task concurrency is not set.
[2021-05-28 10:26:04,301] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]>
[2021-05-28 10:26:04,301] {taskinstance.py:1068} INFO -
--------------------------------------------------------------------------------
[2021-05-28 10:26:04,302] {taskinstance.py:1069} INFO - Starting attempt 1 of 1
[2021-05-28 10:26:04,302] {taskinstance.py:1070} INFO -
--------------------------------------------------------------------------------
[2021-05-28 10:26:04,317] {taskinstance.py:1089} INFO - Executing <Task(PythonOperator): download_file_from_s3ile> on 2021-05-28T17:25:58.851532+00:00
[2021-05-28 10:26:04,324] {standard_task_runner.py:52} INFO - Started process 118 to run task
[2021-05-28 10:26:04,331] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'tmp', 'download_file_from_s3ile', '2021-05-28T17:25:58.851532+00:00', '--job-id', '6', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/tmp_dag.py', '--cfg-path', '/tmp/tmpuz8u2gva', '--error-file', '/tmp/tmpms02c24z']
[2021-05-28 10:26:04,333] {standard_task_runner.py:77} INFO - Job 6: Subtask download_file_from_s3ile
[2021-05-28 10:26:04,334] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f348514f0e0>]
[2021-05-28 10:26:04,350] {settings.py:210} DEBUG - Setting up DB connection pool (PID 118)
[2021-05-28 10:26:04,351] {settings.py:243} DEBUG - settings.prepare_engine_args(): Using NullPool
[2021-05-28 10:26:04,357] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [None]> from DB
[2021-05-28 10:26:04,377] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]>
[2021-05-28 10:26:04,391] {logging_mixin.py:104} INFO - Running <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> on host ba1b92003f54
[2021-05-28 10:26:04,395] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> from DB
[2021-05-28 10:26:04,401] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]>
[2021-05-28 10:26:04,406] {taskinstance.py:658} DEBUG - Clearing XCom data
[2021-05-28 10:26:04,413] {taskinstance.py:665} DEBUG - XCom data cleared
[2021-05-28 10:26:04,438] {taskinstance.py:1283} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=tmp
AIRFLOW_CTX_TASK_ID=download_file_from_s3ile
AIRFLOW_CTX_EXECUTION_DATE=2021-05-28T17:25:58.851532+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-28T17:25:58.851532+00:00
[2021-05-28 10:26:04,438] {__init__.py:146} DEBUG - Preparing lineage inlets and outlets
[2021-05-28 10:26:04,438] {__init__.py:190} DEBUG - inlets: [], outlets: []
[2021-05-28 10:26:04,439] {base_aws.py:362} INFO - Airflow Connection: aws_conn_id=aws_default
[2021-05-28 10:26:04,446] {base_aws.py:385} WARNING - Unable to use Airflow Connection for credentials.
[2021-05-28 10:26:04,446] {base_aws.py:386} INFO - Fallback on boto3 credential strategy
[2021-05-28 10:26:04,446] {base_aws.py:391} INFO - Creating session using boto3 credential strategy region_name=None
[2021-05-28 10:26:04,448] {hooks.py:417} DEBUG - Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
[2021-05-28 10:26:04,450] {hooks.py:417} DEBUG - Changing event name from before-call.apigateway to before-call.api-gateway
[2021-05-28 10:26:04,451] {hooks.py:417} DEBUG - Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
[2021-05-28 10:26:04,452] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
[2021-05-28 10:26:04,453] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
[2021-05-28 10:26:04,453] {hooks.py:417} DEBUG - Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
[2021-05-28 10:26:04,454] {hooks.py:417} DEBUG - Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
[2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
[2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
[2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
[2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
[2021-05-28 10:26:04,471] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/boto3/data/s3/2006-03-01/resources-1.json
[2021-05-28 10:26:04,477] {credentials.py:1961} DEBUG - Looking for credentials via: env
[2021-05-28 10:26:04,477] {credentials.py:1087} INFO - Found credentials in environment variables.
[2021-05-28 10:26:04,477] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json
[2021-05-28 10:26:04,483] {hooks.py:210} DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f347f1165f0>
[2021-05-28 10:26:04,494] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/s3/2006-03-01/service-2.json
[2021-05-28 10:26:04,505] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f347f1bd170>
[2021-05-28 10:26:04,505] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function lazy_call.<locals>._handler at 0x7f3453f7f170>
[2021-05-28 10:26:04,506] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f347f1b9ef0>
[2021-05-28 10:26:04,510] {endpoint.py:291} DEBUG - Setting s3 timeout as (60, 60)
[2021-05-28 10:26:04,511] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/_retry.json
[2021-05-28 10:26:04,512] {client.py:164} DEBUG - Registering retry handlers for service: s3
[2021-05-28 10:26:04,513] {factory.py:66} DEBUG - Loading s3:s3
[2021-05-28 10:26:04,515] {factory.py:66} DEBUG - Loading s3:Bucket
[2021-05-28 10:26:04,515] {model.py:358} DEBUG - Renaming Bucket attribute name
[2021-05-28 10:26:04,516] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Bucket: calling handler <function lazy_call.<locals>._handler at 0x7f3453ecbe60>
[2021-05-28 10:26:04,517] {factory.py:66} DEBUG - Loading s3:Object
[2021-05-28 10:26:04,519] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Object: calling handler <function lazy_call.<locals>._handler at 0x7f3453ecb3b0>
[2021-05-28 10:26:04,520] {utils.py:599} DEBUG - Acquiring 0
[2021-05-28 10:26:04,521] {tasks.py:194} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) about to wait for the following futures []
[2021-05-28 10:26:04,521] {tasks.py:203} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) done waiting for dependent futures
[2021-05-28 10:26:04,521] {tasks.py:147} DEBUG - Executing task DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) with kwargs {'client': <botocore.client.S3 object at 0x7f3453215d10>, 'config': <boto3.s3.transfer.TransferConfig object at 0x7f3453181390>, 'osutil': <s3transfer.utils.OSUtils object at 0x7f3453181510>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x7f3453181190>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>, 'io_executor': <s3transfer.futures.BoundedExecutor object at 0x7f34531fced0>}
[2021-05-28 10:26:04,522] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function sse_md5 at 0x7f347f133a70>
[2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function validate_bucket_name at 0x7f347f1339e0>
[2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f34531d0250>>
[2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function generate_idempotent_uuid at 0x7f347f133830>
[2021-05-28 10:26:04,524] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function add_expect_header at 0x7f347f133d40>
[2021-05-28 10:26:04,524] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:04,525] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function inject_api_version_header_if_needed at 0x7f347f13b0e0>
[2021-05-28 10:26:04,525] {endpoint.py:101} DEBUG - Making request for OperationModel(name=HeadObject) with params: {'url_path': '[REDACT]', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': b'', 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f345321a710>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}}
[2021-05-28 10:26:04,526] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_not_transferring at 0x7f347ee7de60>
[2021-05-28 10:26:04,526] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f3453215e90>>
[2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f3453f046d0>>
[2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <function set_operation_specific_signer at 0x7f347f133710>
[2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event before-sign.s3.HeadObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f34531d0710>>
[2021-05-28 10:26:04,527] {utils.py:1639} DEBUG - Defaulting to S3 virtual host style addressing with path style addressing fallback.
[2021-05-28 10:26:04,528] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt
[2021-05-28 10:26:04,528] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt
[2021-05-28 10:26:04,528] {auth.py:364} DEBUG - Calculating signature using v4 auth.
[2021-05-28 10:26:04,529] {auth.py:365} DEBUG - CanonicalRequest:
HEAD
/tmp.txt
[REDACT]
[2021-05-28 10:26:04,529] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_transferring at 0x7f347ee8a320>
[2021-05-28 10:26:04,529] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T172604Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT]', SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}>
[2021-05-28 10:26:04,531] {connectionpool.py:943} DEBUG - Starting new HTTPS connection (1): [REDACT].s3.amazonaws.com:443
[2021-05-28 10:26:05,231] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "HEAD /tmp.txt HTTP/1.1" 200 0
[2021-05-28 10:26:05,232] {parsers.py:233} DEBUG - Response headers: {'x-amz-id-2': 'o[REDACT]', 'x-amz-request-id': '[REDACT]', 'Date': 'Fri, 28 May 2021 17:26:06 GMT', 'Last-Modified': 'Thu, 27 May 2021 20:37:55 GMT', 'ETag': '"[REDACT]"', 'x-amz-server-side-encryption': 'AES256', 'x-amz-version-id': '[REDACT]', 'Accept-Ranges': 'bytes', 'Content-Type': 'text/plain', 'Content-Length': '5', 'Server': 'AmazonS3'}
[2021-05-28 10:26:05,232] {parsers.py:234} DEBUG - Response body:
b''
[2021-05-28 10:26:05,234] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f345321ab50>
[2021-05-28 10:26:05,235] {retryhandler.py:187} DEBUG - No retry needed.
[2021-05-28 10:26:05,235] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:05,236] {futures.py:318} DEBUG - Submitting task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) to executor <s3transfer.futures.BoundedExecutor object at 0x7f3453181190> for transfer request: 0.
[2021-05-28 10:26:05,236] {utils.py:599} DEBUG - Acquiring 0
[2021-05-28 10:26:05,236] {tasks.py:194} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) about to wait for the following futures []
[2021-05-28 10:26:05,237] {tasks.py:203} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) done waiting for dependent futures
[2021-05-28 10:26:05,237] {tasks.py:147} DEBUG - Executing task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) with kwargs {'client': <botocore.client.S3 object at 0x7f3453215d10>, 'bucket': '[REDACT]', 'key': 'tmp.txt', 'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'extra_args': {}, 'callbacks': [], 'max_attempts': 5, 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x7f34531fc7d0>, 'io_chunksize': 262144, 'bandwidth_limiter': None}
[2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function sse_md5 at 0x7f347f133a70>
[2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function validate_bucket_name at 0x7f347f1339e0>
[2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f34531d0250>>
[2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function generate_idempotent_uuid at 0x7f347f133830>
[2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function add_expect_header at 0x7f347f133d40>
[2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function inject_api_version_header_if_needed at 0x7f347f13b0e0>
[2021-05-28 10:26:05,240] {utils.py:612} DEBUG - Releasing acquire 0/None
[2021-05-28 10:26:05,240] {endpoint.py:101} DEBUG - Making request for OperationModel(name=GetObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': b'', 'url': '[REDACT]', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f345321a710>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}}
[2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_not_transferring at 0x7f347ee7de60>
[2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f3453215e90>>
[2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f3453f046d0>>
[2021-05-28 10:26:05,242] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <function set_operation_specific_signer at 0x7f347f133710>
[2021-05-28 10:26:05,242] {hooks.py:210} DEBUG - Event before-sign.s3.GetObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f34531d0710>>
[2021-05-28 10:26:05,242] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: [REDACT]
[2021-05-28 10:26:05,242] {utils.py:1036} DEBUG - URI updated to: [REDACT]
[2021-05-28 10:26:05,243] {auth.py:364} DEBUG - Calculating signature using v4 auth.
[2021-05-28 10:26:05,243] {auth.py:365} DEBUG - CanonicalRequest:
GET
/tmp.txt
[REDACT]
[2021-05-28 10:26:05,243] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_transferring at 0x7f347ee8a320>
[2021-05-28 10:26:05,243] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=[REDACT], headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T172605Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}>
[2021-05-28 10:26:05,402] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "GET /tmp.txt HTTP/1.1" 200 5
[2021-05-28 10:26:05,402] {parsers.py:233} DEBUG - Response headers: {'x-amz-id-2': '[REDACT]', 'x-amz-request-id': '[REDACT]', 'Date': 'Fri, 28 May 2021 17:26:06 GMT', 'Last-Modified': 'Thu, 27 May 2021 20:37:55 GMT', 'ETag': '"[REDACT]"', 'x-amz-server-side-encryption': 'AES256', 'x-amz-version-id': '[REDACT]', 'Accept-Ranges': 'bytes', 'Content-Type': 'text/plain', 'Content-Length': '5', 'Server': 'AmazonS3'}
[2021-05-28 10:26:05,403] {parsers.py:234} DEBUG - Response body:
<botocore.response.StreamingBody object at 0x7f345310d090>
[2021-05-28 10:26:05,404] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f345321ab50>
[2021-05-28 10:26:05,404] {retryhandler.py:187} DEBUG - No retry needed.
[2021-05-28 10:26:05,404] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>>
[2021-05-28 10:26:05,405] {tasks.py:194} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) about to wait for the following futures []
[2021-05-28 10:26:05,406] {tasks.py:203} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) done waiting for dependent futures
[2021-05-28 10:26:05,406] {tasks.py:147} DEBUG - Executing task IOWriteTask(transfer_id=0, {'offset': 0}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'offset': 0}
[2021-05-28 10:26:05,407] {tasks.py:194} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) about to wait for the following futures []
[2021-05-28 10:26:05,407] {tasks.py:203} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) done waiting for dependent futures
[2021-05-28 10:26:05,408] {tasks.py:147} DEBUG - Executing task IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'final_filename': '/tmp/s3_hook.txt', 'osutil': <s3transfer.utils.OSUtils object at 0x7f3453181510>}
[2021-05-28 10:26:05,409] {utils.py:612} DEBUG - Releasing acquire 0/None
[2021-05-28 10:26:05,412] {tmp_dag.py:21} INFO - File downloaded: /tmp/s3_hook.txt
[2021-05-28 10:26:05,413] {tmp_dag.py:24} INFO - FILE CONTENT test
[2021-05-28 10:26:05,413] {python.py:118} INFO - Done. Returned value was: None
[2021-05-28 10:26:05,413] {__init__.py:107} DEBUG - Lineage called with inlets: [], outlets: []
[2021-05-28 10:26:05,413] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> from DB
[2021-05-28 10:26:05,421] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]>
[2021-05-28 10:26:05,423] {taskinstance.py:1192} INFO - Marking task as SUCCESS. dag_id=tmp, task_id=download_file_from_s3ile, execution_date=20210528T172558, start_date=20210528T172604, end_date=20210528T172605
[2021-05-28 10:26:05,423] {taskinstance.py:1891} DEBUG - Task Duration set to 1.141694
[2021-05-28 10:26:05,455] {dagrun.py:491} DEBUG - number of tis tasks for <DagRun tmp @ 2021-05-28 17:25:58.851532+00:00: manual__2021-05-28T17:25:58.851532+00:00, externally triggered: True>: 0 task(s)
[2021-05-28 10:26:05,456] {taskinstance.py:1246} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2021-05-28 10:26:05,457] {cli_action_loggers.py:84} DEBUG - Calling callbacks: []
[2021-05-28 10:26:05,510] {local_task_job.py:146} INFO - Task exited with return code 0
[2021-05-28 10:26:05,511] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> from DB
[2021-05-28 10:26:05,524] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [success]>
```
⚠️ Notice that the file content (`test`) is properly shown in the log.
The logged output from 2.1.0
```
*** Log file does not exist: /usr/local/airflow/logs/tmp/download_file_from_s3ile/2021-05-28T17:36:09.750993+00:00/1.log
*** Fetching from: http://f2ffe4375669:8793/log/tmp/download_file_from_s3ile/2021-05-28T17:36:09.750993+00:00/1.log
[2021-05-28 10:36:14,758] {executor_loader.py:82} DEBUG - Loading core executor: CeleryExecutor
[2021-05-28 10:36:14,769] {__init__.py:51} DEBUG - Loading core task runner: StandardTaskRunner
[2021-05-28 10:36:14,779] {base_task_runner.py:62} DEBUG - Planning to run as the user
[2021-05-28 10:36:14,781] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> from DB
[2021-05-28 10:36:14,788] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]>
[2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Instance Not Running' PASSED: True, Task is not in running state.
[2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Instance State' PASSED: True, Task state queued was valid.
[2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2021-05-28 10:36:14,793] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]>
[2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2021-05-28 10:36:14,800] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Pool Slots Available' PASSED: True, ('There are enough open slots in %s to execute the task', 'default_pool')
[2021-05-28 10:36:14,808] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2021-05-28 10:36:14,810] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2021-05-28 10:36:14,810] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Concurrency' PASSED: True, Task concurrency is not set.
[2021-05-28 10:36:14,810] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]>
[2021-05-28 10:36:14,810] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-05-28 10:36:14,810] {taskinstance.py:1068} INFO - Starting attempt 1 of 1
[2021-05-28 10:36:14,811] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-05-28 10:36:14,823] {taskinstance.py:1087} INFO - Executing <Task(PythonOperator): download_file_from_s3ile> on 2021-05-28T17:36:09.750993+00:00
[2021-05-28 10:36:14,830] {standard_task_runner.py:52} INFO - Started process 116 to run task
[2021-05-28 10:36:14,836] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'tmp', 'download_file_from_s3ile', '2021-05-28T17:36:09.750993+00:00', '--job-id', '8', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/tmp_dag.py', '--cfg-path', '/tmp/tmplhbjfxop', '--error-file', '/tmp/tmpdbeh5gr9']
[2021-05-28 10:36:14,839] {standard_task_runner.py:77} INFO - Job 8: Subtask download_file_from_s3ile
[2021-05-28 10:36:14,841] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f2e2920f5f0>]
[2021-05-28 10:36:14,860] {settings.py:210} DEBUG - Setting up DB connection pool (PID 116)
[2021-05-28 10:36:14,860] {settings.py:246} DEBUG - settings.prepare_engine_args(): Using NullPool
[2021-05-28 10:36:14,864] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [None]> from DB
[2021-05-28 10:36:14,883] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]>
[2021-05-28 10:36:14,893] {logging_mixin.py:104} INFO - Running <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> on host f2ffe4375669
[2021-05-28 10:36:14,896] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from DB
[2021-05-28 10:36:14,902] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]>
[2021-05-28 10:36:14,917] {taskinstance.py:657} DEBUG - Clearing XCom data
[2021-05-28 10:36:14,925] {taskinstance.py:664} DEBUG - XCom data cleared
[2021-05-28 10:36:14,947] {taskinstance.py:1282} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=tmp
AIRFLOW_CTX_TASK_ID=download_file_from_s3ile
AIRFLOW_CTX_EXECUTION_DATE=2021-05-28T17:36:09.750993+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-28T17:36:09.750993+00:00
[2021-05-28 10:36:14,948] {__init__.py:146} DEBUG - Preparing lineage inlets and outlets
[2021-05-28 10:36:14,948] {__init__.py:190} DEBUG - inlets: [], outlets: []
[2021-05-28 10:36:14,949] {base_aws.py:362} INFO - Airflow Connection: aws_conn_id=aws_default
[2021-05-28 10:36:14,958] {base_aws.py:385} WARNING - Unable to use Airflow Connection for credentials.
[2021-05-28 10:36:14,958] {base_aws.py:386} INFO - Fallback on boto3 credential strategy
[2021-05-28 10:36:14,958] {base_aws.py:391} INFO - Creating session using boto3 credential strategy region_name=None
[2021-05-28 10:36:14,960] {hooks.py:417} DEBUG - Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
[2021-05-28 10:36:14,962] {hooks.py:417} DEBUG - Changing event name from before-call.apigateway to before-call.api-gateway
[2021-05-28 10:36:14,962] {hooks.py:417} DEBUG - Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
[2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
[2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
[2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
[2021-05-28 10:36:14,966] {hooks.py:417} DEBUG - Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
[2021-05-28 10:36:14,968] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
[2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
[2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
[2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
[2021-05-28 10:36:14,982] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/boto3/data/s3/2006-03-01/resources-1.json
[2021-05-28 10:36:14,986] {credentials.py:1961} DEBUG - Looking for credentials via: env
[2021-05-28 10:36:14,986] {credentials.py:1087} INFO - Found credentials in environment variables.
[2021-05-28 10:36:14,987] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json
[2021-05-28 10:36:14,992] {hooks.py:210} DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f2e22e7b7a0>
[2021-05-28 10:36:15,002] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/s3/2006-03-01/service-2.json
[2021-05-28 10:36:15,010] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f2e22ea5320>
[2021-05-28 10:36:15,010] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function lazy_call.<locals>._handler at 0x7f2df976ee60>
[2021-05-28 10:36:15,011] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f2e22ea50e0>
[2021-05-28 10:36:15,015] {endpoint.py:291} DEBUG - Setting s3 timeout as (60, 60)
[2021-05-28 10:36:15,017] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/_retry.json
[2021-05-28 10:36:15,017] {client.py:164} DEBUG - Registering retry handlers for service: s3
[2021-05-28 10:36:15,018] {factory.py:66} DEBUG - Loading s3:s3
[2021-05-28 10:36:15,019] {factory.py:66} DEBUG - Loading s3:Bucket
[2021-05-28 10:36:15,020] {model.py:358} DEBUG - Renaming Bucket attribute name
[2021-05-28 10:36:15,021] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Bucket: calling handler <function lazy_call.<locals>._handler at 0x7f2df9762d40>
[2021-05-28 10:36:15,021] {factory.py:66} DEBUG - Loading s3:Object
[2021-05-28 10:36:15,022] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Object: calling handler <function lazy_call.<locals>._handler at 0x7f2df977fa70>
[2021-05-28 10:36:15,023] {utils.py:599} DEBUG - Acquiring 0
[2021-05-28 10:36:15,024] {tasks.py:194} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) about to wait for the following futures []
[2021-05-28 10:36:15,024] {tasks.py:203} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) done waiting for dependent futures
[2021-05-28 10:36:15,025] {tasks.py:147} DEBUG - Executing task DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) with kwargs {'client': <botocore.client.S3 object at 0x7f2df9721390>, 'config': <boto3.s3.transfer.TransferConfig object at 0x7f2df921fe90>, 'osutil': <s3transfer.utils.OSUtils object at 0x7f2df921ffd0>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x7f2df921fcd0>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>, 'io_executor': <s3transfer.futures.BoundedExecutor object at 0x7f2df921fa50>}
[2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function sse_md5 at 0x7f2e22e18c20>
[2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function validate_bucket_name at 0x7f2e22e18b90>
[2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,026] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f2df92be3d0>>
[2021-05-28 10:36:15,026] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function generate_idempotent_uuid at 0x7f2e22e189e0>
[2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function add_expect_header at 0x7f2e22e18ef0>
[2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function inject_api_version_header_if_needed at 0x7f2e22e1f290>
[2021-05-28 10:36:15,027] {endpoint.py:101} DEBUG - Making request for OperationModel(name=HeadObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': [], 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f2df92be350>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}}
[2021-05-28 10:36:15,028] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_not_transferring at 0x7f2e22b9f170>
[2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f2df92b7a10>>
[2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f2df96db510>>
[2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <function set_operation_specific_signer at 0x7f2e22e188c0>
[2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event before-sign.s3.HeadObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f2df92756d0>>
[2021-05-28 10:36:15,029] {utils.py:1639} DEBUG - Defaulting to S3 virtual host style addressing with path style addressing fallback.
[2021-05-28 10:36:15,029] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt
[2021-05-28 10:36:15,030] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt
[2021-05-28 10:36:15,030] {auth.py:364} DEBUG - Calculating signature using v4 auth.
[2021-05-28 10:36:15,030] {auth.py:365} DEBUG - CanonicalRequest:
HEAD
/tmp.txt
[REDACT]
[2021-05-28 10:36:15,031] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_transferring at 0x7f2e22baa4d0>
[2021-05-28 10:36:15,031] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T173615Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}>
[2021-05-28 10:36:15,032] {connectionpool.py:943} DEBUG - Starting new HTTPS connection (1): [REDACT].s3.amazonaws.com:443
[2021-05-28 10:36:15,695] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "HEAD /tmp.txt HTTP/1.1" 200 0
[2021-05-28 10:36:15,696] {parsers.py:233} DEBUG - Response headers: ['x-amz-id-2', 'x-amz-request-id', 'Date', 'Last-Modified', 'ETag', 'x-amz-server-side-encryption', 'x-amz-version-id', 'Accept-Ranges', 'Content-Type', 'Content-Length', 'Server']
[2021-05-28 10:36:15,696] {parsers.py:234} DEBUG - Response body:
[]
[2021-05-28 10:36:15,697] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f2df926ef90>
[2021-05-28 10:36:15,698] {retryhandler.py:187} DEBUG - No retry needed.
[2021-05-28 10:36:15,698] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,698] {futures.py:318} DEBUG - Submitting task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) to executor <s3transfer.futures.BoundedExecutor object at 0x7f2df921fcd0> for transfer request: 0.
[2021-05-28 10:36:15,698] {utils.py:599} DEBUG - Acquiring 0
[2021-05-28 10:36:15,699] {tasks.py:194} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) about to wait for the following futures []
[2021-05-28 10:36:15,699] {tasks.py:203} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) done waiting for dependent futures
[2021-05-28 10:36:15,699] {tasks.py:147} DEBUG - Executing task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) with kwargs {'client': <botocore.client.S3 object at 0x7f2df9721390>, 'bucket': '[REDACT]', 'key': 'tmp.txt', 'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'extra_args': {}, 'callbacks': [], 'max_attempts': 5, 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x7f2df976b310>, 'io_chunksize': 262144, 'bandwidth_limiter': None}
[2021-05-28 10:36:15,699] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function sse_md5 at 0x7f2e22e18c20>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function validate_bucket_name at 0x7f2e22e18b90>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f2df92be3d0>>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function generate_idempotent_uuid at 0x7f2e22e189e0>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function add_expect_header at 0x7f2e22e18ef0>
[2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function inject_api_version_header_if_needed at 0x7f2e22e1f290>
[2021-05-28 10:36:15,701] {endpoint.py:101} DEBUG - Making request for OperationModel(name=GetObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': [], 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f2df92be350>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}}
[2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_not_transferring at 0x7f2e22b9f170>
[2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f2df92b7a10>>
[2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f2df96db510>>
[2021-05-28 10:36:15,702] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <function set_operation_specific_signer at 0x7f2e22e188c0>
[2021-05-28 10:36:15,702] {hooks.py:210} DEBUG - Event before-sign.s3.GetObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f2df92756d0>>
[2021-05-28 10:36:15,702] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt
[2021-05-28 10:36:15,702] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt
[2021-05-28 10:36:15,702] {utils.py:612} DEBUG - Releasing acquire 0/None
[2021-05-28 10:36:15,702] {auth.py:364} DEBUG - Calculating signature using v4 auth.
[2021-05-28 10:36:15,703] {auth.py:365} DEBUG - CanonicalRequest:
GET
/tmp.txt
[REDACT]
[2021-05-28 10:36:15,703] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_transferring at 0x7f2e22baa4d0>
[2021-05-28 10:36:15,703] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T173615Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}>
[2021-05-28 10:36:15,879] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "GET /tmp.txt HTTP/1.1" 200 5
[2021-05-28 10:36:15,879] {parsers.py:233} DEBUG - Response headers: ['x-amz-id-2', 'x-amz-request-id', 'Date', 'Last-Modified', 'ETag', 'x-amz-server-side-encryption', 'x-amz-version-id', 'Accept-Ranges', 'Content-Type', 'Content-Length', 'Server']
[2021-05-28 10:36:15,879] {parsers.py:234} DEBUG - Response body:
[[116, 101, 115, 116, 10]]
[2021-05-28 10:36:15,883] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f2df926ef90>
[2021-05-28 10:36:15,883] {retryhandler.py:187} DEBUG - No retry needed.
[2021-05-28 10:36:15,883] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>>
[2021-05-28 10:36:15,883] {tasks.py:194} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) about to wait for the following futures []
[2021-05-28 10:36:15,885] {tasks.py:203} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) done waiting for dependent futures
[2021-05-28 10:36:15,885] {tasks.py:147} DEBUG - Executing task IOWriteTask(transfer_id=0, {'offset': 0}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'offset': 0}
[2021-05-28 10:36:15,885] {tasks.py:194} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) about to wait for the following futures []
[2021-05-28 10:36:15,886] {tasks.py:203} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) done waiting for dependent futures
[2021-05-28 10:36:15,886] {tasks.py:147} DEBUG - Executing task IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'final_filename': '/tmp/s3_hook.txt', 'osutil': <s3transfer.utils.OSUtils object at 0x7f2df921ffd0>}
[2021-05-28 10:36:15,886] {utils.py:612} DEBUG - Releasing acquire 0/None
[2021-05-28 10:36:15,887] {tmp_dag.py:21} INFO - File downloaded: /tmp/s3_hook.txt
[2021-05-28 10:36:15,888] {tmp_dag.py:24} INFO - FILE CONTENT
[2021-05-28 10:36:15,888] {python.py:151} INFO - Done. Returned value was: None
[2021-05-28 10:36:15,888] {__init__.py:107} DEBUG - Lineage called with inlets: [], outlets: []
[2021-05-28 10:36:15,888] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from DB
[2021-05-28 10:36:15,893] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]>
[2021-05-28 10:36:15,894] {taskinstance.py:1191} INFO - Marking task as SUCCESS. dag_id=tmp, task_id=download_file_from_s3ile, execution_date=20210528T173609, start_date=20210528T173614, end_date=20210528T173615
[2021-05-28 10:36:15,894] {taskinstance.py:1888} DEBUG - Task Duration set to 1.100586
[2021-05-28 10:36:15,915] {dagrun.py:490} DEBUG - number of tis tasks for <DagRun tmp @ 2021-05-28 17:36:09.750993+00:00: manual__2021-05-28T17:36:09.750993+00:00, externally triggered: True>: 0 task(s)
[2021-05-28 10:36:15,917] {taskinstance.py:1245} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2021-05-28 10:36:15,917] {cli_action_loggers.py:84} DEBUG - Calling callbacks: []
[2021-05-28 10:36:15,939] {local_task_job.py:151} INFO - Task exited with return code 0
[2021-05-28 10:36:15,939] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from DB
[2021-05-28 10:36:15,949] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [success]>
```
⚠️ Notice that the file content is **NOT** shown in the log.
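As a side note, the `[[116, 101, 115, 116, 10]]` logged as the GET response body above does correspond to the expected content, which suggests the object is fetched but its bytes never end up in the written file. A quick check in plain Python (independent of Airflow):
```python
# The byte values logged as the 2.1.0 response body decode to "test\n",
# i.e. the object itself was downloaded even though the local file is empty.
print(bytes([116, 101, 115, 116, 10]).decode())  # prints: test
```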
**Anything else we need to know**:
pip freeze for 2.0.2:
```
adal==1.2.7
aiohttp==3.7.4.post0
alembic==1.6.5
amqp==2.6.1
ansiwrap==0.8.4
apache-airflow==2.0.2
apache-airflow-providers-amazon==1.2.0
apache-airflow-providers-celery==1.0.1
apache-airflow-providers-databricks==1.0.1
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-google==1.0.0
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-jdbc==1.0.1
apache-airflow-providers-mongo==1.0.1
apache-airflow-providers-mysql==1.0.2
apache-airflow-providers-papermill==1.0.2
apache-airflow-providers-postgres==1.0.1
apache-airflow-providers-redis==1.0.1
apache-airflow-providers-salesforce==1.0.1
apache-airflow-providers-slack==3.0.0
apache-airflow-providers-snowflake==1.1.1
apache-airflow-providers-sqlite==1.0.2
apache-airflow-providers-ssh==1.2.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.3
asn1crypto==1.4.0
async-generator==1.10
async-timeout==3.0.1
attrs==20.3.0
Authlib==0.15.3
avro-python3==1.10.0
azure-common==1.1.27
azure-core==1.14.0
azure-datalake-store==0.0.52
azure-storage-blob==12.8.1
Babel==2.9.1
backcall==0.2.0
bcrypt==3.2.0
billiard==3.6.4.0
black==21.5b1
blinker==1.4
boto3==1.15.18
botocore==1.18.18
cached-property==1.5.2
cachetools==4.2.2
cattrs==1.7.0
celery==4.4.7
Cerberus==1.3.2
certifi==2020.12.5
cffi==1.14.5
chardet==3.0.4
click==7.1.2
clickclick==20.10.2
colorama==0.4.4
colorlog==5.0.1
commonmark==0.9.1
connexion==2.7.0
croniter==0.3.37
cryptography==3.4.7
cycler==0.10.0
databricks-cli==0.14.3
databricks-connect==7.3.8
decorator==5.0.9
defusedxml==0.7.1
dill==0.3.3
dnspython==1.16.0
docutils==0.17.1
email-validator==1.1.2
entrypoints==0.3
Flask==1.1.4
Flask-AppBuilder==3.3.0
Flask-Babel==1.0.0
Flask-Bcrypt==0.7.1
Flask-Caching==1.10.1
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
flower==0.9.5
fsspec==2021.5.0
gcsfs==2021.5.0
google-ads==7.0.0
google-api-core==1.26.0
google-api-python-client==1.12.8
google-auth==1.27.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.4
google-cloud-automl==1.0.1
google-cloud-bigquery==2.17.0
google-cloud-bigquery-datatransfer==1.1.1
google-cloud-bigquery-storage==2.4.0
google-cloud-bigtable==1.7.0
google-cloud-container==1.0.1
google-cloud-core==1.6.0
google-cloud-datacatalog==0.7.0
google-cloud-dataproc==1.1.1
google-cloud-dlp==1.0.0
google-cloud-kms==1.4.0
google-cloud-language==1.3.0
google-cloud-logging==1.15.1
google-cloud-memcache==0.3.0
google-cloud-monitoring==1.1.0
google-cloud-os-login==1.0.0
google-cloud-pubsub==1.7.0
google-cloud-redis==1.0.0
google-cloud-secret-manager==1.0.0
google-cloud-spanner==1.19.1
google-cloud-speech==1.3.2
google-cloud-storage==1.38.0
google-cloud-tasks==1.5.0
google-cloud-texttospeech==1.0.1
google-cloud-translate==1.7.0
google-cloud-videointelligence==1.16.1
google-cloud-vision==1.0.0
google-crc32c==1.1.2
google-resumable-media==1.3.0
googleapis-common-protos==1.53.0
graphviz==0.16
grpc-google-iam-v1==0.12.3
grpcio==1.38.0
grpcio-gcp==0.2.2
gunicorn==19.10.0
httplib2==0.19.1
humanize==3.5.0
idna==2.10
importlib-metadata==1.7.0
importlib-resources==1.5.0
inflection==0.5.1
iniconfig==1.1.1
ipykernel==5.4.3
ipython==7.23.1
ipython-genutils==0.2.0
iso8601==0.1.14
isodate==0.6.0
itsdangerous==1.1.0
JayDeBeApi==1.2.3
jedi==0.18.0
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
JPype1==1.2.1
jsonschema==3.2.0
jupyter-client==6.1.12
jupyter-core==4.7.1
kiwisolver==1.3.1
kombu==4.6.11
lazy-object-proxy==1.6.0
libcst==0.3.19
lockfile==0.12.2
Mako==1.1.4
Markdown==3.3.4
MarkupSafe==1.1.1
marshmallow==3.12.1
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.1.0
marshmallow-sqlalchemy==0.23.1
matplotlib==3.3.4
matplotlib-inline==0.1.2
msrest==0.6.21
multidict==5.1.0
mypy-extensions==0.4.3
mysql-connector-python==8.0.22
mysqlclient==1.3.14
natsort==7.1.1
nbclient==0.5.3
nbformat==5.1.3
nest-asyncio==1.5.1
nteract-scrapbook==0.4.2
numpy==1.20.3
oauthlib==3.1.0
openapi-schema-validator==0.1.5
openapi-spec-validator==0.3.1
oscrypto==1.2.1
packaging==20.9
pandas==1.2.4
pandas-gbq==0.15.0
papermill==2.3.3
paramiko==2.7.2
parso==0.8.2
pathspec==0.8.1
pendulum==2.1.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.2.0
prison==0.1.3
prometheus-client==0.8.0
prompt-toolkit==3.0.18
proto-plus==1.18.1
protobuf==3.17.1
psutil==5.8.0
psycopg2-binary==2.8.6
ptyprocess==0.7.0
py4j==0.10.9
pyarrow==4.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.1
pydata-google-auth==1.2.0
Pygments==2.9.0
PyJWT==1.7.1
pymongo==3.11.4
PyNaCl==1.4.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.17.3
pysftp==0.2.9
python-daemon==2.3.0
python-dateutil==2.8.1
python-editor==1.0.4
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2021.1
pytzdata==2020.1
PyYAML==5.4.1
pyzmq==22.1.0
redis==3.5.3
regex==2021.4.4
requests==2.25.1
requests-oauthlib==1.3.0
rich==9.2.0
rsa==4.7.2
s3transfer==0.3.7
scikit-learn==0.24.1
scipy==1.6.3
setproctitle==1.2.2
simple-salesforce==1.11.1
six==1.16.0
slack-sdk==3.5.1
snowflake-connector-python==2.4.3
snowflake-sqlalchemy==1.2.4
SQLAlchemy==1.3.23
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.37.4
sqlparse==0.4.1
sshtunnel==0.1.5
swagger-ui-bundle==0.0.8
tableauserverclient==0.15.0
tabulate==0.8.9
tenacity==6.2.0
termcolor==1.1.0
text-unidecode==1.3
textwrap3==0.9.2
threadpoolctl==2.1.0
toml==0.10.2
tornado==6.1
tqdm==4.61.0
traitlets==5.0.5
typed-ast==1.4.3
typing-extensions==3.10.0.0
typing-inspect==0.6.0
unicodecsv==0.14.1
uritemplate==3.0.1
urllib3==1.25.11
vine==1.3.0
watchtower==0.7.3
wcwidth==0.2.5
Werkzeug==1.0.1
WTForms==2.3.3
yarl==1.6.3
zipp==3.4.1
```
pip freeze for 2.1.0:
```
adal==1.2.7
aiohttp==3.7.4.post0
alembic==1.6.5
amqp==2.6.1
ansiwrap==0.8.4
apache-airflow==2.1.0
apache-airflow-providers-amazon==1.2.0
apache-airflow-providers-celery==1.0.1
apache-airflow-providers-databricks==1.0.1
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-google==1.0.0
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-jdbc==1.0.1
apache-airflow-providers-mongo==1.0.1
apache-airflow-providers-mysql==1.0.2
apache-airflow-providers-papermill==1.0.2
apache-airflow-providers-postgres==1.0.1
apache-airflow-providers-redis==1.0.1
apache-airflow-providers-salesforce==1.0.1
apache-airflow-providers-slack==3.0.0
apache-airflow-providers-snowflake==1.1.1
apache-airflow-providers-sqlite==1.0.2
apache-airflow-providers-ssh==1.2.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.3
asn1crypto==1.4.0
async-generator==1.10
async-timeout==3.0.1
attrs==20.3.0
Authlib==0.15.3
avro-python3==1.10.0
azure-common==1.1.27
azure-core==1.14.0
azure-datalake-store==0.0.52
azure-storage-blob==12.8.1
Babel==2.9.1
backcall==0.2.0
bcrypt==3.2.0
billiard==3.6.4.0
black==21.5b1
blinker==1.4
boto3==1.15.18
botocore==1.18.18
cached-property==1.5.2
cachetools==4.2.2
cattrs==1.7.0
celery==4.4.7
Cerberus==1.3.2
certifi==2020.12.5
cffi==1.14.5
chardet==3.0.4
click==7.1.2
clickclick==20.10.2
colorama==0.4.4
colorlog==5.0.1
commonmark==0.9.1
croniter==1.0.13
cryptography==3.4.7
cycler==0.10.0
databricks-cli==0.14.3
databricks-connect==7.3.8
decorator==5.0.9
defusedxml==0.7.1
dill==0.3.3
dnspython==1.16.0
docutils==0.17.1
email-validator==1.1.2
entrypoints==0.3
Flask==1.1.4
Flask-AppBuilder==3.3.0
Flask-Babel==1.0.0
Flask-Bcrypt==0.7.1
Flask-Caching==1.10.1
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
flower==0.9.5
fsspec==2021.5.0
gcsfs==2021.5.0
google-ads==7.0.0
google-api-core==1.26.0
google-api-python-client==1.12.8
google-auth==1.27.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.4
google-cloud-automl==1.0.1
google-cloud-bigquery==2.17.0
google-cloud-bigquery-datatransfer==1.1.1
google-cloud-bigquery-storage==2.4.0
google-cloud-bigtable==1.7.0
google-cloud-container==1.0.1
google-cloud-core==1.6.0
google-cloud-datacatalog==0.7.0
google-cloud-dataproc==1.1.1
google-cloud-dlp==1.0.0
google-cloud-kms==1.4.0
google-cloud-language==1.3.0
google-cloud-logging==1.15.1
google-cloud-memcache==0.3.0
google-cloud-monitoring==1.1.0
google-cloud-os-login==1.0.0
google-cloud-pubsub==1.7.0
google-cloud-redis==1.0.0
google-cloud-secret-manager==1.0.0
google-cloud-spanner==1.19.1
google-cloud-speech==1.3.2
google-cloud-storage==1.38.0
google-cloud-tasks==1.5.0
google-cloud-texttospeech==1.0.1
google-cloud-translate==1.7.0
google-cloud-videointelligence==1.16.1
google-cloud-vision==1.0.0
google-crc32c==1.1.2
google-resumable-media==1.3.0
googleapis-common-protos==1.53.0
graphviz==0.16
grpc-google-iam-v1==0.12.3
grpcio==1.38.0
grpcio-gcp==0.2.2
gunicorn==20.1.0
h11==0.12.0
httpcore==0.13.3
httplib2==0.19.1
httpx==0.18.1
humanize==3.5.0
idna==2.10
importlib-metadata==1.7.0
importlib-resources==1.5.0
inflection==0.5.1
iniconfig==1.1.1
ipykernel==5.4.3
ipython==7.23.1
ipython-genutils==0.2.0
iso8601==0.1.14
isodate==0.6.0
itsdangerous==1.1.0
JayDeBeApi==1.2.3
jedi==0.18.0
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
JPype1==1.2.1
jsonschema==3.2.0
jupyter-client==6.1.12
jupyter-core==4.7.1
kiwisolver==1.3.1
kombu==4.6.11
lazy-object-proxy==1.6.0
libcst==0.3.19
lockfile==0.12.2
Mako==1.1.4
Markdown==3.3.4
MarkupSafe==1.1.1
marshmallow==3.12.1
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.1.0
marshmallow-sqlalchemy==0.23.1
matplotlib==3.3.4
matplotlib-inline==0.1.2
msrest==0.6.21
multidict==5.1.0
mypy-extensions==0.4.3
mysql-connector-python==8.0.22
mysqlclient==1.3.14
nbclient==0.5.3
nbformat==5.1.3
nest-asyncio==1.5.1
nteract-scrapbook==0.4.2
numpy==1.20.3
oauthlib==3.1.0
openapi-schema-validator==0.1.5
openapi-spec-validator==0.3.1
oscrypto==1.2.1
packaging==20.9
pandas==1.2.4
pandas-gbq==0.15.0
papermill==2.3.3
paramiko==2.7.2
parso==0.8.2
pathspec==0.8.1
pendulum==2.1.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.2.0
prison==0.1.3
prometheus-client==0.8.0
prompt-toolkit==3.0.18
proto-plus==1.18.1
protobuf==3.17.1
psutil==5.8.0
psycopg2-binary==2.8.6
ptyprocess==0.7.0
py4j==0.10.9
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.1
pydata-google-auth==1.2.0
Pygments==2.9.0
PyJWT==1.7.1
pymongo==3.11.4
PyNaCl==1.4.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.17.3
pysftp==0.2.9
python-daemon==2.3.0
python-dateutil==2.8.1
python-editor==1.0.4
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2021.1
pytzdata==2020.1
PyYAML==5.4.1
pyzmq==22.1.0
redis==3.5.3
regex==2021.4.4
requests==2.25.1
requests-oauthlib==1.3.0
rfc3986==1.5.0
rich==10.2.2
rsa==4.7.2
s3transfer==0.3.7
scikit-learn==0.24.1
scipy==1.6.3
setproctitle==1.2.2
simple-salesforce==1.11.1
six==1.16.0
slack-sdk==3.5.1
sniffio==1.2.0
snowflake-connector-python==2.4.3
snowflake-sqlalchemy==1.2.4
SQLAlchemy==1.3.23
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.37.4
sqlparse==0.4.1
sshtunnel==0.1.5
swagger-ui-bundle==0.0.8
tableauserverclient==0.15.0
tabulate==0.8.9
tenacity==6.2.0
termcolor==1.1.0
text-unidecode==1.3
textwrap3==0.9.2
threadpoolctl==2.1.0
toml==0.10.2
tornado==6.1
tqdm==4.61.0
traitlets==5.0.5
typed-ast==1.4.3
typing-extensions==3.10.0.0
typing-inspect==0.6.0
unicodecsv==0.14.1
uritemplate==3.0.1
urllib3==1.25.11
vine==1.3.0
watchtower==0.7.3
wcwidth==0.2.5
Werkzeug==1.0.1
WTForms==2.3.3
yarl==1.6.3
zipp==3.4.1
``` | https://github.com/apache/airflow/issues/16148 | https://github.com/apache/airflow/pull/16424 | cbf8001d7630530773f623a786f9eb319783b33c | d1d02b62e3436dedfe9a2b80cd1e61954639ca4d | 2021-05-28T18:23:20Z | python | 2021-06-16T09:29:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,138 | ["airflow/www/utils.py", "tests/www/test_utils.py"] | doc_md code block collapsing lines | **Apache Airflow version**: 2.0.0 - 2.1.0
**Kubernetes version**: N/A
**Environment**:
- **Cloud provider or hardware configuration**: Docker on MacOS (but also AWS ECS deployed)
- **OS** (e.g. from /etc/os-release): MacOS Big Sur 11.3.1
- **Kernel** (e.g. `uname -a`): Darwin Kernel Version 20.4.0
- **Install tools**:
- **Others**:
**What happened**:
When a code block is part of a DAG's `doc_md`, it does not render correctly in the Web UI; all of its lines are collapsed into a single line instead.
**What you expected to happen**:
The multi-line code block should be rendered with its line breaks preserved.
**How to reproduce it**:
Create a DAG with `doc_md` containing a code block:
````python
from airflow import DAG
DOC_MD = """\
# Markdown code block
Inline `code` works well.
```
Code block
does not
respect
newlines
```
"""
dag = DAG(
dag_id='test',
doc_md=DOC_MD
)
````
The rendered documentation looks like this:
<img src="https://user-images.githubusercontent.com/11132999/119981579-19a70600-bfbe-11eb-8036-7d981ae1f232.png" width="50%"/>
**Anything else we need to know**: N/A
| https://github.com/apache/airflow/issues/16138 | https://github.com/apache/airflow/pull/16414 | 15ff2388e8a52348afcc923653f85ce15a3c5f71 | 6f9c0ceeb40947c226d35587097529d04c3e3e59 | 2021-05-28T12:33:59Z | python | 2021-06-13T00:30:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,090 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "tests/core/test_configuration.py"] | Contradictory default in store_dag configuration reference | https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#store-dag-code

The Default is True or None? | https://github.com/apache/airflow/issues/16090 | https://github.com/apache/airflow/pull/16093 | 57bd6fb2925a7d505a80b83140811b94b363f49c | bff213e07735d1ee45101f85b01b3d3a97cddbe5 | 2021-05-26T15:49:01Z | python | 2021-06-07T08:47:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,079 | ["airflow/configuration.py", "tests/www/views/test_views.py"] | NameError: name `conf` is not defined in configuration.py after upgrading to 2.1.0 |
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):centos7
- **Kernel** (e.g. `uname -a`): 3.10.0
- **Install tools**:
- **Others**:
**What happened**:
After upgrading from 2.0.1 to 2.1.0, Airflow fails with the error:
```
Traceback (most recent call last):
File "/data/apps/pyenv/versions/airflow-py381/bin/airflow", line 6, in <module>
from airflow.__main__ import main
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 1117, in <module>
conf = initialize_config()
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 879, in initialize_config
conf.validate()
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 204, in validate
self._validate_config_dependencies()
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 232, in _validate_config_dependencies
is_sqlite = "sqlite" in self.get('core', 'sql_alchemy_conn')
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 344, in get
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 410, in _get_environment_variables
option = self._get_env_var_option(section, key)
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 314, in _get_env_var_option
return _get_config_value_from_secret_backend(os.environ[env_var_secret_path])
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 83, in _get_config_value_from_secret_backend
secrets_client = get_custom_secret_backend()
File "/data/apps/pyenv/versions/3.8.1/envs/airflow-py381/lib/python3.8/site-packages/airflow/configuration.py", line 1018, in get_custom_secret_backend
secrets_backend_cls = conf.getimport(section='secrets', key='backend')
NameError: name 'conf' is not defined
```
I have masked the passwords in airflow.cfg using the following env vars, and I have defined my own secrets backend in airflow.cfg:
```
export AIRFLOW__CORE__SQL_ALCHEMY_CONN_SECRET=AIRFLOW__CORE__SQL_ALCHEMY_CONN_ENC
export AIRFLOW__CELERY__BROKER_URL_SECRET=AIRFLOW__CELERY__BROKER_URL_ENC
export AIRFLOW__CELERY__RESULT_BACKEND_SECRET=AIRFLOW__CELERY__RESULT_BACKEND_ENC
```
And I fixed this by moving `conf.validate()` (in `configuration.py`):
```python
if not os.path.isfile(WEBSERVER_CONFIG):
import shutil
log.info('Creating new FAB webserver config file in: %s', WEBSERVER_CONFIG)
shutil.copy(_default_config_file_path('default_webserver_config.py'), WEBSERVER_CONFIG)
# conf.validate()
return conf
...
conf = initialize_config()
secrets_backend_list = initialize_secrets_backends()
conf.validate()
```
**What you expected to happen**:
The upgrade should be backwards compatible.
**How to reproduce it**:
Use a self-defined secrets backend.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/16079 | https://github.com/apache/airflow/pull/16088 | 9d06ee8019ecbc07d041ccede15d0e322aa797a3 | 65519ab83ddf4bd6fc30c435b5bfccefcb14d596 | 2021-05-26T04:50:06Z | python | 2021-05-27T16:37:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,078 | ["airflow/jobs/scheduler_job.py", "airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py"] | Queued tasks become running after dagrun is marked failed | <!--
-->
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): centos7
- **Kernel** (e.g. `uname -a`): 3.10.0
- **Install tools**:
- **Others**:
**What happened**:
A dagrun has some tasks in running and queued state because of the concurrency limit. After I mark the dagrun as **failed**, the running tasks turn **failed** while the queued tasks turn **running**.
**What you expected to happen**:
The queued tasks should turn **failed** instead of **running**
**How to reproduce it**:
- in airflow.cfg set worker_concurrency=8, dag_concurrency=64
- create a DAG with 100 independent BashOperator tasks, each with the bash command "sleep 1d" (a minimal sketch is shown after this list)
- run the DAG, and you will see 8 tasks running, 56 queued and 36 scheduled
- mark the dagrun as failed, and you will see the 8 running tasks set to failed, but another 8 set to running and the remaining 84 set to no_status. If the dagrun is marked failed again, this process repeats.
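For illustration, a minimal sketch of such a DAG (dag id, task ids and dates are arbitrary):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# 100 independent tasks that sleep for a day, so most of them stay queued or
# scheduled once worker_concurrency=8 is saturated.
with DAG(
    dag_id="many_sleeping_tasks",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    for i in range(100):
        BashOperator(task_id=f"sleep_{i}", bash_command="sleep 1d")
```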
**Anything else we need to know**:
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
| https://github.com/apache/airflow/issues/16078 | https://github.com/apache/airflow/pull/19095 | 561610b1f00daaac2ad9870ba702be49c9764fe7 | 8d703ae7db3c2a08b94c824a6f4287c3dd29cebf | 2021-05-26T03:56:09Z | python | 2021-10-20T14:10:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,071 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Secret masking fails on io objects | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: *NIX
- **Cloud provider or hardware configuration**: N/A
- **OS** (e.g. from /etc/os-release): N/A
- **Kernel** (e.g. `uname -a`): N/A
- **Install tools**:
- **Others**:
**What happened**:
Due to the new secrets masker, logging will fail when an IO object is passed to a logging call.
**What you expected to happen**:
Logging should succeed when an IO object is passed to the logging cal.
**How to reproduce it**:
Sample DAG:
```python
import logging
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
log = logging.getLogger(__name__)
def log_io():
file = open("/tmp/foo", "w")
log.info("File: %s", file)
# Create the DAG -----------------------------------------------------------------------
dag = DAG(
dag_id="Test_Log_IO",
schedule_interval=None,
catchup=False,
default_args={
"owner": "madison.swain-bowden",
"depends_on_past": False,
"start_date": datetime(2021, 5, 4),
},
)
with dag:
PythonOperator(
task_id="log_io",
python_callable=log_io,
)
```
Logging that occurs when run on Airflow (task subsequently fails):
```
[2021-05-25 11:27:08,080] {logging_mixin.py:104} INFO - Running <TaskInstance: Test_Log_IO.log_io 2021-05-25T18:25:17.679660+00:00 [running]> on host Madisons-MacBook-Pro
[2021-05-25 11:27:08,137] {taskinstance.py:1280} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=madison.swain-bowden
AIRFLOW_CTX_DAG_ID=Test_Log_IO
AIRFLOW_CTX_TASK_ID=log_io
AIRFLOW_CTX_EXECUTION_DATE=2021-05-25T18:25:17.679660+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-25T18:25:17.679660+00:00
[2021-05-25 11:27:08,138] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/operators/python.py", line 150, in execute
return_value = self.execute_callable()
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/operators/python.py", line 161, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/Users/madison/git/airflow-dags/ookla/dags/Test_Log_IO/log_io.py", line 13, in log_io
log.info("File: %s", file)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 1446, in info
self._log(INFO, msg, args, **kwargs)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 948, in handle
rv = self.filter(record)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/logging/__init__.py", line 806, in filter
result = f.filter(record)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/utils/log/secrets_masker.py", line 157, in filter
record.__dict__[k] = self.redact(v)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/utils/log/secrets_masker.py", line 203, in redact
return tuple(self.redact(subval) for subval in item)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/utils/log/secrets_masker.py", line 203, in <genexpr>
return tuple(self.redact(subval) for subval in item)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/utils/log/secrets_masker.py", line 205, in redact
return list(self.redact(subval) for subval in item)
File "/Users/madison/programs/anaconda3/envs/ookla-airflow/lib/python3.9/site-packages/airflow/utils/log/secrets_masker.py", line 205, in <genexpr>
return list(self.redact(subval) for subval in item)
io.UnsupportedOperation: not readable
[2021-05-25 11:27:08,145] {taskinstance.py:1524} INFO - Marking task as FAILED. dag_id=Test_Log_IO, task_id=log_io, execution_date=20210525T182517, start_date=20210525T182707, end_date=20210525T182708
[2021-05-25 11:27:08,197] {local_task_job.py:151} INFO - Task exited with return code 1
```
**Anything else we need to know**:
If I set the value defined here to `False`, the task completes successfully and the line is logged appropriately: https://github.com/apache/airflow/blob/2.1.0/airflow/cli/commands/task_command.py#L205
Example output (when set to `False`):
```
[2021-05-25 11:48:54,185] {logging_mixin.py:104} INFO - Running <TaskInstance: Test_Log_IO.log_io 2021-05-25T18:48:45.911082+00:00 [running]> on host Madisons-MacBook-Pro
[2021-05-25 11:48:54,262] {taskinstance.py:1280} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=madison.swain-bowden
AIRFLOW_CTX_DAG_ID=Test_Log_IO
AIRFLOW_CTX_TASK_ID=log_io
AIRFLOW_CTX_EXECUTION_DATE=2021-05-25T18:48:45.911082+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-25T18:48:45.911082+00:00
[2021-05-25 11:48:54,264] {log_io.py:13} INFO - File: <_io.TextIOWrapper name='/tmp/foo' mode='w' encoding='UTF-8'>
[2021-05-25 11:48:54,264] {python.py:151} INFO - Done. Returned value was: None
[2021-05-25 11:48:54,274] {taskinstance.py:1184} INFO - Marking task as SUCCESS. dag_id=Test_Log_IO, task_id=log_io, execution_date=20210525T184845, start_date=20210525T184854, end_date=20210525T184854
[2021-05-25 11:48:54,305] {taskinstance.py:1245} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2021-05-25 11:48:54,339] {local_task_job.py:151} INFO - Task exited with return code 0
```
Unfortunately the logging that caused this problem for me originally is being done by a third party library, so I can't alter the way this works on our end. | https://github.com/apache/airflow/issues/16071 | https://github.com/apache/airflow/pull/16118 | db63de626f53c9e0242f0752bb996d0e32ebf6ea | 57bd6fb2925a7d505a80b83140811b94b363f49c | 2021-05-25T18:49:29Z | python | 2021-06-07T08:27:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,068 | ["airflow/providers/snowflake/hooks/snowflake.py", "tests/providers/snowflake/hooks/test_snowflake.py"] | Snowflake hook doesn't parameterize SQL passed as a string type, causing SnowflakeOperator to fail | <!--
-->
**Apache Airflow version**: 2.1.0
**What happened**: The new Snowflake hook run method is not taking parameters into account when the SQL is passed as a string; it's using Snowflake connector's execute_string method, which does not support parameterization. So the only way to parameterize your query from a SnowflakeOperator is to put the SQL into a list.
https://github.com/apache/airflow/blob/304e174674ff6921cb7ed79c0158949b50eff8fe/airflow/providers/snowflake/hooks/snowflake.py#L272-L279
https://docs.snowflake.com/en/user-guide/python-connector-api.html#execute_string
**How to reproduce it**: Pass a sql string and parameters to SnowflakeOperator; the query will not be parameterized, and will fail as a SQL syntax error on the parameterization characters, e.g. %(param)s.
**Anything else we need to know**: A quick workaround is to put your SQL string into a list; list items are still parameterized correctly.
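For illustration, a hedged sketch of that workaround (connection id, table name and values are made up):
```python
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

# Passing the SQL as a list avoids execute_string(), so the parameters are
# applied to each statement.
insert_row = SnowflakeOperator(
    task_id="insert_row",
    snowflake_conn_id="snowflake_default",
    sql=["INSERT INTO my_table (id, name) VALUES (%(id)s, %(name)s)"],
    parameters={"id": 1, "name": "example"},
)
```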
| https://github.com/apache/airflow/issues/16068 | https://github.com/apache/airflow/pull/16102 | 6d268abc621cc0ad60a2bd11148c6282735687f3 | aeb93f8e5bb4a9175e8834d476a6b679beff4a7e | 2021-05-25T18:01:29Z | python | 2021-05-27T07:01:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,061 | ["airflow/utils/log/secrets_masker.py"] | Consider and add common sensitive names | **Description**
Since sensitive informations in the connection object (specifically the extras field) are now being masked based on sensitive key names, we should consider adding some common sensitive key names.
`private_key` from [ssh connection](https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html) is an examples.
**Use case / motivation**
Extras field used to be blocked out entirely before the sensitive value masking feature (#15599).
[Before in 2.0.2](https://github.com/apache/airflow/blob/2.0.2/airflow/hooks/base.py#L78
) and [after in 2.1.0](https://github.com/apache/airflow/blob/2.1.0/airflow/hooks/base.py#L78
).
Extras field containing sensitive information now shown unless the key contains sensitive names.
**Are you willing to submit a PR?**
@ashb has expressed interest in adding this.
| https://github.com/apache/airflow/issues/16061 | https://github.com/apache/airflow/pull/16392 | 5fdf7468ff856ba8c05ec20637ba5a145586af4a | 430073132446f7cc9c7d3baef99019be470d2a37 | 2021-05-25T16:49:39Z | python | 2021-06-11T18:08:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,056 | ["chart/templates/_helpers.yaml", "chart/tests/test_git_sync_scheduler.py", "chart/tests/test_git_sync_webserver.py", "chart/tests/test_git_sync_worker.py", "chart/tests/test_pod_template_file.py", "chart/values.schema.json", "chart/values.yaml"] | [Helm] Resources for the git-sync sidecar | **Description**
It would be nice to be able to specify resources for the `git-sync` sidecar in the helm chart values.
**Use case / motivation**
I don't want to use keda for autoscaling and would like to setup a HPA myself. However this is currently not possible since it is not possible to specify resources for the `git-sync` sidecar.
**Are you willing to submit a PR?**
Yes, I am willing to submit a PR.
**Related Issues**
Not that I know of.
| https://github.com/apache/airflow/issues/16056 | https://github.com/apache/airflow/pull/16080 | 6af963c7d5ae9b59d17b156a053d5c85e678a3cb | c90284d84e42993204d84cccaf5c03359ca0cdbd | 2021-05-25T15:02:45Z | python | 2021-05-26T14:08:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,042 | ["airflow/www/static/css/flash.css", "airflow/www/static/css/main.css", "airflow/www/templates/appbuilder/flash.html"] | DAG Import Errors list items as collapsible spoiler-type at collapsed state | <!--
-->
**Description**
Render the DAG Import Errors list items as collapsible spoilers that are collapsed by default.
The title of each spoiler block could be the first line of the traceback, the dag_id, or the DAG's full filename (or a combination of them).
**Use case / motivation**
When the number of DAG import errors becomes huge (see the screenshot below), it is hard to find a particular import error or to compare errors of different DAGs. Of course, this can be done with the browser's find-on-page, but when the un-collapsed list is huge it is inconvenient.

**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/16042 | https://github.com/apache/airflow/pull/16072 | 4aaa8df51c23c8833f9fa11d445a4c5bab347347 | 62fe32590aab5acbcfc8ce81f297b1f741a0bf09 | 2021-05-25T08:23:19Z | python | 2021-05-25T19:48:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,039 | ["chart/templates/flower/flower-service.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/templates/webserver/webserver-service.yaml", "chart/tests/test_flower.py", "chart/tests/test_webserver.py", "chart/values.schema.json", "chart/values.yaml"] | Kubernetes liveliness probe fails when changing from default port for Airflow UI from 8080 to 80 in Helm Chart. | **Apache Airflow version**: 2.0.2.
**Kubernetes version**:
```
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:09:48Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
```
**What happened**:
I added the following block to the user values in the Helm chart, and because of that the pod failed to start: the liveness probe failed.
```
ports:
airflowUI: 80
```
```
Normal Scheduled 3m19s default-scheduler Successfully assigned ucp/airflow-webserver-5c6dffbcd5-5crwg to ip-abcd.ap-south-1.compute.internal
Normal Pulled 3m18s kubelet Container image "xyz" already present on machine
Normal Created 3m18s kubelet Created container wait-for-airflow-migrations
Normal Started 3m18s kubelet Started container wait-for-airflow-migrations
Normal Pulled 3m6s kubelet Container image "xyz" already present on machine
Normal Created 3m6s kubelet Created container webserver
Normal Started 3m6s kubelet Started container webserver
Warning Unhealthy 2m8s (x9 over 2m48s) kubelet Readiness probe failed: Get http://100.124.0.6:80/health: dial tcp 100.124.0.6:80: connect: connection refused
Warning Unhealthy 2m4s (x10 over 2m49s) kubelet Liveness probe failed: Get http://100.124.0.6:80/health: dial tcp 100.124.0.6:80: connect: connection refused
```
**What you expected to happen**:
The liveness probe should pass.
**How to reproduce it**:
Just change the default port for airflowUI from 8080 to 80.
| https://github.com/apache/airflow/issues/16039 | https://github.com/apache/airflow/pull/16572 | c2af5e3ca22eca7d4797b141520a97cf5e5cc879 | 8217db8cb4b1ff302c5cf8662477ac00f701e78c | 2021-05-25T07:57:13Z | python | 2021-06-23T12:50:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,037 | ["airflow/operators/python.py", "airflow/utils/python_virtualenv.py", "tests/config_templates/requirements.txt", "tests/decorators/test_python_virtualenv.py", "tests/operators/test_python.py"] | allow using requirements.txt in PythonVirtualEnvOperator | Currently the operator only accepts the requirements as a hard-coded Python list.
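For illustration, a hedged sketch of the current usage (task id and packages are arbitrary, and the operator would normally live inside a DAG):
```python
from airflow.operators.python import PythonVirtualenvOperator

def callable_in_venv():
    import requests  # noqa: F401  # installed into the virtualenv via `requirements`

# Today the requirements have to be spelled out inline as a Python list:
venv_task = PythonVirtualenvOperator(
    task_id="venv_task",
    python_callable=callable_in_venv,
    requirements=["requests==2.25.1", "simplejson"],
)
```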
It would be nice if airflow can support reading from file directly (something similar to how operators read sql file) | https://github.com/apache/airflow/issues/16037 | https://github.com/apache/airflow/pull/17349 | cd4bc175cb7673f191126db04d052c55279ef7a6 | b597ceaec9078b0ce28fe0081a196f065f600f43 | 2021-05-25T07:47:15Z | python | 2022-01-07T14:32:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,035 | ["airflow/sensors/base.py"] | GCSToLocalFilesystemOperator from Google providers pre 4.0.0 fails to import in airflow 2.1.0 | The GCSToLocalFilesystemOperator in Google Provider <=3.0.0 had wrong import for apply_defaults. It used
```
from airflow.sensors.base_sensor_operator import apply_defaults
```
instead of
```
from airflow.utils.decorators import apply_defaults
```
When we removed `apply_defaults` in #15667, the base_sensor_operator import was removed as well, which made the GCSToLocalFilesystemOperator stop working in 2.1.0.
The import in base_sensor_operator will be restored in 2.1.1 and Google Provider 4.0.0 will work without problems after it is released. Workaround for 2.1.0 Airflow is to copy the code of the operator to DAG and use it temporarily until new versions are released. | https://github.com/apache/airflow/issues/16035 | https://github.com/apache/airflow/pull/16040 | 71ef2f2ee9ccf238a99cb0e42412d2118bad22a1 | 0f8f66eb6bb5fe7f91ecfaa2e93d4c3409813b61 | 2021-05-25T07:01:35Z | python | 2021-05-27T05:08:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,024 | ["airflow/www/static/js/tree.js"] | airflow 2.1.0 - squares with tasks are aligned far to the right | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19.8
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
I opened the DAG page and saw that the task squares were shifted all the way to the right. The pop-up window with the task details goes off-screen.
Additionally, on a monitor with a large diagonal this leaves a lot of empty space.
**What you expected to happen**:
I believe that the alignment of the squares of the tasks should be closer to the center, as it was in version 2.0.2
**How to reproduce it**:
Open any page for a DAG that has completed or scheduled tasks.
**Anything else we need to know**:


| https://github.com/apache/airflow/issues/16024 | https://github.com/apache/airflow/pull/16067 | 44345f3a635d3aef3bf98d6a3134e8820564b105 | f2aa9b58cb012a3bc347f43baeaa41ecdece4cbf | 2021-05-24T15:03:37Z | python | 2021-05-25T20:20:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,022 | ["airflow/utils/python_virtualenv_script.jinja2", "tests/operators/test_python.py"] | PythonVirtualEnvOperator not serialising return type of function if False | **Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: Ubuntu 18.04 / Python 3.7
- **Cloud provider or hardware configuration**: N/A
- **OS**: Debian GNU/Linux 10 (buster)"
- **Kernel** (e.g. `uname -a`): Ubuntu 18.04 on WSL2
**What happened**:
https://github.com/apache/airflow/blob/0f327788b5b0887c463cb83dd8f732245da96577/airflow/utils/python_virtualenv_script.jinja2#L53
When using the `PythonVirtualEnvOperator` with a python callable that returns `False` (or any other value, `x` such that `bool(x) == False`), due to line 53 of the Jinja template linked above, we don't end up serialising the return type into the `script.out` file, meaning that when [read_result](https://github.com/apache/airflow/blob/8ab9c0c969559318417b9e66454f7a95a34aeeeb/airflow/operators/python.py#L422) is called with `script.out`, we see an empty file.
**What you expected to happen**:
It's expected that regardless of the return value of the function, this will be correctly serialised in the `script.out`. This could be fixed by changing the jinja template to use `if res is not None` instead of `if res`
**How to reproduce it**:
Minimal DAG:
```
from airflow import DAG
from airflow.operators.python_operator import PythonVirtualenvOperator
import airflow
dag = DAG(
dag_id='test_dag',
start_date=airflow.utils.dates.days_ago(3),
schedule_interval='0 20 * * *',
catchup=False,
)
with dag:
def fn_that_returns_false():
return False
def fn_that_returns_true():
return True
task_1 = PythonVirtualenvOperator(
task_id='return_false',
python_callable=fn_that_returns_false
)
task_2 = PythonVirtualenvOperator(
task_id='return_true',
python_callable=fn_that_returns_true
)
```
Checking the logs for `return_false`, we see:
```
...
[2021-05-24 12:09:02,729] {python.py:118} INFO - Done. Returned value was: None
[2021-05-24 12:09:02,741] {taskinstance.py:1192} INFO - Marking task as SUCCESS. dag_id=test_dag, task_id=return_false, execution_date=20210524T120900, start_date=20210524T120900, end_date=20210524T120902
[2021-05-24 12:09:02,765] {taskinstance.py:1246} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2021-05-24 12:09:02,779] {local_task_job.py:146} INFO - Task exited with return code 0
```
When it should probably read 'Returned value was: False`.
This issue was discovered whilst trying to build a Virtualenv aware version of `ShortCircuitOperator`, where a return value of `False` is important
| https://github.com/apache/airflow/issues/16022 | https://github.com/apache/airflow/pull/16049 | add7490145fabd097d605d85a662dccd02b600de | 6af963c7d5ae9b59d17b156a053d5c85e678a3cb | 2021-05-24T12:11:41Z | python | 2021-05-26T11:28:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,017 | ["airflow/www/static/js/tree.js", "airflow/www/templates/airflow/dag.html"] | hardcoded base url in tree view's auto-refresh | **Apache Airflow version**: 2.1.0
**What happened**:
When the Airflow webserver UI is not served at /, auto-refresh from the tree view fails (because of the hardcoded base URL in the GET request).
In `tree.js`:
```
function handleRefresh() {
$('#loading-dots').css('display', 'inline-block');
$.get('/object/tree_data?dag_id=${dagId}')
...
```
**What you expected to happen**:
Use the configured base_url to get the real webserver path.
| https://github.com/apache/airflow/issues/16017 | https://github.com/apache/airflow/pull/16018 | 5dd080279937f1993ee4b093fad9371983ee5523 | c288957939ad534eb968a90a34b92dd3a009ddb3 | 2021-05-24T02:33:12Z | python | 2021-05-24T20:22:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,013 | ["airflow/cli/commands/kubernetes_command.py", "tests/cli/commands/test_kubernetes_command.py"] | CLI 'kubernetes cleanup-pods' fails on invalid label key | Apache Airflow version: 2.0.2
Helm chart version: 1.0.0
Kubernetes version: 1.20
**What happened**:
The Airflow airflow-cleanup cronjob is failing with the error below. When I run the same command from the webserver or scheduler pod I get the same error.
```bash
> airflow@airflow-webserver-7f9f7954c-p9vv9:/opt/airflow$ airflow kubernetes cleanup-pods --namespace airflow
Loading Kubernetes configuration
Listing pods in namespace airflow
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/kubernetes_command.py", line 111, in cleanup_pods
pod_list = kube_client.list_namespaced_pod(**list_kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 12803, in list_namespaced_pod
(data) = self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 12905, in list_namespaced_pod_with_http_info
collection_formats=collection_formats)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 345, in call_api
_preload_content, _request_timeout)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 176, in __call_api
_request_timeout=_request_timeout)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 366, in request
headers=headers)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 241, in GET
query_params=query_params)
File "/home/airflow/.local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '53ee7655-f595-42a5-bdfb-689067a7fe02', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': 'e14ece85-9601-4034-9a43-7872ebabcbc5', 'X-Kubernetes-Pf-Prioritylevel-Uid': '72601873-fd48-4405-99dc-b7c4cac03b5c', 'Date': 'Sun, 23 May 2021 16:07:37 GMT', 'Content-Length': '428'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unable to parse requirement: invalid label key \"{'matchExpressions':\": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')","reason":"BadRequest","code":400}
```
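Judging by the response body, a dict-shaped (matchExpressions-style) selector seems to reach the client where a plain label-selector string is expected. A hedged sketch of the call shape the Kubernetes Python client accepts (the label below is hypothetical):
```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# label_selector must be a comma-separated string of key[=value] requirements;
# a dict passed here gets stringified and rejected by the API server.
pods = v1.list_namespaced_pod(
    namespace="airflow",
    label_selector="airflow-worker",  # hypothetical label on the executor pods
)
print(len(pods.items))
```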
**How to reproduce it**:
Create an Airflow deployment with the Helm chart
Enable automatic cleanup
```yaml
cleanup:
enabled: true
```
Run command `airflow kubernetes cleanup-pods --namespace airflow` | https://github.com/apache/airflow/issues/16013 | https://github.com/apache/airflow/pull/17298 | 2020a544c8208c8c3c9763cf0dbb6b2e1a145727 | 36bdfe8d0ef7e5fc428434f8313cf390ee9acc8f | 2021-05-23T16:26:39Z | python | 2021-07-29T20:17:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,008 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GoogleCloudStorageToBigQueryOperator reads string as a list in parameter source_objects | **Apache Airflow version**:1.10.12
**Environment**: google cloud composer
**What happened**:
When using GoogleCloudStorageToBigQueryOperator and providing a string for the source_objects parameter, the operator iterates over the string as if it were a valid list.
For example:
`cloud_storage_to_bigquery = GoogleCloudStorageToBigQueryOperator(bucket='bucket', source_objects='abc')`
This results in the operator looking for the sources bucket/a, bucket/b and bucket/c.
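A minimal sketch of why this happens: Python strings are iterable, so each character is treated as a separate object path:
```python
source_objects = 'abc'
print([f"bucket/{obj}" for obj in source_objects])
# ['bucket/a', 'bucket/b', 'bucket/c']
```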
**What you expected to happen**:
Throw an error on type (string instead of list).
| https://github.com/apache/airflow/issues/16008 | https://github.com/apache/airflow/pull/16160 | b7d1039b60f641e78381fbdcc33e68d291b71748 | 99d1535287df7f8cfced39baff7a08f6fcfdf8ca | 2021-05-23T09:34:41Z | python | 2021-05-31T05:06:44Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,007 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Masking passwords with empty connection passwords make some logs unreadable in 2.1.0 | Discovered in this [Slack conversation](https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1621752408213700).
When you have connections with empty passwords, masking the logs masks every character break:
```
[2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - ***-***-***-*** ***L***o***g***g***i***n***g*** ***e***r***r***o***r*** ***-***-***-***
[2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - ***T***r***a***c***e***b***a***c***k*** ***(***m***o***s***t*** ***r***e***c***e***n***t*** ***c***a***l***l*** ***l***a***s***t***)***:***
[2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - *** *** ***F***i***l***e*** ***"***/***u***s***r***/***l***o***c***a***l***/***l***i***b***/***p***y***t***h***o***n***3***.***8***/***l***o***g***g***i***n***g***/***_***_***i***n***i***t***_***_***.***p***y***"***,*** ***l***i***n***e*** ***1***0***8***1***,*** ***i***n*** ***e***m***i***t***
*** *** *** *** ***m***s***g*** ***=*** ***s***e***l***f***.***f***o***r***m***a***t***(***r***e***c***o***r***d***)***
```
Until this is fixed, an easy workaround is to disable sensitive connection masking in the configuration:
```
[core]
hide_sensitive_var_conn_fields = False
```
or via an environment variable:
```
AIRFLOW__CORE__HIDE_SENSITIVE_VAR_CONN_FIELDS="False"
```
This is only happening if the task accesses the connection that has empty password. However there are a number of cases where such an empty password might be "legitimate" - for example in `google` provider you might authenticate using env variable or workload identity and connection will contain an empty password then. | https://github.com/apache/airflow/issues/16007 | https://github.com/apache/airflow/pull/16057 | 9c98a60cdd29f0b005bf3abdbfc42aba419fded8 | 8814a59a5bf54dd17aef21eefd0900703330c22c | 2021-05-23T08:41:10Z | python | 2021-05-25T18:31:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,000 | ["chart/templates/secrets/elasticsearch-secret.yaml", "chart/templates/secrets/metadata-connection-secret.yaml", "chart/templates/secrets/pgbouncer-stats-secret.yaml", "chart/templates/secrets/redis-secrets.yaml", "chart/templates/secrets/result-backend-connection-secret.yaml", "chart/tests/test_elasticsearch_secret.py", "chart/tests/test_metadata_connection_secret.py", "chart/tests/test_redis.py", "chart/tests/test_result_backend_connection_secret.py"] | If external postgres password contains '@' then it appends it to host. | **What happened:**
My password for the external Postgres RDS contained '@123' at the end, which got appended to the DB host due to some bug. One can notice in the logs that DB_HOST has an unwanted 123@ in front of it: DB_HOST=**123@**{{postgres_host}}. I removed the '@' character from the password and it worked fine.
I am using the latest image of apache/airflow and using the official helm chart.
```
kc logs airflow-run-airflow-migrations-xxx
BACKEND=postgresql
DB_HOST=123@{{postgres_host}}
DB_PORT=5432
....................
ERROR! Maximum number of retries (20) reached.
Last check result:
$ run_nc '123@{{postgres_host}}' '5432'
Traceback (most recent call last):
File "<string>", line 1, in <module>
socket.gaierror: [Errno -2] Name or service not known
Can't parse as an IP address
```
**Steps to reproduce:**
One can easily reproduce this by using a password that contains the '@' character in it.
```
data:
metadataConnection:
user: {{postgres_airflow_username}}
pass: {{postgres_airflow_password}}
protocol: postgresql
host: {{postgres_host}}
port: 5432
db: {{postgres_airflow_dbname}}
```
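For context, a hedged sketch of why an unescaped '@' confuses the connection string, and how URL-encoding the password avoids it (user, password and host are placeholders):
```python
from urllib.parse import quote_plus

user = "airflow"
password = "s3cret@123"           # ends with '@123', as in the report
host = "my-postgres.example.com"  # placeholder host

# Naive interpolation produces an ambiguous URL: a parser that splits
# credentials from the host at the first '@' ends up with '123@<host>' as the
# host, which matches the DB_HOST line in the log above.
broken = f"postgresql://{user}:{password}@{host}:5432/airflow"

# URL-encoding the password keeps the '@' out of the authority part.
working = f"postgresql://{user}:{quote_plus(password)}@{host}:5432/airflow"
print(broken)
print(working)
```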
**Expected behavior:**
Migrations should run irrespective if the Postgres password contains an @ character or not. | https://github.com/apache/airflow/issues/16000 | https://github.com/apache/airflow/pull/16004 | 26840970718228d1484142f0fe06f26bc91566cc | ce358b21533eeb7a237e6b0833872bf2daab7e30 | 2021-05-22T21:06:26Z | python | 2021-05-23T17:07:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,994 | [".pre-commit-config.yaml", "airflow/sensors/base.py", "airflow/utils/orm_event_handlers.py", "dev/breeze/src/airflow_breeze/commands/production_image_commands.py", "scripts/ci/libraries/_sanity_checks.sh", "scripts/in_container/run_system_tests.sh", "tests/conftest.py"] | Use inclusive words in Apache Airflow project | **Description**
The Apache Software Foundation is discussing how we can improve the inclusiveness of its projects and raise awareness of conscious language. Related thread on [email protected]:
https://lists.apache.org/thread.html/r2d8845d9c37ac581046997d980464e8a7b6bffa6400efb0e41013171%40%3Cdiversity.apache.org%3E
**Use case / motivation**
We already have a pre-commit check that checks for some of these words. However, on [CLC (Conscious Language Checker)](https://clcdemo.net/analysis.html?project=airflow.git) Apache Airflow seems to have problems with the following words:
- he
- her
- him
- his
- master
- sanity check
- slave
- whitelist (pylintrc)
**Are you willing to submit a PR?**
**Related Issues**
#12982 https://github.com/apache/airflow/pull/9175 | https://github.com/apache/airflow/issues/15994 | https://github.com/apache/airflow/pull/23090 | 9a6baab5a271b28b6b3cbf96ffa151ac7dc79013 | d7b85d9a0a09fd7b287ec928d3b68c38481b0225 | 2021-05-21T18:31:42Z | python | 2022-05-09T21:52:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,976 | ["airflow/www/widgets.py"] | Error when querying on the Browse view with empty date picker | **Apache Airflow version**: 2.0.2
**What happened**:
Under Browse, when querying with any empty datetime fields, I received the mushroom cloud.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/views.py", line 551, in list
widgets = self._list()
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1127, in _list
page_size=page_size,
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1026, in _get_list_widget
page_size=page_size,
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 425, in query
count = self.query_count(query, filters, select_columns)
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 347, in query_count
query, filters, select_columns=select_columns, aliases_mapping={}
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 332, in _apply_inner_all
query = self.apply_filters(query, inner_filters)
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 187, in apply_filters
return filters.apply_all(query)
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/filters.py", line 298, in apply_all
query = flt.apply(query, value)
File "/usr/local/lib/python3.7/site-packages/airflow/www/utils.py", line 373, in apply
value = timezone.parse(value, timezone=timezone.utc)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/timezone.py", line 173, in parse
return pendulum.parse(string, tz=timezone or TIMEZONE, strict=False) # type: ignore
File "/usr/local/lib/python3.7/site-packages/pendulum/parser.py", line 29, in parse
return _parse(text, **options)
File "/usr/local/lib/python3.7/site-packages/pendulum/parser.py", line 45, in _parse
parsed = base_parse(text, **options)
File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 74, in parse
return _normalize(_parse(text, **_options), **_options)
File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 120, in _parse
return _parse_common(text, **options)
File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 177, in _parse_common
return date(year, month, day)
ValueError: year 0 is out of range
```
**What you expected to happen**:
Perhaps give a warning/error banner that indicates Airflow cannot perform the search with bad input. I think it would also work if the datetime picker defaulted the timestamp to the current time.
It looks like some fields are equipped to do that but not all.
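One defensive option (just a sketch of the idea, not necessarily the fix that will land): treat an empty picker value as "no filter" before it reaches `timezone.parse`, which is the call that blows up in the traceback above. The helper name here is mine:
```python
from airflow.utils import timezone

def parse_filter_value(value):
    """Return None for an empty datetime picker value instead of crashing."""
    if not value:
        return None
    return timezone.parse(value, timezone=timezone.utc)

print(parse_filter_value(""))                     # None -> the filter can simply be skipped
print(parse_filter_value("2021-05-21 00:00:00"))  # parsed UTC datetime
```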
**How to reproduce it**:
1. Go under Browse
2. Try to query with an empty datetime picker
**Anything else we need to know**:






| https://github.com/apache/airflow/issues/15976 | https://github.com/apache/airflow/pull/18602 | 0a37be3e3cf9289f63f1506bc31db409c2b46738 | d74e6776fce1da2c887e33d79e2fb66c83c6ff82 | 2021-05-21T00:17:06Z | python | 2021-09-30T19:52:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,963 | ["airflow/providers/ssh/hooks/ssh.py"] | SSHHook: host_key is not added properly when using non-default port | **Apache Airflow version**: 2.0.2 (but problem can still be found in master)
**What happened**:
When using the SSHHook to connect to an SSH server on a non-default port, the host_key setting is not added to the list of known hosts under the correct hostname. In more detail:
```python
from airflow.providers.ssh.hooks.ssh import SSHHook
import paramiko
from base64 import decodebytes
hook = SSHHook(remote_host="1.2.3.4", port=1234, username="user")
# Usually, host_key would come from the connection_extras, for the sake of this example we set the value manually:
host_key = "abc" # Some public key
hook.host_key = paramiko.RSAKey(data=decodebytes(host_key.encode("utf-8")))
hook.no_host_key_check = False
conn = hook.get_conn()
```
This yields the exception:
`paramiko.ssh_exception.SSHException: Server '[1.2.3.4]:1234' not found in known_hosts`
**Reason**:
In the SSHHook the host_key is added using only the name of the remote host.
https://github.com/apache/airflow/blob/5bd6ea784340e0daf1554e207600eae92318ab09/airflow/providers/ssh/hooks/ssh.py#L221
According to the known_hosts format, we would need something like
```python
hostname = f"[{self.remote_host}]:{self.port}" if self.port != SSH_PORT else self.remote_host
```
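To make the expected naming concrete, here is a small self-contained sketch (the helper is mine, not part of the hook; `SSH_PORT` is paramiko's constant for port 22):
```python
from paramiko.config import SSH_PORT

def known_hosts_name(remote_host: str, port: int) -> str:
    # known_hosts stores non-default ports as "[host]:port"; a bare hostname
    # only matches the default port 22, which is why the lookup above fails.
    return f"[{remote_host}]:{port}" if port != SSH_PORT else remote_host

print(known_hosts_name("1.2.3.4", 22))    # 1.2.3.4
print(known_hosts_name("1.2.3.4", 1234))  # [1.2.3.4]:1234
```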
**Anything else we need to know**:
I will prepare a PR that solves the problem.
| https://github.com/apache/airflow/issues/15963 | https://github.com/apache/airflow/pull/15964 | ffe8fab6536ac4eec076d48548d7b2e814a55b1f | a2dc01b34590fc7830bdb76fea653e1a0ebecbd3 | 2021-05-20T06:38:23Z | python | 2021-07-03T14:53:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,962 | ["chart/templates/flower/flower-serviceaccount.yaml", "chart/tests/test_rbac.py"] | ServiceAccount always created without correspond resouce | Congratulations on the release of the official Apache Airflow Helm Chart 1.0.0!
I tried to migrate to the official repo, but I found that it always creates a `ServiceAccount` without the corresponding resource.
Such as flower ServiceAccount: https://github.com/apache/airflow/blob/master/chart/templates/flower/flower-serviceaccount.yaml#L21
**Apache Airflow version**: 2.0.2
**What happened**:
I do not need flower, but the flower serviceAccount was created anyway.
**What you expected to happen**:
We don't need to create the flower serviceAccount when flower is disabled.
EDIT: It seems only flower serviceAccount like this, I can fix it. | https://github.com/apache/airflow/issues/15962 | https://github.com/apache/airflow/pull/16011 | 5dfda5667ca8d61ed022f3e14c524cd777996640 | 9b5bdcac247b0d6306a9bde57bb8af5088de2d7d | 2021-05-20T06:17:52Z | python | 2021-05-23T13:22:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,946 | ["airflow/task/task_runner/base_task_runner.py"] | Web UI not displaying the log when task fails - Permission Denied at temporary error file when using run_as_user | **Apache Airflow version**: 2.0.1
**Environment**: 2 Worker nodes and 1 Master
- **Cloud provider or hardware configuration**: Oracle Cloud
- **OS** (e.g. from /etc/os-release): Oracle Linux 7.8
- **Kernel**: Linux 4.14.35-1902.302.2.el7uek.x86_64 #2 SMP Fri Apr 24 14:24:11 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
**What happened**:
When a task fails, the Web UI doesn't display the log. The URL to get the log is presented without the hostname. When we navigate to the log path and open the .log file in the OS, it shows a permission error when opening the temporary file generated to dump the error.
I noticed that when we create the temporary file using NamedTemporaryFile, it is created with restrictive permissions: only the airflow user can write to it. If any other user tries to write to the file, the Permission Error is raised.
The message that is displayed at the UI is:
```
*** Log file does not exist: /path/to/log/1.log
*** Fetching from: http://:8793/log/path/to/log/1.log
*** Failed to fetch log file from worker. Invalid URL 'http://:8793/log/path/to/log/1.log': No host supplied
```
We can see the hostname is not obtained when building the URL since the execution fails when dumping the error into the temporary file.
When we access the log in the OS, the full log is there but it shows the Permission Denied:
```PermissionError: [Errno 13] Permission denied: '/tmp/tmpmg2q49a8'```
**What you expected to happen**:
The print from the Web UI when the task fails:

The print from the Log file, showing the Permission Denied error when accessing the tmp file:

**Anything else we need to know**:
The error occurs every time a task fails and the run_as_user and owner are not airflow.
When the task succeeds, the log shows up normally in the Web UI.
I've added an os.chmod call on self._error_file in base_task_runner, right after the NamedTemporaryFile is created, using mode 0o777, and now the logs appear normally even when the task fails.
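To make that concrete, a small self-contained sketch of what the change amounts to (0o666 here is just an illustrative mode; the exact value is up for discussion):
```python
import os
import stat
from tempfile import NamedTemporaryFile

# NamedTemporaryFile creates the file with mode 0o600, so only the creating
# (airflow) user can write to it.
error_file = NamedTemporaryFile(delete=True)
print(oct(stat.S_IMODE(os.stat(error_file.name).st_mode)))  # typically 0o600

# The proposed change: relax the mode right after creation so a task running
# as a different user (run_as_user) can still write its error details.
os.chmod(error_file.name, 0o666)
print(oct(stat.S_IMODE(os.stat(error_file.name).st_mode)))  # 0o666
```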
I intend to create a PR adding that line of code, but it depends on whether the community believes that opening up the permissions of the temp file is OK. As far as I know, this doesn't expose any sensitive information or introduce an obvious vulnerability.
It's important to say that the task fails not because of that problem. The problem is that the log is inaccessible through the Web UI, which can slow down troubleshootings and so on. | https://github.com/apache/airflow/issues/15946 | https://github.com/apache/airflow/pull/15947 | 48316b9d17a317ddf22f60308429ce089585fb02 | 31b15c94886c6083a6059ca0478060e46db67fdb | 2021-05-19T15:33:41Z | python | 2021-09-03T12:15:36Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,941 | ["docs/apache-airflow/start/docker.rst"] | Detect and inform the users in case there is not enough memory/disk for Docker Quick-start | The default amount of memory/disk size on macOS is usually not enough to run Airflow.
We already detect this and provide an informative message when we start Breeze: https://github.com/apache/airflow/blob/master/scripts/ci/libraries/_docker_engine_resources.sh
I believe we should do the same for the quickstart, as many Mac users raise the ``cannot start`` issue, which gets fixed after the memory is increased.
Example here: https://github.com/apache/airflow/issues/15927
| https://github.com/apache/airflow/issues/15941 | https://github.com/apache/airflow/pull/15967 | deececcabc080844ca89272a2e4ab1183cd51e3f | ce778d383e2df2857b09e0f1bfe279eecaef3f8a | 2021-05-19T13:37:31Z | python | 2021-05-20T11:44:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,938 | ["airflow/executors/celery_executor.py", "airflow/jobs/scheduler_job.py", "scripts/ci/docker-compose/base.yml", "tests/executors/test_celery_executor.py"] | celery_executor becomes stuck if child process receives signal before reset_signals is called | **Apache Airflow version**: 1.10.13 onwards (Any version that picked up #11278, including Airflow 2.0.* and 2.1.*)
**Environment**:
- **Cloud provider or hardware configuration**: Any
- **OS** (e.g. from /etc/os-release): Only tested on Debian Linux, but others may be affected too
- **Kernel** (e.g. `uname -a`): Any
- **Install tools**: Any
- **Others**: Only celery_executor is affected
**What happened**:
This was first reported [here](https://github.com/apache/airflow/issues/7935#issuecomment-839656436).
airflow-scheduler sometimes stops heartbeating and stops scheduling any tasks, with this last line in the log. This happens at random times, a few times a week, and more often if the scheduler machine is slow.
```
{scheduler_job.py:746} INFO - Exiting gracefully upon receiving signal 15
```
The problem is that when the machine is slow, `reset_signals()` of one or more slow child processes has not yet been called by the time other child processes send `SIGTERM` as they exit. As a result, the slow child processes respond to the `SIGTERM` as if they were the main scheduler process, which is why we see `Exiting gracefully upon receiving signal 15` in the scheduler log. Since the probability of this happening is very low, this issue is really difficult to reproduce reliably in production.
Related to #7935
Most likely caused by #11278
**What you expected to happen**:
Scheduler should not become stuck
**How to reproduce it**:
Here's a small reproducing example of the problem. There's roughly 1/25 chance it will be stuck. Run it many times to see it happen.
```python
#!/usr/bin/env python3.8
import os
import random
import signal
import time
from multiprocessing import Pool


def send_task_to_executor(arg):
    pass


def _exit_gracefully(signum, frame):
    print(f"{os.getpid()} Exiting gracefully upon receiving signal {signum}")


def register_signals():
    print(f"{os.getpid()} register_signals()")
    signal.signal(signal.SIGINT, _exit_gracefully)
    signal.signal(signal.SIGTERM, _exit_gracefully)
    signal.signal(signal.SIGUSR2, _exit_gracefully)


def reset_signals():
    if random.randint(0, 500) == 0:
        # This sleep statement here simulates the machine being busy
        print(f"{os.getpid()} is slow")
        time.sleep(0.1)
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    signal.signal(signal.SIGUSR2, signal.SIG_DFL)


if __name__ == "__main__":
    register_signals()
    task_tuples_to_send = list(range(20))
    sync_parallelism = 15
    chunksize = 5
    with Pool(processes=sync_parallelism, initializer=reset_signals) as pool:
        pool.map(
            send_task_to_executor,
            task_tuples_to_send,
            chunksize=chunksize,
        )
```
The reproducing example above can become stuck with a `py-spy dump` that looks exactly like what airflow scheduler does:
`py-spy dump` for the parent `airflow scheduler` process
```
Python v3.8.7
Thread 0x7FB54794E740 (active): "MainThread"
poll (multiprocessing/popen_fork.py:27)
wait (multiprocessing/popen_fork.py:47)
join (multiprocessing/process.py:149)
_terminate_pool (multiprocessing/pool.py:729)
__call__ (multiprocessing/util.py:224)
terminate (multiprocessing/pool.py:654)
__exit__ (multiprocessing/pool.py:736)
_send_tasks_to_celery (airflow/executors/celery_executor.py:331)
_process_tasks (airflow/executors/celery_executor.py:272)
trigger_tasks (airflow/executors/celery_executor.py:263)
heartbeat (airflow/executors/base_executor.py:158)
_run_scheduler_loop (airflow/jobs/scheduler_job.py:1388)
_execute (airflow/jobs/scheduler_job.py:1284)
run (airflow/jobs/base_job.py:237)
scheduler (airflow/cli/commands/scheduler_command.py:63)
wrapper (airflow/utils/cli.py:89)
command (airflow/cli/cli_parser.py:48)
main (airflow/__main__.py:40)
<module> (airflow:8)
```
`py-spy dump` for the child `airflow scheduler` process
```
Python v3.8.7
Thread 16232 (idle): "MainThread"
__enter__ (multiprocessing/synchronize.py:95)
get (multiprocessing/queues.py:355)
worker (multiprocessing/pool.py:114)
run (multiprocessing/process.py:108)
_bootstrap (multiprocessing/process.py:315)
_launch (multiprocessing/popen_fork.py:75)
__init__ (multiprocessing/popen_fork.py:19)
_Popen (multiprocessing/context.py:277)
start (multiprocessing/process.py:121)
_repopulate_pool_static (multiprocessing/pool.py:326)
_repopulate_pool (multiprocessing/pool.py:303)
__init__ (multiprocessing/pool.py:212)
Pool (multiprocessing/context.py:119)
_send_tasks_to_celery (airflow/executors/celery_executor.py:330)
_process_tasks (airflow/executors/celery_executor.py:272)
trigger_tasks (airflow/executors/celery_executor.py:263)
heartbeat (airflow/executors/base_executor.py:158)
_run_scheduler_loop (airflow/jobs/scheduler_job.py:1388)
_execute (airflow/jobs/scheduler_job.py:1284)
run (airflow/jobs/base_job.py:237)
scheduler (airflow/cli/commands/scheduler_command.py:63)
wrapper (airflow/utils/cli.py:89)
command (airflow/cli/cli_parser.py:48)
main (airflow/__main__.py:40)
<module> (airflow:8)
```
| https://github.com/apache/airflow/issues/15938 | https://github.com/apache/airflow/pull/15989 | 2de0692059c81fa7029d4ad72c5b6d17939eb915 | f75dd7ae6e755dad328ba6f3fd462ade194dab25 | 2021-05-19T11:18:40Z | python | 2021-05-29T15:00:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,907 | ["airflow/providers/microsoft/azure/log/wasb_task_handler.py", "tests/providers/microsoft/azure/log/test_wasb_task_handler.py"] | Problem with Wasb v12 remote logging when blob already exists | **Apache Airflow version**: 2.02
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.20.5
**Environment**:
- **Cloud provider or hardware configuration**: AKS
**What happened**:
When using wasb for remote logging and backfilling a DAG, if the blob name already exists in the container, the pod fails:
```
> `Error in atexit._run_exitfuncs:
> Traceback (most recent call last):
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_upload_helpers.py", line 105, in upload_block_blob
> **kwargs)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_generated/operations/_block_blob_operations.py", line 231, in upload
> map_error(status_code=response.status_code, response=response, error_map=error_map)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/core/exceptions.py", line 102, in map_error
> raise error
> azure.core.exceptions.ResourceExistsError: Operation returned an invalid status 'The specified blob already exists.'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/usr/local/lib/python3.6/logging/__init__.py", line 1946, in shutdown
> h.close()
> File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/microsoft/azure/log/wasb_task_handler.py", line 103, in close
> self.wasb_write(log, remote_loc, append=True)
> File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/microsoft/azure/log/wasb_task_handler.py", line 192, in wasb_write
> remote_log_location,
> File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/microsoft/azure/hooks/wasb.py", line 217, in load_string
> self.upload(container_name, blob_name, string_data, **kwargs)
> File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/microsoft/azure/hooks/wasb.py", line 274, in upload
> return blob_client.upload_blob(data, blob_type, length=length, **kwargs)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/core/tracing/decorator.py", line 83, in wrapper_use_tracer
> return func(*args, **kwargs)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_blob_client.py", line 685, in upload_blob
> return upload_block_blob(**options)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_upload_helpers.py", line 157, in upload_block_blob
> process_storage_error(error)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_shared/response_handlers.py", line 150, in process_storage_error
> error.raise_with_traceback()
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/core/exceptions.py", line 218, in raise_with_traceback
> raise super(AzureError, self).with_traceback(self.exc_traceback)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_upload_helpers.py", line 105, in upload_block_blob
> **kwargs)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/storage/blob/_generated/operations/_block_blob_operations.py", line 231, in upload
> map_error(status_code=response.status_code, response=response, error_map=error_map)
> File "/home/airflow/.local/lib/python3.6/site-packages/azure/core/exceptions.py", line 102, in map_error
> raise error
> azure.core.exceptions.ResourceExistsError: The specified blob already exists.
> RequestId:8e0b61a7-c01e-0035-699d-4b837e000000
> Time:2021-05-18T04:19:41.7062904Z
> ErrorCode:BlobAlreadyExists
> Error:None
> `
```
**What you expected to happen**:
Overwrite the log file on backfills.
**How to reproduce it**:
Trigger a dag run with wasb remote logging, delete the dag, and run the same dag run again.
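One possible direction for a fix, assuming it is acceptable for a backfill to replace the existing log blob: pass `overwrite=True` through to `BlobClient.upload_blob` (supported by the v12 SDK in the traceback above). A minimal standalone sketch with illustrative container/blob names:
```python
from azure.storage.blob import BlobServiceClient

client = BlobServiceClient.from_connection_string(conn_str="<connection string>")
blob_client = client.get_blob_client(container="airflow-logs", blob="dag_id/task_id/1.log")

# overwrite=True replaces the blob instead of raising BlobAlreadyExists
# when the same dag run is executed again.
blob_client.upload_blob(b"log contents", overwrite=True)
```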
| https://github.com/apache/airflow/issues/15907 | https://github.com/apache/airflow/pull/16280 | 5c7d758e24595c485553b0449583ff238114d47d | 29b7f795d6fb9fb8cab14158905c1b141044236d | 2021-05-18T04:23:59Z | python | 2021-06-07T18:46:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,900 | ["chart/files/pod-template-file.kubernetes-helm-yaml", "chart/templates/_helpers.yaml", "chart/tests/test_pod_template_file.py"] | Chart: Extra mounts with DAG persistence and gitsync | **What happened**:
When you have `dag.persistence` enabled and a `dag.gitSync.sshKeySecret` set, the gitSync container isn't added to the pod_template_file for k8s workers, as expected. However, `volumes` for it still are and maybe worse, the ssh key is mounted into the Airflow worker.
**What you expected to happen**:
When using `dag.persistence` and a `dag.gitSync.sshKeySecret`, nothing gitsync related is added to the k8s workers.
**How to reproduce it**:
Deploy the helm chart with `dag.persistence` enabled and a `dag.gitSync.sshKeySecret`.
e.g:
```
dags:
  persistence:
    enabled: true
  gitSync:
    enabled: true
    repo: {some_repo}
    sshKeySecret: my-gitsync-secret
extraSecrets:
  'my-gitsync-secret':
    data: |
      gitSshKey: {base_64_private_key}
```
**Anything else we need to know**:
After a quick look at CeleryExecutor workers, I don't think they are impacted, but worth double checking. | https://github.com/apache/airflow/issues/15900 | https://github.com/apache/airflow/pull/15925 | 9875f640ca19dabd846c17f4278ccc90e189ae8d | 8084cfbb36ec1da47cc6b6863bc08409d7387898 | 2021-05-17T20:26:57Z | python | 2021-05-21T23:17:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,892 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | KubernetesPodOperator pod_template_file content doesn't support jinja airflow template variables | KubernetesPodOperator pod_template_file content doesn't support jinja airflow template variables. pod_template_file is part of templated_fields list.
https://github.com/apache/airflow/blob/master/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L165
template_fields: Iterable[str] = ( 'image', 'cmds', 'arguments', 'env_vars', 'labels', 'config_file', 'pod_template_file', )
But the pod_template_file content does not support template variables. pod_template_file could be handled the same way SparkKubernetesOperator handles it, using template_ext:
https://github.com/apache/airflow/blob/master/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py#L46
template_ext = ('yaml', 'yml', 'json') | https://github.com/apache/airflow/issues/15892 | https://github.com/apache/airflow/pull/15942 | fabe8a2e67eff85ec3ff002d8c7c7e02bb3f94c7 | 85b2ccb0c5e03495c58e7c4fb0513ceb4419a103 | 2021-05-17T12:52:31Z | python | 2021-05-20T15:14:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,888 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Abort a DAG Run | **Description**
It would be great to have an option to abort a DAG Run through the REST API.
**Use case / motivation**
The proposed input params would be:
- DAG_ID
- DAG_RUN_ID
The DAG Run should abort all of its running tasks and mark them as "failed".
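To illustrate the request shape only — the endpoint and payload below are hypothetical and do not exist in the current API:
```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical call for aborting a running DAG Run via the REST API.
resp = requests.post(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns/my_run_id/abort",
    auth=HTTPBasicAuth("airflow", "airflow"),
)
print(resp.status_code)  # expected 200; running task instances get marked "failed"
```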
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/15888 | https://github.com/apache/airflow/pull/17839 | 430976caad5970b718e3dbf5899d4fc879c0ac89 | ab7658147445161fa3f7f2b139fbf9c223877f77 | 2021-05-17T11:00:22Z | python | 2021-09-02T19:32:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,886 | ["docs/apache-airflow/howto/operator/python.rst"] | Adding support for --index-url (or) --extra-index-url for PythonVirtualenvOperator | <!--
-->
**Description**
**Use case / motivation**
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/15886 | https://github.com/apache/airflow/pull/20048 | 9319a31ab11e83fd281b8ed5d8469b038ddad172 | 7627de383e5cdef91ca0871d8107be4e5f163882 | 2021-05-17T09:10:59Z | python | 2021-12-05T21:49:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,885 | ["CHANGELOG.txt", "airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Internal error on API REST /api/v1/dags/axesor/updateTaskInstancesState | **Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Running on Docker 19.03.13
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Windows 10 Enterprise
- **Kernel**:
- **Install tools**:
- **Others**:
**What happened**:
I receive an HTTP Error 500 when changing task states through the REST API.
**What you expected to happen**:
I expected to receive a HTTP 200.
**How to reproduce it**:
First, we trigger a new Dag Run:
```
dag_id = 'test'
run_id = 1000
r = requests.post('http://localhost:8080/api/v1/dags/' + dag_id + '/dagRuns',
json={"dag_run_id": str(run_id), "conf": { } },
auth=HTTPBasicAuth('airflow', 'airflow'))
if r.status_code == 200:
print("Dag started with run_id", run_id)
```
Then we try to abort the DAG Run:
```
r = requests.get('http://localhost:8080/api/v1/dags/' + dag_id + '/dagRuns/' + str(run_id) + '/taskInstances?state=running',
auth=HTTPBasicAuth('airflow', 'airflow'))
task_id = r.json()['task_instances'][0]['task_id']
execution_date = r.json()['task_instances'][0]['execution_date']
r = requests.post('http://localhost:8080/api/v1/dags/' + dag_id + '/updateTaskInstancesState',
json={"task_id": str(task_id),
"execution_date": str(execution_date),
"include_upstream": True,
"include_downstream": True,
"include_future": True,
"include_past": False,
"new_state": "failed"
},
auth=HTTPBasicAuth('airflow', 'airflow'))
print(r.status_code)
```
**Anything else we need to know**:
This is the server-side stack trace:
```
Something bad has happened.
Please consider letting us know by creating a <b><a href="https://github.com/apache/airflow/issues/new/choose">bug report using GitHub</a></b>.
Python version: 3.6.13
Airflow version: 2.0.2
Node: c8d75444cd4a
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 184, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 384, in wrapper
return function(request)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/response.py", line 103, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
return function(**kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/api_connexion/security.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/api_connexion/endpoints/task_instance_endpoint.py", line 314, in post_set_task_instances_state
commit=not data["dry_run"],
KeyError: 'dry_run'
```
With every call.
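The `KeyError: 'dry_run'` suggests the handler expects a `dry_run` key in the payload. As a possible workaround (an assumption based on the traceback, not documented behaviour), sending it explicitly may avoid the error — reusing the variables from the snippets above:
```python
r = requests.post('http://localhost:8080/api/v1/dags/' + dag_id + '/updateTaskInstancesState',
                  json={"dry_run": False,  # included explicitly to avoid the KeyError
                        "task_id": str(task_id),
                        "execution_date": str(execution_date),
                        "include_upstream": True,
                        "include_downstream": True,
                        "include_future": True,
                        "include_past": False,
                        "new_state": "failed"
                        },
                  auth=HTTPBasicAuth('airflow', 'airflow'))
print(r.status_code)
```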
| https://github.com/apache/airflow/issues/15885 | https://github.com/apache/airflow/pull/15889 | 821ea6fc187a9780b8fe0dd76f140367681ba065 | ac3454e4f169cdb0e756667575153aca8c1b6981 | 2021-05-17T09:01:11Z | python | 2021-05-17T14:15:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,834 | ["airflow/dag_processing/manager.py", "docs/apache-airflow/logging-monitoring/metrics.rst"] | Metrics documentation fixes and deprecations | **Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: N/A
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
* `dag_processing.last_runtime.*` - In version 1.10.6, [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md#airflow-1106) indicated that this metric would be removed in 2.0, but it was not removed from the metrics documentation, and the documentation does not mention that it is deprecated. It is also documented as a gauge but is actually a timer (reported in https://github.com/apache/airflow/issues/10091).
* `dag_processing.processor_timeouts`: documented as a gauge but it is actually a counter. Again from https://github.com/apache/airflow/issues/10091.
* `dag_file_processor_timeouts` - indicated as supposed to be removed in 2.0; it was not removed from the [code](https://github.com/apache/airflow/blob/37d549/airflow/utils/dag_processing.py#L1169) but was removed from the docs.
* It would be nice if the 1.10.15 documentation indicated the deprecated metrics more clearly, not only in `UPDATING.md`.
**What you expected to happen**:
* The Metrics page should document all metrics being emitted by Airflow.
* The Metrics page should correctly document the type of the metric.
**How to reproduce it**:
Check official [Metrics Docs](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html?highlight=metrics#)
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15834 | https://github.com/apache/airflow/pull/27067 | 6bc05671dbcfb38881681b656370d888e6300e26 | 5890b083b1dcc082ddfa34e9bae4573b99a54ae3 | 2021-05-14T00:50:54Z | python | 2022-11-19T03:47:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,832 | ["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html"] | 2.1 Airflow UI (Delete DAG) button is not working | On the Airflow UI, the Delete DAG button for a DAG is not working as expected.
Airflow version: 2.1.0
**What happened:**
When we click the Delete DAG button for any DAG, it should be deleted, but we get a 404 error page, both on our platform and locally.
**What you expected to happen:**
<img width="1771" alt="Screen Shot 2021-05-13 at 3 30 50 PM" src="https://user-images.githubusercontent.com/47584863/118195521-365d0e80-b400-11eb-9453-d3030e011155.png">
When we click the Delete DAG button for any DAG, it should be deleted.
**How to reproduce it:**
Go to the Airflow UI and select any DAG; on the right side of the page there will be a red Delete DAG button.
<img width="1552" alt="Screen Shot 2021-05-13 at 3 17 14 PM" src="https://user-images.githubusercontent.com/47584863/118195296-cc446980-b3ff-11eb-86c3-964e32d79f89.png">
| https://github.com/apache/airflow/issues/15832 | https://github.com/apache/airflow/pull/15836 | 51e54cb530995edbb6f439294888a79724365647 | 634c12d08a8097bbb4dc7173dd56c0835acda735 | 2021-05-13T22:31:45Z | python | 2021-05-14T06:07:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,815 | ["airflow/providers/docker/CHANGELOG.rst", "airflow/providers/docker/example_dags/example_docker_copy_data.py", "airflow/providers/docker/operators/docker.py", "airflow/providers/docker/operators/docker_swarm.py", "airflow/providers/docker/provider.yaml", "docs/conf.py", "docs/exts/docs_build/third_party_inventories.py", "tests/providers/docker/operators/test_docker.py", "tests/providers/docker/operators/test_docker_swarm.py"] | New syntax to mount Docker volumes with --mount | I had this after reading #12537 and #9047. Currently `DockerOperator`’s `volumes` argument is passed directly to docker-py’s `bind` (aka `docker -v`). But `-v`’s behaviour has long been problematic, and [Docker has been pushing users to the new `--mount` syntax instead](https://docs.docker.com/storage/bind-mounts/#choose-the--v-or---mount-flag). With #12537, it seems like `-v`’s behaviour is also confusing to some Airflow users, so I want to migrate Airflow’s internals to `--mount`.
However, `--mount` has a different syntax to `-v`, and the behaviour is also slightly different, so for compatibility reasons we can’t just do it under the hood. I can think of two possible solutions to this:
A. Deprecate `volumes` altogether and introduce `DockerOperator(mounts=...)`
This will emit a deprecation warning when the user passes `DockerOperator(volumes=...)` to tell them to convert to `DockerOperator(mounts=...)` instead. `volumes` will stay unchanged otherwise, and continue to be passed to bind mounts.
`mounts` will take a list of [`docker.types.Mount`](https://docker-py.readthedocs.io/en/stable/api.html#docker.types.Mount) to describe the mounts. They will be passed directly to the mounts API. Some shorthands could be useful as well, for example:
```python
DockerOperator(
    ...
    mounts=[
        ('/root/data1', './data1'),          # Source and target, default to volume mount.
        ('/root/data2', './data2', 'bind'),  # Bind mount.
    ],
)
```
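For comparison, the non-shorthand form would be explicit `docker.types.Mount` objects (a sketch — `mounts=` is the proposed argument and does not exist yet):
```python
from docker.types import Mount

mounts = [
    Mount(target="/root/data1", source="data1", type="volume"),      # named volume
    Mount(target="/root/data2", source="/host/data2", type="bind"),  # bind mount
]
# DockerOperator(..., mounts=mounts)  # proposed argument
```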
B. Reuse `volumes` and do introspection to choose between binds and mounts
The `volumes` argument can be augmented to also accept `docker.types.Mount` instances, and internally we’ll do something like
```python
binds = []
mounts = []
for vol in volumes:
    if isinstance(vol, str):
        binds.append(vol)
    elif isinstance(vol, docker.types.Mount):
        mounts.append(vol)
    else:
        raise ValueError('...')
if binds:
    warnings.warn('...', DeprecationWarning)
```
and pass the collected lists to binds and mounts respectively.
I’m very interested in hearing thoughts on this.
**Are you willing to submit a PR?**
Yes
**Related Issues**
* #12537: Confusing on the bind syntax.
* #9047: Implement mounting in `DockerSwarmOperator` (it’s a subclass of `DockerOperator`, but the `volumes` option is currently unused).
| https://github.com/apache/airflow/issues/15815 | https://github.com/apache/airflow/pull/15843 | ac3454e4f169cdb0e756667575153aca8c1b6981 | 12995cfb9a90d1f93511a4a4ab692323e62cc318 | 2021-05-13T06:28:57Z | python | 2021-05-17T15:03:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,790 | ["Dockerfile", "Dockerfile.ci", "docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/restricted/restricted_environments.sh", "scripts/docker/common.sh", "scripts/docker/install_additional_dependencies.sh", "scripts/docker/install_airflow.sh", "scripts/docker/install_airflow_from_branch_tip.sh", "scripts/docker/install_from_docker_context_files.sh", "scripts/docker/install_pip_version.sh"] | Failed to build Docker image using Dockerfile in master branch | apache Airflow version
2.0.2
Environment
Configuration: Dockerfile
OS (e.g. from /etc/os-release): ubuntu 16.04
Install tools: sudo docker build -t airflow-with-browser .
What happened:
I can't build a Docker image using the Dockerfile in the master branch.
A Python package dependency error occurred while building the image:
<img width="1871" alt="스크린샷 2021-05-12 오후 2 16 16" src="https://user-images.githubusercontent.com/23733661/117922125-f416cd00-b32c-11eb-84e2-40bfec7c16f0.png">
```
The conflict is caused by:
connexion[flask,swagger-ui] 2.6.0 depends on connexion 2.6.0 (from https://files.pythonhosted.org/packages/a7/27/d8258c073989776014d49bbc8049a9b0842aaf776f462158d8a885f8c6a2/connexion-2.6.0-py2.py3-none-any.whl#sha256=c568e579f84be808e387dcb8570bb00a536891be1318718a0dad3ba90f034191 (from https://pypi.org/simple/connexion/) (requires-python:>=3.6))
The user requested (constraint) connexion==2.7.0
```
I think there is a version mismatch between connexion[flask, swagger-ui] and connexion.

What you expected to happen:
Succeed in building the Airflow image.
To Replicate:
git clone https://github.com/apache/airflow
cd airflow
sudo docker build -t airflow-with-browser .
| https://github.com/apache/airflow/issues/15790 | https://github.com/apache/airflow/pull/15802 | bcfa0cbbfc941cae705a39cfbdd6330a5ba0578e | e84fb58d223d0793b3ea3d487bd8de58fb7ecefa | 2021-05-12T05:21:50Z | python | 2021-05-14T12:10:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,789 | ["airflow/models/dag.py", "airflow/operators/python.py", "tests/operators/test_python.py"] | Task preceeding PythonVirtualenvOperator fails: "cannot pickle 'module' object" | **Apache Airflow version**
13faa6912f7cd927737a1dc15630d3bbaf2f5d4d
**Environment**
- **Configuration**: Local Executor
- **OS** (e.g. from /etc/os-release): Mac OS 11.3
- **Kernel**: Darwin Kernel Version 20.4.0
- **Install tools**: `pip install -e .`
**The DAG**
```python
def callable():
    print("hi")


with DAG(dag_id="two_virtualenv") as dag:
    a = PythonOperator(
        task_id="a",
        python_callable=callable,
    )
    # b = PythonOperator(           # works
    b = PythonVirtualenvOperator(   # doesn't work
        task_id="b",
        python_callable=callable,
    )
    a >> b
```
**What happened**:
Failure somewhere between first task and second:
```
INFO - Marking task as SUCCESS. dag_id=two_virtualenv, task_id=a
ERROR - Failed to execute task: cannot pickle 'module' object.
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/executors/debug_executor.py", line 79, in _run_task
ti._run_raw_task(job_id=ti.job_id, **params) # pylint: disable=protected-access
File "/Users/matt/src/airflow/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1201, in _run_raw_task
self._run_mini_scheduler_on_child_tasks(session)
File "/Users/matt/src/airflow/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1223, in _run_mini_scheduler_on_child_tasks
partial_dag = self.task.dag.partial_subset(
File "/Users/matt/src/airflow/airflow/models/dag.py", line 1490, in partial_subset
dag.task_dict = {
File "/Users/matt/src/airflow/airflow/models/dag.py", line 1491, in <dictcomp>
t.task_id: copy.deepcopy(t, {id(t.dag): dag}) # type: ignore
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/Users/matt/src/airflow/airflow/models/baseoperator.py", line 961, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo)) # noqa
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle 'module' object
ERROR - Task instance <TaskInstance: two_virtualenv.a 2021-05-11 00:00:00+00:00 [failed]> failed
```
**What you expected to happen**:
Both tasks say "hi" and succeed
**To Replicate**
The DAG and output above are shortened for brevity. A more complete story: https://gist.github.com/MatrixManAtYrService/6b27378776470491eb20b60e01cfb675
Ran it like this:
```
$ airflow dags test two_virtualenv $(date "+%Y-%m-%d")
``` | https://github.com/apache/airflow/issues/15789 | https://github.com/apache/airflow/pull/15822 | d78f8c597ba281e992324d1a7ff64465803618ce | 8ab9c0c969559318417b9e66454f7a95a34aeeeb | 2021-05-12T03:41:58Z | python | 2021-05-13T15:59:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,783 | ["airflow/providers/alibaba/cloud/log/oss_task_handler.py", "airflow/providers/amazon/aws/log/s3_task_handler.py", "airflow/providers/google/cloud/log/gcs_task_handler.py", "airflow/providers/microsoft/azure/log/wasb_task_handler.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/www/static/js/ti_log.js", "tests/api_connexion/endpoints/test_log_endpoint.py", "tests/providers/google/cloud/log/test_gcs_task_handler.py", "tests/utils/log/test_log_reader.py"] | Auto-refresh of logs. | **Description**
Auto-refresh of logs.
**Use case / motivation**
Similar process that is already implemented in the Graph View, have the logs to auto-refresh so it's easier to keep track of the different processes in the UI.
Thank you in advance!
| https://github.com/apache/airflow/issues/15783 | https://github.com/apache/airflow/pull/26169 | 07fe356de0743ca64d936738b78704f7c05774d1 | 1f7b296227fee772de9ba15af6ce107937ef9b9b | 2021-05-11T22:54:11Z | python | 2022-09-18T21:06:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,771 | [".pre-commit-config.yaml", "chart/values.schema.json", "chart/values.yaml", "chart/values_schema.schema.json", "docs/conf.py", "docs/helm-chart/parameters-ref.rst", "docs/spelling_wordlist.txt", "scripts/ci/pre_commit/pre_commit_json_schema.py"] | Automate docs for `values.yaml` via pre-commit config and break them into logical groups | Automate docs for values.yaml via pre-commit config and break them into logical groups like https://github.com/bitnami/charts/tree/master/bitnami/airflow | https://github.com/apache/airflow/issues/15771 | https://github.com/apache/airflow/pull/15827 | 8799b9f841892b642fff6fee4b021fc4204a33df | 2a8bae9db4bbb2c2f0e94c38676815952e9008d3 | 2021-05-10T23:58:37Z | python | 2021-05-13T20:58:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,768 | ["scripts/in_container/prod/entrypoint_prod.sh"] | PythonVirtualenvOperator fails with error from pip execution: Can not perform a '--user' install. | **Apache Airflow version**: 2.0.2
**Environment**:
- **Hardware configuration**: Macbook Pro 2017
- **OS**: macOS X Catalina 10.15.7
- **Kernel**: Darwin 19.6.0
- **Others**: Docker 20.10.6, docker-compose 1.29.1
**What happened**:
Running the demo `example_python_operator` dag fails on the `virtualenv_python` step. The call to pip via subprocess fails:
`subprocess.CalledProcessError: Command '['/tmp/venvt3_qnug6/bin/pip', 'install', 'colorama==0.4.0']' returned non-zero exit status 1.`
The error coming from pip is: `ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.`
**What you expected to happen**:
The call to pip succeeds, and the colorama dependency is installed into the virtualenv without attempting to install to user packages. The `example_python_operator` dag execution succeeds.
**How to reproduce it**:
Setup airflow 2.0.2 in docker as detailed in the Quickstart guide: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
Once running, enable and manually trigger the `example_python_operator` dag via the webUI.
The dag will fail at the `virtualenv_python` task.
**Anything else we need to know**:
Not a problem with the Airflow 2.0.1 docker-compose. Fairly certain this is due to the addition of the `PIP_USER` environment variable being set to `true` in this PR: https://github.com/apache/airflow/pull/14125
My proposed solution would be to prepend `PIP_USER=false` to the construction of the call to pip within `utils/python_virtualenv.py` here: https://github.com/apache/airflow/blob/25caeda58b50eae6ef425a52e794504bc63855d1/airflow/utils/python_virtualenv.py#L30 | https://github.com/apache/airflow/issues/15768 | https://github.com/apache/airflow/pull/15774 | 996965aad9874e9c6dad0a1f147d779adc462278 | 533f202c22a914b881dc70ddf673ec81ffc8efcd | 2021-05-10T20:34:08Z | python | 2021-05-11T09:17:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,748 | ["airflow/cli/commands/task_command.py"] | airflow tasks run --ship-dag not able to generate pickeled dag | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): No
**Environment**:
- **Cloud provider or hardware configuration**: local machine
- **OS** (e.g. from /etc/os-release): 18.04.5 LTS (Bionic Beaver)
- **Kernel** (e.g. `uname -a`): wsl2
**What happened**:
Getting Pickled_id: None
```
root@12c7fd58e084:/opt/airflow# airflow tasks run example_bash_operator runme_0 now --ship-dag --interactive
[2021-05-09 13:11:33,247] {dagbag.py:487} INFO - Filling up the DagBag from /files/dags
Running <TaskInstance: example_bash_operator.runme_0 2021-05-09T13:11:31.788923+00:00 [None]> on host 12c7fd58e084
Pickled dag <DAG: example_bash_operator> as pickle_id: None
Sending to executor.
[2021-05-09 13:11:34,722] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_bash_operator', 'runme_0', '2021-05-09T13:11:31.788923+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/example_dags/example_bash_operator.py']
[2021-05-09 13:11:34,756] {local_executor.py:81} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_bash_operator', 'runme_0', '2021-05-09T13:11:31.788923+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/example_dags/example_bash_operator.py']
[2021-05-09 13:11:34,757] {local_executor.py:386} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2021-05-09 13:11:34,817] {dagbag.py:487} INFO - Filling up the DagBag from /opt/airflow/airflow/example_dags/example_bash_operator.py
Running <TaskInstance: example_bash_operator.runme_0 2021-05-09T13:11:31.788923+00:00 [None]> on host 12c7fd58e084
```
**What you expected to happen**:
Pickled_id should get generated
```
Pickled dag <DAG: example_bash_operator> as pickle_id: None
```
**How to reproduce it**:
run below command from command line in airflow environment
```
airflow tasks run example_bash_operator runme_0 now --ship-dag --interactive
```
**Would like to submit PR for this issue**: YES
| https://github.com/apache/airflow/issues/15748 | https://github.com/apache/airflow/pull/15890 | d181604739c048c6969d8997dbaf8b159607904b | 86d0a96bf796fd767cf50a7224be060efa402d94 | 2021-05-09T13:16:14Z | python | 2021-06-24T17:27:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,742 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | Save pod name in xcom for KubernetesPodOperator. | Hello.
Kubernetes generates a unique pod name.
https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/kubernetes/pod_generator.py#L434-L458
It would be great if the pod name was available in Airflow after completing the task, so that, for example, we could use it to add [extra links](http://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html) or use it as an argument in downstream tasks. To do this, we should save this name in XCOM table.
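A rough sketch of what this could look like in the operator's `execute` (the helpers here are hypothetical placeholders, not the operator's real method names):
```python
def execute(self, context):
    pod = self._launch_pod(context)  # hypothetical helper returning the created V1Pod
    # Make the generated name available to downstream tasks and extra links.
    context["ti"].xcom_push(key="pod_name", value=pod.metadata.name)
    context["ti"].xcom_push(key="pod_namespace", value=pod.metadata.namespace)
    return self._await_pod_completion(pod)  # hypothetical helper
```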
The operator for BigQuery works in a similar way.
https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L730
Thanks to this, we have links to the BigQuery console.
https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L600-L605
https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L57-L86
Best regards,
Kamil Breguła
| https://github.com/apache/airflow/issues/15742 | https://github.com/apache/airflow/pull/15755 | c493b4d254157f189493acbf5101167f753aa766 | 37d549bde79cd560d24748ebe7f94730115c0e88 | 2021-05-08T19:16:42Z | python | 2021-05-14T00:19:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,713 | ["Dockerfile", "Dockerfile.ci"] | Migrate to newer Node | We started to receive deprecation warnings (and artificial 20 second delay) while compiling assets for Airflow in master/
I think maybe it's the right time to migrate to newer Node. The old UI will still be there for quite a while.
This however, I think, requires rather heavy testing of the whole UI functionality.
Happy to collaborate on this one but possibly we should do it as part of bigger release?
@ryanahamilton @jhtimmins @mik-laj - WDYT? How heavy/dangerous this might be?
```
================================================================================
================================================================================
DEPRECATION WARNING
Node.js 10.x is no longer actively supported!
You will not receive security or critical stability updates for this version.
You should migrate to a supported version of Node.js as soon as possible.
Use the installation script that corresponds to the version of Node.js you
wish to install. e.g.
* https://deb.nodesource.com/setup_12.x — Node.js 12 LTS "Erbium"
* https://deb.nodesource.com/setup_14.x — Node.js 14 LTS "Fermium" (recommended)
* https://deb.nodesource.com/setup_15.x — Node.js 15 "Fifteen"
* https://deb.nodesource.com/setup_16.x — Node.js 16 "Gallium"
Please see https://github.com/nodejs/Release for details about which
version may be appropriate for you.
The NodeSource Node.js distributions repository contains
information both about supported versions of Node.js and supported Linux
distributions. To learn more about usage, see the repository:
https://github.com/nodesource/distributions
================================================================================
================================================================================
``` | https://github.com/apache/airflow/issues/15713 | https://github.com/apache/airflow/pull/15718 | 87e440ddd07935f643b93b6f2bbdb3b5e8500510 | 46d62782e85ff54dd9dc96e1071d794309497983 | 2021-05-07T13:10:31Z | python | 2021-05-07T16:46:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,708 | ["airflow/decorators/task_group.py", "tests/utils/test_task_group.py"] | @task_group returns int, but it appears in @task as TaskGroup | **Apache Airflow version**
13faa6912f7cd927737a1dc15630d3bbaf2f5d4d
**Environment**
- **Configuration**: Local Executor
- **OS** (e.g. from /etc/os-release): Mac OS 11.3
- **Kernel**: Darwin Kernel Version 20.4.0
- **Install tools**: `pip install -e .`
**The DAG**
```python
@task
def one():
return 1
@task_group
def trivial_group(inval):
@task
def add_one(i):
return i + 1
outval = add_one(inval)
return outval
@task
def print_it(inval):
print(inval)
@dag(schedule_interval=None, start_date=days_ago(1), default_args={"owner": "airflow"})
def wrap():
x = one()
y = trivial_group(x)
z = print_it(y)
wrap_dag = wrap()
```
**What happened**:
`print_it` had no predecessors and receives `<airflow.utils.task_group.TaskGroup object at 0x128921940>`
**What you expected to happen**:
`print_it` comes after `trivial_group.add_one` and receives `2`
The caller ends up with the task group itself, equivalent in the traditional api to `tg_ref` in:
```
with TaskGroup("trivial_group") tg_ref:
pass
````
This interrupts the ability to continue using the Task Flow api because passing it into a function annotated with `@task` fails to register the dependency with whatever magic gets it out of xcom and adds edges to the dag.
**To Replicate**
```
$ airflow dags test wrap $(date "+%Y-%m-%d")
``` | https://github.com/apache/airflow/issues/15708 | https://github.com/apache/airflow/pull/15779 | c8ef3a3539f17b39d0a41d10a631d8d9ee564fde | 303c89fea0a7cf8a857436182abe1b042d473022 | 2021-05-06T22:06:31Z | python | 2021-05-11T19:09:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,698 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | task_instance_mutation_hook not called by scheduler when importing airflow.models.taskinstance | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 1.10.12 (also tested with 1.10.15, 2.0.2 but less extensively)
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): None
**Environment**: Linux / Docker
- **Cloud provider or hardware configuration**: None
- **OS** (e.g. from /etc/os-release): Red Hat Enterprise Linux Server 7.9 (Maipo)
- **Kernel** (e.g. `uname -a`): Linux d7b9410c0f25 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**: Tested on both real Linux (Red Hat) and a docker inside a windows machine.
**What happened**: Custom `task_instance_mutation_hook` not called by scheduler even though airflow loads the `airflow_local_settings` module.
**What you expected to happen**:
`task_instance_mutation_hook` called before every task instance run.
I think the way `airflow.models.dagrun` loads `task_instance_mutation_hook` from `airflow_local_settings` does not work when `airflow_local_settings` imports `airflow.models.taskinstance` or `airflow.models.dagrun`.
**How to reproduce it**:
1. Added `airflow_local_settings.py` to \{AIRFLOW_HOME\}\config
```python
import logging
from airflow.models.taskinstance import TaskInstance
def task_instance_mutation_hook(ti: TaskInstance):
logging.getLogger("").warning("HERE IN task_instance_mutation_hook log")
print("HERE IN task_instance_mutation_hook")
ti.queue = "X"
```
2. See output `[2021-05-06 11:13:04,076] {settings.py:392} INFO - Loaded airflow_local_settings from /usr/local/airflow/config/airflow_local_settings.py.`
3. function is never called - log/print is not written and queue does not update.
4. Additionally, example code to reproduce code issue:
```python
import airflow
import airflow.models.dagrun
import inspect
print(inspect.getfile(airflow.settings.task_instance_mutation_hook))
print(inspect.getfile(airflow.models.dagrun.task_instance_mutation_hook))
```
outputs
```
/usr/local/airflow/config/airflow_local_settings.py
/opt/bb/lib/python3.7/site-packages/airflow/settings.py
```
5. when removing `from airflow.models.taskinstance import TaskInstance` from airflow_local_settings.py everything works as expected.
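A workaround sketch for the local settings file that keeps `airflow.models` imports out of module level entirely (assuming the type hint itself is not essential):
```python
import logging

def task_instance_mutation_hook(ti):
    # No module-level "from airflow.models.taskinstance import TaskInstance":
    # that import is what makes the scheduler fall back to the default hook.
    # If the annotation is wanted, guard it with typing.TYPE_CHECKING instead.
    logging.getLogger(__name__).warning("HERE IN task_instance_mutation_hook log")
    ti.queue = "X"
```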
**Anything else we need to know**:
BTW, do the logs printed from `task_instance_mutation_hook` go anywhere? Even after I remove the import and the queue is update, I can't see anything in the logs files or in the scheduler console.
| https://github.com/apache/airflow/issues/15698 | https://github.com/apache/airflow/pull/15851 | 6b46af19acc5b561c1c5631a753cc07b1eca34f6 | 3919ee6eb9042562b6cafae7c34e476fbb413e13 | 2021-05-06T12:45:58Z | python | 2021-05-15T09:11:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,685 | ["airflow/configuration.py", "tests/www/views/test_views.py"] | Undefined `conf` when using AWS secrets manager backend and `sql_alchemy_conn_secret` | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 2.2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19 (server), 1.21 (client)
**Environment**:
- **Cloud provider or hardware configuration**: AWS EKS
- **OS** (e.g. from /etc/os-release): Docker image (apache/airflow:2.0.2-python3.7)
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
`airflow` bin fails during configuration initialization (stack below). I see a similar issue reported here: https://github.com/apache/airflow/issues/13254, but my error is slightly different.
```
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/settings.py", line 37, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 1098, in <module>
conf = initialize_config()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 860, in initialize_config
conf.validate()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 199, in validate
self._validate_config_dependencies()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 227, in _validate_config_dependencies
is_sqlite = "sqlite" in self.get('core', 'sql_alchemy_conn')
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 328, in get
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 394, in _get_environment_variables
option = self._get_env_var_option(section, key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 298, in _get_env_var_option
return _get_config_value_from_secret_backend(os.environ[env_var_secret_path])
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 83, in _get_config_value_from_secret_backend
secrets_client = get_custom_secret_backend()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 999, in get_custom_secret_backend
secrets_backend_cls = conf.getimport(section='secrets', key='backend')
NameError: name 'conf' is not defined
```
**What you expected to happen**:
`airflow` to correctly initialize the configuration.
**How to reproduce it**:
`airflow.cfg`
```
[core]
# ...
sql_alchemy_conn_secret = some-key
# ...
[secrets]
backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend
backend_kwargs = {config_prefix: 'airflow/config', connections_prefix: 'airflow/connections', variables_prefix: 'airflow/variables'}
```
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15685 | https://github.com/apache/airflow/pull/16088 | 9d06ee8019ecbc07d041ccede15d0e322aa797a3 | 65519ab83ddf4bd6fc30c435b5bfccefcb14d596 | 2021-05-05T21:52:55Z | python | 2021-05-27T16:37:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,679 | ["airflow/providers/amazon/aws/transfers/mongo_to_s3.py", "tests/providers/amazon/aws/transfers/test_mongo_to_s3.py"] | MongoToS3Operator failed when running with a single query (not aggregate pipeline) | **Apache Airflow version**: 2.0.2
**What happened**:
`MongoToS3Operator` failed when running with a single query (not aggregate pipeline):
```sh
Traceback (most recent call last):
File "/home/airflow//bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow//lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow//lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow//lib/python3.8/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/airflow//lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 385, in task_test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1413, in run
self._run_raw_task(
File "/home/airflow//lib/python3.8/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow//lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow//lib/python3.8/site-packages/airflow/providers/amazon/aws/transfers/mongo_to_s3.py", line 116, in execute
results = MongoHook(self.mongo_conn_id).find(
File "/home/airflow//lib/python3.8/site-packages/airflow/providers/mongo/hooks/mongo.py", line 144, in find
return collection.find(query, **kwargs)
File "/home/airflow//lib/python3.8/site-packages/pymongo/collection.py", line 1523, in find
return Cursor(self, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'allowDiskUse'
```
**What you expected to happen**:
I expect the data from MongoDB to be exported to a file in S3 with no errors.
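Until the operator handles plain queries, one possible workaround (hedged: it relies on a list-valued `mongo_query` being treated as an aggregation pipeline, for which pymongo does accept `allowDiskUse`; connection ids, bucket and collection names below are placeholders) is:

```python
from airflow import DAG
from airflow.providers.amazon.aws.transfers.mongo_to_s3 import MongoToS3Operator
from airflow.utils.dates import days_ago

with DAG("mongo_to_s3_workaround", start_date=days_ago(1), schedule_interval=None) as dag:
    export_to_s3 = MongoToS3Operator(
        task_id="export_to_s3",
        mongo_conn_id="mongo_default",
        s3_conn_id="aws_default",
        mongo_collection="my_mongo_collection",
        mongo_query=[{"$match": {}}],  # a list is treated as an aggregate pipeline
        s3_bucket="my-bucket",
        s3_key="my_data.json",
        replace=True,
    )
```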
**How to reproduce it**:
Run the following operator with a single `mongo_query` (no aggregate pipeline):
```python
export_to_s3 = MongoToS3Operator(
task_id='export_to_s3',
mongo_conn_id=Variable.get('mongo_conn_id'),
s3_conn_id=Variable.get('aws_conn_id'),
mongo_collection='my_mongo_collection',
mongo_query={},
s3_bucket=Variable.get('s3_bucket'),
s3_key="my_data.json",
replace=True,
dag=dag,
)
``` | https://github.com/apache/airflow/issues/15679 | https://github.com/apache/airflow/pull/15680 | e7293b05fa284daa8c55ae95a6dff8af31fbd03b | dab10d9fae6bfca0f9c0c504b77773d94ccee86d | 2021-05-05T17:09:22Z | python | 2021-05-10T14:13:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,656 | ["airflow/www/static/css/dags.css"] | Scrolling issue with new fast trigger with single DAG | **Apache Airflow version**: master
**What happened**:
If you have a single DAG, half of the new "fast trigger" dropdown is hidden on the dashboard and causes a scrollbar in the DAG table.
**How to reproduce it**:
Have a single DAG in your instance and click on the trigger button from the dashboard.

| https://github.com/apache/airflow/issues/15656 | https://github.com/apache/airflow/pull/15660 | d723ba5b5cfb45ce7f578c573343e86247a2d069 | a0eb747b8d73f71dcf471917e013669a660cd4dd | 2021-05-04T16:21:54Z | python | 2021-05-05T00:28:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,650 | ["airflow/utils/db.py"] | Docs: check_migrations more verbose documentation | **Description**
The documentation and the error message of the [check-migrations](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#check-migrations) / `def check_migrations` could be more verbose. This check can fail if the underlying database was never initialised.
**Use case / motivation**
We deploy our Airflow Helm chart with the Terraform Helm provider, which has a [bug](https://github.com/hashicorp/terraform-provider-helm/issues/683) with Helm hooks. If Airflow gave a more verbose error message about why `check-migrations` can fail, we would have found the underlying bug much sooner (a sketch of the kind of message that would help follows below).
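Purely as an illustration of the kind of actionable message being asked for. The retry-loop shape and the helper below are assumptions, not the real `airflow.utils.db.check_migrations`:

```python
import time


def _migrations_applied() -> bool:
    # Placeholder for the real "alembic current heads == script heads" comparison.
    return False


def check_migrations(timeout: int = 60) -> None:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if _migrations_applied():
            return
        time.sleep(1)
    raise TimeoutError(
        f"There are still unapplied migrations after {timeout}s. If the metadata "
        "database was never initialised, run `airflow db init` (or `airflow db "
        "upgrade`) before `airflow db check-migrations`."
    )
```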
**Are you willing to submit a PR?**
Yes
| https://github.com/apache/airflow/issues/15650 | https://github.com/apache/airflow/pull/15662 | e47f7e42b632ad78a204531e385ec09bcce10816 | 86ad628158eb728e56c817eea2bea4ddcaa571c2 | 2021-05-04T09:51:15Z | python | 2021-05-05T05:30:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,641 | ["airflow/models/dag.py", "docs/apache-airflow/concepts/dags.rst", "docs/apache-airflow/concepts/tasks.rst"] | Add documentation on what each parameter to a `sla_miss_callback` callable is | I couldn't find any official documentation specifying what each parameter to a `sla_miss_callback` callable are. This would be a great addition to know how to properly format the messages sent. | https://github.com/apache/airflow/issues/15641 | https://github.com/apache/airflow/pull/18305 | 2b62a75a34d44ac7d9ed83c02421ff4867875577 | dcfa14d60dade3fdefa001d10013466fe4d77f0d | 2021-05-03T21:18:17Z | python | 2021-09-18T19:18:32Z |
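For the `sla_miss_callback` question above, a hedged sketch of the callback signature as I read `DAG.manage_slas` in Airflow 2.x; this is not official documentation, so verify it against your Airflow version before relying on it:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator


def sla_miss_alert(dag, task_list, blocking_task_list, slas, blocking_tis):
    # dag: the DAG object; task_list / blocking_task_list: newline-joined strings
    # describing the missed and blocking tasks; slas: list of SlaMiss rows;
    # blocking_tis: list of blocking TaskInstance objects (my reading of the code).
    print(f"SLA missed in {dag.dag_id}:\n{task_list}")


with DAG(
    "sla_callback_example",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@hourly",
    sla_miss_callback=sla_miss_alert,
) as dag:
    BashOperator(task_id="slow_task", bash_command="sleep 5", sla=timedelta(minutes=1))
```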
closed | apache/airflow | https://github.com/apache/airflow | 15,636 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | WasbHook does not delete blobs with slash characters in prefix mode | **Apache Airflow version**: 2.0.1
**Environment**:
- **Cloud provider or hardware configuration**: docker container
- **OS** (e.g. from /etc/os-release): `Debian GNU/Linux 10 (buster)`
- **Kernel** (e.g. `uname -a`): `Linux 69a18af222ff 3.10.0-1160.15.2.el7.x86_64 #1 SMP Thu Jan 21 16:15:07 EST 2021 x86_64 GNU/Linux`
- **Install tools**: `pip`
- **Others**: `azure` extras
**What happened**:
`airflow.providers.microsoft.azure.hooks.wasb.WasbHook.delete_file` is unable to delete blobs when both of the conditions below are true:
* `is_prefix` argument set to `True`
* the target blobs contain at least one '/' character in their names
**What you expected to happen**:
All files starting with the specified prefix are removed.
The problem is caused by this line: https://github.com/apache/airflow/blob/73187871703bce22783a42db3d3cec9045ee1de2/airflow/providers/microsoft/azure/hooks/wasb.py#L410
Because `delimiter=''` is not passed, the listed blobs won't be terminal blobs when the target blob names contain the default delimiter (`/`); see the sketch below.
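A hedged illustration of the fix: it assumes `WasbHook.get_blobs_list` exposes a `delimiter` keyword (default `'/'`), and the connection id, container and blob names are placeholders:

```python
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook

hook = WasbHook(wasb_conn_id="wasb_default")

# With the default delimiter '/', listing by prefix 'parent' only yields the virtual
# folder 'parent/'; with delimiter='' the terminal blobs 'parent/file1' and
# 'parent/file2' are returned, so delete_file can actually delete them.
print(hook.get_blobs_list(container_name="cnt", prefix="parent", delimiter=""))
```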
**How to reproduce it**:
For example, consider the following scenario. We have a blob container `cnt` with two files:
* `parent/file1`
* `parent/file2`
The following call should delete both files:
``` python
wasb_hook= WasbHook(...)
wasb_hook.delete_file(
container_name='cnt',
blob_name='parent',
is_prefix=True,
)
```
But instead we get an error `Blob(s) not found: ('parent/',)`
| https://github.com/apache/airflow/issues/15636 | https://github.com/apache/airflow/pull/15637 | 91bb877ff4b0e0934cb081dd103898bd7386c21e | b1bd59440baa839eccdb2770145d0713ade4f82a | 2021-05-03T16:24:20Z | python | 2021-05-04T17:24:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,622 | ["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/cloud/example_dags/example_dataproc.py", "airflow/providers/google/cloud/hooks/dataproc.py", "airflow/providers/google/cloud/operators/dataproc.py", "airflow/providers/google/cloud/sensors/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py", "tests/providers/google/cloud/sensors/test_dataproc.py"] | Inconsistencies with Dataproc Operator parameters | I'm looking at the GCP Dataproc operator and noticed that the `DataprocCreateClusterOperator` and `DataprocDeleteClusterOperator` require a `region` parameter, but other operators, like the `DataprocSubmitJobOperator` require a `location` parameter instead. I think it would be best to consistently enforce the parameter as `region`, because that's what is required in the protos for all of the [cluster CRUD operations](https://github.com/googleapis/python-dataproc/blob/master/google/cloud/dataproc_v1/proto/clusters.proto) and for [job submission](https://github.com/googleapis/python-dataproc/blob/d4b299216ad833f68ad63866dbdb2c8f2755c6b4/google/cloud/dataproc_v1/proto/jobs.proto#L741). `location` also feels too ambiguous imho, because it implies we could also pass a GCE zone, which in this case is either unnecessary or not supported (I can't remember which and it's too late in my Friday for me to double check 🙃 )
This might be similar to #13454 but I'm not 100% sure. If we think this is worth working on, I could maybe take this on as a PR, but it would be low priority for me, and if someone else wants to take it on, they should feel free to 😄
**Apache Airflow version**: 2.0.1
**Environment**:
Running locally on a Mac with a SQLite backend, only running a unit test that makes sure the DAG compiles. Installed with pip and the provided constraints file.
**What happened**:
If you pass `location` to the `DataprocCreateClusterOperator`, the DAG won't compile and throws the error `airflow.exceptions.AirflowException: Argument ['region']`. A short illustration follows below.
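A minimal sketch of the mismatch (hedged: the operator kwargs are as I understand the provider version discussed, and the project, region, cluster and job values are placeholders):

```python
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocSubmitJobOperator,
)

create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id="my-project",
    region="us-east1",        # create/delete operators require 'region'...
    cluster_name="demo",
    cluster_config={},
)
submit_job = DataprocSubmitJobOperator(
    task_id="submit_job",
    project_id="my-project",
    location="us-east1",      # ...while job submission expects 'location'
    job={},
)
```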
| https://github.com/apache/airflow/issues/15622 | https://github.com/apache/airflow/pull/16034 | 5a5f30f9133a6c5f0c41886ff9ae80ea53c73989 | b0f7f91fe29d1314b71c76de0f11d2dbe81c5c4a | 2021-04-30T23:46:34Z | python | 2021-07-07T20:37:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,598 | ["airflow/providers/qubole/CHANGELOG.rst", "airflow/providers/qubole/hooks/qubole.py", "airflow/providers/qubole/hooks/qubole_check.py", "airflow/providers/qubole/operators/qubole.py", "airflow/providers/qubole/provider.yaml"] | Qubole Hook Does Not Support 'include_headers' |
**Description**
Qubole Hook and Operator do not support `include_header` param for getting results with headers
Add Support for `include_header` get_results(... arguments=[True])
**Use case / motivation**
It's very hard to work with CSV results from db without headers.
This is super important when using Qubole's databases.
**Are you willing to submit a PR?**
Not sure yet, I can give it a try
**Related Issues**
| https://github.com/apache/airflow/issues/15598 | https://github.com/apache/airflow/pull/15615 | edbc89c64033517fd6ff156067bc572811bfe3ac | 47a5539f7b83826b85b189b58b1641798d637369 | 2021-04-29T21:01:34Z | python | 2021-05-04T06:39:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,596 | ["airflow/dag_processing/manager.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst", "newsfragments/30076.significant.rst", "tests/dag_processing/test_manager.py"] | Using SLAs causes DagFileProcessorManager timeouts and prevents deleted dags from being recreated | **Apache Airflow version**: 2.0.1 and 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: Celery executors, Redis + Postgres
- **Cloud provider or hardware configuration**: Running inside docker
- **OS** (e.g. from /etc/os-release): Centos (inside Docker)
**What happens**:
In 2.0.0 if you delete a dag from the GUI when the `.py` file is still present, the dag is re-added within a few seconds (albeit with no history etc. etc.). Upon attempting to upgrade to 2.0.1 we found that after deleting a dag it would take tens of minutes to come back (or more!), and its reappearance was seemingly at random (i.e. restarting schedulers / guis did not help).
It did not seem to matter which dag it was.
The problem still exists in 2.0.2.
**What you expected to happen**:
Deleting a dag should result in that dag being re-added in short order if the `.py` file is still present.
**Likely cause**
I've tracked it back to an issue with SLA callbacks. I strongly suspect the fix for Issue #14050 was inadvertently responsible, since that was in the 2.0.1 release. In a nutshell, it appears the dag_processor_manager gets into a state where on every single pass it takes so long to process SLA checks for one of the dag files that the entire processor times out and is killed. As a result, some of the dag files (that are queued behind the poison pill file) never get processed and thus we don't reinstate the deleted dag unless the system gets quiet and the SLA checks clear down.
To reproduce in _my_ setup, I created a clean airflow instance. The only materially important config setting I use is `AIRFLOW__SCHEDULER__PARSING_PROCESSES=1` which helps keep things deterministic.
I then started adding in dag files from the production system until I found a file that caused the problem. Most of our dags do not have SLAs, but this one did. After adding it, I started seeing lines like this in `dag_processor_manager.log` (file names have been changed to keep things simple)
```
[2021-04-29 16:27:19,259] {dag_processing.py:1129} ERROR - Processor for /home/airflow/dags/problematic.py with PID 309 started at 2021-04-29T16:24:19.073027+00:00 has timed out, killing it.
```
Additionally, the stats contained lines like:
```
File Path PID Runtime # DAGs # Errors Last Runtime Last Run
----------------------------------------------------------------- ----- --------- -------- ---------- -------------- -------------------
/home/airflow/dags/problematic.py 309 167.29s 8 0 158.78s 2021-04-29T16:24:19
```
(i.e. 3 minutes to process a single file!)
Of note, the parse time of the affected file got longer on each pass until the processor was killed. Increasing `AIRFLOW__CORE__DAG_FILE_PROCESSOR_TIMEOUT` to e.g. 300 did nothing to help; it simply bought a few more iterations of the parse loop before it blew up.
Browsing the log file for `scheduler/2021-04-29/problematic.py.log` I could see the following:
<details><summary>Log file entries in 2.0.2</summary>
```
[2021-04-29 16:06:44,633] {scheduler_job.py:629} INFO - Processing file /home/airflow/dags/problematic.py for tasks to queue
[2021-04-29 16:06:44,634] {logging_mixin.py:104} INFO - [2021-04-29 16:06:44,634] {dagbag.py:451} INFO - Filling up the DagBag from /home/airflow/dags/problematic
[2021-04-29 16:06:45,001] {scheduler_job.py:639} INFO - DAG(s) dict_keys(['PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S2-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S2-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S3-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S3-weekends']) retrieved from /home/airflow/dags/problematic.py
[2021-04-29 16:06:45,001] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends
[2021-04-29 16:06:46,398] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends
[2021-04-29 16:06:47,615] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends
[2021-04-29 16:06:48,852] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends
[2021-04-29 16:06:49,411] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends
[2021-04-29 16:06:50,156] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends
[2021-04-29 16:06:50,845] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-SP500_Index_1-weekends
[2021-04-29 16:06:52,164] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends
[2021-04-29 16:06:53,474] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends
[2021-04-29 16:06:54,731] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-SP500_Index_1-weekends
[2021-04-29 16:06:55,345] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends
[2021-04-29 16:06:55,920] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends
and so on for 100+ more lines like this...
```
</details>
Two important points from the above logs:
1. We seem to be running checks on the same dags multiple times
2. The number of checks grows on each pass (i.e. the number of log lines beginning "Running SLA Checks..." increases on each pass until the processor manager is restarted, and then it begins afresh)
**Likely location of the problem**:
This is where I start to run out of steam. I believe the culprit is this line: https://github.com/apache/airflow/blob/2.0.2/airflow/jobs/scheduler_job.py#L1813
It seems to me that the above leads to a feedback where each time you send a dag callback to the processor you include a free SLA callback as well, hence the steadily growing SLA processing log messages / behaviour I observed. As noted above, this method call _was_ in 2.0.0 but until Issue #14050 was fixed, the SLAs were ignored, so the problem only kicked in from 2.0.1 onwards.
Unfortunately, my airflow-fu is not good enough for me to suggest a fix beyond the Gordian solution of removing the line completely (!); in particular, it's not clear to me how / where SLAs _should_ be being checked. Should the dag_processor_manager be doing them? Should it be another component (I mean, naively, I would have thought it should be the workers, so that SLA checks can scale with the rest of your system)? How should the checks be enqueued? I dunno enough to give a good answer. 🤷
**How to reproduce it**:
In our production system, it would blow up every time, immediately. _Reliably_ reproducing in a clean system depends on how fast your test system is; the trick appears to be getting the scan of the dag file to take long enough that the SLA checks start to snowball. The dag below did it for me; if your machine seems to be staying on top of processing the dags, try increasing the number of tasks in a single dag (or buy a slower computer!)
<details><summary>Simple dag that causes the problem</summary>
```
import datetime as dt
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
def create_graph(dag):
prev_task = None
for i in range(10):
next_task = BashOperator(
task_id=f'simple_task_{i}',
bash_command="echo SLA issue",
dag=dag)
if prev_task:
prev_task >> next_task
prev_task = next_task
def create_dag(name: str) -> DAG:
tz_to_use = pendulum.timezone('UTC')
default_args = {
'owner': '[email protected]',
'start_date': dt.datetime(2018, 11, 13, tzinfo=tz_to_use),
'email': ['[email protected]'],
'email_on_failure': False,
'email_on_retry': False,
'sla': dt.timedelta(hours=13),
}
dag = DAG(name,
catchup=False,
default_args=default_args,
max_active_runs=10,
schedule_interval="* * * * *")
create_graph(dag)
return dag
for i in range(100):
name = f"sla_dag_{i}"
globals()[name] = create_dag(name)
```
</details>
To reproduce:
1. Configure an empty airflow instance, s.t. it only has one parsing process (as per config above).
2. Add the file above into the install. The file simply creates 100 near-trivial dags. On my system, airflow can't stay ahead, and is basically permanently busy processing the backlog. Your cpu may have more hamsters, in which case you'll need to up the number of tasks and/or dags.
2. Locate and tail the `scheduler/[date]/sla_example.py.log` file (assuming you called the above `sla_example.py`, of course)
3. This is the non-deterministic part. On my system, within a few minutes, the processor manager is taking noticeably longer to process the file and you should be able to see lots of SLA log messages like my example above ☝️. Like all good exponential growth it takes many iterations to go from 1 second to 1.5 seconds to 2 seconds, but not very long at all to go from 10 seconds to 30 to 💥
**Anything else we need to know**:
1. I'm working around this for now by simply removing the SLAs from the dag. This solves the problem since the SLA callbacks are then dropped. But SLAs are a great feature, and I'd like them back (please!).
2. Thanks for making airflow and thanks for making it this far down the report! | https://github.com/apache/airflow/issues/15596 | https://github.com/apache/airflow/pull/30076 | 851fde06dc66a9f8e852f9a763746a47c47e1bb7 | 53ed5620a45d454ab95df886a713a5e28933f8c2 | 2021-04-29T20:21:20Z | python | 2023-03-16T21:51:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,559 | ["airflow/settings.py", "tests/core/test_sqlalchemy_config.py", "tests/www/test_app.py"] | airflow dag success , but tasks not yet started,not scheduled. | hi,team:
My DAG runs on a 1-minute schedule. For some DAG runs the DAG state is success, but the tasks inside were never started or scheduled:

how can to fix it? | https://github.com/apache/airflow/issues/15559 | https://github.com/apache/airflow/pull/15714 | 507bca57b9fb40c36117e622de3b1313c45b41c3 | 231d104e37da57aa097e5f726fe6d3031ad04c52 | 2021-04-28T03:58:29Z | python | 2021-05-09T08:45:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,558 | ["chart/templates/secrets/metadata-connection-secret.yaml", "chart/templates/secrets/result-backend-connection-secret.yaml", "chart/tests/test_metadata_connection_secret.py", "chart/tests/test_result_backend_connection_secret.py", "chart/values.schema.json", "chart/values.yaml"] | chart/templates/secrets/metadata-connection-secret.yaml postgres hardcode? | Can't deploy airflow chart with mysql backend?
The template `chart/templates/secrets/metadata-connection-secret.yaml` hard-codes postgres:
```yaml
data:
  connection: {{ (printf "postgresql://%s:%s@%s:%s/%s?sslmode=%s" .Values.data.metadataConnection.user .Values.data.metadataConnection.pass $host $port $database .Values.data.metadataConnection.sslmode) | b64enc | quote }}
```
while `chart/values.yaml` only offers:
```yaml
data:
  # If secret names are provided, use those secrets
  metadataSecretName: ~
  resultBackendSecretName: ~
  brokerUrlSecretName: ~
  # Otherwise pass connection values in
  metadataConnection:
    user: ~
    pass: ~
```
Here we can't specify mysql or postgresql if we don't specify metadataSecretName.
| https://github.com/apache/airflow/issues/15558 | https://github.com/apache/airflow/pull/15616 | a4211e276fce6521f0423fe94b01241a9c43a22c | f8a70e1295a841326265fb5c5bf21cd1839571a7 | 2021-04-28T02:47:23Z | python | 2021-04-30T20:10:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,538 | ["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/sensors/s3_key.py", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor wildcard fails to match valid unix wildcards | <!--
**Apache Airflow version**: 1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: AWS MWAA
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**: In a DAG, we implemented an S3KeySensor with a wildcard. This was meant to match an S3 object whose name could include a variable digit. Using an asterisk in our name, we could detect the object, but when instead we used [0-9] we could not.
<!-- (please include exact error messages if you can) -->
**What you expected to happen**: S3KeySensor bucket_key should be interpretable as any valid Unix wildcard pattern, probably as defined here: https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm or something similar.
<!-- What do you think went wrong? -->
I looked into the source code and have tracked this to the `get_wildcard_key` function in `S3_hook`: https://airflow.apache.org/docs/apache-airflow/1.10.14/_modules/airflow/hooks/S3_hook.html.
This function works by iterating over many objects in the S3 bucket and checking if any matches the wildcard. This checking is done with `fnmatch` that does support ranges.
The problem seems to be in a performance optimization. Instead of looping over all objects, which could be expensive in many cases, the code tries to select a Prefix for which all files that could meet the wildcard would have this prefix. This prefix is generated by splitting on the first character usage of `*` in the `wildcard_key`. That is the issue.
It only splits on `*`, which means if `foo_[0-9].txt` is passed in as the `wildcard_key`, the prefix will still be evaluated as `foo_[0-9].txt` and only objects that begin with that string will be listed. This would not catch an object named `foo_0`.
I believe the right fix to this would either be:
1. Drop the performance optimization of Prefix and list all objects in the bucket
2. Make sure to split on any glob special character when generating the prefix so that the prefix is accurate (a sketch follows below)
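A hedged sketch of option 2: a helper that derives the listing prefix from everything before the first glob special character. This is an illustration, not the hook's actual code:

```python
import re


def prefix_for_wildcard(wildcard_key: str) -> str:
    """Return the literal prefix of a Unix wildcard, stopping at *, ? or [...]."""
    return re.split(r"[*?\[]", wildcard_key, maxsplit=1)[0]


assert prefix_for_wildcard("foo_[0-9].txt") == "foo_"
assert prefix_for_wildcard("bar/baz_*.csv") == "bar/baz_"
```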
**How to reproduce it**: This should be reproducible with any DAG using S3, by placing a file that should set off a wildcard sensor where the wildcard includes a range. For example, the file `foo_1.txt` in bucket `my_bucket`, with an S3KeySensor where `bucket_name='my_bucket'`, `bucket_key='foo_[0-9].txt'`, and `wildcard_match=True`.
**Anything else we need to know**: This problem will occur every time
| https://github.com/apache/airflow/issues/15538 | https://github.com/apache/airflow/pull/18211 | 2f88009bbf8818f3b4b553a04ae3b848af43c4aa | 12133861ecefd28f1d569cf2d190c2f26f6fd2fb | 2021-04-26T20:30:10Z | python | 2021-10-01T17:36:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,536 | ["airflow/providers/apache/beam/hooks/beam.py", "tests/providers/apache/beam/hooks/test_beam.py", "tests/providers/google/cloud/hooks/test_dataflow.py"] | Get rid of state in Apache Beam provider hook | As discussed in https://github.com/apache/airflow/pull/15534#discussion_r620500075, we could possibly rewrite Beam Hook to remove the need of storing state in it. | https://github.com/apache/airflow/issues/15536 | https://github.com/apache/airflow/pull/29503 | 46d45e09cb5607ae583929f3eba1923a64631f48 | 7ba27e78812b890f0c7642d78a986fe325ff61c4 | 2021-04-26T17:29:42Z | python | 2023-02-17T14:19:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,532 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | Airflow 1.10.15 : The CSRF session token is missing when i try to trigger a new dag | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 1.10.15
https://raw.githubusercontent.com/apache/airflow/constraints-1.10.15/constraints-3.6.txt
**Kubernetes version**:
Client Version: v1.16.2
Server Version: v1.14.8-docker-1
**Environment**: python 3.6.8 + celeryExecutor + rbac set to false
- **OS** (e.g. from /etc/os-release): CentOS Linux 7 (Core)
- **Kernel** (e.g. `uname -a`): 3.10.0-1127.19.1.el7.x86_64
**What happened**: I upgraded from 1.10.12 to 1.10.15, and when I trigger a DAG I get the exception below

**What you expected to happen**: trigger a dag without exceptions
**How to reproduce it**: use airflow 1.10.15 and try to trigger an example dag example_bash_operator
**Anything else we need to know**:
This happens every time I trigger a DAG.
Relevant webserver logs:
<details><summary>webserver.log</summary> [2021-04-26 15:03:06,611] {__init__.py:50} INFO - Using executor CeleryExecutor
[2021-04-26 15:03:06,612] {dagbag.py:417} INFO - Filling up the DagBag from /home/airflow/dags
175.62.58.93 - - [26/Apr/2021:15:03:10 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+"
175.62.58.93 - - [26/Apr/2021:15:03:11 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+"
175.62.58.93 - - [26/Apr/2021:15:03:15 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+"
175.62.58.93 - - [26/Apr/2021:15:03:16 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+"
[2021-04-26 15:03:17,401] {csrf.py:258} INFO - The CSRF session token is missing.
10.30.180.137 - - [26/Apr/2021:15:03:17 +0000] "POST /admin/airflow/trigger?dag_id=example_bash_operator&origin=https://xxxxxx/admin/ HTTP/1.1" 400 150 "https://xxxxxxxxxxxx/admin/airflow/trigger?dag_id=example_bash_operator&origin=https://xxxxxxxxxxxx/admin/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36"
175.62.58.93 - - [26/Apr/2021:15:03:20 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+"
175.62.58.93 - - [26/Apr/2021:15:03:21 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" </details>
| https://github.com/apache/airflow/issues/15532 | https://github.com/apache/airflow/pull/15546 | 5b2fe0e74013cd08d1f76f5c115f2c8f990ff9bc | dfaaf49135760cddb1a1f79399c7b08905833c21 | 2021-04-26T15:09:02Z | python | 2021-04-27T21:20:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,529 | ["setup.cfg"] | rich pinned to 9.2.0 in setup.cfg | This line is found in in `setup.cfg` which pins `rich` to 9.2.0. Is there a reason this is needed? The latest `rich` is already at 10.1.0. I briefly tested this version and see no issues.
What are the things we need to test before unpinning `rich` ? | https://github.com/apache/airflow/issues/15529 | https://github.com/apache/airflow/pull/15531 | dcb89327462cc72dc0146dc77d50a0399bc97f82 | 6b46af19acc5b561c1c5631a753cc07b1eca34f6 | 2021-04-26T10:21:10Z | python | 2021-05-15T09:04:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,526 | ["tests/kubernetes/kube_config", "tests/kubernetes/test_refresh_config.py"] | Improve test coverage of Kubernetes config_refresh | Kuberentes refresh_config has untested method https://codecov.io/gh/apache/airflow/src/master/airflow/kubernetes/refresh_config.py 75%
We might want to improve that. | https://github.com/apache/airflow/issues/15526 | https://github.com/apache/airflow/pull/18563 | 73fcbb0e4e151c9965fd69ba08de59462bbbe6dc | a6be59726004001214bd4d7e284fd1748425fa98 | 2021-04-26T07:33:28Z | python | 2021-10-13T23:30:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,524 | ["tests/cli/commands/test_task_command.py"] | Improve test coverage of task_command | Task command has a few missing commands not tested: https://codecov.io/gh/apache/airflow/src/master/airflow/cli/commands/task_command.py (77%)
| https://github.com/apache/airflow/issues/15524 | https://github.com/apache/airflow/pull/15760 | 37d549bde79cd560d24748ebe7f94730115c0e88 | 51e54cb530995edbb6f439294888a79724365647 | 2021-04-26T07:29:42Z | python | 2021-05-14T04:34:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,523 | ["tests/executors/test_kubernetes_executor.py"] | Improve test coverage of Kubernetes Executor | The Kubernetes executor has surprisingly low test coverage: 64%
https://codecov.io/gh/apache/airflow/src/master/airflow/executors/kubernetes_executor.py - looks like some of the "flush/end" code is not tested.
We might want to improve it. | https://github.com/apache/airflow/issues/15523 | https://github.com/apache/airflow/pull/15617 | cf583b9290b3c2c58893f03b12d3711cc6c6a73c | dd56875066486f8c7043fbc51f272933fa634a25 | 2021-04-26T07:28:03Z | python | 2021-05-04T21:08:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,483 | ["airflow/providers/apache/beam/operators/beam.py", "tests/providers/apache/beam/operators/test_beam.py"] | Dataflow operator checks wrong project_id | **Apache Airflow version**:
composer-1.16.1-airflow-1.10.15
**Environment**:
- **Cloud provider or hardware configuration**: Google Composer
**What happened**:
First, a bit of context. We have a single instance of airflow within its own GCP project, which runs dataflows jobs on different GCP projects.
Let's call the project which runs airflow project A, while the project where dataflow jobs are run project D.
We recently upgraded from 1.10.14 to 1.10.15 (`composer-1.14.2-airflow-1.10.14` to `composer-1.16.1-airflow-1.10.15`), and noticed that jobs were running successfully from the Dataflow console, but an error was being thrown when the `wait_for_done` call was being made by airflow to check if a dataflow job had ended. The error was reporting a 403 error code on Dataflow APIs when retrieving the job state. The error was:
```
{taskinstance.py:1152} ERROR - <HttpError 403 when requesting https://dataflow.googleapis.com/v1b3/projects/<PROJECT_A>/locations/us-east1/jobs/<JOB_NAME>?alt=json returned "(9549b560fdf4d2fe): Permission 'dataflow.jobs.get' denied on project: '<PROJECT_A>". Details: "(9549b560fdf4d2fe): Permission 'dataflow.jobs.get' denied on project: '<PROJECT_A>'">
```
**What you expected to happen**:
I noticed that the 403 code was thrown when looking up the job state within project A, while I expect this lookup to happen within project D (and to consequently NOT fail, since the associated service account has the correct permissions - since it managed to launch the job). I investigated a bit, and noticed that this looks like a regression introduced when upgrading to `composer-1.16.1-airflow-1.10.15`.
This version uses an image which automatically installs `apache-airflow-backport-providers-apache-beam==2021.3.13`, which backports the dataflow operator from v2. The previous version we were using was installing `apache-airflow-backport-providers-google==2020.11.23`
I checked the commits and changes, and noticed that this operator was last modified in https://github.com/apache/airflow/commit/1872d8719d24f94aeb1dcba9694837070b9884ca. Relevant lines from that commit are the following:
https://github.com/apache/airflow/blob/1872d8719d24f94aeb1dcba9694837070b9884ca/airflow/providers/google/cloud/operators/dataflow.py#L1147-L1162
while these are from the previous version:
https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/operators/dataflow.py#L965-L976
https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/hooks/dataflow.py#L613-L644
https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/hooks/dataflow.py#L965-L972
In the previous version, the job was started by calling `start_python_dataflow`, which in turn would call the `_start_dataflow` method, which would then create a local `job_controller` and use it to check if the job had ended. Throughout this chain of calls, the `project_id` parameter was passed all the way from the initialization of the `DataflowCreatePythonJobOperator` to the creation of the controller which would check if the job had ended.
In the latest relevant commit, this behavior was changed. The operator receives a project_id during intialization, and creates the job using the `start_python_pipeline` method, which receives the `project_id` as part of the `variables` parameter. However, the completion of the job is checked by the `dataflow_hook.wait_for_done` call. The DataFlowHook used here:
* does not specify the project_id when it is initialized
* does not specify the project_id as a parameter when making the call to check for completion (the `wait_for_done` call)
As a result, it looks like it is using the default GCP project ID (the one which the composer is running inside) and not the one used to create the Dataflow job. This explains why we can see the job launching successfully while the operator fails.
I think that specifying the `project_id` as a parameter in the `wait_for_done` call may solve the issue.
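A hedged sketch of that suggestion. It assumes `DataflowHook.wait_for_done` accepts a `project_id` keyword (which, as far as I can tell, falls back to the default project when omitted), and the variable names follow the operator code referenced above:

```python
# Inside DataflowCreatePythonJobOperator.execute() -- sketch only, not a drop-in patch.
self.dataflow_hook.wait_for_done(
    job_name=job_name,
    location=self.location,
    job_id=self.job_id,
    project_id=self.project_id,  # the missing piece: look the job up in project D, not A
    multiple_jobs=False,
)
```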
**How to reproduce it**:
- Instantiate a composer on a new GCP project.
- Launch a simple Dataflow job on another project
The Dataflow job will succeed (you can see no errors get thrown from the GCP console), but an error will be thrown in airflow logs.
**Note:** I am reporting a 403 because the service account I am using which is associated to airflow does not have the correct permissions. I suspect that, even with the correct permission, you may get another error (maybe 404, since there will be no job running with that ID within the project) but I have no way to test this at the moment.
**Anything else we need to know**:
This problem occurs every time I launch a Dataflow job on a project where the composer isn't running.
| https://github.com/apache/airflow/issues/15483 | https://github.com/apache/airflow/pull/24020 | 56fd04016f1a8561f1c02e7f756bab8805c05876 | 4a5250774be8f48629294785801879277f42cc62 | 2021-04-22T09:22:48Z | python | 2022-05-30T12:17:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,463 | ["scripts/in_container/_in_container_utils.sh"] | Inconsistency between the setup.py and the constraints file | **Apache Airflow version**: 2.0.2
**What happened**:
Airflow's 2.0.2's [constraints file](https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.8.txt) has used newer `oauthlib==3.1.0` and `request-oauthlib==1.3.0` than 2.0.1's [constraints file](https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.8.txt)
However both 2.0.2's [setup.py](https://github.com/apache/airflow/blob/10023fdd65fa78033e7125d3d8103b63c127056e/setup.py#L282-L286) and 2.0.1's [setup.py](https://github.com/apache/airflow/blob/beb8af5ac6c438c29e2c186145115fb1334a3735/setup.py#L273) don't allow these new versions
Image build with `google_auth` being an "extra" will fail if using `pip==21.0.1` **without** the `--use-deprecated=legacy-resolver` flag.
Another option is to use `pip==20.2.4`.
**What you expected to happen**:
The package versions in `setup.py` and `constraints-3.8.txt` should be consistent with each other.
**How to reproduce it**:
`docker build` with the following in the `Dockerfile`:
```
pip install apache-airflow[password,celery,redis,postgres,hive,jdbc,mysql,statsd,ssh,google_auth]==2.0.2 \
--constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.8.txt
```
image build failed with
```
ERROR: Could not find a version that satisfies the requirement oauthlib!=2.0.3,!=2.0.4,!=2.0.5,<3.0.0,>=1.1.2; extra == "google_auth" (from apache-airflow[celery,google-auth,hive,jdbc,mysql,password,postgres,redis,ssh,statsd])
ERROR: No matching distribution found for oauthlib!=2.0.3,!=2.0.4,!=2.0.5,<3.0.0,>=1.1.2; extra == "google_auth"
``` | https://github.com/apache/airflow/issues/15463 | https://github.com/apache/airflow/pull/15470 | c5e302030de7512a07120f71f388ad1859b26ca2 | 5da74f668e68132144590d1f95008bacf6f8b45e | 2021-04-20T21:40:34Z | python | 2021-04-21T12:06:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,456 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_launcher.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator raises 404 Not Found when `is_delete_operator_pod=True` and the Pod fails. | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18
**Environment**: GKE
- **Cloud provider or hardware configuration**: GKE on GCP
- **OS** (e.g. from /etc/os-release): Debian 10
- **Kernel** (e.g. `uname -a`): 5.4.89+
**What happened**:
When executing a KuberentesPodOperator with `is_delete_operator_pod=True`, if the Pod doesn't complete successfully, then a 404 error is raised when attempting to get the final pod status. This doesn't cause any major operational issues to us as the Task fails anyway, however it does cause confusion for our users when looking at the logs for their failed runs.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 341, in execute
status = self.client.read_namespaced_pod(self.pod.metadata.name, self.namespace)
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 18446, in read_namespaced_pod
(data) = self.read_namespaced_pod_with_http_info(name, namespace, **kwargs)
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 18524, in read_namespaced_pod_with_http_info
return self.api_client.call_api('/api/v1/namespaces/{namespace}/pods/{name}', 'GET',
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 330, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 163, in __call_api
response_data = self.request(method, url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 351, in request
return self.rest_client.GET(url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 227, in GET
return self.request("GET", url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (404)
Reason: Not Found
```
**What you expected to happen**:
A 404 error should not be raised - the pod should either be deleted after the state is retrieved, or the final_state returned from `create_new_pod_for_operator` should be used.
**How to reproduce it**:
Run a KubernetesPodOperator that doesn't result in a Pod with state SUCCESS, with `is_delete_operator_pod=True` (a concrete repro sketch follows below).
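A concrete version of that reproduction step (hedged: image, namespace and DAG id are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG("kpo_404_repro", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    failing_pod = KubernetesPodOperator(
        task_id="failing_pod",
        name="failing-pod",
        namespace="default",
        image="alpine:3.13",
        cmds=["sh", "-c", "exit 1"],  # pod fails, so the operator re-reads a deleted pod
        is_delete_operator_pod=True,
    )
```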
**Anything else we need to know**:
This appears to have been introduced here: https://github.com/apache/airflow/pull/11369 by adding:
```
status = self.client.read_namespaced_pod(self.pod.metadata.name, self.namespace)
```
if the pod state != SUCCESS
| https://github.com/apache/airflow/issues/15456 | https://github.com/apache/airflow/pull/15490 | d326149be856ca0f84b24a3ca50b9b9cea382eb1 | 4c9735ff9b0201758564fcd64166abde318ec8a7 | 2021-04-20T16:17:41Z | python | 2021-06-16T23:16:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,451 | ["airflow/providers/google/provider.yaml", "scripts/in_container/run_install_and_test_provider_packages.sh", "tests/core/test_providers_manager.py"] | No module named 'airflow.providers.google.common.hooks.leveldb' | **Apache Airflow version**:
2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
v1.18.18
**Environment**:
Cloud provider or hardware configuration: AWS
**What happened**:
Updated to Airflow 2.0.2 and a new warning appeared in webserver logs:
```
WARNING - Exception when importing 'airflow.providers.google.common.hooks.leveldb.LevelDBHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.google.common.hooks.leveldb'
```
**What you expected to happen**:
No warning.
**How to reproduce it**:
Don't know the specific details. Have tried adding `pip install --upgrade apache-airflow-providers-google` but the error was still there.
**Anything else we need to know**:
I am not using LevelDB for anything in my code, as a result I don't understand from where this error is coming. | https://github.com/apache/airflow/issues/15451 | https://github.com/apache/airflow/pull/15453 | 63bec6f634ba67ec62a77c301e390b8354e650c9 | 42a1ca8aab905a0eb1ffb3da30cef9c76830abff | 2021-04-20T10:44:17Z | python | 2021-04-20T17:36:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,439 | ["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"] | DAG run state not updated while DAG is paused | **Apache Airflow version**:
2.0.0
**What happened**:
The state of a DAG run does not update while the DAG is paused. The _tasks_ continue to run if the DAG run was kicked off before the DAG was paused and eventually finish and are marked correctly. The DAG run state does not get updated and stays in Running state until the DAG is unpaused.
Screenshot:

**What you expected to happen**:
I feel like the more intuitive behavior would be to let the DAG run continue if it is paused, and to mark the DAG run state as completed the same way the tasks currently behave.
**How to reproduce it**:
It can be reproduced using the example DAG in the docs: https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html
You would kick off a DAG run, then pause the DAG, and see that even though the tasks finish, the DAG run is never marked as completed while the DAG is paused.
I have been able to reproduce this issue 100% of the time. It seems like the logic to update the DAG run state simply does not execute while the DAG is paused.
**Anything else we need to know**:
Some background on my use case:
As part of our deployment, we use the Airflow rest API to pause a DAG and then use the api to check the DAG run state and wait until all dag runs are finished. Because of this bug, any DAG run in progress when we paused the DAG will never be marked as completed.
| https://github.com/apache/airflow/issues/15439 | https://github.com/apache/airflow/pull/16343 | d53371be10451d153625df9105234aca77d5f1d4 | 3834df6ade22b33addd47e3ab2165a0b282926fa | 2021-04-19T18:27:33Z | python | 2021-06-17T23:29:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,434 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator name randomization | `KubernetesPodOperator.name` randomization should be decoupled from the way the name is set. Currently `name` is only randomized if the `name` kwarg is used. However, one could also want name randomization when a name is set in a `pod_template_file` or `full_pod_spec`.
Move the name randomization feature behind a new feature flag, defaulted to True.
**Related Issues**
#14167
| https://github.com/apache/airflow/issues/15434 | https://github.com/apache/airflow/pull/19398 | ca679c014cad86976c1b2e248b099d9dc9fc99eb | 854b70b9048c4bbe97abde2252b3992892a4aab0 | 2021-04-19T14:15:31Z | python | 2021-11-07T16:47:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,416 | ["BREEZE.rst", "scripts/in_container/configure_environment.sh"] | breeze should load local tmux configuration in 'breeze start-airflow' | **Description**
Currently, when we run
`
breeze start-airflow
`
**breeze** doesn't load local tmux configuration file **.tmux.conf** and we get default tmux configuration inside the containers.
**Use case / motivation**
Breeze must load the local **tmux configuration** into the containers, and developers should be able to use their local configurations.
**Are you willing to submit a PR?**
YES
**Related Issues**
None
| https://github.com/apache/airflow/issues/15416 | https://github.com/apache/airflow/pull/15454 | fdea6226742d36eea2a7e0ef7e075f7746291561 | 508cd394bcf8dc1bada8824d52ebff7bb6c86b3b | 2021-04-17T14:34:32Z | python | 2021-04-21T16:46:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,399 | ["airflow/models/pool.py", "tests/models/test_pool.py"] | Not scheduling since there are (negative number) open slots in pool | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Airflow fails to schedule any tasks after some time. The "Task Instance Details" tab of some of the failed tasks show the following:
```
('Not scheduling since there are %s open slots in pool %s and require %s pool slots', -3, 'transformation', 3)
```
Admin > Pools tab shows 0 Running Slots but 9 Queued Slots. Gets stuck in this state until airflow is restarted.
**What you expected to happen**:
Number of "open slots in pool" should never be negative!
**How to reproduce it**:
- Create/configure a pool with a small size (eg. 6)
- A DAG with multiple tasks each occupying multiple pool slots (e.g. `pool_slots=3`); a sketch follows below
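A hedged repro sketch (the pool named `transformation` with 6 slots has to be created separately, e.g. via the Pools UI or CLI; ids and timings are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("pool_slots_repro", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    for i in range(6):
        BashOperator(
            task_id=f"heavy_task_{i}",
            bash_command="sleep 60",
            pool="transformation",  # pool of size 6
            pool_slots=3,           # each task claims 3 of the 6 slots
        )
```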
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15399 | https://github.com/apache/airflow/pull/15426 | 8711f90ab820ed420ef317b931e933a2062c891f | d7c27b85055010377b6f971c3c604ce9821d6f46 | 2021-04-16T05:14:41Z | python | 2021-04-19T22:14:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,384 | ["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | Pagination doesn't work with tags filter |
**Apache Airflow version**:
2.0.1
**Environment**:
- **OS**: Linux Mint 19.2
- **Kernel**: 5.5.0-050500-generic #202001262030 SMP Mon Jan 27 01:33:36 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
**What happened**:
It seems that pagination doesn't work. I filter DAGs by tags and get too many results to fit on one page. When I click the second page I'm redirected to the first one (actually, it doesn't matter whether I click the second, the last or any other page; I'm always redirected to the first one).
**What you expected to happen**:
I expect to be redirected to the correct page when I click a page number at the bottom of the page.
**How to reproduce it**:
1. Create a lot of DAGs with the same tag
2. Filter by tag
3. Go to the next page in the pagination bar
**Implementation example**:
```
from airflow import DAG
from airflow.utils.dates import days_ago
for i in range(200):
name = 'test_dag_' + str(i)
dag = DAG(
dag_id=name,
schedule_interval=None,
start_date=days_ago(2),
tags=['example1'],
)
globals()[name] = dag
```
| https://github.com/apache/airflow/issues/15384 | https://github.com/apache/airflow/pull/15411 | cb1344b63d6650de537320460b7b0547efd2353c | f878ec6c599a089a6d7516b7a66eed693f0c9037 | 2021-04-15T14:57:20Z | python | 2021-04-16T21:34:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 15,374 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Clearing a subdag task leaves parent dag in the failed state | **Apache Airflow version**:
2.0.1
**Kubernetes version**:
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
**What happened**:
Clearing a failed subdag task with Downstream+Recursive does not automatically set the state of the parent dag to 'running' so that the downstream parent tasks can execute.
The work around is to manually set the state of the parent dag to running after clearing the subdag task
**What you expected to happen**:
With airflow version 1.10.4 the parent dag was automatically set to 'running' for this same scenario
**How to reproduce it**:
- Clear a failed subdag task selecting option for Downstream+Recursive
- See that all downstream tasks in the subdag as well as the parent dag have been cleared
- See that the parent dag is left in 'failed' state.
| https://github.com/apache/airflow/issues/15374 | https://github.com/apache/airflow/pull/15562 | 18531f81848dbd8d8a0d25b9f26988500a27e2a7 | a4211e276fce6521f0423fe94b01241a9c43a22c | 2021-04-14T21:15:44Z | python | 2021-04-30T19:52:26Z |