status (stringclasses: 1 value) | repo_name (stringlengths: 9-24) | repo_url (stringlengths: 28-43) | issue_id (int64: 1-104k) | updated_files (stringlengths: 8-1.76k) | title (stringlengths: 4-369) | body (stringlengths: 0-254k, nullable ⌀) | issue_url (stringlengths: 37-56) | pull_url (stringlengths: 37-54) | before_fix_sha (stringlengths: 40) | after_fix_sha (stringlengths: 40) | report_datetime (unknown) | language (stringclasses: 5 values) | commit_datetime (unknown)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 28,195 | ["airflow/providers/common/sql/operators/sql.py", "airflow/providers/common/sql/operators/sql.pyi", "tests/providers/common/sql/operators/test_sql.py"] | SQLTableCheckOperator doesn't correctly handle templated partition clause | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.1
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
Docker image based on the official image (couple of tools added) deployed on AWS ECS Fargate
### What happened
I have a task which uses the table check operator to run some table-stakes data validation. But I don't want it failing every time it runs in our test environments, which do not get records every day, so there's a templated switch in the SQL:
```python
test_data_changed_yesterday = SQLTableCheckOperator(
task_id="test_data_changed_yesterday",
table="reporting.events",
conn_id="pg_conn",
checks={"changed_record_count": {"check_statement": "count(*) > 0"}},
partition_clause="""
{% if var.value.get('is_test_env', False) %}
modifieddate >= '2015-12-01T01:01:01.000Z'
{% else %}
modifieddate >= '{{ data_interval_start.isoformat() }}' and
modifieddate < '{{ data_interval_end.isoformat() }}'
{% endif %}
""",
)
```
This shows correctly in the rendered field for the task. Not pretty, but it works:
```
('\n'
' \n'
" modifieddate >= '2015-12-01T01:01:01.000Z'\n"
' \n'
' ')
```
However, in the logs I see it's running this query:
```
[2022-12-07, 11:49:34 UTC] {sql.py:364} INFO - Running statement: SELECT check_name, check_result FROM (
SELECT 'changed_record_count' AS check_name, MIN(changed_record_count) AS check_result
FROM (SELECT CASE WHEN count(*) > 0 THEN 1 ELSE 0 END AS changed_record_count
FROM reporting.events WHERE
{% if var.value.get('is_test_env', False) %}
modifieddate >= '2015-12-01T01:01:01.000Z'
{% else %}
modifieddate >= '{{ data_interval_start.isoformat() }}' and
modifieddate < '{{ data_interval_end.isoformat() }}'
{% endif %}
) AS sq
) AS check_table, parameters: None
```
Which, unsurprisingly, the DB rejects as invalid SQL!
### What you think should happen instead
The rendered clause should be used in the SQL that is actually run!
I think this error comes about through this line: https://github.com/apache/airflow/blob/main/airflow/providers/common/sql/operators/sql.py#L576, which runs in the operator's `__init__` method, i.e. before templating is applied in the build-up to calling `execute(context)`.
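For illustration, a minimal sketch of the kind of fix this points at (the class and names here are simplified assumptions, not the actual operator code): defer building the SQL from `__init__`, where `partition_clause` is still a raw template, to `execute`, where it has been rendered.
```python
# Hypothetical, simplified shape of the operator; not the real implementation.
class TableCheckSketch:
    template_fields = ("partition_clause",)

    def __init__(self, table: str, check_statement: str, partition_clause: str = ""):
        self.table = table
        self.check_statement = check_statement
        self.partition_clause = partition_clause  # still a raw Jinja template here

    def execute(self, context):
        # By the time execute() runs, template_fields have been rendered,
        # so the clause interpolated here is plain SQL.
        where = f" WHERE {self.partition_clause}" if self.partition_clause else ""
        return f"SELECT {self.check_statement} FROM {self.table}{where}"
```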
### How to reproduce
Use the operator with a templated `partition_clause`
### Anything else
Happens every time
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28195 | https://github.com/apache/airflow/pull/28202 | aace30b50cab3c03479fd0c889d145b7435f26a9 | a6cda7cd230ef22f7fe042d6d5e9f78c660c4a75 | "2022-12-07T15:21:29Z" | python | "2022-12-09T23:04:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,171 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/models/abstractoperator.py", "airflow/models/taskinstance.py", "newsfragments/28172.misc.rst"] | Invalid retry date crashes scheduler "OverflowError: date value out of range" | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Our scheduler started failing with this trace:
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1187, in get_failed_dep_statuses
for dep_status in dep.get_dep_statuses(self, session, dep_context):
File "/usr/local/lib/python3.9/site-packages/airflow/ti_deps/deps/base_ti_dep.py", line 95, in get_dep_statuses
yield from self._get_dep_statuses(ti, session, dep_context)
File "/usr/local/lib/python3.9/site-packages/airflow/ti_deps/deps/not_in_retry_period_dep.py", line 47, in _get_dep_statuses
next_task_retry_date = ti.next_retry_datetime()
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1243, in next_retry_datetime
return self.end_date + delay
OverflowError: date value out of range
We found that a dag with a large number of retries and exponential backoff will trigger this date error and take down the entire scheduler. The workaround is to force a max-delay setting.
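For concreteness, a sketch of that workaround (the task details are made up; `max_retry_delay` and `retry_exponential_backoff` are real `BaseOperator` arguments):
```python
from datetime import timedelta

from airflow.operators.bash import BashOperator

flaky = BashOperator(
    task_id="flaky",
    bash_command="exit 1",
    retries=50,
    retry_delay=timedelta(minutes=1),
    retry_exponential_backoff=True,
    # Cap the backoff so exponential growth can never overflow the datetime
    max_retry_delay=timedelta(hours=1),
)
```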
The bug is here:
https://github.com/apache/airflow/blob/2.3.3/airflow/models/taskinstance.py#L1243
The current version seems to use the same code:
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1147
### What you think should happen instead
There are a few solutions. Exponential backoff should probably require a max delay value.
At the very least, it shouldn't kill the scheduler.
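One possible overflow-safe variant of the failing addition (a sketch of the idea only, not the merged fix):
```python
from datetime import datetime, timedelta, timezone

def safe_next_retry(end_date: datetime, delay: timedelta) -> datetime:
    try:
        return end_date + delay
    except OverflowError:
        # Clamp to the maximum representable datetime instead of
        # crashing the scheduler.
        return datetime.max.replace(tzinfo=timezone.utc)
```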
### How to reproduce
Create dag with exponential delay and force it to retry until it overflows.
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28171 | https://github.com/apache/airflow/pull/28172 | e948b55a087f98d25a6a4730bf58f61689cdb116 | 2cbe5960476b1f444e940d11177145e5ffadf613 | "2022-12-06T19:48:31Z" | python | "2022-12-08T18:51:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,167 | ["airflow/www/.babelrc", "airflow/www/babel.config.js", "airflow/www/jest.config.js", "airflow/www/package.json", "airflow/www/static/js/components/ReactMarkdown.tsx", "airflow/www/static/js/dag/details/NotesAccordion.tsx", "airflow/www/yarn.lock"] | Allow Markdown in Task comments | ### Description
Implement the support for Markdown in Task notes inside Airflow.
### Use case/motivation
It would be helpful to use markdown syntax in Task notes/comments for the following use cases:
- Formatting headers, lists, and tables to allow more complex note-taking.
- Parsing a URL to reference a ticket in an Issue ticketing system (Jira, Pagerduty, etc.)
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28167 | https://github.com/apache/airflow/pull/28245 | 78b72f4fa07cac009ddd6d43d54627381e3e9c21 | 74e82af7eefe1d0d5aa6ea1637d096e4728dea1f | "2022-12-06T16:57:16Z" | python | "2022-12-19T15:32:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,155 | ["airflow/www/views.py"] | Links to dag graph some times display incorrect dagrun | ### Apache Airflow version
2.5.0
### What happened
Open url `dags/gate/graph?dag_run_id=8256-8-1670328803&execution_date=2022-12-06T12%3A13%3A23.174592+00%3A00`
The graph is displaying a completely different dagrun.

If you are not careful to review all the content, you might continue looking at the wrong results, or worse, cancel a run with "Mark failed".
I got the link from one of our users, so I am not 100% sure it was the original URL; I believe there could be something wrong with the URL encoding of the last `+` character. In any case, if there are any inconsistencies between the URL parameters and the found dagruns, it should not display another dagrun, but rather redirect to the grid view or show an error message.
### What you think should happen instead
* dag_run_id should be only required parameter, or have precedence over execution_date
* Provided dag_run_id should always be the same run-id that is displayed in graph
* Inconsistencies in any parameters should display error or redirect to grid view.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28155 | https://github.com/apache/airflow/pull/29066 | 48cab7cfebf2c7510d9fdbffad5bd06d8f4751e2 | 9dedf81fa18e57755aa7d317f08f0ea8b6c7b287 | "2022-12-06T12:53:33Z" | python | "2023-01-21T03:13:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,146 | ["airflow/models/xcom.py", "tests/models/test_taskinstance.py"] | Dynamic task context fails to be pickled | ### Apache Airflow version
2.5.0
### What happened
After upgrading to 2.5.0, my dynamic task test fails.
```py
from airflow.decorators import task, dag
import pendulum as pl


@dag(
    dag_id='test-dynamic-tasks',
    schedule=None,
    start_date=pl.today().add(days=-3),
    tags=['example'],
)
def test_dynamic_tasks():
    @task.virtualenv(requirements=[])
    def sum_it(values):
        print(values)

    @task.virtualenv(requirements=[])
    def add_one(value):
        return value + 1

    added_values = add_one.expand(value=[1, 2])
    sum_it(added_values)


dag = test_dynamic_tasks()
```
```log
*** Reading local file: /home/andi/airflow/logs/dag_id=test-dynamic-tasks/run_id=manual__2022-12-06T10:07:41.355423+00:00/task_id=sum_it/attempt=1.log
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1283} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1284} INFO - Starting attempt 1 of 1
[2022-12-06, 18:07:53 CST] {taskinstance.py:1285} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1304} INFO - Executing <Task(_PythonVirtualenvDecoratedOperator): sum_it> on 2022-12-06 10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:55} INFO - Started process 25873 to run task
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test-dynamic-tasks', 'sum_it', 'manual__2022-12-06T10:07:41.355423+00:00', '--job-id', '41164', '--raw', '--subdir', 'DAGS_FOLDER/andi/test-dynamic-task.py', '--cfg-path', '/tmp/tmphudvake2']
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:83} INFO - Job 41164: Subtask sum_it
[2022-12-06, 18:07:53 CST] {task_command.py:389} INFO - Running <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [running]> on host sh-dataops-airflow.jinde.local
[2022-12-06, 18:07:53 CST] {taskinstance.py:1511} INFO - Exporting the following env vars:
[email protected]
AIRFLOW_CTX_DAG_OWNER=andi
AIRFLOW_CTX_DAG_ID=test-dynamic-tasks
AIRFLOW_CTX_TASK_ID=sum_it
AIRFLOW_CTX_EXECUTION_DATE=2022-12-06T10:07:41.355423+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-12-06T10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {process_utils.py:179} INFO - Executing cmd: /home/andi/airflow/venv38/bin/python -m virtualenv /tmp/venv7lc4m6na --system-site-packages
[2022-12-06, 18:07:53 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - created virtual environment CPython3.8.0.final.0-64 in 220ms
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - creator CPython3Posix(dest=/tmp/venv7lc4m6na, clear=False, no_vcs_ignore=False, global=True)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/andi/.local/share/virtualenv)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - added seed packages: pip==22.2.1, setuptools==63.2.0, wheel==0.37.1
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2022-12-06, 18:07:54 CST] {process_utils.py:179} INFO - Executing cmd: /tmp/venv7lc4m6na/bin/pip install -r /tmp/venv7lc4m6na/requirements.txt
[2022-12-06, 18:07:54 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:55 CST] {process_utils.py:187} INFO - Looking in indexes: http://pypi:8081
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO -
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] A new release of pip available: 22.2.1 -> 22.3.1
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] To update, run: python -m pip install --upgrade pip
[2022-12-06, 18:08:00 CST] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/decorators/base.py", line 217, in execute
return_value = super().execute(context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 356, in execute
return super().execute(context=serializable_context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 553, in execute_callable
return self._execute_python_callable_in_subprocess(python_path, tmp_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 397, in _execute_python_callable_in_subprocess
self._write_args(input_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 367, in _write_args
file.write_bytes(self.pickling_library.dumps({"args": self.op_args, "kwargs": self.op_kwargs}))
_pickle.PicklingError: Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session
[2022-12-06, 18:08:00 CST] {taskinstance.py:1322} INFO - Marking task as FAILED. dag_id=test-dynamic-tasks, task_id=sum_it, execution_date=20221206T100741, start_date=20221206T100753, end_date=20221206T100800
[2022-12-06, 18:08:00 CST] {warnings.py:109} WARNING - /home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/utils/email.py:120: RemovedInAirflow3Warning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
[2022-12-06, 18:08:00 CST] {configuration.py:635} WARNING - section/key [smtp/smtp_user] not found in config
[2022-12-06, 18:08:00 CST] {email.py:229} INFO - Email alerting: attempt 1
[2022-12-06, 18:08:01 CST] {email.py:241} INFO - Sent an alert email to ['[email protected]']
[2022-12-06, 18:08:01 CST] {standard_task_runner.py:100} ERROR - Failed to execute job 41164 for task sum_it (Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session; 25873)
[2022-12-06, 18:08:01 CST] {local_task_job.py:159} INFO - Task exited with return code 1
[2022-12-06, 18:08:01 CST] {taskinstance.py:2582} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
I expect this sample run to pass.
### How to reproduce
_No response_
### Operating System
centos 7.9 3.10.0-1160.el7.x86_64
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-microsoft-mssql==3.1.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
autopep8==1.6.0
brotlipy==0.7.0
chardet==3.0.4
pip-chill==1.0.1
pyopenssl==19.1.0
pysocks==1.7.1
python-ldap==3.4.2
requests-credssp==2.0.0
swagger-ui-bundle==0.0.9
tqdm==4.51.0
virtualenv==20.16.2
yapf==0.32.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28146 | https://github.com/apache/airflow/pull/28191 | 84a5faff0de2a56f898b8a02aca578b235cb12ba | e981dfab4e0f4faf1fb932ac6993c3ecbd5318b2 | "2022-12-06T10:40:01Z" | python | "2022-12-15T09:20:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,143 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Logs tab is automatically scrolling to the bottom while user is reading logs | ### Apache Airflow version
2.5.0
### What happened
1. Open the logs tab for a task that is currently running.
2. Scroll up to read things further up the log.
3. Every 30 seconds or so, the log automatically scrolls down to the bottom again.
### What you think should happen instead
If the user has scrolled away from the bottom in the logs-panel, the live tailing of new logs should not scroll the view back to the bottom automatically.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28143 | https://github.com/apache/airflow/pull/28386 | 5b54e8d21b1801d5e0ccd103592057f0b5a980b1 | 5c80d985a3102a46f198aec1c57a255e00784c51 | "2022-12-06T07:35:40Z" | python | "2022-12-19T01:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,121 | ["airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | SFTP Sensor fails to locate file | ### Apache Airflow version
2.5.0
### What happened
While creating an SFTP sensor, I tried to find a file under a directory, but I kept getting a timeout error: not found.
After debugging the code, I found that there is an issue with the [poke function](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/sensors/sftp.html#SFTPSensor.poke).
After finding a matched file, we try to get its last modified time using [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time), which takes a full path (path + filename) as its argument, but we pass only the filename.
### What you think should happen instead
I solved the issue by prepending the path to the filename before calling the [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time) function.
Here is the modified code:
```
def poke(self, context: Context) -> bool:
    self.hook = SFTPHook(self.sftp_conn_id)
    self.log.info("Poking for %s, with pattern %s", self.path, self.file_pattern)
    if self.file_pattern:
        file_from_pattern = self.hook.get_file_by_pattern(self.path, self.file_pattern)
        if file_from_pattern:
            # was: actual_file_to_check = file_from_pattern
            actual_file_to_check = self.path + file_from_pattern
        else:
            return False
    else:
        actual_file_to_check = self.path
    try:
        mod_time = self.hook.get_mod_time(actual_file_to_check)
        self.log.info("Found File %s last modified: %s", str(actual_file_to_check), str(mod_time))
    except OSError as e:
        if e.errno != SFTP_NO_SUCH_FILE:
            raise e
        return False
    self.hook.close_conn()
    if self.newer_than:
        _mod_time = convert_to_utc(datetime.strptime(mod_time, "%Y%m%d%H%M%S"))
        _newer_than = convert_to_utc(self.newer_than)
        return _newer_than <= _mod_time
    else:
        return True
```
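One caveat about the concatenation above (my observation, not part of the original report): `self.path + file_from_pattern` assumes the path ends with a separator. A more defensive variant would join explicitly:
```python
import os

actual_file_to_check = os.path.join(self.path, file_from_pattern)
```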
### How to reproduce
You can reproduce the same issue by creating a DAG as follows:
```
with DAG(
    dag_id='sftp_sensor_dag',
    max_active_runs=1,
    default_args=default_args,
) as dag:
    file_sensing_task = SFTPSensor(
        task_id='sensor_for_file',
        path="Weekly/11/",
        file_pattern="*pdf*",
        sftp_conn_id='sftp_hook_conn',
        poke_interval=30,
    )
```
### Operating System
Microsoft Windows [Version 10.0.19044.2251]
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28121 | https://github.com/apache/airflow/pull/29467 | 72c3817a44eea5005761ae3b621e8c39fde136ad | 8e24387d6db177c662342245bb183bfd73fb9ee8 | "2022-12-05T15:15:46Z" | python | "2023-02-13T23:12:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,118 | ["airflow/jobs/base_job.py", "tests/jobs/test_base_job.py"] | Scheduler heartbeat warning message in Airflow UI displaying that scheduler is down sometimes incorrect | ### Apache Airflow version
main (development)
### What happened
Steps to reproduce:
1. run 2 replicas of scheduler
2. initiate shut down of one of the schedulers
3. In Airflow UI observe message
<img width="1162" alt="image" src="https://user-images.githubusercontent.com/1017130/205650336-fb1d8e39-2213-4aec-8530-abd1417db426.png">
3rd step should be done immediately after 2nd (refreshing UI page few times). 2nd and 3rd steps might be repeated for couple of times in order to reproduce.
### What you think should happen instead
Warning message shouldn't be displayed.
The issue is that for this warning message recent (with latest heartbet) scheduler job is fetched
https://github.com/apache/airflow/blob/f02a7e9a8292909b369daae6d573f58deed04440/airflow/jobs/base_job.py#L133.
And this may point to job which is not running (state!="running") and that is why we see warning message.
The warning message in this case is misleading as another replica of scheduler is running in parallel.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28118 | https://github.com/apache/airflow/pull/28119 | 3cd70ffee974c9f345aabb3a365dde4dbcdd84a4 | 56c0871dce2fb2b7ed2252e4b2d1d8d5d0c07c58 | "2022-12-05T13:38:56Z" | python | "2022-12-07T05:48:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,106 | ["airflow/models/dagrun.py"] | IndexError in `airflow dags test` re: scheduling delay stats | ### Apache Airflow version
2.5.0
### What happened
Very simple dag:
```python3
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta
with DAG(dag_id="hello_world", schedule=timedelta(days=30 * 365), start_date=datetime(1970, 1, 1)) as dag:
    (
        BashOperator(task_id="hello", bash_command="echo hello")
        >> BashOperator(task_id="world", bash_command="echo world")
    )
```
Run it like `airflow dags test hello_world $(date +%Y-%m-%d)`
End of output:
```
[2022-12-04 21:24:02,993] {dagrun.py:606} INFO - Marking run <DagRun hello_world @ 2022-12-04T00:00:00+00:00: manual__2022-12-04T00:00:00+00:00, state:running, queued_at: None. externally triggered: False> successful
[2022-12-04 21:24:03,003] {dagrun.py:657} INFO - DagRun Finished: dag_id=hello_world, execution_date=2022-12-04T00:00:00+00:00, run_id=manual__2022-12-04T00:00:00+00:00, run_start_date=2022-12-04T00:00:00+00:00, run_end_date=2022-12-05 04:24:02.995279+00:00, run_duration=102242.995279, state=success, external_trigger=False, run_type=manual, data_interval_start=2022-12-04T00:00:00+00:00, data_interval_end=2052-11-26T00:00:00+00:00, dag_hash=None
[2022-12-04 21:24:03,004] {dagrun.py:878} WARNING - Failed to record first_task_scheduling_delay metric:
Traceback (most recent call last):
File "/home/matt/2022/12/04/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 866, in _emit_true_scheduling_delay_stats_for_finished_state
first_start_date = ordered_tis_by_start_date[0].start_date
IndexError: list index out of range
```
### What you think should happen instead
No warning (or an explanation of what I can do to address whatever it's warning about).
### How to reproduce
_No response_
### Operating System
NixOS 22.11 (gnu/linux)
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28106 | https://github.com/apache/airflow/pull/28138 | 7adf8a53ec8bd08a9c14418bf176574e149780c5 | b3d7e17e72c05fd149a5514e3796d46a241ac4f7 | "2022-12-05T04:28:57Z" | python | "2022-12-06T11:27:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,103 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py"] | Type Error while using dynamodb_to_s3 operator | ### Discussed in https://github.com/apache/airflow/discussions/28102
<div type='discussions-op-text'>
<sup>Originally posted by **p-madduri** December 1, 2022</sup>
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
https://github.com/apache/airflow/blob/430e930902792fc37cdd2c517783f7dd544fbebf/airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py#L39
if we use the function below, at line 39:
```python
def _convert_item_to_json_bytes(item: dict[str, Any]) -> bytes:
    return (json.dumps(item) + "\n").encode("utf-8")
```
it throws the following error:
```
TypeError: Object of type Decimal is not JSON serializable
```
Can we use the following instead?
```python
import json
from collections.abc import Iterable, Mapping
from decimal import Decimal


class DecimalEncoder(json.JSONEncoder):
    def encode(self, obj):
        if isinstance(obj, Mapping):
            return '{' + ', '.join(f'{self.encode(k)}: {self.encode(v)}' for (k, v) in obj.items()) + '}'
        elif isinstance(obj, Iterable) and (not isinstance(obj, str)):
            return '[' + ', '.join(map(self.encode, obj)) + ']'
        elif isinstance(obj, Decimal):
            # using normalize() gets rid of trailing 0s, using ':f' prevents scientific notation
            return f'{obj.normalize():f}'
        else:
            print(obj)
            return super().encode(obj)
```
and then update the code at this line:
https://github.com/apache/airflow/blob/430e930902792fc37cdd2c517783f7dd544fbebf/airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py#L99
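For illustration, the conversion helper would then be wired up like this (a hedged sketch based on the suggestion above, not the actual merged change):
```python
import json
from typing import Any

def _convert_item_to_json_bytes(item: dict[str, Any]) -> bytes:
    # Route serialization through the Decimal-aware encoder
    return (json.dumps(item, cls=DecimalEncoder) + "\n").encode("utf-8")
```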
This solution is suggested in this article:
https://randomwits.com/blog/export-dynamodb-s3
Airflow version of MWAA : 2.0.2
### What you think should happen instead
mentioned in what happened section
### How to reproduce
mentioned in what happened section
### Operating System
MAC
### Versions of Apache Airflow Providers
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator
### Deployment
MWAA
### Deployment details
n/a
### Anything else
n/a
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/28103 | https://github.com/apache/airflow/pull/28158 | 39f501d4f4e87635c80d97bb599daf61096d23b8 | 0d90c62bac49de9aef6a31ee3e62d02e458b0d33 | "2022-12-05T01:50:23Z" | python | "2022-12-06T21:23:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,091 | ["docs/apache-airflow/howto/docker-compose/docker-compose.yaml"] | close | ### Apache Airflow version
2.5.0
### What happened
Hi there, I used this guide to install Airflow:
https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html
But when I run `docker compose up airflow-init`:

### What you think should happen instead
In the tutorial, you said that if it succeeds, it will return the following content

### How to reproduce
_No response_
### Operating System
NAME="Ubuntu" VERSION="20.04.4 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.4 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal
### Versions of Apache Airflow Providers
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.0/docker-compose.yaml'
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28091 | https://github.com/apache/airflow/pull/28094 | 2d663df0552542efcef6e59bc2bc1586f8d1c7f3 | 9d73830209aa1de03f2de6e6461b8416011c6ba6 | "2022-12-04T07:43:20Z" | python | "2022-12-04T19:07:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,071 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Kubernetes logging errors - attempting to adopt taskinstance which was not specified by database | ### Apache Airflow version
2.4.3
### What happened
Using the following config:
```
executor = CeleryKubernetesExecutor
delete_worker_pods = False
```
1. Start a few dags running in kubernetes, wait for them to complete.
2. Restart Scheduler.
3. Logs are flooded with hundreds of errors like `ERROR - attempting to adopt taskinstance which was not specified by database: TaskInstanceKey(dag_id='xxx', task_id='yyy', run_id='zzz', try_number=1, map_index=-1)`
This is problematic because:
* Our installation has thousands of dags and pods, so this becomes very noisy, and the adoption process adds excessive startup time to the scheduler, sometimes up to a minute.
* It hides actual errors with resetting orphaned tasks, something that also happens for inexplicable reasons on scheduler restart, with the following log: `Reset the following 6 orphaned TaskInstances`. This makes such errors much harder to debug, since their cause cannot be easily correlated with the pods that were not specified by the database.
The cause of these logs: on startup, the Kubernetes executor loads all pods (`try_adopt_task_instances`) and then cross-references them with all `RUNNING` TaskInstances loaded via `scheduler_job.adopt_or_reset_orphaned_tasks`.
For all pods where a running TI cannot be found, it logs the error above. But for TIs that were already completed this is not an error, and the pods should not have to be loaded at all.
I have an idea of adding some code to the kubernetes_executor that patches in something like a `completion-acknowledged` label whenever a pod completes (unless `delete_worker_pods` is set). Then on startup, all pods carrying this label can be excluded. Is this a good idea, or do you see other potential solutions?
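To make the idea concrete, a rough sketch with the Kubernetes Python client (names such as the label key are assumptions, not existing Airflow code):
```python
from kubernetes import client

def acknowledge_pod(kube_client: client.CoreV1Api, name: str, namespace: str) -> None:
    # Patch a marker label onto a finished worker pod so a restarted
    # scheduler can skip it during adoption.
    kube_client.patch_namespaced_pod(
        name=name,
        namespace=namespace,
        body={"metadata": {"labels": {"completion-acknowledged": "true"}}},
    )

# On scheduler startup, list only pods that have not been acknowledged:
# kube_client.list_namespaced_pod(
#     namespace, label_selector="completion-acknowledged!=true"
# )
```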
Another potential solution is, inside `try_adopt_task_instances`, to fetch only the exact pod id specified in each task instance, instead of listing all pods and cross-referencing them later.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28071 | https://github.com/apache/airflow/pull/28899 | f2bedcbd6722cd43772007eecf7f55333009dc1d | f64ac5978fb3dfa9e40a0e5190ef88e9f9615824 | "2022-12-02T17:46:41Z" | python | "2023-01-18T20:05:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,070 | ["airflow/www/static/js/dag/InstanceTooltip.test.tsx", "airflow/www/static/js/dag/InstanceTooltip.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Details.tsx", "airflow/www/yarn.lock"] | task duration in grid view is different when viewed at different times. | ### Apache Airflow version
2.4.3
### What happened
I wrote this dag to test the celery executor's ability to tolerate OOMkills:
```python3
import numpy as np
from airflow import DAG
from airflow.decorators import task
from datetime import datetime, timedelta
from airflow.models.variable import Variable
import subprocess
import random
def boom():
    np.ones((1_000_000_000_000))


def maybe_boom(boom_hostname, boom_count, boom_modulus):
    """
    call boom(), but only under certain conditions
    """
    try:
        proc = subprocess.Popen("hostname", shell=True, stdout=subprocess.PIPE)
        hostname = proc.stdout.readline().decode().strip()

        # keep track of which hosts parsed the dag
        parsed = Variable.get("parsed", {}, deserialize_json=True)
        parsed.setdefault(hostname, 0)
        parsed[hostname] = parsed[hostname] + 1
        Variable.set("parsed", parsed, serialize_json=True)

        # only blow up when the caller's condition is met
        print(parsed)
        try:
            count = parsed[boom_hostname]
            if hostname == boom_hostname and count % boom_modulus == boom_count:
                print("boom")
                boom()
        except (KeyError, TypeError):
            pass
        print("no boom")
    except:
        # key errors show up because of so much traffic on the variable
        # don't hold up parsing in those cases
        pass


@task
def do_stuff():
    # tasks randomly OOMkill also
    if random.randint(1, 256) == 13:
        boom()


run_size = 100

with DAG(
    dag_id="oom_on_parse",
    schedule=timedelta(seconds=30),
    start_date=datetime(1970, 1, 1),
    catchup=False,
):
    # OOM part-way through the second run
    # and every 3rd run after that
    maybe_boom(
        boom_hostname="airflow-worker-0",
        boom_count=run_size + 50,
        boom_modulus=run_size * 3,
    )
    [do_stuff() for _ in range(run_size)]
```
I'm not surprised that tasks are failing. The dag occasionally tries to allocate 1Tb of memory. That's a good reason to fail. What surprises me is that occasionally, the run durations are reported as 23:59:30 when I've only been running the test for 5 minutes. Also, this number changes if I view it later, behold:

23:55:09 -> 23:55:03 -> 23:55:09, they're decreasing.
### What you think should happen instead
The duration should never be longer than I've had the deployment up, and whatever is reported, it should not change when viewed later on.
### How to reproduce
Using the celery executor, unpause the dag above. Wait for failures to show up. View their duration in the grid view.
This gist includes a script which shows all of the parameters I'm using (e.g. to helm and such): https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Operating System
k8s / helm / docker / macos
### Versions of Apache Airflow Providers
n/a
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See script in this gist? https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28070 | https://github.com/apache/airflow/pull/28395 | 4d0fa01f72ac4a947db2352e18f4721c2e2ec7a3 | 11f30a887c77f9636e88e31dffd969056132ae8c | "2022-12-02T17:10:57Z" | python | "2022-12-16T18:04:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,065 | ["airflow/www/views.py", "tests/www/views/test_views_dagrun.py"] | Queue up new tasks always returns an empty list | ### Apache Airflow version
main (development)
### What happened
Currently when a new task is added to a dag and in the grid view, a user selects the top level of a dag run and then clicks on "Queue up new tasks", the list returned by the confirmation box is always empty.
It appears that where the list of tasks is expected to be set, [here](https://github.com/apache/airflow/blob/ada91b686508218752fee176d29d63334364a7f2/airflow/api/common/mark_tasks.py#L516), `res` will always be an empty list.
### What you think should happen instead
The UI should return a list of tasks that will be queued up once the confirmation button is pressed.
### How to reproduce
Create a dag, trigger the dag, allow it to complete.
Add a new task to the dag, click on "Queue up new tasks", the list will be empty.
### Operating System
n/a
### Versions of Apache Airflow Providers
2.3.3 and upwards including main. I've not looked at earlier releases.
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
I have a PR prepared for this issue.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28065 | https://github.com/apache/airflow/pull/28066 | e29d33b89f7deea6eafb03006c37b60692781e61 | af29ff0a8aa133f0476bf6662e6c06c67de21dd5 | "2022-12-02T11:45:05Z" | python | "2022-12-05T18:51:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,051 | ["airflow/utils/db_cleanup.py"] | airflow db clean command does not work with MySQL with enforce_gtid_consistency enabled | ### Apache Airflow version
2.4.3
### What happened
Tried executing the following command to clean old data in my service:
```
airflow db clean --skip-archive --clean-before-timestamp '2022-10-31 00:00:00+0000' --tables 'xcom, log, dag_run, task_instance' --verbose --yes
```
Then got the following error
```
...
[2022-12-02T08:34:18.065+0000] {db_cleanup.py:138} DEBUG - ctas query:
CREATE TABLE _airflow_deleted__dag_run__20221202083418 AS SELECT base.*
FROM dag_run AS base LEFT OUTER JOIN (SELECT dag_id, max(dag_run.start_date) AS max_date_per_group
FROM dag_run
WHERE external_trigger = false GROUP BY dag_id) AS latest ON base.dag_id = latest.dag_id AND base.start_date = max_date_per_group
WHERE base.start_date < :start_date_1 AND max_date_per_group IS NULL
[2022-12-02T08:34:18.069+0000] {cli_action_loggers.py:83} DEBUG - Calling callbacks: []
Traceback (most recent call last):
File "/opt/app-root/lib64/python3.8/site-packages/mysql/connector/connection_cext.py", line 565, in cmd_query
self._cmysql.query(
_mysql_connector.MySQLInterfaceError: Statement violates GTID consistency: CREATE TABLE ... SELECT.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/app-root/lib64/python3.8/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
self.dialect.do_execute(
File "/opt/app-root/lib64/python3.8/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
cursor.execute(statement, parameters)
File "/opt/app-root/lib64/python3.8/site-packages/mysql/connector/cursor_cext.py", line 279, in execute
result = self._cnx.cmd_query(
File "/opt/app-root/lib64/python3.8/site-packages/mysql/connector/connection_cext.py", line 573, in cmd_query
raise get_mysql_exception(
mysql.connector.errors.DatabaseError: 1786 (HY000): Statement violates GTID consistency: CREATE TABLE ... SELECT.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
...
```
Our MySQL server has the `enforce_gtid_consistency` parameter set to ON, which disallows the `CREATE TABLE ... SELECT` statement.
### What you think should happen instead
I expected that the command should work for MySQL with enforce_gtid_consistency enabled.
### How to reproduce
As described above.
### Operating System
K8S with base image is CentOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
Can we avoid this kind of issue by breaking the statement into 2, like:
```
CREATE TABLE _airflow_deleted__dag_run__20221202083418 LIKE dag_run;
INSERT INTO _airflow_deleted__dag_run__20221202083418
SELECT base.*
FROM dag_run AS base
LEFT OUTER JOIN (
    SELECT dag_id, max(dag_run.start_date) AS max_date_per_group
    FROM dag_run
    WHERE external_trigger = false
    GROUP BY dag_id
) AS latest
    ON base.dag_id = latest.dag_id AND base.start_date = max_date_per_group
WHERE base.start_date < :start_date_1 AND max_date_per_group IS NULL;
```
Ref: https://stackoverflow.com/a/56068655/1526790
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28051 | https://github.com/apache/airflow/pull/29999 | 2f25ba572e0219c614c11cec1fa68dc80d0ec854 | 78cc2e89e5d46738664b7442dc6f5a00b23d1ef5 | "2022-12-02T09:00:51Z" | python | "2023-03-13T13:58:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,010 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "docs/apache-airflow/core-concepts/executor/celery.rst"] | Airflow does not pass through Celery's support for Redis Sentinel over SSL. | ### Apache Airflow version
2.4.3
### What happened
When configuring Airflow/Celery to use Redis Sentinel as a broker, the following pops up:
```
airflow.exceptions.AirflowException: The broker you configured does not support SSL_ACTIVE to be True. Please use RabbitMQ or Redis if you would like to use SSL for broker.
```
### What you think should happen instead
Celery has supported TLS on Redis Sentinel [for a while](https://docs.celeryq.dev/en/latest/history/whatsnew-5.1.html#support-redis-sentinel-with-ssl) now.
It looks like [this piece of code](https://github.com/apache/airflow/blob/main/airflow/config_templates/default_celery.py#L68-L88) explicitly prevents a valid Redis Sentinel TLS configuration from being passed through to Celery, since Sentinel broker URLs are prefixed with `sentinel://` instead of `redis://`.
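For illustration, the guard behaves roughly like the allow-list below (a simplified sketch with made-up names, not the actual `default_celery.py` code); adding `sentinel://` to the accepted schemes is the kind of change this implies:
```python
SSL_CAPABLE_SCHEMES = ("amqp://", "amqps://", "redis://", "rediss://")

def check_broker_ssl(broker_url: str, ssl_active: bool) -> None:
    # Raises when SSL is requested for a scheme missing from the allow-list,
    # which is what happens today for "sentinel://..." broker URLs.
    if ssl_active and not broker_url.startswith(SSL_CAPABLE_SCHEMES):
        raise ValueError(
            "The broker you configured does not support SSL_ACTIVE to be True."
        )
```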
### How to reproduce
This problem can be reproduced by deploying Airflow using Docker with the following environment variables:
```
AIRFLOW__CELERY__BROKER_URL=sentinel://sentinel1:26379;sentinel://sentinel2:26379;sentinel://sentinel3:26379
AIRFLOW__CELERY__SSL_ACTIVE=true
AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__MASTER_NAME='some-master-name'
AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__PASSWORD='some-password'
AIRFLOW__LOGGING__LOGGING_LEVEL=DEBUG
```
Note that I'm not 100% certain of the syntax for the password environment variable. I can't get to the point of testing this: without TLS, connections to our internal brokers are denied (they require TLS), and with TLS, no connection is attempted because of the earlier-linked code.
### Operating System
Docker (apache/airflow:2.4.3-python3.10)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Deployed using Nomad.
### Anything else
This is my first issue with this open source project. Please let me know if there's more relevant information I can provide to follow through on this issue.
I will try to make some time available soon to see if a simple code change in the earlier mentioned file would work, but as this is my first issue here I would still have to set-up a full development environment.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
If this is indeed a simple fix I'd be willing to look into making a PR. I would like some feedback on the problem first though if possible!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28010 | https://github.com/apache/airflow/pull/30352 | 800ade7da6ae49c52b4fe412c1c5a60ceffb897c | 2c270db714b7693a624ce70d178744ccc5f9e73e | "2022-11-30T15:15:32Z" | python | "2023-05-05T11:55:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,002 | ["airflow/models/dag.py", "airflow/www/views.py"] | Clearing dag run via UI fails on main branch and 2.5.0rc2 | ### Apache Airflow version
main (development)
### What happened
Create a simple dag, allow it to completely run through.
Next, when in grid view, on the left hand side click on the dag run at the top level.
On the right hand side, then click on "Clear existing tasks". This will error with the following on the web server:
```
[2022-11-29 17:55:05,939] {app.py:1742} ERROR - Exception on /dagrun_clear [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/opt/airflow/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/opt/airflow/airflow/www/decorators.py", line 83, in wrapper
return f(*args, **kwargs)
File "/opt/airflow/airflow/www/views.py", line 2184, in dagrun_clear
confirmed=confirmed,
File "/opt/airflow/airflow/www/views.py", line 2046, in _clear_dag_tis
session=session,
File "/opt/airflow/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow/models/dag.py", line 2030, in clear
exclude_task_ids=exclude_task_ids,
File "/opt/airflow/airflow/models/dag.py", line 1619, in _get_task_instances
tis = session.query(TaskInstance)
AttributeError: 'NoneType' object has no attribute 'query'
```
https://github.com/apache/airflow/blob/527fbce462429fc9836837378f801eed4e9d194f/airflow/models/dag.py#L1619
As per issue title, fails on main branch and `2.5.0rc2`. Works fine on `2.3.3` and `2.4.3`.
### What you think should happen instead
Tasks within the dag should be cleared as expected.
### How to reproduce
Run a dag, attempt to clear it within the UI at the top level of the dag.
### Operating System
Ran via breeze
### Versions of Apache Airflow Providers
N/A
### Deployment
Other 3rd-party Helm chart
### Deployment details
Tested via breeze.
### Anything else
Happens every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28002 | https://github.com/apache/airflow/pull/28003 | 527fbce462429fc9836837378f801eed4e9d194f | f43f50e3f11fa02a2025b4b68b8770d6456ba95d | "2022-11-30T08:18:26Z" | python | "2022-11-30T10:27:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,000 | ["airflow/providers/amazon/aws/hooks/redshift_sql.py", "docs/apache-airflow-providers-amazon/connections/redshift.rst", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"] | Add IAM authentication to Amazon Redshift Connection by AWS Connection | ### Description
Allow authenticating to Redshift Cluster in `airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook` with temporary IAM Credentials.
This might be implemented the same way it is already implemented in the PostgreSQL hook: manually obtaining credentials by calling [GetClusterCredentials](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetClusterCredentials.html) through the Redshift API.
https://github.com/apache/airflow/blob/56b5f3f4eed6a48180e9d15ba9bb9664656077b1/airflow/providers/postgres/hooks/postgres.py#L221-L235
Or by passing obtained temporary credentials into [redshift-connector](https://github.com/aws/amazon-redshift-python-driver#example-using-iam-credentials)
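A minimal sketch of obtaining such temporary credentials with boto3 (the cluster, database, and user names are placeholders):
```python
import boto3

# GetClusterCredentials returns a short-lived DbUser/DbPassword pair
redshift = boto3.client("redshift", region_name="us-east-1")
creds = redshift.get_cluster_credentials(
    DbUser="airflow",
    DbName="dev",
    ClusterIdentifier="my-cluster",
    AutoCreate=False,
)
# creds["DbUser"] / creds["DbPassword"] could then be handed to
# redshift_connector.connect() by the hook.
```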
### Use case/motivation
This would let users connect to a Redshift cluster by reusing an already-existing [Amazon Web Services Connection](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html).
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28000 | https://github.com/apache/airflow/pull/28187 | b7e5b47e2794fa0eb9ac2b22f2150d2fdd9ef2b1 | 2f247a2ba2fb7c9f1fe71567a80f0063e21a5f55 | "2022-11-30T05:09:08Z" | python | "2023-05-02T13:58:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,999 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | References to the 'kubernetes' section cause parse errors | ### Apache Airflow version
main (development)
### What happened
Here's a dag:
```python3
from airflow import DAG
from airflow.decorators import task
import airflow.configuration as conf
from datetime import datetime
@task
def print_this(this):
    print(this)


with DAG(dag_id="config_ref", schedule=None, start_date=datetime(1970, 1, 1)) as dag:
    namespace = conf.get("kubernetes", "NAMESPACE")
    print_this(namespace)
```
In 2.4.3 it parses without error, but in main (as of 2e7a4bcb550538283f28550208b01515d348fb51) the reference to the "kubernetes" section breaks. Likely because of this: https://github.com/apache/airflow/pull/26873
```
❯ airflow dags list-import-errors
filepath | error
======================================+========================================================================================================
/Users/matt/2022/11/29/dags/config.py | Traceback (most recent call last):
| File "/Users/matt/src/airflow/airflow/configuration.py", line 595, in get
| return self._get_option_from_default_config(section, key, **kwargs)
| File "/Users/matt/src/airflow/airflow/configuration.py", line 605, in _get_option_from_default_config
| raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
| airflow.exceptions.AirflowConfigException: section/key not found in config
|
❯ python dags/config.py
[2022-11-29 21:30:05,300] {configuration.py:603} WARNING - section/key [kubernetes/namespace] not found in config
Traceback (most recent call last):
File "/Users/matt/2022/11/29/dags/config.py", line 13, in <module>
namespace = conf.get("kubernetes", "NAMESPACE")
File "/Users/matt/src/airflow/airflow/configuration.py", line 1465, in get
return conf.get(*args, **kwargs)
File "/Users/matt/src/airflow/airflow/configuration.py", line 595, in get
return self._get_option_from_default_config(section, key, **kwargs)
File "/Users/matt/src/airflow/airflow/configuration.py", line 605, in _get_option_from_default_config
raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
airflow.exceptions.AirflowConfigException: section/key [kubernetes/namespace] not found in config
```
To quote @jedcunningham :
> The backcompat layer only expects you to use the “new” section name.
### What you think should happen instead
The recent section name change should be registered so that the old name still works.
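In the meantime, a workaround (assuming the section was renamed to `kubernetes_executor` in #26873) is to reference the new name directly:
```python
namespace = conf.get("kubernetes_executor", "NAMESPACE")
```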
### How to reproduce
See above
### Operating System
Mac OS / venv
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
`pip install -e ~/src/airflow` into a fresh venv
### Anything else
Also, it's kind of weird that the important part of the error message (which section?) is missing from `list-import-errors`. I had to run the dag def like a script to realize that it was the kubernetes section that it was complaining about.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27999 | https://github.com/apache/airflow/pull/28008 | f1c4c27e4aed79eef01f2873fab3a66af2aa3fa0 | 3df03cc9331cb8984f39c5dbf0c9775ac362421e | "2022-11-30T04:36:36Z" | python | "2022-12-01T07:41:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,978 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | KeyError: 0 error with common-sql version 1.3.0 | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==4.0.1
apache-airflow-providers-apache-livy==3.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.0
apache-airflow-providers-databricks==3.3.0
apache-airflow-providers-dbt-cloud==2.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Apache Airflow version
2.4.3
### Operating System
Debian Bullseye
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
With the latest version of the common-sql provider, the result of the hook's `get_records` is now an ordinary dictionary, causing this KeyError in SqlSensor:
```
[2022-11-29, 00:39:18 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/sensors/base.py", line 189, in execute
poke_return = self.poke(context)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/sensors/sql.py", line 98, in poke
first_cell = records[0][0]
KeyError: 0
```
I have only tested this with Snowflake; I haven't tested it with other databases. Reverting to 1.2.0 solves the issue.
### What you think should happen instead
It should return an iterable list as usual with the query.
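To illustrate the mismatch the traceback points at (the exact 1.3.0 row shape here is an assumption based on the report):
```python
# What SqlSensor.poke expects: a sequence of indexable rows
records = [(0,)]
first_cell = records[0][0]  # -> 0

# What the hook apparently returns in 1.3.0: a mapping keyed by column name
records = {"some_column": 0}  # hypothetical shape
# records[0] would now raise KeyError: 0, matching the traceback
```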
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.providers.common.sql.sensors.sql import SqlSensor
with DAG(
    dag_id="sql_provider_snowflake_test",
    schedule=None,
    start_date=datetime(2022, 1, 1),
    catchup=False,
):
    t1 = SqlSensor(
        task_id="snowflake_test",
        conn_id="snowflake",
        sql="select 0",
        fail_on_empty=False,
        poke_interval=20,
        mode="poke",
        timeout=60 * 5,
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27978 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-29T00:52:53Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,976 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | `SQLColumnCheckOperator` failures after upgrading to `common-sql==1.3.0` | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.2.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-salesforce==5.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-snowflake==3.2.0
Issue:
apache-airflow-providers-common-sql==1.3.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Problem occurred when upgrading from common-sql==1.2.0 to common-sql==1.3.0.
Getting a `KeyError` when running a `unique_check` and `null_check` on a column.
1.3.0 log:
<img width="1609" alt="Screen Shot 2022-11-28 at 2 01 20 PM" src="https://user-images.githubusercontent.com/15257610/204390144-97ae35b7-1a2c-4ee1-9c12-4f3940047cde.png">
1.2.0 log:
<img width="1501" alt="Screen Shot 2022-11-28 at 2 00 15 PM" src="https://user-images.githubusercontent.com/15257610/204389994-7e8eae17-a346-41ac-84c4-9de4be71af20.png">
### What you think should happen instead
Potential causes:
- seems to be indexing based on the test query column `COL_NAME` instead of the table column `STRIPE_ID`
- the `record` returned by the check changed type: it went from a tuple to a list of dictionaries (see the sketch after this list).
- no `tolerance` is specified for these tests, so `.get('tolerance')` looks like it will cause an error without a default specified like `.get('tolerance', None)`
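A minimal illustration of the suspected change in row shape; the key names below are assumptions based on the `COL_NAME` alias in the logs, not the provider's actual internals:

```python
# common-sql 1.2.0: rows were positional tuples, so row[0] worked
row = ("unique_check", 0)
check_name = row[0]

# common-sql 1.3.0: rows appear to come back keyed by column alias,
# so positional access raises the error shown above
row = {"COL_NAME": "STRIPE_ID", "CHECK_TYPE": "unique_check", "CHECK_RESULT": 0}
check_name = row[0]  # KeyError: 0
```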
Expected behavior:
- these tests continue to pass with the upgrade
- `tolerance` is not a required key.
### How to reproduce
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.providers.common.sql.operators.sql import SQLColumnCheckOperator

my_conn_id = "snowflake_default"
default_args = {"conn_id": my_conn_id}

with DAG(
    dag_id="airflow_providers_example",
    schedule=None,
    start_date=datetime(2022, 11, 27),
    default_args=default_args,
) as dag:
    create_table = SnowflakeOperator(
        task_id="create_table",
        sql=""" CREATE OR REPLACE TABLE testing AS (
            SELECT
                1 AS row_num,
                'not null' AS field
            UNION ALL
            SELECT
                2 AS row_num,
                'test' AS field
            UNION ALL
            SELECT
                3 AS row_num,
                'test 2' AS field
        )""",
    )

    column_checks = SQLColumnCheckOperator(
        task_id="column_checks",
        table="testing",
        column_mapping={
            "field": {"unique_check": {"equal_to": 0}, "null_check": {"equal_to": 0}}
        },
    )

    create_table >> column_checks
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27976 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-28T23:03:13Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,955 | ["airflow/api/common/mark_tasks.py", "airflow/models/taskinstance.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/utils/state.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Latest log not shown in grid view for deferred task | ### Apache Airflow version
2.4.3
### What happened
In the grid view I do not see the logs for the latest try number if the task is in a deferred state. I do see them in the "old" log view.
Grid view:

"Old" view:

It could have something to do with the deferred task getting its `try_number` reduced by 1; in my example, `try_number=1` and `next_try_number=2`.
https://github.com/apache/airflow/blob/3e288abd0bc3e5788dcd7f6d9f6bef26ec4c7281/airflow/models/taskinstance.py#L1617-L1618
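For context, a heavily hedged paraphrase of what those two referenced lines do (reconstructed from memory of the 2.4.x source; the exact code may differ):

```python
# Inside TaskInstance._defer_task (approximate): the try number is
# rolled back so the resumed attempt reuses the same try number and
# therefore the same log file.
self.state = State.DEFERRED
self._try_number -= 1
```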
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Red Hat
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27955 | https://github.com/apache/airflow/pull/26993 | ad7f8e09f8e6e87df2665abdedb22b3e8a469b49 | f110cb11bf6fdf6ca9d0deecef9bd51fe370660a | "2022-11-28T07:12:45Z" | python | "2023-01-05T16:42:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,952 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | can not use output of task decorator as input for external_task_ids of ExternalTaskSensor | ### Apache Airflow version
2.4.3, 2.5.0
### What happened
When the output of a `@task`-decorated function is used as the `external_task_ids` parameter of `ExternalTaskSensor`, this log shows up:
```
Broken DAG: [+++++/airflow/dags/TEST_NEW_PIPELINE.py] Traceback (most recent call last):
File "+++++/env3.10.5/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 408, in apply_defaults
result = func(self, **kwargs, default_args=default_args)
File "+++++/env3.10.5/lib/python3.10/site-packages/airflow/sensors/external_task.py", line 164, in __init__
if external_task_ids and len(external_task_ids) > len(set(external_task_ids)):
TypeError: object of type 'PlainXComArg' has no len()
```
note: +++++ is just a mask for irrelevant information.
### What you think should happen instead
This document https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html
shows that we can pass task outputs this way, without any warning or note about this limitation.
A related problem was found in https://github.com/apache/airflow/issues/27328.
All the checks in `__init__` should be moved into the `poke` method.
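A minimal sketch of that suggested fix, assuming the duplicate-ID check is representative of the validation that needs to move (the real sensor has more checks):

```python
def poke(self, context):
    # By poke time, an XComArg passed as external_task_ids has been
    # resolved into a real list, so len()/set() are safe here.
    external_task_ids = self.external_task_ids
    if external_task_ids and len(external_task_ids) > len(set(external_task_ids)):
        raise ValueError("Duplicate task_ids passed in external_task_ids parameter")
    ...  # existing poke logic
```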
### How to reproduce
```python
from airflow.decorators import dag, task
from airflow.operators.python import get_current_context
from airflow.sensors.external_task import ExternalTaskSensor
from datetime import datetime

configure = {
    "dag_id": "test_new_skeleton",
    "schedule": None,
    "start_date": datetime(2022, 1, 1),
}

@task
def preprocess_dependency() -> list:
    return ["random-task-name"]

@dag(**configure)
def pipeline():
    t_preprocess = preprocess_dependency()
    task_dependency = ExternalTaskSensor(
        task_id="Check_Dependency",
        external_dag_id='random-dag-name-that-exist',
        external_task_ids=t_preprocess,
        poke_interval=60,
        mode="reschedule",
        timeout=172800,
        allowed_states=['success'],
        failed_states=['failed', 'skipped'],
        check_existence=True,
    )

dag = pipeline()
```
### Operating System
REHL 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27952 | https://github.com/apache/airflow/pull/28692 | 3d89797889e43bda89d4ceea37130bdfbc3db32c | 7f18fa96e434c64288d801904caf1fcde18e2cbf | "2022-11-27T15:47:45Z" | python | "2023-01-04T11:39:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,936 | ["airflow/www/static/js/components/Table/Cells.tsx"] | Datasets triggered run modal is not scrollable | ### Apache Airflow version
main (development)
### What happened
The Datasets modal used to display triggered runs is not scrollable, even when there are more records than fit on screen.

### What you think should happen instead
It should be scrollable if there are records to display
### How to reproduce
1. Trigger a datasets DAG with multiple triggered runs
2. Click on Datasets
3. Click on a URI which has multiple triggered runs
DAG-
```python
from airflow import Dataset, DAG
from airflow.operators.python import PythonOperator
from datetime import datetime

fan_out = Dataset("fan_out")
fan_in = Dataset("fan_in")

# the leader
with DAG(
    dag_id="momma_duck", start_date=datetime(1970, 1, 1), schedule_interval=None
) as leader:
    PythonOperator(
        task_id="has_outlet", python_callable=lambda: None, outlets=[fan_out]
    )

# the many
for i in range(1, 40):
    with DAG(
        dag_id=f"duckling_{i}", start_date=datetime(1970, 1, 1), schedule=[fan_out]
    ) as duck:
        PythonOperator(
            task_id="has_outlet", python_callable=lambda: None, outlets=[fan_in]
        )
    globals()[f"duck_{i}"] = duck

# the straggler
with DAG(
    dag_id="straggler_duck", start_date=datetime(1970, 1, 1), schedule=[fan_in]
) as straggler:
    PythonOperator(task_id="has_outlet", python_callable=lambda: None)
```
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27936 | https://github.com/apache/airflow/pull/27965 | a158fbb6bde07cd20003680a4cf5e7811b9eda98 | 5e4f4a3556db5111c2ae36af1716719a8494efc7 | "2022-11-26T07:18:43Z" | python | "2022-11-29T01:16:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,932 | ["airflow/executors/base_executor.py", "airflow/providers/celery/executors/celery_executor.py", "airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow-providers-celery/cli-ref.rst", "docs/apache-airflow-providers-celery/index.rst", "docs/apache-airflow-providers-cncf-kubernetes/cli-ref.rst", "docs/apache-airflow-providers-cncf-kubernetes/index.rst"] | AIP-51 - Executor Specific CLI Commands | ### Overview
Some Executors have their own first-class CLI commands (now that's hardcoding/coupling!) which set up or modify various components related to that Executor.
### Examples
- **5a**) Celery Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1689-L1734
- **5b**) Kubernetes Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1754-L1771
- **5c**) Default CLI parser has hardcoded logic for Celery and Kubernetes Executors specifically: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L63-L99
### Proposal
Update the BaseExecutor interface with a pluggable mechanism to vend CLI `GroupCommands` and parsers. Executor subclasses would then implement these methods, if applicable, which would then be called to fetch commands and parsers from within Airflow Core cli parser code. We would then migrate the existing Executor CLI code from cli_parser to the respective Executor class.
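For the executor side, a hedged sketch of what subclasses might implement; the method name matches the pseudo-code below, but the final interface could differ:

```python
class BaseExecutor:
    @classmethod
    def get_cli_group_commands(cls) -> list:
        # Default: an executor vends no CLI commands.
        return []

class CeleryExecutor(BaseExecutor):
    @classmethod
    def get_cli_group_commands(cls) -> list:
        # GroupCommand as defined in airflow/cli/cli_parser.py;
        # CELERY_COMMANDS stands in for what is hardcoded there today.
        return [GroupCommand(name="celery", help="Celery components", subcommands=CELERY_COMMANDS)]
```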
Pseudo-code example for vending `GroupCommand`s:
```python
# Existing code in cli_parser.py
...
airflow_commands: List[CLICommand] = [
    GroupCommand(
        name='dags',
        help='Manage DAGs',
        subcommands=DAGS_COMMANDS,
    ),
    ...
]

# New code to add groups vended by executor classes
executor_cls, _ = ExecutorLoader.import_executor_cls(conf.get('core', 'EXECUTOR'))
airflow_commands.append(executor_cls.get_cli_group_commands())
...
```
| https://github.com/apache/airflow/issues/27932 | https://github.com/apache/airflow/pull/33081 | bbc096890512ba2212f318558ca1e954ab399657 | 879fd34e97a5343e6d2bbf3d5373831b9641b5ad | "2022-11-25T23:28:44Z" | python | "2023-08-04T17:26:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,929 | ["airflow/executors/base_executor.py", "airflow/executors/celery_kubernetes_executor.py", "airflow/executors/debug_executor.py", "airflow/executors/local_kubernetes_executor.py", "airflow/sensors/base.py", "tests/sensors/test_base.py"] | AIP-51 - Single Threaded Executors | ### Overview
Some Executors, currently a subset of the local Executors, run in a single threaded fashion and have certain limitations and requirements, many of which are hardcoded. To add a new single threaded Executor would require changes to core Airflow code.
Note: This coupling often shows up in SQLite compatibility checks, since SQLite does not support multiple connections.
### Examples
- **2a**) SQLite check done in configuration.py: https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/configuration.py#L412-L419
- **2b**) When running in standalone mode SQLite compatibility is checked: https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/cli/commands/standalone_command.py#L160-L165
- **2c**) Sensors in `poke` mode can block execution of DAGs when running with single process Executors, currently hardcoded to DebugExecutor (although should also include SequentialExecutor): https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/sensors/base.py#L243
### Proposal
A static method or attribute on the Executor class which can be checked by core code.
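A minimal sketch following the `supports_ad_hoc_ti_run` precedent quoted below; the attribute name `is_single_threaded` is an assumption:

```python
class BaseExecutor:
    is_single_threaded: bool = False

class SequentialExecutor(BaseExecutor):
    is_single_threaded = True  # one task at a time; SQLite-compatible

# Core code could then replace hardcoded executor-name checks with e.g.:
# if executor_cls.is_single_threaded: force sensors into "reschedule" mode
```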
There is a precedent already set with the `supports_ad_hoc_ti_run` attribute, see:
https://github.com/apache/airflow/blob/fb741fd87254e235f99d7d67e558dafad601f253/airflow/executors/kubernetes_executor.py#L435
https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/www/views.py#L1735-L1737
| https://github.com/apache/airflow/issues/27929 | https://github.com/apache/airflow/pull/28934 | 0359a42a3975d0d7891a39abe4395bdd6f210718 | e5730364b4eb5a3b30e815ca965db0f0e710edb6 | "2022-11-25T23:28:05Z" | python | "2023-01-23T21:26:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,909 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py"] | Add export_format to template_fields of BigQueryToGCSOperator | ### Description
There might be a use case where the `export_format` is based on some dynamic value, so adding `export_format` to the templated fields will help developers in the future.
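For illustration, a sketch of the requested change (the neighbouring entries are a subset shown for illustration, not the operator's exact current tuple):

```python
from airflow.models import BaseOperator

class BigQueryToGCSOperator(BaseOperator):
    template_fields = (
        "source_project_dataset_table",
        "destination_cloud_storage_uris",
        "export_format",  # proposed addition
    )

# which would then allow, e.g.:
# BigQueryToGCSOperator(..., export_format="{{ dag_run.conf['fmt'] }}")
```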
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27909 | https://github.com/apache/airflow/pull/27910 | 3fef6a47834b89b99523db6d97d6aa530657a008 | f0820e8d9e8a36325987278bcda2bd69bd53f3a5 | "2022-11-25T10:10:10Z" | python | "2022-11-25T20:26:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,907 | ["airflow/www/decorators.py"] | Password is not masked in audit logs for connections/variables | ### Apache Airflow version
main (development)
### What happened
Passwords for connections, and the values of variables with "secret" in the name, are not masked in audit logs.
<img width="1337" alt="Screenshot 2022-11-25 at 12 58 59 PM" src="https://user-images.githubusercontent.com/88504849/203932123-c47fd66f-8e63-4bc6-9bf1-b9395cb26675.png">
<img width="1352" alt="Screenshot 2022-11-25 at 12 56 32 PM" src="https://user-images.githubusercontent.com/88504849/203932220-3f02984c-94b5-4773-8767-6f19cb0ceff0.png">
<img width="1328" alt="Screenshot 2022-11-25 at 1 43 40 PM" src="https://user-images.githubusercontent.com/88504849/203933183-e97b2358-9414-45c8-ab8f-d2f913117301.png">
### What you think should happen instead
Password/value should be masked
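A minimal sketch of how the audit-log writer could mask values before persisting them; `should_hide_value_for_key` is the helper Airflow already uses for sensitive keys, while the surrounding function is an assumption:

```python
from airflow.utils.log.secrets_masker import should_hide_value_for_key

def mask_params(params: dict) -> dict:
    # Hypothetical helper: redact sensitive values before writing audit logs.
    return {k: ("***" if should_hide_value_for_key(k) else v) for k, v in params.items()}
```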
### How to reproduce
1. Create a connection or a variable (with "secret" in the name, e.g. `test_secret`)
2. Open the audit logs
3. Observe the password
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27907 | https://github.com/apache/airflow/pull/27923 | 5e45cb019995e8b80104b33da1c93eefae12d161 | 1e73b1cea2d507d6d09f5eac6a16b649f8b52522 | "2022-11-25T08:14:51Z" | python | "2022-11-25T21:23:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,878 | ["airflow/jobs/backfill_job.py", "airflow/models/abstractoperator.py", "airflow/models/taskinstance.py", "airflow/utils/task_group.py"] | Task failing with error "missing upstream values" when running a dag with dynamic task group mapping | ### Apache Airflow version
main (development)
### What happened
Task failing with error "missing upstream values" when running a dag with dynamic task group mapping
error -
```
[2022-11-23, 15:50:07 UTC] {abstractoperator.py:456} ERROR - Cannot expand <Task(_PythonDecoratedOperator): increment_and_verify.hello_there> for run manual__2022-11-23T15:49:59.771915+00:00; missing upstream values: ['x']
```
### What you think should happen instead
It should succeed
### How to reproduce
Run the below dag code
```
from airflow import DAG
from airflow.models.taskinstance import TaskInstance
from airflow.operators.dummy import DummyOperator
from datetime import datetime, timedelta
from airflow.decorators import task, task_group
with DAG(
dag_id="taskmap_taskgroup",
tags=["AIP_42"],
start_date=datetime(1970, 1, 1),
schedule_interval=None,
) as dag:
@task
def onetwothree():
return [1,2,3]
@task
def hello_there(arg):
print(arg)
@task_group
def increment_and_verify(x):
hello_there(x)
increment_and_verify.expand(x=onetwothree())>>DummyOperator(task_id="done")
```
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27878 | https://github.com/apache/airflow/pull/27876 | 1f20b77872de22e303ae7dae22199a012e217469 | e939a641b2c1aab27beed5984ca52cd8fde14f01 | "2022-11-24T06:51:57Z" | python | "2022-11-24T09:19:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,864 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | current_state method of TaskInstance fails for mapped task instance | ### Apache Airflow version
2.4.3
### What happened
`current_state` method on TaskInstance doesn't filter by `map_index` so calling this method on mapped task instance fails.
https://github.com/apache/airflow/blob/fb7c6afc8cb7f93909bd2e654ea185eb6abcc1ea/airflow/models/taskinstance.py#L708-L726
### What you think should happen instead
`map_index` should also be filtered on in the query so that it returns a single TaskInstance state.
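A sketch of the expected fix, mirroring the query quoted above with one extra filter (the actual patch may differ):

```python
return (
    session.query(TaskInstance.state)
    .filter(
        TaskInstance.dag_id == self.dag_id,
        TaskInstance.task_id == self.task_id,
        TaskInstance.run_id == self.run_id,
        TaskInstance.map_index == self.map_index,  # proposed addition
    )
    .scalar()
)
```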
### How to reproduce
```python
with create_session() as session:
    print(session.query(TaskInstance).filter(TaskInstance.dag_id == "divide_by_zero",
                                             TaskInstance.map_index == 1,
                                             TaskInstance.run_id == 'scheduled__2022-11-22T00:00:00+00:00')
          .scalar().current_state())
---------------------------------------------------------------------------
MultipleResultsFound Traceback (most recent call last)
Input In [7], in <cell line: 1>()
1 with create_session() as session:
----> 2 print(session.query(TaskInstance).filter(TaskInstance.dag_id == "divide_by_zero", TaskInstance.map_index == 1, TaskInstance.run_id == 'scheduled__2022-11-22T00:00:00+00:00').scalar().current_state())
File ~/stuff/python/airflow/airflow/utils/session.py:75, in provide_session.<locals>.wrapper(*args, **kwargs)
73 else:
74 with create_session() as session:
---> 75 return func(*args, session=session, **kwargs)
File ~/stuff/python/airflow/airflow/models/taskinstance.py:725, in TaskInstance.current_state(self, session)
708 @provide_session
709 def current_state(self, session: Session = NEW_SESSION) -> str:
710 """
711 Get the very latest state from the database, if a session is passed,
712 we use and looking up the state becomes part of the session, otherwise
(...)
715 :param session: SQLAlchemy ORM Session
716 """
717 return (
718 session.query(TaskInstance.state)
719 .filter(
720 TaskInstance.dag_id == self.dag_id,
721 TaskInstance.task_id == self.task_id,
722 TaskInstance.run_id == self.run_id,
723
724 )
--> 725 .scalar()
726 )
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py:2803, in Query.scalar(self)
2801 # TODO: not sure why we can't use result.scalar() here
2802 try:
-> 2803 ret = self.one()
2804 if not isinstance(ret, collections_abc.Sequence):
2805 return ret
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py:2780, in Query.one(self)
2762 def one(self):
2763 """Return exactly one result or raise an exception.
2764
2765 Raises ``sqlalchemy.orm.exc.NoResultFound`` if the query selects
(...)
2778
2779 """
-> 2780 return self._iter().one()
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/engine/result.py:1162, in Result.one(self)
1134 def one(self):
1135 # type: () -> Row
1136 """Return exactly one row or raise an exception.
1137
1138 Raises :class:`.NoResultFound` if the result returns no
(...)
1160
1161 """
-> 1162 return self._only_one_row(True, True, False)
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/engine/result.py:620, in ResultInternal._only_one_row(self, raise_for_second_row, raise_for_none, scalar)
618 if next_row is not _NO_ROW:
619 self._soft_close(hard=True)
--> 620 raise exc.MultipleResultsFound(
621 "Multiple rows were found when exactly one was required"
622 if raise_for_none
623 else "Multiple rows were found when one or none "
624 "was required"
625 )
626 else:
627 next_row = _NO_ROW
MultipleResultsFound: Multiple rows were found when exactly one was required
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27864 | https://github.com/apache/airflow/pull/27898 | c931d888936a958ae40b69077d35215227bf1dff | 51c70a5d6990a6af1188aab080ae2cbe7b935eb2 | "2022-11-23T17:27:58Z" | python | "2022-12-03T16:08:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,842 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator no longer uses field_delimiter or time_partitioning | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
google=8.5.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The newest version of the Google provider no longer passes the `field_delimiter` or `time_partitioning` fields to the BigQuery job configuration for GCS-to-BigQuery transfers. Looking at the code, it seems this behavior was removed during the change to use deferrable operation.
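For reference, a sketch of where these fields belong in the load job configuration the operator submits; the keys follow the BigQuery jobs API, and the surrounding dict is abbreviated:

```python
configuration = {
    "load": {
        # ...other load options...
        "fieldDelimiter": self.field_delimiter,      # dropped in 8.5.0 per this report
        "timePartitioning": self.time_partitioning,  # likewise
    }
}
```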
### What you think should happen instead
These fields should continue to be provided
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27842 | https://github.com/apache/airflow/pull/27961 | 5cdff505574822ad3d2a226056246500e4adea2f | 2d663df0552542efcef6e59bc2bc1586f8d1c7f3 | "2022-11-22T17:31:55Z" | python | "2022-12-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,838 | ["airflow/providers/common/sql/operators/sql.py"] | apache-airflow-providers-common-sql==1.3.0 breaks BigQuery operators | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
**Airflow version**: 2.3.4 (Cloud Composer 2.0.32)
**Issue**: `apache-airflow-providers-common-sql==1.3.0` breaks all BigQuery operators provided by the `apache-airflow-providers-google==8.4.0` package. The error is as follows:
```python
Broken DAG: [/home/airflow/gcs/dags/test-dag.py] Traceback (most recent call last):
File "/home/airflow/gcs/dags/test-dag.py", line 6, in <module>
from airflow.providers.google.cloud.operators.bigquery import BigQueryExecuteQueryOperator
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 35, in <module>
from airflow.providers.common.sql.operators.sql import (
ImportError: cannot import name '_get_failed_checks' from 'airflow.providers.common.sql.operators.sql' (/opt/python3.8/lib/python3.8/site-packages/airflow/providers/common/sql/operators/sql.py)
```
**Why this issue is tricky**: other providers such as `apache-airflow-providers-microsoft-mssql==3.3.0` and `apache-airflow-providers-oracle==3.5.0` have a dependency on `apache-airflow-providers-common-sql>=1.3.0` and will therefore install it when added to the Composer environment
**Current mitigation**: Downgrade provider packages such that `apache-airflow-providers-common-sql==1.2.0` is installed instead
### What you think should happen instead
A minor version upgrade of `apache-airflow-providers-common-sql` (1.2.0 to 1.3.0) should not break other providers (e.g. apache-airflow-providers-google==8.4.0)
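One way to preserve compatibility would be to keep the removed module-level helper around as a shim. This is a rough reconstruction from memory of the 1.2.0 helper, not the actual patch:

```python
# Hypothetical backcompat shim in airflow/providers/common/sql/operators/sql.py
def _get_failed_checks(checks, col=None):
    """Kept so imports from google provider <= 8.4.0 keep working."""
    prefix = f"Column: {col}\n" if col else ""
    return [
        f"{prefix}Check: {name},\nCheck Values: {result}\n"
        for name, result in checks.items()
        if not result["success"]
    ]
```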
### How to reproduce
- Deploy fresh deployment of Composer `composer-2.0.32-airflow-2.3.4`
- Install `apache-airflow-providers-common-sql==1.3.0` via Pypi package install feature
- Deploy a dag that uses one of the BigQuery operators, such as
```python
import airflow
from airflow import DAG
from datetime import timedelta
from airflow.providers.google.cloud.operators.bigquery import BigQueryExecuteQueryOperator

default_args = {
    'start_date': airflow.utils.dates.days_ago(0),
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG(
    'test-dag',
    default_args=default_args,
    schedule_interval=None,
    dagrun_timeout=timedelta(minutes=20),
)

t1 = BigQueryExecuteQueryOperator(
    ...
)
```
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
- apache-airflow-providers-apache-beam @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_apache_beam-4.0.0-py3-none-any.whl
- apache-airflow-providers-cncf-kubernetes @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_cncf_kubernetes-4.4.0-py3-none-any.whl
- apache-airflow-providers-common-sql @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_common_sql-1.3.0-py3-none-any.whl
- apache-airflow-providers-dbt-cloud @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_dbt_cloud-2.2.0-py3-none-any.whl
- apache-airflow-providers-ftp @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_ftp-3.1.0-py3-none-any.whl
- apache-airflow-providers-google @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_google-8.4.0-py3-none-any.whl
- apache-airflow-providers-hashicorp @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_hashicorp-3.1.0-py3-none-any.whl
- apache-airflow-providers-http @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_http-4.0.0-py3-none-any.whl
- apache-airflow-providers-imap @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_imap-3.0.0-py3-none-any.whl
- apache-airflow-providers-mysql @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_mysql-3.2.1-py3-none-any.whl
- apache-airflow-providers-postgres @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_postgres-5.2.2-py3-none-any.whl
- apache-airflow-providers-sendgrid @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_sendgrid-3.0.0-py3-none-any.whl
- apache-airflow-providers-sqlite @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_sqlite-3.2.1-py3-none-any.whl
- apache-airflow-providers-ssh @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_ssh-3.2.0-py3-none-any.whl
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27838 | https://github.com/apache/airflow/pull/27843 | 0b0d2990fdb31749396305433b0f8cc54db7aee8 | dbb4b59dcbc8b57243d1588d45a4d2717c3e7758 | "2022-11-22T14:50:19Z" | python | "2022-11-23T10:12:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,837 | ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks - Run job by job name not working with DatabricksRunNowDeferrableOperator | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.3.0
### Apache Airflow version
2.4.2
### Operating System
Mac OS 13.0
### Deployment
Virtualenv installation
### Deployment details
Virtualenv deployment with Python 3.10
### What happened
Submitting a Databricks job run by name (`job_name`) with the deferrable version (`DatabricksRunNowDeferrableOperator`) does not actually fill the `job_id`, and the Databricks API responds with an HTTP 400 Bad Request for attempting to run a job (POST `https://<databricks-instance>/api/2.1/jobs/run-now`) without an ID specified.
Sample errors from the Airflow logs:
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://[subdomain].azuredatabricks.net/api/2.1/jobs/run-now
During handling of the above exception, another exception occurred:
[...truncated message...]
airflow.exceptions.AirflowException: Response: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"Job 0 does not exist."}', Status Code: 400
```
### What you think should happen instead
The deferrable version (`DatabricksRunNowDeferrableOperator`) should maintain the behavior of the parent class (`DatabricksRunNowOperator`) and use the `job_name` to find the `job_id`.
The following logic is missing in the deferrable version:
```python
# Sample from the DatabricksRunNowOperator#execute
hook = self._hook
if "job_name" in self.json:
    job_id = hook.find_job_id_by_name(self.json["job_name"])
    if job_id is None:
        raise AirflowException(f"Job ID for job name {self.json['job_name']} can not be found")
    self.json["job_id"] = job_id
    del self.json["job_name"]
```
### How to reproduce
To reproduce, use a deferrable run now operator with the job name as an argument in an airflow task:
```python
from airflow.providers.databricks.operators.databricks import DatabricksRunNowDeferrableOperator

DatabricksRunNowDeferrableOperator(
    job_name='some-name',
    # Other args
)
```
### Anything else
The problem occurs at every call.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27837 | https://github.com/apache/airflow/pull/32806 | c4b6f06f6e2897b3f1ee06440fc66f191acee9a8 | 58e21c66fdcc8a416a697b4efa852473ad8bd6fc | "2022-11-22T13:54:22Z" | python | "2023-07-25T03:21:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,824 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.4.3
### What happened
A significant fraction of the DAG Runs of a DAG that has 2+ consecutive mapped tasks which are being passed an empty list are marked as failed, even though all tasks either succeed or are skipped. This was supposedly fixed with issue #25200 but the problem still persists.

### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
The real-world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```python
from datetime import datetime

from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:

    @task
    def add_one(x: int):
        return x + 1

    @task
    def say_hi():
        print("Hi")

    @task
    def say_bye():
        print("Bye")

    added_values = add_one.expand(x=[])
    added_more_values = add_one.expand(x=[])
    added_more_more_values = add_one.expand(x=[])

    say_hi() >> say_bye() >> added_values
    added_values >> added_more_values >> added_more_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27824 | https://github.com/apache/airflow/pull/27964 | b60006ae26c41e887ec0102bce8b726fce54007d | f89ca94c3e60bfae888dfac60c7472d207f60f22 | "2022-11-22T01:31:41Z" | python | "2022-11-29T07:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,818 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Triggering a DAG with the same run_id as a scheduled one causes the scheduler to crash | ### Apache Airflow version
2.5.0
### What happened
A user with access to manually triggering DAGs can trigger a DAG, provide a run_id that matches the pattern used when creating scheduled runs, and cause the scheduler to crash due to a database unique-key violation:
```
2022-12-12 12:58:00,793] {scheduler_job.py:776} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 885, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 956, in _do_scheduling
self._create_dagruns_for_dags(guard, session)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/usr/local/lib/python3.8/site-packages/tenacity/__init__.py", line 384, in __iter__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.8/site-packages/tenacity/__init__.py", line 351, in iter
return fut.result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/usr/local/lib/python3.8/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1018, in _create_dagruns_for_dags
query, dataset_triggered_dag_info = DagModel.dags_needing_dagruns(session)
File "/usr/local/lib/python3.8/site-packages/airflow/models/dag.py", line 3341, in dags_needing_dagruns
for x in session.query(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1713, in execute
conn = self._connection_for_bind(bind)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dag_run_dag_id_run_id_key"
DETAIL: Key (dag_id, run_id)=(example_branch_dop_operator_v3, scheduled__2022-12-12T12:57:00+00:00) already exists.
[SQL: INSERT INTO dag_run (dag_id, queued_at, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, data_interval_start, data_interval_end, last_scheduling_decision, dag_hash, log_template_id, updated_at) VALUES (%(dag_id)s, %(queued_at)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(state)s, %(run_id)s, %(creating_job_id)s, %(external_trigger)s, %(run_type)s, %(conf)s, %(data_interval_start)s, %(data_interval_end)s, %(last_scheduling_decision)s, %(dag_hash)s, (SELECT max(log_template.id) AS max_1
FROM log_template), %(updated_at)s) RETURNING dag_run.id]
[parameters: {'dag_id': 'example_branch_dop_operator_v3', 'queued_at': datetime.datetime(2022, 12, 12, 12, 58, 0, 435945, tzinfo=Timezone('UTC')), 'execution_date': DateTime(2022, 12, 12, 12, 57, 0, tzinfo=Timezone('UTC')), 'start_date': None, 'end_date': None, 'state': <DagRunState.QUEUED: 'queued'>, 'run_id': 'scheduled__2022-12-12T12:57:00+00:00', 'creating_job_id': 1, 'external_trigger': False, 'run_type': <DagRunType.SCHEDULED: 'scheduled'>, 'conf': <psycopg2.extensions.Binary object at 0x7f283a82af60>, 'data_interval_start': DateTime(2022, 12, 12, 12, 57, 0, tzinfo=Timezone('UTC')), 'data_interval_end': DateTime(2022, 12, 12, 12, 58, 0, tzinfo=Timezone('UTC')), 'last_scheduling_decision': None, 'dag_hash': '1653a588de69ed25c5b1dcfef928479c', 'updated_at': datetime.datetime(2022, 12, 12, 12, 58, 0, 436871, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
```
Worse yet, the scheduler will keep crashing after a restart with the same exception.
### What you think should happen instead
A user should not be able to crash the scheduler from the UI.
I see 2 alternatives for solving this:
1. Reject any custom run_id that would (or could) collide with a scheduled one, preventing this situation from happening (see the sketch after this list).
2. Handle the database error and assign a different run_id to the scheduled run.
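A minimal sketch of option 1, assuming run types keep their current `<type>__` prefixes (the reserved set and hook point are assumptions):

```python
import re

# Hypothetical guard applied when a user supplies a custom run_id
RESERVED_RUN_ID = re.compile(r"^(scheduled|backfill|dataset_triggered)__")

def validate_manual_run_id(run_id: str) -> None:
    if RESERVED_RUN_ID.match(run_id):
        raise ValueError(f"run_id {run_id!r} uses a prefix reserved for scheduler-created runs")
```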
### How to reproduce
1. Find an unpaused DAG.
2. Trigger DAG w/ config, set the run id to something like scheduled__2022-11-21T12:00:00+00:00 (adjust the time to be in the future where there is no run yet).
3. Let the manual DAG run finish.
4. Wait for the scheduler to try to schedule another DAG run with the same run id.
5. :boom:
6. Attempt to restart the scheduler.
7. :boom:
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==5.3.1
### Deployment
Docker-Compose
### Deployment details
I'm using a Postgres docker container as a metadata database that is linked via docker networking to the scheduler and the rest of the components. Scheduler, workers and webserver are all running in separate containers (using CeleryExecutor backed by a Redis container), though I do not think it is relevant in this case.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27818 | https://github.com/apache/airflow/pull/28397 | 8fb7be2fb5c64cc2f31a05034087923328b1137a | 7ccbe4e7eaa529641052779a89e34d54c5a20f72 | "2022-11-21T12:38:19Z" | python | "2022-12-22T01:54:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,799 | ["airflow/cli/commands/task_command.py", "airflow/utils/cli.py"] | First task in Quick Start fails | ### Apache Airflow version
2.4.3
### What happened
When I ran
`airflow tasks run example_bash_operator runme_0 2015-01-01`
I got the error
```
(venv) myusername@MacBook-Air airflow % airflow tasks run example_bash_operator runme_0 2015-01-01
[2022-11-14 23:49:19,228] {dagbag.py:537} INFO - Filling up the DagBag from /Users/myusername/airflow/dags
[2022-11-14 23:49:19,228] {cli.py:225} WARNING - Dag '\x1b[01mexample_bash_operator\x1b[22m' not found in path /Users/myusername/airflow/dags; trying path /Users/myusername/airflow/dags
[2022-11-14 23:49:19,228] {dagbag.py:537} INFO - Filling up the DagBag from /Users/myusername/airflow/dags
Traceback (most recent call last):
File "/Users/myusername/airflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/utils/cli.py", line 103, in wrapper
return f(*args, **kwargs)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/cli/commands/task_command.py", line 366, in task_run
dag = get_dag(args.subdir, args.dag_id, include_examples=False)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/utils/cli.py", line 228, in get_dag
raise AirflowException(
airflow.exceptions.AirflowException: Dag 'example_bash_operator' could not be found; either it does not exist or it failed to parse.
```
### What you think should happen instead
Successful completion of the task
### How to reproduce
1. Create a Python venv based on Python 3.10
2. Follow the [Quick Start instructions](https://airflow.apache.org/docs/apache-airflow/stable/start.html) through `airflow tasks run example_bash_operator runme_0 2015-01-01`
3. The error should appear
### Operating System
MacOS 12.5.1
### Versions of Apache Airflow Providers
Not applicable
### Deployment
Other
### Deployment details
This is just on my local machine
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27799 | https://github.com/apache/airflow/pull/27813 | d8dbdccef7cc14af7bacbfd4ebc48d8aabfaf7f0 | b9729d9e469f7822212e0d6d76e10d95411e739a | "2022-11-20T02:41:51Z" | python | "2022-11-21T09:29:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,715 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "dev/breeze/src/airflow_breeze/pre_commit_ids.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_static-checks.svg"] | Add pre-commit rule to validate using `urlsplit` rather than `urlparse` | ### Body
Originally suggested in https://github.com/apache/airflow/pull/27389#issuecomment-1297252026
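A sketch of what such a check could look like as a simple file-scanning hook script; the implementation style is an assumption (many of Airflow's static checks are regex-based pre-commit hooks):

```python
#!/usr/bin/env python
# Hypothetical pre-commit hook: flag urlparse() where urlsplit() should be used.
import re
import sys

PATTERN = re.compile(r"\burlparse\(")

def main() -> int:
    failed = False
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: use urlsplit rather than urlparse")
                    failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```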
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27715 | https://github.com/apache/airflow/pull/27841 | cd01650192b74573b49a20803e4437e611a4cf33 | a99254ffd36f9de06feda6fe45773495632e3255 | "2022-11-16T14:49:46Z" | python | "2023-02-20T01:06:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,714 | ["airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "airflow/www/utils.py", "airflow/www/views.py"] | Re-use recent DagRun JSON-configurations | ### Description
Allow users to re-use recent DagRun configurations upon running a DAG.
This can be achieved by adding a dropdown that contains some information about recent configurations. When user selects an item, the relevant JSON configuration can be pasted to the "Configuration JSON" textbox.
<img width="692" alt="Screen Shot 2022-11-16 at 16 22 30" src="https://user-images.githubusercontent.com/39705397/202209536-c709ec75-c768-48ab-97d4-82b02af60569.png">
<img width="627" alt="Screen Shot 2022-11-16 at 16 22 38" src="https://user-images.githubusercontent.com/39705397/202209553-08828521-dba2-4e83-8e2a-6dec850086de.png">
<img width="612" alt="Screen Shot 2022-11-16 at 16 38 40" src="https://user-images.githubusercontent.com/39705397/202209755-0946521a-e1a5-44cb-ae74-d43ca3735f31.png">
### Use case/motivation
Commonly, DAGs are triggered using repetitive configurations. Sometimes the same configuration is used for triggering a DAG, and sometimes, the configuration differs by just a few parameters.
This interaction forces users to store the templates they use somewhere on their machines, or to search for the configuration they need in `dagrun/list/`, which takes extra time.
It would be handy to offer users an option to select one of their recent configurations when triggering a DAG.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27714 | https://github.com/apache/airflow/pull/27805 | 7f0332de2d1e57cde2e031f4bb7b4e6844c4b7c1 | e2455d870056391eed13e32e2d0ed571cc7089b4 | "2022-11-16T14:39:23Z" | python | "2022-12-01T22:03:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,698 | ["airflow/kubernetes/pod_template_file_examples/git_sync_template.yaml", "chart/values.schema.json", "chart/values.yaml", "newsfragments/27698.significant.rst"] | Update git-sync with newer version | ### Official Helm Chart version
1.7.0 (latest released)
### What happened
The current git-sync image that is used is coming up on one year old. It is also using the deprecated `--wait` arg.
### What you think should happen instead
In order to stay current, we should update git-sync from 3.4.0 to 3.6.1.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27698 | https://github.com/apache/airflow/pull/27848 | af9143eacdff62738f6064ae7556dd8f4ca8d96d | 98221da0d96b102b009d422870faf7c5d3d931f4 | "2022-11-15T23:01:42Z" | python | "2023-01-21T18:00:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,695 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Improve filtering for invalid schemas in Hive hook | ### Description
#27647 introduced filtering for invalid schemas in the Hive hook based on the characters `;` and `!`. I'm wondering whether more generic filtering could be introduced, e.g. one that adheres to the regex `[^a-z0-9_]`, since Hive schemas (and table names) can only contain alphanumeric characters and the character `_`.
Note: since the Hive metastore [stores schemas and tables in lowercase](https://stackoverflow.com/questions/57181316/how-to-keep-column-names-in-camel-case-in-hive/57183048#57183048), checking against `[^a-z0-9_]` is probably better than `[^a-zA-Z0-9_]`.
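A minimal sketch of the proposed validation, using the regex from this description (the function name is illustrative):

```python
import re

def _validate_hive_identifier(name: str) -> str:
    # Hive schema/table names: lowercase alphanumerics and underscores only
    if re.search(r"[^a-z0-9_]", name):
        raise ValueError(f"Invalid Hive identifier: {name!r}")
    return name
```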
### Use case/motivation
Ensure that Hive schemas used in `apache-airflow-providers-apache-hive` hooks contain no invalid characters.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27695 | https://github.com/apache/airflow/pull/27808 | 017ed9ac662d50b6e2767f297f36cb01bf79d825 | 2d45f9d6c30aabebce3449eae9f152ba6d2306e2 | "2022-11-15T17:04:45Z" | python | "2022-11-27T13:31:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,645 | ["airflow/www/views.py"] | Calendar view does not load when using CronTriggerTimeTable | ### Apache Airflow version
2.4.2
### What happened
Create a DAG and set the schedule parameter using a CronTriggerTimeTable instance. Enable the DAG so that there is DAG run data. Try to access the Calendar View for the DAG. An ERR_EMPTY_RESPONSE error is displayed instead of the page.
The Calendar View is accessible for other DAGs that are using the schedule_interval set to a cron string instead.
### What you think should happen instead
The Calendar View should have been displayed.
### How to reproduce
Create a DAG and set the schedule parameter to a CronTriggerTimeTable instance. Enable the DAG and allow some DAG runs to occur. Try to access the Calendar View for the DAG.
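A minimal reproduction sketch, assuming the timetable class lives at `airflow.timetables.trigger.CronTriggerTimetable` in 2.4.x (the capitalization in this report differs slightly):

```python
from datetime import datetime

from airflow import DAG
from airflow.timetables.trigger import CronTriggerTimetable

with DAG(
    dag_id="calendar_crash_example",
    start_date=datetime(2022, 1, 1),
    schedule=CronTriggerTimetable("0 * * * *", timezone="UTC"),
):
    ...  # any task; then open the Calendar View after a few runs
```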
### Operating System
Red Hat Enterprise Linux 8.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
Airflow 2.4.2 installed via pip with Python3.9 to venv using constraints.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27645 | https://github.com/apache/airflow/pull/28411 | 4b3eb77e65748b1a6a31116b0dd55f8295fe8a20 | 467a5e3ab287013db2a5381ef4a642e912f8b45b | "2022-11-13T19:53:24Z" | python | "2022-12-28T05:52:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,622 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | AirflowException Crashing the Scheduler During the scheduling loop (_verify_integrity_if_dag_changed) | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
### Deployment
* Airflow Version: 2.4.1
* Infrastructure: AWS ECS
* Number of DAGs: 162
```
Version: [v2.4.1](https://pypi.python.org/pypi/apache-airflow/2.4.1)
Git Version: .release:2.4.1+7b979def75923ba28dd64e31e613043d29f34fce
```
### The issue
We have seen this issue when the Scheduler is trying to schedule **too many DAGs (140+)** around the same time.
```
[2022-11-11T00:15:00.311+0000] {{dagbag.py:196}} WARNING - Serialized DAG mongodb-assistedbreakdown-jobs-processes no longer exists
[2022-11-11T00:15:00.312+0000] {{scheduler_job.py:763}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 866, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 948, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1292, in _schedule_dag_run
self._verify_integrity_if_dag_changed(dag_run=dag_run, session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1321, in _verify_integrity_if_dag_changed
dag_run.verify_integrity(session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 874, in verify_integrity
dag = self.get_dag()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 484, in get_dag
raise AirflowException(f"The DAG (.dag) for {self} needs to be set")
airflow.exceptions.AirflowException: The DAG (.dag) for <DagRun mongodb-assistedbreakdown-jobs-processes @ 2022-11-10 00:10:00+00:00: scheduled__2022-11-10T00:10:00+00:00, state:running, queued_at: 2022-11-11 00:10:09.363852+00:00. externally triggered: False> needs to be set
```
Main Cause
```
raise AirflowException(f"The DAG (.dag) for {self} needs to be set")
```
[We believe this is happening here, airflow github](https://github.com/apache/airflow/blob/7b979def75923ba28dd64e31e613043d29f34fce/airflow/jobs/scheduler_job.py#L1318)
We saw a large number of connections hitting our Airflow database, but CPU was around 60%. Is there any workaround or configuration that can help the scheduler not crash when this happens?
### What you think should happen instead
The scheduler should stay up when this happens, or at least recover and come back to reschedule the DAGs that got stuck.
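A sketch of the kind of defensive handling that would keep the loop alive (illustration only; an actual fix may take a different shape):

```python
# Hypothetical guard inside SchedulerJob._schedule_dag_run
try:
    self._verify_integrity_if_dag_changed(dag_run=dag_run, session=session)
except AirflowException:
    self.log.exception("Serialized DAG for %s disappeared mid-loop; skipping this run", dag_run)
    return None  # move on instead of taking down the scheduler
```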
### How to reproduce
_No response_
### Operating System
Amazon Linux 2, Fargate deployment using the airflow Image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
AWS ECS Fargate
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27622 | https://github.com/apache/airflow/pull/27720 | a5d5bd0232b98c6b39e587dd144086f4b7d8664d | 15e842da56d9b3a1c2f47f9dec7682a4230dbc41 | "2022-11-11T15:58:20Z" | python | "2022-11-27T10:51:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,593 | ["airflow/callbacks/callback_requests.py", "airflow/exceptions.py", "airflow/models/taskinstance.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/__init__.py", "tests/callbacks/test_callback_requests.py", "tests/serialization/test_serialized_objects.py"] | Object of type V1Pod is not JSON serializable after detecting zombie jobs cause Scheduler CrashLoopBack | ### Apache Airflow version
2.4.2
### What happened
Some DAGs have tasks with `pod_override` in `executor_config` that become zombie tasks. The Airflow Scheduler then runs and crashes with this exception:
```
[2022-11-10T15:29:59.886+0000] {scheduler_job.py:1526} ERROR - Detected zombie job: {'full_filepath': '/opt/airflow/dags/path.py', 'processor_subdir': '/opt/airflow/dags', 'msg': "{'DAG Id': 'dag_id', 'Task Id': 'taskid', 'Run Id': 'manual__2022-11-10T10:21:25.330307+00:00', 'Hostname': 'hostname'}", 'simple_task_instance': <airflow.models.taskinstance.SimpleTaskInstance object at 0x7fde9c91dcd0>, 'is_failure_callback': True}
[2022-11-10T15:29:59.887+0000] {scheduler_job.py:763} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 878, in _run_scheduler_loop
next_event = timers.run(blocking=False)
File "/usr/local/lib/python3.7/sched.py", line 151, in run
action(*argument, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/event_scheduler.py", line 37, in repeat
action(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1527, in _find_zombies
self.executor.send_callback(request)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/executors/base_executor.py", line 400, in send_callback
self.callback_sink.send(request)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 480, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 477, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/db_callback_request.py", line 46, in __init__
self.callback_data = callback.to_json()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/callbacks/callback_requests.py", line 89, in to_json
return json.dumps(dict_obj)
File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type V1Pod is not JSON serializable
```
### What you think should happen instead
DbCallbackRequest should serialize to JSON successfully.
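One hedged sketch of a fix direction (not necessarily the actual implementation): route the payload through Airflow's `BaseSerialization`, which knows how to encode `k8s.V1Pod` objects used in `pod_override`, instead of calling plain `json.dumps` on the raw dict:
```python
# Sketch only; assumes BaseSerialization can round-trip the executor_config payload.
import json

from airflow.serialization.serialized_objects import BaseSerialization


def to_json(self) -> str:
    dict_obj = self.__dict__.copy()
    # BaseSerialization encodes non-JSON-native types such as k8s.V1Pod,
    # which is what makes the plain json.dumps(dict_obj) call raise TypeError.
    return json.dumps(BaseSerialization.serialize(dict_obj))
```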
### How to reproduce
Start Airflow with the KubernetesExecutor.
Create a zombie task whose `executor_config` contains a `pod_override`.
### Operating System
docker.io/apache/airflow:2.4.2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27593 | https://github.com/apache/airflow/pull/27609 | dc03b9081f47c11d6c3beb1a2c30bb75385c125c | 92389cf090f336073337517f2460c2914a9f0d4b | "2022-11-10T16:06:17Z" | python | "2022-11-16T15:43:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,592 | ["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | AWS GlueJobOperator is not updating job config if job exists | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
### Apache Airflow version
2.2.5
### Operating System
Linux Ubuntu
### Deployment
Virtualenv installation
### Deployment details
Airflow deployed on ec2 instance
### What happened
`GlueJobOperator` from the Amazon provider does not update the job configuration (its arguments or number of workers, for example) when the job already exists but the configuration has changed:
```python
def get_or_create_glue_job(self) -> str:
    """
    Creates(or just returns) and returns the Job name

    :return: Name of the Job
    """
    glue_client = self.get_conn()
    try:
        get_job_response = glue_client.get_job(JobName=self.job_name)
        self.log.info("Job Already exist. Returning Name of the job")
        return get_job_response['Job']['Name']
    except glue_client.exceptions.EntityNotFoundException:
        self.log.info("Job doesn't exist. Now creating and running AWS Glue Job")
        ...
```
Is there a particular reason not to do it? Or was it just not done when the operator was implemented?
### What you think should happen instead
_No response_
### How to reproduce
Create a `GlueJobOperator` with a simple configuration:
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

submit_glue_job = GlueJobOperator(
    task_id='submit_glue_job',
    job_name='test_glue_job',
    job_desc='test glue job',
    script_location='s3://bucket/path/to/the/script/file',
    script_args={},
    s3_bucket='bucket',
    concurrent_run_limit=1,
    retry_limit=0,
    num_of_dpus=5,
    wait_for_completion=False,
)
```
Then update part of the initial configuration, e.g. `num_of_dpus=10`, and observe that the operator does not update the Glue job configuration on AWS when it is run again.
### Anything else
There is `GlueCrawlerOperator`, which is similar to `GlueJobOperator` and already does this:
```python
def execute(self, context: Context):
    """
    Executes AWS Glue Crawler from Airflow

    :return: the name of the current glue crawler.
    """
    crawler_name = self.config['Name']
    if self.hook.has_crawler(crawler_name):
        self.hook.update_crawler(**self.config)
    else:
        self.hook.create_crawler(**self.config)
    ...
```
This behavior could be replicated in `GlueJobOperator` if we agree to do it.
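A hedged sketch of what that could look like in the Glue hook. The layout follows the existing `get_or_create_glue_job` above; `create_glue_job_config` is a hypothetical helper building the job kwargs, and `update_job` maps to the boto3 `glue.update_job` API:
```python
def create_or_update_glue_job(self) -> str:
    """Sketch: also update the job definition when it already exists (assumption)."""
    glue_client = self.get_conn()
    config = self.create_glue_job_config()  # hypothetical helper collecting the job kwargs
    try:
        glue_client.get_job(JobName=self.job_name)
        self.log.info("Job already exists, updating job: %s", self.job_name)
        # boto3 expects the config under the JobUpdate key, without the job name inside it
        glue_client.update_job(JobName=self.job_name, JobUpdate=config)
    except glue_client.exceptions.EntityNotFoundException:
        self.log.info("Job doesn't exist. Now creating AWS Glue Job")
        glue_client.create_job(Name=self.job_name, **config)
    return self.job_name
```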
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27592 | https://github.com/apache/airflow/pull/27893 | 4fdfef909e3b9a22461c95e4ee123a84c47186fd | b609ab9001102b67a047b3078dc0b67fbafcc1e1 | "2022-11-10T16:00:05Z" | python | "2022-12-06T14:29:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,569 | ["airflow/jobs/scheduler_job.py"] | Remove unused attribute (self.using_mysql) from scheduler_job.py | ### Apache Airflow version
2.4.2
### What happened
Hi all,
I would like to update the scheduler_job.py script by removing the unused attribute -> `self.using_mysql`.
I understand that the `self.using_sqlite` attribute is used to define the `async_mode` variable in the scheduler_job.py code, but I noticed that `self.using_mysql` is not used.
You can check the code from [scheduler_job.py](https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py),
```python
def __init__(...):
    # Check what SQL backend we use
    sql_conn: str = conf.get_mandatory_value('database', 'sql_alchemy_conn').lower()
    self.using_sqlite = sql_conn.startswith('sqlite')  # <- only self.using_sqlite is used for async_mode
    self.using_mysql = sql_conn.startswith('mysql')
    ...

def _execute(self) -> None:
    ...
    async_mode = not self.using_sqlite
```
I'm not sure whether it was used before, but it is no longer used by the current scheduler.
If it's not a major change, would you mind if I remove it from the script?
Thanks in advance!
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
-
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27569 | https://github.com/apache/airflow/pull/27571 | e84c032cec911fd738ebb0b93a38994a10e3ca27 | 2ac45b011d04ac141a15abfc24cc054687cadbc2 | "2022-11-09T10:38:42Z" | python | "2022-11-09T17:58:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,556 | ["airflow/providers/amazon/aws/hooks/glue_crawler.py", "airflow/providers/amazon/aws/operators/glue_crawler.py", "tests/providers/amazon/aws/hooks/test_glue_crawler.py", "tests/providers/amazon/aws/operators/test_glue_crawler.py"] | Using GlueCrawlerOperator fails when using tags | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using tags on resources in AWS. Setting tags with `GlueCrawlerOperator` works the first time, when Airflow creates the crawler. However, on subsequent runs it fails because `boto3.get_crawler()` does not return the tags, hence the error below.
```
[2022-11-08, 14:48:49 ] {taskinstance.py:1774} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/glue_crawler.py", line 80, in execute
self.hook.update_crawler(**self.config)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue_crawler.py", line 86, in update_crawler
key: value for key, value in crawler_kwargs.items() if current_crawler[key] != crawler_kwargs[key]
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue_crawler.py", line 86, in <dictcomp>
key: value for key, value in crawler_kwargs.items() if current_crawler[key] != crawler_kwargs[key]
KeyError: 'Tags'
```
### What you think should happen instead
Ignore tags when checking if the crawler should be updated.
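A minimal sketch of what that could look like in `GlueCrawlerHook.update_crawler`, assuming the comparison switches to `.get()` so keys absent from the `get_crawler` response (such as `Tags`) no longer raise `KeyError`:
```python
update_config = {
    key: value
    for key, value in crawler_kwargs.items()
    # get_crawler() does not return Tags, so compare with .get() instead of indexing
    if current_crawler.get(key) != value
}
```
Note that boto3's `update_crawler` call itself does not accept `Tags`, so tags would presumably need separate handling (e.g. via `glue.tag_resource`).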
### How to reproduce
Use `GlueCrawlerOperator` with Tags like below and trigger the task multiple times. It will fail the second time around.
```
GlueCrawlerOperator(
dag=dag,
task_id="the_task_id",
config={
"Name": "name_of_the_crwaler",
"Role": "some-role",
"DatabaseName": "some_database",
"Targets": {"S3Targets": [{"Path": "s3://..."}]},
"TablePrefix": "a_table_prefix",
"RecrawlPolicy": {
"RecrawlBehavior": "CRAWL_EVERYTHING"
},
"SchemaChangePolicy": {
"UpdateBehavior": "UPDATE_IN_DATABASE",
"DeleteBehavior": "DELETE_FROM_DATABASE"
},
"Tags": {
"TheTag": "value-of-my-tag"
}
}
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
```
apache-airflow-providers-cncf-kubernetes==3.0.0
apache-airflow-providers-google==6.7.0
apache-airflow-providers-amazon==3.2.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-http==2.1.2
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-ssh==2.4.3
apache-airflow-providers-jdbc==2.1.3
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
Airflow v2.2.5
Self-hosted Airflow in Kubernetes.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27556 | https://github.com/apache/airflow/pull/28005 | b3d7e17e72c05fd149a5514e3796d46a241ac4f7 | 3ee5c404b7a0284fc1f3474519b3833975aaa644 | "2022-11-08T14:16:14Z" | python | "2022-12-06T11:37:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,512 | ["airflow/www/static/js/dag/Main.tsx", "airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/datasets/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Resizable grid view components | ### Description
~1. Ability to change the split ratio of the grid section and the task details section.~ - already done in #27273

2. Ability for the log window to be resized.

3. Would love if the choices stuck between reloads as well.
### Use case/motivation
I love the new grid view and use it day to day to check logs quickly. It would be easier to do so without having to scroll within the text box if you could resize the grid view to accommodate a larger view of the logs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27512 | https://github.com/apache/airflow/pull/27560 | 7ea8475128009b348a82d122747ca1df2823e006 | 65bfea2a20830baa10d2e1e8328c07a7a11bbb0c | "2022-11-04T21:09:12Z" | python | "2022-11-17T20:10:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,509 | ["airflow/models/dataset.py", "tests/models/test_taskinstance.py"] | Removing DAG dataset dependency when it is already ready results in SQLAlchemy cascading delete error | ### Apache Airflow version
2.4.2
### What happened
I have a DAG that is triggered by three datasets. When I remove one or more of these datasets, the web server fails to update the DAG, and `airflow dags reserialize` fails with an `AssertionError` within SQLAlchemy. Full stack trace below:
```
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
docker-airflow-scheduler-1 | return func(*args, session=session, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/dag_processing/processor.py", line 781, in process_file
docker-airflow-scheduler-1 | dagbag.sync_to_db(processor_subdir=self._dag_directory, session=session)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 644, in sync_to_db
docker-airflow-scheduler-1 | for attempt in run_with_db_retries(logger=self.log):
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __iter__
docker-airflow-scheduler-1 | do = self.iter(retry_state=retry_state)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 349, in iter
docker-airflow-scheduler-1 | return fut.result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
docker-airflow-scheduler-1 | return self.__get_result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
docker-airflow-scheduler-1 | raise self._exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 658, in sync_to_db
docker-airflow-scheduler-1 | DAG.bulk_write_to_db(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2781, in bulk_write_to_db
docker-airflow-scheduler-1 | session.flush()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
docker-airflow-scheduler-1 | self._flush(objects)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
docker-airflow-scheduler-1 | transaction.rollback(_capture_exception=True)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
docker-airflow-scheduler-1 | compat.raise_(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
docker-airflow-scheduler-1 | raise exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
docker-airflow-scheduler-1 | flush_context.execute()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
docker-airflow-scheduler-1 | rec.execute(self)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
docker-airflow-scheduler-1 | self.dependency_processor.process_deletes(uow, states)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
docker-airflow-scheduler-1 | self._synchronize(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
docker-airflow-scheduler-1 | sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
docker-airflow-scheduler-1 | raise AssertionError(
docker-airflow-scheduler-1 | AssertionError: Dependency rule tried to blank-out primary key column 'dataset_dag_run_queue.dataset_id' on instance '<DatasetDagRunQueue at 0xffff5d213d00>'
```
### What you think should happen instead
The DAG does not properly load in the UI, and no error is displayed. Instead, the old datasets that have been removed should be removed as dependencies and the DAG should be updated with the new dataset dependencies.
### How to reproduce
Initial DAG:
```python
def foo():
    pass


@dag(
    dag_id="test",
    start_date=pendulum.datetime(2022, 1, 1),
    catchup=False,
    schedule=[
        Dataset('test/1'),
        Dataset('test/2'),
        Dataset('test/3'),
    ],
)
def test_dag():
    @task
    def test_task():
        foo()

    test_task()


test_dag()
```
At least one of the datasets should be 'ready'. Now `dataset_dag_run_queue` will look something like below:
```
airflow=# SELECT * FROM dataset_dag_run_queue ;
dataset_id | target_dag_id | created_at
------------+-------------------------------------+-------------------------------
16 | test | 2022-11-02 19:47:53.938748+00
(1 row)
```
Then, update the DAG with new datasets:
```python
def foo():
    pass


@dag(
    dag_id="test",
    start_date=pendulum.datetime(2022, 1, 1),
    catchup=False,
    schedule=[
        Dataset('test/new/1'),  # <--- updated
        Dataset('test/new/2'),
        Dataset('test/new/3'),
    ],
)
def test_dag():
    @task
    def test_task():
        foo()

    test_task()


test_dag()
```
Now you will observe the error in the web server logs or when running `airflow dags reserialize`.
I suspect this issue is related to handling of cascading deletes on the `dataset_id` foreign key for the run queue table. Dataset `id = 16` is one of the datasets that has been renamed.
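If that suspicion is right, one hedged sketch of a fix is to let the database cascade the delete on that foreign key in the `DatasetDagRunQueue` model (column and table names are taken from the error message; the real model has more columns):
```python
from sqlalchemy import Column, ForeignKey, Integer

from airflow.models.base import Base


class DatasetDagRunQueue(Base):
    __tablename__ = "dataset_dag_run_queue"

    dataset_id = Column(
        Integer,
        # ondelete="CASCADE" makes the database drop queue rows when the dataset
        # row is deleted, instead of the ORM trying to blank out a primary key.
        ForeignKey("dataset.id", ondelete="CASCADE"),
        primary_key=True,
    )
```
Alternatively, the ORM relationship could be configured with `passive_deletes=True` so SQLAlchemy stops emitting the blank-out UPDATE.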
### Operating System
docker image - apache/airflow:2.4.2-python3.9
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Deployment
Docker-Compose
### Deployment details
Running using docker-compose locally.
### Anything else
To trigger this problem the dataset to be removed must be in the "ready" state so that there is an entry in `dataset_dag_run_queue`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27509 | https://github.com/apache/airflow/pull/27538 | 7297892558e94c8cc869b175e904ca96e0752afe | fc59b02cfac7fd691602edc92a7abac38ed51531 | "2022-11-04T16:21:02Z" | python | "2022-11-07T13:03:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,507 | ["airflow/providers/http/hooks/http.py"] | Making logging for HttpHook optional | ### Description
In tasks that perform multiple requests, the log file gets cluttered by the logging in `run` (line 129 of the HTTP hook).
I propose that we add a kwarg `log_request` with a default value of `True` to control this behavior.
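A standalone hedged sketch of the proposed opt-out flag (the real `HttpHook.run` signature and log message may differ):
```python
import logging

log = logging.getLogger(__name__)


def run(endpoint: str, log_request: bool = True) -> None:
    """Sketch of the opt-out logging flag proposed for HttpHook.run."""
    if log_request:
        # today this line is unconditional in the hook; the kwarg makes it optional
        log.info("Sending request to url: %s", endpoint)
    # ... perform the actual HTTP request here ...
```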
### Use case/motivation
reduce unnecessary entries in log files
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27507 | https://github.com/apache/airflow/pull/28911 | 185faab2112c4d3f736f8d40350401d8c1cac35b | a9d5471c66c788d8469ca65556e5820f1e96afc1 | "2022-11-04T16:04:07Z" | python | "2023-01-13T21:09:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,483 | ["airflow/www/views.py"] | DAG loading very slow in Graph view when using Dynamic Tasks | ### Apache Airflow version
2.4.2
### What happened
The web UI is very slow when loading the Graph view on DAGs that have a large number of expansions in the mapped tasks.
The problem is very similar to the one described in #23786 (resolved), but for the Graph view instead of the grid view.
It takes around 2-3 minutes to load DAGs that have ~1k expansions; with the default Airflow settings the web server worker will time out. One can configure [web_server_worker_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#web-server-worker-timeout) to increase the timeout wait time.
### What you think should happen instead
The Web UI takes a reasonable amount of time to load the Graph view after the dag run is finished.
### How to reproduce
As in #23786, create a mapped task that spans a large number of expansions; when you run it, the Graph view will take a very long time to load and eventually time out.
You can use this code to generate multiple dags with `2^x` expansions. After running the DAGs you should notice how slow it is when attempting to open the Graph view of the DAGs with the largest number of expansions.
```python
from datetime import datetime

from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
}

initial_scale = 7
max_scale = 12
scaling_factor = 2

for scale in range(initial_scale, max_scale + 1):
    dag_id = f"dynamic_task_mapping_{scale}"
    with DAG(
        dag_id=dag_id,
        default_args=default_args,
        catchup=False,
        schedule_interval=None,
        start_date=datetime(1970, 1, 1),
        render_template_as_native_obj=True,
    ) as dag:
        start = EmptyOperator(task_id="start")
        mapped = PythonOperator.partial(
            task_id="mapped",
            python_callable=lambda m: print(m),
        ).expand(
            op_args=[[x] for x in list(range(2**scale))]
        )
        end = EmptyOperator(task_id="end")

        start >> mapped >> end

    globals()[dag_id] = dag
```
### Operating System
MacOS Version 12.6 (Apple M1)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-sqlite==3.2.1
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27483 | https://github.com/apache/airflow/pull/29791 | 0db38ad1a2cf403eb546f027f2e5673610626f47 | 60d98a1bc2d54787fcaad5edac36ecfa484fb42b | "2022-11-03T08:46:08Z" | python | "2023-02-28T05:15:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,479 | ["airflow/www/fab_security/views.py"] | webserver add role to an existing user -> KeyError: 'userinfoedit' | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
airflow 2.4.1
An existing user has only the Viewer role. Using the UI, I add the Admin role, click the save button, and then this error appears:
```log
[03/Nov/2022:01:28:08 +0000] "POST /XXXXXXXX/users/edit/2 HTTP/1.1" 302 307 "https://XXXXXXXXXXXXX.net/XXXXXXXX/users/edit/2" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0"
[2022-11-03T01:28:09.014+0000] {app.py:1742} ERROR - Exception on /users/show/1 [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps
return f(self, *args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/fab_security/views.py", line 222, in show
widgets['show'].template_args['actions'].pop('userinfoedit')
KeyError: 'userinfoedit'
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
1.7.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27479 | https://github.com/apache/airflow/pull/27537 | 9409293514cef574179a5320ed3ed50881064423 | 6434b5770877d75fba3c0c49fd808d6413367ab4 | "2022-11-03T01:33:41Z" | python | "2022-11-08T13:45:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,478 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Scheduler crash when clear a previous run of a normal task that is now a mapped task | ### Apache Airflow version
2.4.2
### What happened
I cleared a task A that was a normal task but is now a mapped task.
```log
[2022-11-02 23:33:20 +0000] [17] [INFO] Worker exiting (pid: 17)
2022-11-02T23:33:20.390911528Z Traceback (most recent call last):
2022-11-02T23:33:20.390935788Z File "/usr/local/bin/airflow", line 8, in <module>
2022-11-02T23:33:20.390939798Z sys.exit(main())
2022-11-02T23:33:20.390942302Z File "/usr/local/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
2022-11-02T23:33:20.390944924Z args.func(args)
2022-11-02T23:33:20.390947345Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
2022-11-02T23:33:20.390949893Z return func(*args, **kwargs)
2022-11-02T23:33:20.390952237Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/cli.py", line 103, in wrapper
2022-11-02T23:33:20.390954862Z return f(*args, **kwargs)
2022-11-02T23:33:20.390957163Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 85, in scheduler
2022-11-02T23:33:20.390959672Z _run_scheduler_job(args=args)
2022-11-02T23:33:20.390961979Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 50, in _run_scheduler_job
2022-11-02T23:33:20.390964496Z job.run()
2022-11-02T23:33:20.390966931Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/base_job.py", line 247, in run
2022-11-02T23:33:20.390969441Z self._execute()
2022-11-02T23:33:20.390971778Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
2022-11-02T23:33:20.390974368Z self._run_scheduler_loop()
2022-11-02T23:33:20.390976612Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 866, in _run_scheduler_loop
2022-11-02T23:33:20.390979125Z num_queued_tis = self._do_scheduling(session)
2022-11-02T23:33:20.390981458Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 946, in _do_scheduling
2022-11-02T23:33:20.390984819Z callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
2022-11-02T23:33:20.390988440Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
2022-11-02T23:33:20.390991893Z for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
2022-11-02T23:33:20.391008515Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 384, in __iter__
2022-11-02T23:33:20.391012668Z do = self.iter(retry_state=retry_state)
2022-11-02T23:33:20.391016220Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 351, in iter
2022-11-02T23:33:20.391019633Z return fut.result()
2022-11-02T23:33:20.391022534Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
2022-11-02T23:33:20.391025820Z return self.__get_result()
2022-11-02T23:33:20.391029555Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
2022-11-02T23:33:20.391033787Z raise self._exception
2022-11-02T23:33:20.391037611Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
2022-11-02T23:33:20.391040339Z return func(*args, **kwargs)
2022-11-02T23:33:20.391042660Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1234, in _schedule_all_dag_runs
2022-11-02T23:33:20.391045166Z for dag_run in dag_runs:
2022-11-02T23:33:20.391047413Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2887, in __iter__
2022-11-02T23:33:20.391049815Z return self._iter().__iter__()
2022-11-02T23:33:20.391052252Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2894, in _iter
2022-11-02T23:33:20.391054786Z result = self.session.execute(
2022-11-02T23:33:20.391057119Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1688, in execute
2022-11-02T23:33:20.391059741Z conn = self._connection_for_bind(bind)
2022-11-02T23:33:20.391062247Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1529, in _connection_for_bind
2022-11-02T23:33:20.391065901Z return self._transaction._connection_for_bind(
2022-11-02T23:33:20.391069140Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
2022-11-02T23:33:20.391078064Z self._assert_active()
2022-11-02T23:33:20.391081939Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
2022-11-02T23:33:20.391085250Z raise sa_exc.PendingRollbackError(
2022-11-02T23:33:20.391087747Z sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_fail_ti_fkey" on table "task_fail"
2022-11-02T23:33:20.391091226Z DETAIL: Key (dag_id, task_id, run_id, map_index)=(kubernetes_dag, task-one, scheduled__2022-11-01T00:00:00+00:00, -1) is still referenced from table "task_fail".
2022-11-02T23:33:20.391093987Z
2022-11-02T23:33:20.391102116Z [SQL: UPDATE task_instance SET map_index=%(map_index)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
2022-11-02T23:33:20.391105554Z [parameters: {'map_index': 0, 'task_instance_dag_id': 'kubernetes_dag', 'task_instance_task_id': 'task-one', 'task_instance_run_id': 'scheduled__2022-11-01T00:00:00+00:00', 'task_instance_map_index': -1}]
2022-11-02T23:33:20.391108241Z (Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
2022-11-02T23:33:20.489698500Z [2022-11-02 23:33:20 +0000] [7] [INFO] Shutting down: Master
```
### What you think should happen instead
Airflow should treat the existing and previous runs as a single mapped task, because right now I can't see the logs of a task that is now a mapped task anymore.
### How to reproduce
1. Create a DAG with a normal task A.
2. Run the DAG; task A succeeds.
3. Edit the DAG to make task A a mapped task, without changing the task's name (a minimal sketch of this edit follows below).
4. Clear the task.
5. The scheduler crashes.
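A minimal sketch of step 3's edit (operator choice and task id are illustrative; only the plain-task-to-mapped-task change with an unchanged `task_id` matters):
```python
from airflow.operators.bash import BashOperator

# version 1: a plain task; run it (and let it fail once so task_fail has a row)
task_one = BashOperator(task_id="task-one", bash_command="exit 1")

# version 2: same task_id, but now mapped; clearing the old run then crashes the
# scheduler on the UPDATE that tries to change map_index from -1 to 0
task_one = BashOperator.partial(task_id="task-one").expand(
    bash_command=["echo a", "echo b"]
)
```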
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27478 | https://github.com/apache/airflow/pull/29645 | e02bfc870396387ef2052ab375cdd2a54e704ae2 | a770edfac493f3972c10a43e45bcd0e7cfaea65f | "2022-11-02T23:43:43Z" | python | "2023-02-20T19:45:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,462 | ["airflow/models/dag.py", "tests/sensors/test_external_task_sensor.py"] | Clearing the parent dag will not clear child dag's mapped tasks | ### Apache Airflow version
2.4.2
### What happened
Consider a scenario with two DAGs, where one depends on the other via an ExternalTaskMarker on the parent DAG pointing to the child DAG, and the child DAG has some number of mapped tasks that have been expanded (map_index is not -1).

If we clear the parent DAG, the child DAG's mapped tasks will NOT be cleared; they will not appear in the "Task instances to be cleared" list.
### What you think should happen instead
I believe the behaviour should be having the child dag's mapped tasks cleared when the parent dag is cleared.
### How to reproduce
1. Create a parent dag with an ExternalTaskMarker
2. Create a child dag which has some ExternalTaskSensor that the ExternalTaskMarker is pointing to
3. Add any number of mapped tasks downstream of that ExternalTaskSensor
4. Clear the parent dag's ExternalTaskMarker (or any task upstream of it); a minimal sketch of such a DAG pair follows below
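Putting those steps together (dag ids, dates, and task names are hypothetical):
```python
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor

with DAG("parent_dag", start_date=pendulum.datetime(2022, 1, 1), schedule="@daily") as parent_dag:
    notify_child = ExternalTaskMarker(
        task_id="notify_child",
        external_dag_id="child_dag",
        external_task_id="wait_for_parent",
    )

with DAG("child_dag", start_date=pendulum.datetime(2022, 1, 1), schedule="@daily") as child_dag:
    wait_for_parent = ExternalTaskSensor(
        task_id="wait_for_parent",
        external_dag_id="parent_dag",
        external_task_id="notify_child",
    )
    # mapped tasks downstream of the sensor, expanded at run time
    mapped = PythonOperator.partial(
        task_id="mapped",
        python_callable=print,
    ).expand(op_args=[[i] for i in range(3)])
    wait_for_parent >> mapped
```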
### Operating System
Mac OS Monterey 12.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27462 | https://github.com/apache/airflow/pull/27501 | bc0063af99629e6b3eb5c76c88ac5bfaf92afaaf | 5ce9c827f7bcdef9c526fd4416533fc481de4675 | "2022-11-02T05:55:29Z" | python | "2022-11-17T01:54:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,449 | ["airflow/jobs/local_task_job.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "tests/jobs/test_local_task_job.py", "tests/models/test_taskinstance.py"] | Dynamic tasks marked as `upstream_failed` when none of their upstream tasks are `failed` or `upstream_failed` | ### Apache Airflow version
2.4.2
### What happened
A mapped task is getting marked as `upstream_failed` when none of its upstream tasks are `failed` or `upstream_failed`.

In the above graph view, if `first_task` finishes before `second_task`, `first_task` immediately tries to expand `middle_task`. **Note - this is an important step to reproduce - The order the tasks finish matter.**
Note that the value of the Airflow configuration variable [`schedule_after_task_execution`](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#schedule-after-task-execution) must be `True` (the default) for this to occur.
The expansion occurs when the Task supervisor performs the "mini scheduler", in [this line in `dagrun.py`](https://github.com/apache/airflow/blob/63638cd2162219bd0a67caface4b1f1f8b88cc10/airflow/models/dagrun.py#L749).
Which then marks `middle_task` as `upstream_failed` in [this line in `mappedoperator.py`](https://github.com/apache/airflow/blob/63638cd2162219bd0a67caface4b1f1f8b88cc10/airflow/models/mappedoperator.py#L652):
```
# If the map length cannot be calculated (due to unavailable
# upstream sources), fail the unmapped task.
```
I believe this was introduced by the PR [Fail task if mapping upstream fails](https://github.com/apache/airflow/pull/25757).
### What you think should happen instead
The dynamic tasks should successfully execute. I don't think the mapped task should expand because its upstream task hasn't completed at the time it's expanded. If the upstream task were to complete earlier, it would expand successfully.
### How to reproduce
Execute this DAG, making sure Airflow configuration `schedule_after_task_execution` is set to default value `True`.
```
from datetime import datetime, timedelta
import time

from airflow import DAG, XComArg
from airflow.operators.python import PythonOperator


class PrintIdOperator(PythonOperator):
    def __init__(self, id, **kwargs) -> None:
        super().__init__(**kwargs)
        self.op_kwargs["id"] = id


DAG_ID = "test_upstream_failed_on_mapped_operator_expansion"

default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "retry_delay": timedelta(minutes=1),
    "retries": 0,
}


def nop(id):
    print(f"{id=}")


def get_ids(delay: int = 0):
    print(f"Delaying {delay} seconds...")
    time.sleep(delay)
    print("Done!")
    return [0, 1, 2]


with DAG(
    dag_id=DAG_ID,
    default_args=default_args,
    start_date=datetime(2022, 8, 3),
    catchup=False,
    schedule=None,
    max_active_runs=1,
) as dag:
    second_task = PythonOperator(
        task_id="second_task",
        python_callable=get_ids,
        op_kwargs={"delay": 10},
    )
    first_task = PythonOperator(
        task_id="first_task",
        python_callable=get_ids,
    )
    middle_task = PrintIdOperator.partial(
        task_id="middle_task",
        python_callable=nop,
    ).expand(id=XComArg(second_task))
    last_task = PythonOperator(
        task_id="last_task",
        python_callable=nop,
        op_kwargs={"id": 1},
    )

    [first_task, middle_task] >> last_task
### Operating System
debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27449 | https://github.com/apache/airflow/pull/27506 | 47a2b9ee7f1ff2cc1cc1aa1c3d1b523c88ba29fb | ed92e5d521f958642615b038ec13068b527db1c4 | "2022-11-01T18:00:20Z" | python | "2022-11-09T14:05:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,429 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/setup_commands_config.py", "dev/breeze/src/airflow_breeze/utils/reinstall.py"] | Incorrect command displayed in warning when breeze dependencies are changed | ### Apache Airflow version
main (development)
### What happened
While switching between some old branches I noticed the warning below. The `--force` option seems to have been removed in 3dfa44566c948cb2db016e89f84d6fe37bd6d824 and is now the default. The message could be updated in the places below:
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/reinstall.py#L50
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/reinstall.py#L59
Probably also here:
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/path_utils.py#L232
```
$ breeze shell
Breeze dependencies changed since the installation!
This might cause various problems!!
If you experience problems - reinstall Breeze with:
breeze setup self-upgrade --force
This should usually take couple of seconds.
```
```
breeze setup self-upgrade --force
Usage: breeze setup self-upgrade [OPTIONS]
Try running the '--help' flag for more information.
╭─ Error ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ No such option: --force │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
To find out more, visit https://github.com/apache/airflow/blob/main/BREEZE.rst
```
### What you think should happen instead
The warning message should be updated with the correct option.
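For reference, the corrected hint would just drop the removed flag. A rough sketch of the message string (the surrounding print calls in `reinstall.py` are omitted, and the exact wording is up to the maintainers):
```python
# '--force' was removed in 3dfa445 and a forced reinstall is now the default
SELF_UPGRADE_HINT = (
    "If you experience problems - reinstall Breeze with:\n\n"
    "    breeze setup self-upgrade\n"
)
```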
### How to reproduce
_No response_
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27429 | https://github.com/apache/airflow/pull/27438 | 98bd9b3d6b58bac3d019d3c7f8c6983a9dee463e | 10d2a71073a23b0b8c9fae0de5a79fb4f3ac1935 | "2022-11-01T04:56:29Z" | python | "2022-11-01T11:56:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,409 | ["airflow/models/skipmixin.py", "airflow/operators/python.py", "tests/models/test_skipmixin.py", "tests/operators/test_python.py"] | Improve error message when branched task_id does not exist | ### Body
This issue is to handle the TODO left in the code:
https://github.com/apache/airflow/blob/64174ce25ae800a38e712aa0bd62a5893ea2ff99/airflow/operators/python.py#L211
Related: https://github.com/apache/airflow/pull/18471/files#r716030104
Currently only BranchPythonOperator shows an informative error message when the requested branch task_id does not exist; other branch operators show:
```
File "/usr/local/lib/python3.9/site-packages/airflow/models/skipmixin.py", line 147, in skip_all_except
branch_task_ids = set(branch_task_ids)
TypeError: 'NoneType' object is not iterable
```
This task is to generalize the solution of https://github.com/apache/airflow/pull/18471/ to all branch operators.
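A hedged sketch of how that could look inside `SkipMixin.skip_all_except`, so every branch operator benefits (the exception wording is an assumption, and `ti.task.dag` stands for the task's DAG available in the real method):
```python
from airflow.exceptions import AirflowException


def skip_all_except(self, ti, branch_task_ids):
    """Sketch of a generalized guard in SkipMixin."""
    if branch_task_ids is None:
        raise AirflowException("'branch_task_ids' must not be None")
    if isinstance(branch_task_ids, str):
        branch_task_ids = [branch_task_ids]
    branch_task_ids = set(branch_task_ids)

    dag_task_ids = {task.task_id for task in ti.task.dag.tasks}
    invalid_task_ids = branch_task_ids - dag_task_ids
    if invalid_task_ids:
        raise AirflowException(
            f"Branch callable returned task id(s) {sorted(invalid_task_ids)} "
            f"which do not exist in the DAG"
        )
    # ... existing skip logic continues here ...
```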
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27409 | https://github.com/apache/airflow/pull/27434 | fc59b02cfac7fd691602edc92a7abac38ed51531 | baf2f3fc329d5b4029d9e17defb84cefcd9c490a | "2022-10-31T14:01:34Z" | python | "2022-11-07T13:38:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,402 | ["chart/values.yaml", "helm_tests/airflow_aux/test_configmap.py"] | #26415 Broke flower dashboard | ### Discussed in https://github.com/apache/airflow/discussions/27401
<div type='discussions-op-text'>
<sup>Originally posted by **Flogue** October 25, 2022</sup>
### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
1.24.6
### Helm Chart configuration
```
flower:
enabled: true
```
### Docker Image customisations
None
### What happened
Flower dashboard is unreachable.
"Failed to load resource: net::ERR_CONNECTION_RESET" in browser console
### What you think should happen instead
Dashboard should load.
### How to reproduce
Just enable flower:
```
helm install airflow-rl apache-airflow/airflow --namespace airflow-np --set flower.enabled=true
kubectl port-forward svc/airflow-rl-flower 5555:5555 --namespace airflow-np
```
### Anything else
A quick fix for this is:
```
config:
  celery:
    flower_url_prefix: ''
```
Basically, the new default value '/' makes it so the scripts and links read:
`<script src="//static/js/jquery....`
where it should be:
`<script src="/static/js/jquery....`
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/27402 | https://github.com/apache/airflow/pull/33134 | ca5acda1617a5cdb1d04f125568ffbd264209ec7 | 6e4623ab531a1b6755f6847d2587d014a387560d | "2022-10-31T03:49:04Z" | python | "2023-08-07T20:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,396 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | CloudWatch task handler doesn't fall back to local logs when Amazon CloudWatch logs aren't found | This is really a CloudWatch handler issue - not "airflow" core.
### Discussed in https://github.com/apache/airflow/discussions/27395
<div type='discussions-op-text'>
<sup>Originally posted by **matthewblock** October 24, 2022</sup>
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We recently activated AWS Cloudwatch logs. We were hoping the logs server would gracefully handle task logs that previously existed but were not written to Cloudwatch, but when fetching the remote logs failed (expected), the logs server didn't fall back to local logs.
```
*** Reading remote log from Cloudwatch log_group: <our log group> log_stream: <our log stream>
```
### What you think should happen instead
According to documentation [Logging for Tasks](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/logging-tasks.html#writing-logs-locally), when fetching remote logs fails, the logs server should fall back to looking for local logs:
> In the Airflow UI, remote logs take precedence over local logs when remote logging is enabled. If remote logs can not be found or accessed, local logs will be displayed.
This should be indicated by the message `*** Falling back to local log`.
If this is not the intended behavior, the documentation should be modified to reflect the intended behavior.
### How to reproduce
1. Run a test DAG without [AWS CloudWatch logging configured](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/logging/cloud-watch-task-handlers.html)
2. Configure AWS CloudWatch remote logging and re-run a test DAG
### Operating System
Debian buster-slim
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/27396 | https://github.com/apache/airflow/pull/27564 | 3aed495f50e8bc0e22ff90efee7671a73168b19e | c490a328f4d0073052d8b5205c7c4cab96c3d559 | "2022-10-31T02:25:54Z" | python | "2022-11-11T00:40:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,358 | ["docs/apache-airflow/executor/kubernetes.rst"] | Airflow 2.2.2 pod_override does not override `args` of V1Container | ### Apache Airflow version
2.2.2
### What happened
I have a bash sensor defined as follows:
```python
foo_sensor_task = BashSensor(
    task_id="foo_task",
    poke_interval=3600,
    bash_command="python -m foo.run",
    retries=0,
    executor_config={
        "pod_template_file": "path-to-file-yaml",
        "pod_override": k8s.V1Pod(
            spec=k8s.V1PodSpec(
                containers=[
                    k8s.V1Container(name="base", image="foo-image", args=["abc"])
                ]
            )
        ),
    },
)
```
The entrypoint command in the `foo-image` is `python -m foo.run`. However, when I deploy the image onto OpenShift (Kubernetes), the command somehow turns out to be the following:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
which is wrong.
### What you think should happen instead
I assume the expected command should override `args` (see V1Container `args` value above) and therefore should be:
```bash
python -m foo.run abc
```
and **not**:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
### How to reproduce
To reproduce the above issue, create a simple DAG and a sensor as defined above. Use a sample image and try to override the args. I cannot provide the same code due to NDA.
### Operating System
RHLS 7.9
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-cncf-kubernetes==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Other
### Deployment details
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27358 | https://github.com/apache/airflow/pull/27450 | aa36f754e2307ccd8a03987b81ea1e1a04b03c14 | 8f5e100f30764e7b1818a336feaa8bb390cbb327 | "2022-10-29T01:08:10Z" | python | "2022-11-02T06:08:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,345 | ["airflow/utils/log/file_task_handler.py", "airflow/utils/log/logging_mixin.py", "tests/utils/test_logging_mixin.py"] | Duplicate log lines in CloudWatch after upgrade to 2.4.2 | ### Apache Airflow version
2.4.2
### What happened
We upgraded Airflow from 2.4.1 to 2.4.2 and immediately noticed that every task log line is duplicated _into_ CloudWatch. Comparing logs from tasks run before and after the upgrade indicates that the issue is not in how the logs are displayed in Airflow, but rather that two log lines are now produced instead of one.
When observing both the CloudWatch log streams and the Airflow UI, we can see duplicate log lines for ~_all_~ most log entries post upgrade, whilst seeing single log lines in tasks before upgrade.
This happens _both_ for tasks run in a remote `EcsRunTaskOperator` and in regular `PythonOperator`s.
### What you think should happen instead
A single non-duplicate log line should be produced into CloudWatch.
### How to reproduce
From my understanding, any setup on 2.4.2 that uses CloudWatch remote logging will produce duplicate log lines (but I have not been able to confirm other setups).
### Operating System
Docker: `apache/airflow:2.4.2-python3.9` - Running on AWS ECS Fargate
### Versions of Apache Airflow Providers
```
apache-airflow[celery,postgres,apache.hive,jdbc,mysql,ssh,amazon,google,google_auth]==2.4.2
apache-airflow-providers-amazon==6.0.0
```
### Deployment
Other Docker-based deployment
### Deployment details
We are running a docker inside Fargate ECS on AWS.
The following environment variables + config in CloudFormation control remote logging:
```
- Name: AIRFLOW__LOGGING__REMOTE_LOGGING
  Value: True
- Name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
  Value: !Sub "cloudwatch://${TasksLogGroup.Arn}"
```
### Anything else
We did not change any other configuration during the upgrade, simply bumped the requirements for provider list + docker image from 2.4.1 to 2.4.2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27345 | https://github.com/apache/airflow/pull/27591 | 85ec17fbe1c07b705273a43dae8fbdece1938e65 | 933fefca27a5cd514c9083040344a866c7f517db | "2022-10-28T10:32:13Z" | python | "2022-11-10T17:58:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,328 | ["airflow/providers/sftp/operators/sftp.py", "tests/providers/sftp/operators/test_sftp.py"] | SFTPOperator throws object of type 'PlainXComArg' has no len() when using with Taskflow API | ### Apache Airflow Provider(s)
sftp
### Versions of Apache Airflow Providers
apache-airflow-providers-sftp==4.1.0
### Apache Airflow version
2.4.2 Python 3.10
### Operating System
Debian 11 (Official docker image)
### Deployment
Docker-Compose
### Deployment details
Base image is apache/airflow:2.4.2-python3.10
### What happened
When combining the TaskFlow API and SFTPOperator, an exception is thrown that didn't happen with apache-airflow-providers-sftp 4.0.0.
### What you think should happen instead
The DAG should work as expected
### How to reproduce
```python
import pendulum

from airflow import DAG
from airflow.decorators import task
from airflow.providers.sftp.operators.sftp import SFTPOperator

with DAG(
    "example_sftp",
    schedule="@once",
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
) as dag:

    @task
    def get_file_path():
        return "test.csv"

    local_filepath = get_file_path()

    upload = SFTPOperator(
        task_id="upload_file_to_sftp",
        ssh_conn_id="sftp_connection",
        local_filepath=local_filepath,
        remote_filepath="test.csv",
    )
```
### Anything else
```logs
[2022-10-27T15:21:38.106+0000] {logging_mixin.py:120} INFO - [2022-10-27T15:21:38.102+0000] {dagbag.py:342} ERROR - Failed to import: /opt/airflow/dags/test.py
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/dagbag.py", line 338, in parse
loader.exec_module(new_module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/airflow/dags/test.py", line 21, in <module>
upload = SFTPOperator(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 408, in apply_defaults
result = func(self, **kwargs, default_args=default_args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/sftp/operators/sftp.py", line 116, in __init__
if len(self.local_filepath) != len(self.remote_filepath):
TypeError: object of type 'PlainXComArg' has no len()
```
It looks like the offending code was introduced in commit 5f073e38dd46217b64dbc16d7b1055d89e8c3459.
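A hedged sketch of a fix direction: move the length check out of `__init__` (where templated fields can still be unrendered `XComArg`s) into `execute`, after rendering:
```python
def execute(self, context):
    # normalize after templating, when XComArgs have been resolved to real values
    local_filepath = (
        [self.local_filepath] if isinstance(self.local_filepath, str) else self.local_filepath
    )
    remote_filepath = (
        [self.remote_filepath] if isinstance(self.remote_filepath, str) else self.remote_filepath
    )
    if len(local_filepath) != len(remote_filepath):
        raise ValueError("local_filepath and remote_filepath must be the same length")
    # ... proceed with the transfer as before ...
```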
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27328 | https://github.com/apache/airflow/pull/29068 | 10f0f8bc4be521fd8c6cdd057cc02b6ea2c2d5c1 | bac7b3027d57d2a31acb9a2d078c6af4dc777162 | "2022-10-27T15:45:48Z" | python | "2023-01-20T19:32:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,290 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Publish a container's port(s) to the host with DockerOperator | ### Description
The [`create_container` method](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L370) has a `ports` param listing ports to open inside the container, and a `host_config` argument to [declare port bindings](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L542).
We can learn from [Expose port using DockerOperator](https://stackoverflow.com/questions/65157416/expose-port-using-dockeroperator) how to add this feature to DockerOperator. I have already tested it and it works; I also created a custom Docker decorator based on this DockerOperator extension.
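For reference, the raw docker-py calls involved look roughly like this (a standalone sketch; the image and ports are arbitrary examples, not the operator's current API):
```python
import docker

# Standalone docker-py sketch of what the operator would need to pass through.
client = docker.APIClient(base_url="unix://var/run/docker.sock")
container = client.create_container(
    image="nginx:latest",
    ports=[80],  # ports to expose inside the container
    host_config=client.create_host_config(
        port_bindings={80: 8080},  # publish container port 80 on host port 8080
    ),
)
client.start(container=container.get("Id"))
```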
### Use case/motivation
I would like to publish the port(s) of the container created with DockerOperator to the host. These changes should also be applied to the Docker decorator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27290 | https://github.com/apache/airflow/pull/30730 | cb1ecb0647d459999041ee6018f8f282fc25b09b | d8c0e3009a649ce057595539b96a566b7faa5584 | "2022-10-26T07:56:51Z" | python | "2023-05-17T09:03:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,282 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KubernetesPodOperator: Option to show logs from all containers in a pod | ### Description
Currently, KubernetesPodOperator fetches logs using
```
self.pod_manager.fetch_container_logs(
    pod=self.pod,
    container_name=self.BASE_CONTAINER_NAME,
    follow=True,
)
```
and so only shows logs from the main container in a pod. It would be very useful to have the possibility to fetch logs for all the containers in a pod.
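A minimal sketch of the idea (illustrative only; a real implementation would need to interleave the streams rather than fetch them one by one):

```python
# Iterate over every container declared in the pod spec and fetch its logs.
for container in self.pod.spec.containers:
    self.pod_manager.fetch_container_logs(
        pod=self.pod,
        container_name=container.name,
        follow=False,  # following multiple containers sequentially would block
    )
```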
### Use case/motivation
Making the cause of failed KubernetesPodOperator tasks a lot more visible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27282 | https://github.com/apache/airflow/pull/31663 | e7587b3369af30848c3cf1c7eff9e801b1440793 | 9a0f41ba53185031bc2aa56ead2928ae4b20de99 | "2022-10-25T23:29:19Z" | python | "2023-07-06T09:49:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,237 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | BigQueryCheckOperator fail in deferrable mode even if col val 0 | ### Apache Airflow version
main (development)
### What happened
The BigQuery hook `get_records` always returns a list of strings irrespective of the BigQuery table column type. So even if my table column has the value 0, the task succeeds, since `bool("0")` returns `True`.
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/bigquery.py#L3062
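One possible fix, purely as a sketch (the function name and type mapping are my own, assuming the values come back as strings as described above): cast the string values back to Python types using the schema before the boolean check.

```python
def _bq_cast(string_field, bq_type):
    """Cast a BigQuery string value back to a Python type (illustrative)."""
    if string_field is None:
        return None
    if bq_type == "INTEGER":
        return int(string_field)
    if bq_type in ("FLOAT", "TIMESTAMP"):
        return float(string_field)
    if bq_type == "BOOLEAN":
        return string_field in ("true", "True")
    return string_field
```

With this, a column value of `0` becomes `int("0") == 0`, so `bool(0)` is `False` and the check fails as expected.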
### What you think should happen instead
The BigQuery hook `get_records` should return the values with their correct column types.
### How to reproduce
Create an Airflow Google Cloud connection `google_cloud_default` and try the DAG below.
Make sure to update the DATASET and TABLE names.
The first row of your table should contain a 0 value; the expected behaviour is that the DAG fails, but it passes.
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

default_args = {
    "execution_timeout": timedelta(minutes=30),
}

with DAG(
    dag_id="bq_check_op",
    start_date=datetime(2022, 8, 22),
    schedule_interval=None,
    catchup=False,
    default_args=default_args,
    tags=["example", "async", "bigquery"],
) as dag:
    check_count = BigQueryCheckOperator(
        task_id="check_count",
        sql="SELECT * FROM DATASET.TABLE",
        use_legacy_sql=False,
        deferrable=True,
    )
```
### Operating System
Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27237 | https://github.com/apache/airflow/pull/27236 | 95e5675714f12c177e30d83a14d28222b06d217b | 1447158e690f3d63981b3d8ec065665ec91ca54e | "2022-10-24T20:10:28Z" | python | "2022-10-31T04:21:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,232 | ["airflow/operators/python.py"] | ExternalPythonOperator: AttributeError: 'python_path' is configured as a template field but ExternalPythonOperator does not have this attribute. | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using the ExternalPythonOperator directly in v2.4.2, as opposed to via the `@task.external_python` decorator described in https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#externalpythonoperator, causes the following error:
```
AttributeError: 'python_path' is configured as a template field but ExternalPythonOperator does not have this attribute.
```
This seems to be due to https://github.com/apache/airflow/blob/main/airflow/operators/python.py#L624 having 'python_path' as an additional template field, instead of 'python', which is the correct additional keyword argument for the operator
### What you think should happen instead
We should change https://github.com/apache/airflow/blob/main/airflow/operators/python.py#L624 to read:
```
template_fields: Sequence[str] = tuple({'python'} | set(PythonOperator.template_fields))
```
instead of
```
template_fields: Sequence[str] = tuple({'python_path'} | set(PythonOperator.template_fields))
```
This has been verified by adding:
```
ExternalPythonOperator.template_fields = tuple({'python'} | set(PythonOperator.template_fields))
```
in the sample DAG code below, which causes the DAG to run successfully
### How to reproduce
```
import airflow
from airflow.models import DAG
from airflow.operators.python import ExternalPythonOperator

args = dict(
    start_date=airflow.utils.dates.days_ago(3),
    email=["[email protected]"],
    email_on_failure=False,
    email_on_retry=False,
    retries=0,
)

dag = DAG(
    dag_id='test_dag',
    default_args=args,
    schedule_interval='0 20 * * *',
    catchup=False,
)

def print_kwargs(*args, **kwargs):
    print('args', args)
    print('kwargs', kwargs)

with dag:
    def print_hello():
        print('hello')

    # Due to a typo in the airflow library :(
    # ExternalPythonOperator.template_fields = tuple({'python'} | set(PythonOperator.template_fields))
    t1 = ExternalPythonOperator(
        task_id='test_task',
        python='/opt/airflow/miniconda/envs/nexus/bin/python',
        python_callable=print_kwargs
    )
```
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27232 | https://github.com/apache/airflow/pull/27256 | 544c93f0a4d2673c8de64d97a7a8128387899474 | 27a92fecc9be30c9b1268beb60db44d2c7b3628f | "2022-10-24T16:18:43Z" | python | "2022-10-31T04:34:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,228 | ["airflow/serialization/serialized_objects.py", "tests/www/views/test_views_trigger_dag.py"] | Nested Parameters Break for DAG Run Configurations | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow Version Used: 2.3.3
This bug report is being created out of the following discussion - https://github.com/apache/airflow/discussions/25064
With the following DAG definition (with nested params):
```
DAG(
    dag_id="some_id",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    doc_md=__doc__,
    schedule_interval=None,
    params={
        "accounts": Param(
            [{'name': 'account_name_1', 'country': 'usa'}],
            schema={
                "type": "array",
                "minItems": 1,
                "items": {
                    "type": "object",
                    "default": {"name": "account_name_1", "country": "usa"},
                    "properties": {
                        "name": {"type": "string"},
                        "country": {"type": "string"},
                    },
                    "required": ["name", "country"]
                },
            }
        ),
    }
)
```
**Note:** It does not matter whether `Param` and a JSON schema are used or not; you can also try putting in a simple nested object.
Then the UI displays the following:
```
{
    "accounts": null
}
```
### What you think should happen instead
Following is what the UI should display instead:
```
{
    "accounts": [
        {
            "name": "account_name_1",
            "country": "usa"
        }
    ]
}
```
### How to reproduce
_No response_
### Operating System
Debian Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
Although I am personally using Composer, this is most likely an Airflow issue, given that non-Composer users are facing it too (the discussion's original author and people in the Slack community).
### Anything else
I have put some more explanation and a quick way to reproduce this [as a comment in the discussion](https://github.com/apache/airflow/discussions/25064#discussioncomment-3907974) linked.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27228 | https://github.com/apache/airflow/pull/27482 | 2d2f0daad66416d565e874e35b6a487a21e5f7b1 | 9409293514cef574179a5320ed3ed50881064423 | "2022-10-24T09:58:34Z" | python | "2022-11-08T13:43:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,225 | ["airflow/www/templates/analytics/google_analytics.html"] | Tracking User Activity Issue: Google Analytics tag version is not up-to-date | ### Apache Airflow version
2.4.1
### What happened
Airflow uses the previous Google Analytics tag version so Google Analytics does not collect User Activity Metric from Airflow
### What you think should happen instead
The Tracking User Activity feature should work properly with Google Analytics
### How to reproduce
- Configure to use Google Analytics with Airflow
- Google Analytics does not collect User Activity Metric from Airflow
Note: with the upgraded Google Analytics tag it works properly
https://support.google.com/analytics/answer/9304153#add-tag&zippy=%2Cadd-your-tag-using-google-tag-manager%2Cfind-your-g--id-for-any-platform-that-accepts-a-g--id%2Cadd-the-google-tag-directly-to-your-web-pages
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27225 | https://github.com/apache/airflow/pull/27226 | 55f8a63d012d4ca5ca726195bed4b38e9b1a05f9 | 5e6cec849a5fa90967df1447aba9521f1cfff3d0 | "2022-10-24T09:00:49Z" | python | "2022-10-27T13:25:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,200 | ["airflow/models/serialized_dag.py"] | Handle TODO: .first() is not None can be changed to .scalar() | ### Body
The TODO part of:
https://github.com/apache/airflow/blob/d67ac5932dabbf06ae733fc57b48491a8029b8c2/airflow/models/serialized_dag.py#L156-L158
can now be addressed since we are on sqlalchemy 1.4+ and https://github.com/sqlalchemy/sqlalchemy/issues/5481 is resolved
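Illustratively, assuming the check in question is an existence test on `SerializedDagModel` (the exact query shape in `serialized_dag.py` may differ):

```python
from sqlalchemy import literal

# Before: fetch a row object just to test for existence.
exists = session.query(literal(True)).filter(SerializedDagModel.dag_id == dag_id).first() is not None

# After (SQLAlchemy 1.4+): .scalar() returns the value, or None when no row matched.
exists = session.query(literal(True)).filter(SerializedDagModel.dag_id == dag_id).scalar() is not None
```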
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27200 | https://github.com/apache/airflow/pull/27323 | 6f20d4d3247e44ea04558226aeeed09bf8379173 | 37c0038a18ace092079d23988f76d90493ff294c | "2022-10-22T17:01:34Z" | python | "2022-10-31T02:31:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,182 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHOperator ignores cmd_timeout | ### Apache Airflow Provider(s)
ssh
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.4.1
### Operating System
linux
### Deployment
Other
### Deployment details
_No response_
### What happened
Hi,
SSHOperator documentation states that we should be using cmd_timeout instead of timeout
```
:param timeout: (deprecated) timeout (in seconds) for executing the command. The default is 10 seconds.
Use conn_timeout and cmd_timeout parameters instead.
```
But the code doesn't use cmd_timeout at all - and it's still passing `self.timeout` when running the ssh command:
```
return self.ssh_hook.exec_ssh_client_command(
    ssh_client, command, timeout=self.timeout, environment=self.environment, get_pty=self.get_pty
)
```
It seems to me that we should use `self.cmd_timeout` here instead. When creating the hook, it correctly uses `self.conn_timeout`.
I'll try to work on a PR for this.
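The change would presumably be a one-liner along these lines (a sketch based on the code quoted above):

```python
return self.ssh_hook.exec_ssh_client_command(
    ssh_client,
    command,
    timeout=self.cmd_timeout,  # use the documented cmd_timeout, not the deprecated timeout
    environment=self.environment,
    get_pty=self.get_pty,
)
```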
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27182 | https://github.com/apache/airflow/pull/27184 | cfd63df786e0c40723968cb8078f808ca9d39688 | dc760b45eaeccc3ff35a5acdfe70968ca0451331 | "2022-10-21T12:29:48Z" | python | "2022-11-07T02:07:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,166 | ["airflow/www/static/css/flash.css", "airflow/www/static/js/dag/grid/TaskName.test.tsx", "airflow/www/static/js/dag/grid/TaskName.tsx", "airflow/www/static/js/dag/grid/index.test.tsx"] | Carets in Grid view are the wrong way around | ### Apache Airflow version
main (development)
### What happened
When expanding tasks to see sub-tasks in the Grid UI, the carets to expand the task are pointing the wrong way.
### What you think should happen instead
Can you PLEASE use the accepted Material UI standard for expansion & contraction - https://mui.com/material-ui/react-list/#nested-list
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27166 | https://github.com/apache/airflow/pull/28624 | 69ab7d8252f830d8c1a013d34f8305a16da26bcf | 0ab881a4ab78ca7d30712c893a6f01b83eb60e9e | "2022-10-20T15:52:50Z" | python | "2023-01-02T21:01:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,165 | ["airflow/providers/google/cloud/hooks/workflows.py", "tests/providers/google/cloud/hooks/test_workflows.py"] | WorkflowsCreateExecutionOperator execution argument only receive bytes | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==7.0.0`
### Apache Airflow version
2.3.2
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
WorkflowsCreateExecutionOperator triggers Google Cloud Workflows, and its execution param receives the argument as `{"argument": {"key": "val", ...}}`.
But when I passed the argument as a dict using `render_template_as_native_obj=True`, a protobuf error occurred: `TypeError: {'projectId': 'project-id', 'location': 'us-east1'} has type dict, but expected one of: bytes, unicode`.
When I passed the argument as bytes, `{"argument": b'{\n "projectId": "project-id",\n "location": "us-east1"\n}'}`, it works.
### What you think should happen instead
The execution argument should accept a dict instead of bytes.
### How to reproduce
Not working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator

with DAG(
    dag_id="continual_learning_deid_norm_h2h_test",
    params={
        "location": Param(type="string", default="us-east1"),
        "project_id": Param(type="string", default="project-id"),
        "workflow_id": Param(type="string", default="orkflow"),
        "workflow_execution_info": {
            "argument": {
                "projectId": "project-id",
                "location": "us-east1"
            }
        }
    },
    render_template_as_native_obj=True
) as dag:
    execution = "{{ params.workflow_execution_info }}"

    create_execution = WorkflowsCreateExecutionOperator(
        task_id="create_execution",
        location="{{ params.location }}",
        project_id="{{ params.project_id }}",
        workflow_id="{{ params.workflow_id }}",
        execution="{{ params.workflow_execution_info }}"
    )

    start_operator = DummyOperator(task_id='test_task')

    start_operator >> create_execution
```
Working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator

with DAG(
    dag_id="continual_learning_deid_norm_h2h_test",
    params={
        "location": Param(type="string", default="us-east1"),
        "project_id": Param(type="string", default="project-id"),
        "workflow_id": Param(type="string", default="orkflow"),
        "workflow_execution_info": {
            "argument": b'{\n "projectId": "project-id",\n "location": "us-east1"\n}'
        }
    },
    render_template_as_native_obj=True
) as dag:
    execution = "{{ params.workflow_execution_info }}"

    create_execution = WorkflowsCreateExecutionOperator(
        task_id="create_execution",
        location="{{ params.location }}",
        project_id="{{ params.project_id }}",
        workflow_id="{{ params.workflow_id }}",
        execution="{{ params.workflow_execution_info }}"
    )

    start_operator = DummyOperator(task_id='test_task')

    start_operator >> create_execution
```
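In the meantime, a workaround sketch is to serialize the dict yourself before handing it to the operator, so protobuf receives bytes (the values are placeholders):

```python
import json

workflow_execution_info = {
    "argument": json.dumps({"projectId": "project-id", "location": "us-east1"}).encode("utf-8"),
}
```

The cleaner fix would be for the hook/operator to perform this `json.dumps` on dict arguments itself.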
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27165 | https://github.com/apache/airflow/pull/27361 | 9c41bf35e6149d4edfc585d97c348a4f864e7973 | 332c01d6e0bef41740e8fbc2c9600e7b3066615b | "2022-10-20T14:50:46Z" | python | "2022-10-31T05:35:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,146 | ["airflow/providers/dbt/cloud/hooks/dbt.py", "docs/apache-airflow-providers-dbt-cloud/connections.rst", "tests/providers/dbt/cloud/hooks/test_dbt_cloud.py"] | dbt Cloud Provider Not Compatible with emea.dbt.com | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
_No response_
### What happened
I am trying to use the provider with dbt Cloud's new EMEA region (https://docs.getdbt.com/docs/deploy/regions), but I am not able to use emea.dbt.com as a tenant, because the hook automatically appends `.getdbt.com` to the tenant.
### What you think should happen instead
We should be able to change the entire URL - and it could still default to cloud.getdbt.com
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27146 | https://github.com/apache/airflow/pull/28890 | ed8788bb80764595ba2872cba0d2da9e4b137e07 | 141338b24efeddb9460b53b8501654b50bc6b86e | "2022-10-19T15:41:37Z" | python | "2023-01-12T19:25:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,140 | ["airflow/cli/commands/dag_processor_command.py", "airflow/jobs/dag_processor_job.py", "tests/cli/commands/test_dag_processor_command.py"] | Invalid livenessProbe for Standalone DAG Processor | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.3.4
### Kubernetes Version
1.22.12-gke.1200
### Helm Chart configuration
```yaml
dagProcessor:
  enabled: true
```
### Docker Image customisations
```dockerfile
FROM apache/airflow:2.3.4-python3.9
USER root
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
RUN apt-get update && apt-get install -y google-cloud-cli
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
USER airflow
```
### What happened
Current DAG Processor livenessProbe is the following:
```
CONNECTION_CHECK_MAX_COUNT=0 AIRFLOW__LOGGING__LOGGING_LEVEL=ERROR exec /entrypoint \
airflow jobs check --hostname $(hostname)
```
This command checks the metadata DB searching for an active job whose hostname is the current pod's one (_airflow-dag-processor-xxxx_).
However, after running the dag-processor pod for more than 1 hour, there are no jobs with the processor hostname in the jobs table.


As a consequence, the livenessProbe fails and the pod is constantly restarting.
After investigating the code, I found out that DagFileProcessorManager is not creating jobs in the metadata DB, so the livenessProbe is not valid.
### What you think should happen instead
A new job should be created for the Standalone DAG Processor.
By doing that, the _airflow jobs check --hostname <hostname>_ command would work correctly and the livenessProbe wouldn't fail
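A rough sketch of the shape this could take (the class and method bodies here are my assumptions, not the merged implementation):

```python
from airflow.jobs.base_job import BaseJob


class DagProcessorJob(BaseJob):
    """Hypothetical: run the standalone processor as a heartbeating job row."""

    __mapper_args__ = {"polymorphic_identity": "DagProcessorJob"}

    def _execute(self):
        # Run the file-processing loop and call self.heartbeat() periodically,
        # so `airflow jobs check --hostname $(hostname)` finds a live job.
        ...
```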
### How to reproduce
1. Deploy airflow with a standalone dag-processor.
2. Wait for ~ 5 minutes
3. Check that the livenessProbe has been failing for 5 minutes and the pod has been restarted.
### Anything else
I think this behavior is inherited from the non-standalone dag-processor mode (the livenessProbe checks for a SchedulerJob, which in fact contains the "DagProcessorJob").
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27140 | https://github.com/apache/airflow/pull/28799 | 1edaddbb1cec740db2ff2a86fb23a3a676728cb0 | 0018b94a4a5f846fc87457e9393ca953ba0b5ec6 | "2022-10-19T14:02:51Z" | python | "2023-02-21T09:54:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,107 | ["airflow/providers/dbt/cloud/operators/dbt.py", "tests/providers/dbt/cloud/operators/test_dbt_cloud.py"] | Dbt cloud download artifact to a path not present fails | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
```
apache-airflow-providers-dbt-cloud==2.2.0
```
### Apache Airflow version
2.2.5
### Operating System
Ubuntu 18.04
### Deployment
Composer
### Deployment details
_No response_
### What happened
Instructing `DbtCloudGetJobRunArtifactOperator` to save results in a path like `{{ var.value.base_path }}/dbt_run_warehouse/{{ run_id }}/run_results.json` fails because part of the path does not exist yet.
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/dbt/cloud/operators/dbt.py", line 216, in execute
with open(self.output_file_name, "w") as file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/airflow/gcs/data/dbt/dbt_run_warehouse/manual__2022-10-17T22:18:25.469526+00:00/run_results.json'
```
### What you think should happen instead
It should create the missing directories and dump the content of the requested artifact there.
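Something along these lines before the write would do it (a sketch; `response` stands in for the fetched artifact payload):

```python
from pathlib import Path

output_path = Path(self.output_file_name)
output_path.parent.mkdir(parents=True, exist_ok=True)  # create missing directories first
with output_path.open("w") as file:
    file.write(response.text)
```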
### How to reproduce
```python
with DAG('test') as dag:
    DbtCloudGetJobRunArtifactOperator(
        task_id='dbt_run_warehouse',
        run_id=12341234,
        path='run_results.json',
        output_file_name='{{ var.value.dbt_base_target_folder }}/dbt_run_warehouse/{{ run_id }}/run_results.json',
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27107 | https://github.com/apache/airflow/pull/29048 | 6190e34388394b0f8b0bc01c66d56a0e8277fe6c | f805b4154a8155823d7763beb9b6da76889ebd62 | "2022-10-18T10:35:01Z" | python | "2023-01-23T17:08:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,096 | ["airflow/providers/amazon/aws/hooks/rds.py", "airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "tests/providers/amazon/aws/hooks/test_rds.py", "tests/providers/amazon/aws/operators/test_rds.py", "tests/providers/amazon/aws/sensors/test_rds.py"] | Use Boto waiters instead of customer _await_status method for RDS Operators | ### Description
Currently some code in the RDS operators uses boto waiters and some uses a custom `_await_status`; the former is preferred over the latter (waiters are vetted code provided by the boto SDK, with features like exponential backoff, etc.). See [this discussion thread](https://github.com/apache/airflow/pull/27076#discussion_r997325535) for more details/context.
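For illustration, a waiter-based wait looks roughly like this (the hook and identifier names are placeholders):

```python
# boto3 ships vetted waiters with configurable delay/retry behaviour.
waiter = rds_hook.conn.get_waiter("db_instance_available")
waiter.wait(
    DBInstanceIdentifier=db_instance_id,
    WaiterConfig={"Delay": 30, "MaxAttempts": 40},
)
```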
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27096 | https://github.com/apache/airflow/pull/27410 | b717853e4c17d67f8ea317536c98c7416eb080ca | 2bba98f109cc7737f4293a195e03a0cc21a624cb | "2022-10-17T17:46:53Z" | python | "2022-11-17T17:02:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,079 | ["airflow/macros/__init__.py", "tests/macros/test_macros.py"] | Option to deserialize JSON from last log line in BashOperator and DockerOperator before sending to XCom | ### Description
In order to create an XCom value with a BashOperator or a DockerOperator, we can use the option `do_xcom_push` that pushes to XCom the last line of the command logs.
It would be interesting to provide an option `xcom_json` to deserialize this last log line in case it's a JSON string, before sending it as XCom. This would allow to access its attributes later in other tasks with the `xcom_pull()` method.
### Use case/motivation
See my StackOverflow post : https://stackoverflow.com/questions/74083466/how-to-deserialize-xcom-strings-in-airflow
Consider a DAG containing two tasks: `DAG: Task A >> Task B` (BashOperators or DockerOperators). They need to communicate through XComs.
- `Task A` outputs its information as a one-line JSON string on stdout, which can then be retrieved in the logs of `Task A`, and so in its *return_value* XCom key if `do_xcom_push=True`. For instance: `{"key1":1,"key2":3}`
- `Task B` only needs the `key2` information from `Task A`, so we need to deserialize the *return_value* XCom of `Task A` to extract only this value and pass it directly to `Task B`, using the jinja template `{{ xcom_pull('task_a')['key2'] }}`. Using it as is results in `jinja2.exceptions.UndefinedError: 'str object' has no attribute 'key2'`, because *return_value* is just a string.

For example, we can deserialize Airflow Variables in jinja templates (e.g. `{{ var.json.my_var.path }}`). I would like to do the same thing with XComs.
**Current workaround**:
We can create a custom Operator (inherited from BashOperator or DockerOperator) and augment the `execute` method:
1. execute the original `execute` method
2. intercept the last log line of the task
3. try to `json.loads()` it into a Python dictionary
4. finally return the output (which is now a dictionary, not a string)
The previous jinja template `{{ xcom_pull('task_a')['key2'] }}` is now working in `task B`, since the XCom value is now a Python dictionnary.
```python
class BashOperatorExtended(BashOperator):
def execute(self, context):
output = BashOperator.execute(self, context)
try:
output = json.loads(output)
except:
pass
return output
class DockerOperatorExtended(DockerOperator):
def execute(self, context):
output = DockerOperator.execute(self, context)
try:
output = json.loads(output)
except:
pass
return output
```
But creating a new operator just for that purpose is not really satisfying..
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27079 | https://github.com/apache/airflow/pull/28930 | d20300018a38159f5452ae16bc9df90b1e7270e5 | ffdc696942d96a14a5ee0279f950e3114817055c | "2022-10-16T20:14:05Z" | python | "2023-02-19T14:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,069 | ["tests/jobs/test_local_task_job.py"] | test_heartbeat_failed_fast not failing fast enough. | ### Apache Airflow version
main (development)
### What happened
In the #14915 change to make tests run in parallel, the heartbeat interval threshold was raised an order of magnitude, from 0.05 to 0.5. Yet I frequently see tests failing in PRs due to breaching that threshold by a tiny amount. Do we need to increase that threshold again? CC @potiuk
Example below, where the delta was `0.5193889999999999`, just `0.0193...` past the threshold for the test:
```
=================================== FAILURES ===================================
_________________ TestLocalTaskJob.test_heartbeat_failed_fast __________________

self = <tests.jobs.test_local_task_job.TestLocalTaskJob object at 0x7f4088400950>

    def test_heartbeat_failed_fast(self):
        """
        Test that task heartbeat will sleep when it fails fast
        """
        self.mock_base_job_sleep.side_effect = time.sleep
        dag_id = 'test_heartbeat_failed_fast'
        task_id = 'test_heartbeat_failed_fast_op'
        with create_session() as session:
            dag_id = 'test_heartbeat_failed_fast'
            task_id = 'test_heartbeat_failed_fast_op'
            dag = self.dagbag.get_dag(dag_id)
            task = dag.get_task(task_id)
            dr = dag.create_dagrun(
                run_id="test_heartbeat_failed_fast_run",
                state=State.RUNNING,
                execution_date=DEFAULT_DATE,
                start_date=DEFAULT_DATE,
                session=session,
            )
            ti = dr.task_instances[0]
            ti.refresh_from_task(task)
            ti.state = State.QUEUED
            ti.hostname = get_hostname()
            ti.pid = 1
            session.commit()
            job = LocalTaskJob(task_instance=ti, executor=MockExecutor(do_update=False))
            job.heartrate = 2
            heartbeat_records = []
            job.heartbeat_callback = lambda session: heartbeat_records.append(job.latest_heartbeat)
            job._execute()
            assert len(heartbeat_records) > 2
            for i in range(1, len(heartbeat_records)):
                time1 = heartbeat_records[i - 1]
                time2 = heartbeat_records[i]
                # Assert that difference small enough
                delta = (time2 - time1).total_seconds()
>               assert abs(delta - job.heartrate) < 0.5
E               assert 0.5193889999999999 < 0.5
E                +  where 0.5193889999999999 = abs((2.519389 - 2))
E                +  where 2 = <airflow.jobs.local_task_job.LocalTaskJob object at 0x7f408835a7d0>.heartrate

tests/jobs/test_local_task_job.py:312: AssertionError
```
[source](https://github.com/apache/airflow/actions/runs/3253568905/jobs/5341352671)
### What you think should happen instead
Tests should not be flaky and should pass reliably :)
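One possible loosening, purely as a sketch (not necessarily the merged change), is to make the tolerance proportional to the heartrate instead of a fixed 0.5 s:

```python
delta = (time2 - time1).total_seconds()
# With job.heartrate == 2 this allows up to 1.0 s of jitter per heartbeat.
assert abs(delta - job.heartrate) < 0.5 * job.heartrate
```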
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27069 | https://github.com/apache/airflow/pull/27397 | f35b41e7533b09052dfcc591ec25c58207f1518c | 594c6eef6938d7a4975a0d87003160c2390d7ebb | "2022-10-15T02:17:24Z" | python | "2022-10-31T16:54:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,065 | ["airflow/config_templates/airflow_local_settings.py", "airflow/utils/log/non_caching_file_handler.py", "newsfragments/27065.misc.rst"] | Log files are still being cached causing ever-growing memory usage when scheduler is running | ### Apache Airflow version
2.4.1
### What happened
My Airflow scheduler memory usage started to grow after I turned on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
see the red arrow in the memory usage graph *(screenshot omitted)*
By looking closely at the memory usage as mentioned in https://github.com/apache/airflow/issues/16737#issuecomment-917677177, I discovered that it was the cache memory that keeps growing *(screenshot omitted)*.
Then I turned off the `dag_processor_manager` log, and memory usage returned to normal (not growing anymore, steady at ~400 MB).
This issue is similar to #14924 and #16737. This time the culprit is the rotating logs under `~/logs/dag_processor_manager/dag_processor_manager.log*`.
### What you think should happen instead
Cache memory shouldn't keep growing like this
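One mitigation sketch (assuming a POSIX platform where `os.posix_fadvise` is available): open the rotating log files through a handler that advises the kernel their pages need not stay in the page cache.

```python
import logging
import os


class NonCachingFileHandler(logging.FileHandler):
    """Sketch: hint the kernel that this log file's pages need not be cached."""

    def _open(self):
        stream = super()._open()
        try:
            os.posix_fadvise(stream.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        except Exception:
            pass  # non-POSIX platforms or odd filesystems: fall back silently
        return stream
```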
### How to reproduce
Turn on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
in the `entrypoint.sh` and monitor the scheduler memory usage
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
k8s
### Anything else
I'm not sure why the previous fix https://github.com/apache/airflow/pull/18054 has stopped working :thinking:
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27065 | https://github.com/apache/airflow/pull/27223 | 131d339696e9568a2a2dc55c2a6963897cdc82b7 | 126b7b8a073f75096d24378ffd749ce166267826 | "2022-10-14T20:50:24Z" | python | "2022-10-25T08:38:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,057 | ["airflow/models/trigger.py"] | Race condition in multiple triggerer process can lead to both picking up same trigger. | ### Apache Airflow version
main (development)
### What happened
Currently the airflow triggerer loop picks triggers to process in three steps:

1. query the unassigned triggers
2. update those triggers by id, claiming them for this triggerer
3. query which triggers are assigned to the current process

If two triggerer processes execute these queries interleaved, both can run step 1 and see the same unassigned triggers. Then, if one triggerer completes steps 2 and 3 before the second triggerer performs step 2, both end up running the same triggers.

There is a sync happening after that, but unnecessary cleanup operations are performed in that case.
### What you think should happen instead
There should be locking on rows which are updated.
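For example, a sketch of claiming rows with `SELECT ... FOR UPDATE SKIP LOCKED` (the `capacity` limit is illustrative):

```python
from airflow.models import Trigger

# Lock candidate rows so concurrent triggerers cannot claim the same ones.
unassigned = (
    session.query(Trigger)
    .filter(Trigger.triggerer_id.is_(None))
    .with_for_update(skip_locked=True)
    .limit(capacity)
    .all()
)
for trigger in unassigned:
    trigger.triggerer_id = triggerer_id
session.commit()
```

Note that `SKIP LOCKED` requires a backend that supports it (e.g. PostgreSQL or MySQL 8).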
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
HA setup with multiple triggerers can have this issue
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27057 | https://github.com/apache/airflow/pull/27072 | 4e55d7fa2b7d5f8d63465d2c5270edf2d85f08c6 | 9c737f6d192ef864dd4cde89a0a90c53f5336566 | "2022-10-14T11:29:13Z" | python | "2022-10-31T01:30:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,029 | ["airflow/providers/apache/druid/hooks/druid.py"] | Druid Operator is not getting host | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use Airflow 2.3.3. I see that this test is successful, but I get this error. Here is the picture:
```
File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 792, in get_adapter
raise InvalidSchema(f"No connection adapters were found for {url!r}")
```
<img width="1756" alt="Screen Shot 2022-10-12 at 15 34 40" src="https://user-images.githubusercontent.com/47830986/195560866-0527c5f6-3795-460b-b78b-2488e2a77bfb.png">
<img width="1685" alt="Screen Shot 2022-10-12 at 15 37 27" src="https://user-images.githubusercontent.com/47830986/195560954-f5604d10-eb7d-4bab-b10b-2684d8fbe4a2.png">
My DAG looks like this *(screenshots omitted)*.
I also tried the following approach, but it failed:
```python
ingestion_2 = SimpleHttpOperator(
    task_id='test_task',
    method='POST',
    http_conn_id=DRUID_CONN_ID,
    endpoint='/druid/indexer/v1/task',
    data=json.dumps(read_file),
    dag=dag,
    do_xcom_push=True,
    headers={
        'Content-Type': 'application/json'
    },
    response_check=lambda response: response.json()['Status'] == 200)
```
I get this log
```
[2022-10-13, 06:16:46 UTC] {http.py:143} ERROR - {"error":"Missing type id when trying to resolve subtype of [simple type, class org.apache.druid.indexing.common.task.Task]: missing type id property 'type'\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 1]"}
```
I don't know whether this is a bug or a networking problem, but can we check this?
P.S. We run Airflow on Kubernetes, so we cannot easily debug it.
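For what it's worth, `requests` raises `No connection adapters were found` when the final URL is missing an `http(s)://` scheme, so the hook is probably building the ingest URL from a connection field that doesn't hold a valid scheme. A rough sketch of the kind of URL construction involved (the field usage here is my assumption):

```python
conn = self.get_connection(self.druid_ingest_conn_id)
# Prefer an explicit scheme and fall back to http, rather than e.g. the
# connection type ("druid"), which requests cannot handle.
scheme = conn.schema if conn.schema else "http"
endpoint = conn.extra_dejson.get("endpoint", "")
url = f"{scheme}://{conn.host}:{conn.port}/{endpoint}"
```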
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27029 | https://github.com/apache/airflow/pull/27174 | 7dd7400dd4588e063078986026e14ea606a55a76 | 8b5f1d91936bb87ba9fa5488715713e94297daca | "2022-10-13T09:42:34Z" | python | "2022-10-31T10:19:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,010 | ["airflow/dag_processing/manager.py", "tests/dag_processing/test_manager.py"] | DagProcessor doesnt pick new files until queued file parsing completes | ### Apache Airflow version
2.4.1
### What happened
When there are a large number of dag files, let's say 10K, and each takes some time to parse, the dag processor doesn't pick up any newly created files until all 10K files have finished parsing:

```python
if not self._file_path_queue:
    self.emit_metrics()
    self.prepare_file_path_queue()
```

The logic above only adds new files to the queue when the queue is empty.
### What you think should happen instead
Every loop of the dag processor should pick up new files and add them to the file parsing queue.
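An illustrative sketch of the idea (attribute names follow the manager code quoted above; the diff itself is my own, not the merged fix):

```python
# On every loop, also enqueue newly discovered files instead of waiting
# for the queue to drain completely.
in_flight = set(self._file_path_queue) | set(self._processors)
new_files = [p for p in self._file_paths if p not in in_flight]
if new_files:
    self._file_path_queue.extend(new_files)
```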
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27010 | https://github.com/apache/airflow/pull/27060 | fb9e5e612e3ddfd10c7440b7ffc849f0fd2d0b09 | 65b78b7dbd1d824d2c22b65922149985418acbc8 | "2022-10-12T11:34:30Z" | python | "2022-11-14T01:43:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,987 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | DataprocLink is not available for dataproc workflow operators | ### Apache Airflow version
main (development)
### What happened
For DataprocInstantiateInlineWorkflowTemplateOperator and DataprocInstantiateWorkflowTemplateOperator, the Dataproc link is available only for jobs that have succeeded. In case of failure, the DataprocLink is not available.
### What you think should happen instead
As with other Dataproc operators, the link should be available for the workflow operators as well.
### How to reproduce
_No response_
### Operating System
MacOS Monterey
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.5.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26987 | https://github.com/apache/airflow/pull/26986 | 7cfa1be467b995b886a97b68498137a76a31f97c | 0cb6450d6df853e1061dbcafbc437c07a8e0e555 | "2022-10-11T09:17:26Z" | python | "2022-11-16T21:30:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,984 | ["dev/breeze/src/airflow_breeze/utils/path_utils.py"] | Running pre-commit without installing breeze errors out | ### Apache Airflow version
main (development)
### What happened
Running pre-commit without installing `apache-airflow-breeze` errors out
```
Traceback (most recent call last):
File "/Users/bhavaniravi/projects/airflow/./scripts/ci/pre_commit/pre_commit_flake8.py", line 39, in <module>
from airflow_breeze.global_constants import MOUNT_SELECTED
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/global_constants.py", line 30, in <module>
from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 240, in <module>
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root_to_operate_on().resolve()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 235, in find_airflow_sources_root_to_operate_on
reinstall_if_setup_changed()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 148, in reinstall_if_setup_changed
if sources_hash != package_hash:
UnboundLocalError: local variable 'package_hash' referenced before assignment
```
And to understand the error better, I commented out the exception handling code.
```
try:
package_hash = get_package_setup_metadata_hash()
except ModuleNotFoundError as e:
if "importlib_metadata" in e.msg:
return False
```
It returned
```
Traceback (most recent call last):
File "/Users/bhavaniravi/projects/airflow/./scripts/ci/pre_commit/pre_commit_flake8.py", line 39, in <module>
from airflow_breeze.global_constants import MOUNT_SELECTED
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/global_constants.py", line 30, in <module>
from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 240, in <module>
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root_to_operate_on().resolve()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 235, in find_airflow_sources_root_to_operate_on
reinstall_if_setup_changed()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 141, in reinstall_if_setup_changed
package_hash = get_package_setup_metadata_hash()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 86, in get_package_setup_metadata_hash
for line in distribution('apache-airflow-breeze').metadata.as_string().splitlines(keepends=False):
File "/opt/homebrew/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/metadata.py", line 524, in distribution
return Distribution.from_name(distribution_name)
File "/opt/homebrew/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/metadata.py", line 187, in from_name
raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: apache-airflow-breeze
```
### What you think should happen instead
The error should be handled gracefully, and print out the command to install `apache-airflow-breeze`
### How to reproduce
1. Here is the weird part. I install `pip install -e ./dev/breeze` breeze erorr vanishes.
But when I uninstall`pip uninstall apache-airflow-breeze` the error doesn't re-appear
2. The error occurred again after stopping the docker desktop. Running `pip install -e ./dev/breeze` fixed it.
### Operating System
MacOS Monetary
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26984 | https://github.com/apache/airflow/pull/26985 | 58378cfd42b137a31032404783b2957284a1e538 | ee3625540ff3712bbc6215214e4534d7e91c45fa | "2022-10-11T07:24:07Z" | python | "2022-10-22T23:35:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,960 | ["airflow/api/common/mark_tasks.py", "airflow/models/taskinstance.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/utils/state.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | can't see failed sensor task log on webpage | ### Apache Airflow version
2.4.1
### What happened

when the sensor running, I can see the log above, but when I manual set the task state to failed or the task failed by other reason, I can't see log at here

In other version airflow, like 2.3/2.2, this still happens
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26960 | https://github.com/apache/airflow/pull/26993 | ad7f8e09f8e6e87df2665abdedb22b3e8a469b49 | f110cb11bf6fdf6ca9d0deecef9bd51fe370660a | "2022-10-10T06:42:09Z" | python | "2023-01-05T16:42:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,912 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Log-tab under grid view is automatically re-fetching completed logs every 3 sec. | ### Apache Airflow version
2.4.1
### What happened
The new inline log-tab under grid view is fantastic.
What's not so great though, is that it is automatically reloading the logs on the `/api/v1/dags/.../dagRuns/.../taskInstances/.../logs/1` api endpoint every 3 seconds. Same interval as the reload of the grid status it seems.
This:
* Makes it difficult for users to scroll in the log panel and to select text in the log panel, because it is replaced all the time
* Put unnecessary load on the client and the link between client-webserver.
* Put unnecssary load on the webserver and on the logging-backend, in our case it involves queries to an external Loki server.
This happens even if the TaskLogReader has set `metadata["end_of_log"] = True`
### What you think should happen instead
Logs should not automatically be reloaded if `end_of_log=True`
For logs which are not at end, some other slower reload or more incremental query/streaming is preferred.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.1.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26912 | https://github.com/apache/airflow/pull/27233 | 8d449ae04aa67ecbabf84f35a34fc2e53665ee17 | e73e90e388f7916ae5eea48ba39687d99f7a94b1 | "2022-10-06T12:38:34Z" | python | "2022-10-25T14:26:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,910 | ["dev/provider_packages/MANIFEST_TEMPLATE.in.jinja2", "dev/provider_packages/SETUP_TEMPLATE.py.jinja2"] | python_kubernetes_script.jinja2 file missing from apache-airflow-providers-cncf-kubernetes==4.4.0 release | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
```
$ pip freeze | grep apache-airflow-providers
apache-airflow-providers-cncf-kubernetes==4.4.0
```
### Apache Airflow version
2.4.1
### Operating System
macos-12.6
### Deployment
Other Docker-based deployment
### Deployment details
Using the astro cli.
### What happened
Trying to test the `@task.kubernetes` decorator with Airflow 2.4.1 and the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I get the following error:
```
[2022-10-06, 10:49:01 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/decorators/kubernetes.py", line 95, in execute
write_python_script(jinja_context=jinja_context, filename=script_filename)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/python_kubernetes_script.py", line 79, in write_python_script
template = template_env.get_template('python_kubernetes_script.jinja2')
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 1010, in get_template
return self._load_template(name, globals)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 969, in _load_template
template = self.loader.load(self, name, self.make_globals(globals))
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 126, in load
source, filename, uptodate = self.get_source(environment, name)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 218, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: python_kubernetes_script.jinja2
```
Looking the [source file](https://files.pythonhosted.org/packages/5d/54/0ea031a9771ded6036d05ad951359f7361995e1271a302ba2af99bdce1af/apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz) for the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I can see that `python_kubernetes_script.py` is there but not `python_kubernetes_script.jinja2`
```
$ tar -ztvf apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz 'apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/py*'
-rw-r--r-- 0 root root 2949 Sep 22 15:25 apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/python_kubernetes_script.py
```
### What you think should happen instead
The `python_kubernetes_script.jinja2` file that is available here https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2 should be included in the `apache-airflow-providers-cncf-kubernetes==4.4.0` pypi package.
### How to reproduce
With a default installation of `apache-airflow==2.4.1` and `apache-airflow-providers-cncf-kubernetes==4.4.0`, running the following DAG will reproduce the issue.
```
import pendulum
from airflow.decorators import dag, task
@dag(
schedule_interval=None,
start_date=pendulum.datetime(2022, 7, 20, tz="UTC"),
catchup=False,
tags=['xray_classifier'],
)
def k8s_taskflow():
@task.kubernetes(
image="python:3.8-slim-buster",
name="k8s_test",
namespace="default",
in_cluster=False,
config_file="/path/to/config"
)
def execute_in_k8s_pod():
import time
print("Hello from k8s pod")
time.sleep(2)
execute_in_k8s_pod_instance = execute_in_k8s_pod()
k8s_taskflow_dag = k8s_taskflow()
```
### Anything else
If I manually add the `python_kubernetes_script.jinja2` into my `site-packages/airflow/providers/cncf/kubernetes/` folder, then it works as expected.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26910 | https://github.com/apache/airflow/pull/27451 | 4cdea86d4cc92a51905aa44fb713f530e6bdadcf | 8975d7c8ff00841f4f2f21b979cb1890e6d08981 | "2022-10-06T11:33:31Z" | python | "2022-11-01T20:31:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,905 | ["airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useTaskXcom.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/XcomEntry.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/index.tsx", "airflow/www/templates/airflow/dag.html"] | Display selected task outputs (xcom) in task UI | ### Description
I often find myself checking the stats of a passed task, e.g. "inserted 3 new rows" or "discovered 4 new files" in the task logs. It would be very handy to see these on the UI directly, as part of the task details or elsewhere.
One idea would be to choose in the Task definition, which XCOM keys should be output in the task details, like so:

### Use case/motivation
As a developer, I want to better monitor the results of my tasks in terms of key metrics, so I can see the data processed by them. While for production, this can be achieved by forwarding/outputting metrics to other systems, like notification hooks, or ingesting them into e.g. grafana, I would like to do this already in AirFlow to a certain extent. This would certainly cut down on my clicks while running beta DAGs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26905 | https://github.com/apache/airflow/pull/35719 | d0f4512ecb9c0683a60be7b0de8945948444df8e | 77c01031d6c569d26f6fabd331597b7e87274baa | "2022-10-06T07:05:39Z" | python | "2023-12-04T15:59:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,892 | ["airflow/www/views.py"] | Dataset Next Trigger Modal Not Populating Latest Update | ### Apache Airflow version
2.4.1
### What happened
When using dataset scheduling, it isn't obvious which datasets a downstream dataset consumer is awaiting in order for the DAG to be scheduled.
I would assume that this is supposed to be solved by the `Latest Update` column in the modal that opens when selecting `x of y datasets updated`, but it appears that the data isn't being populated.
<img width="601" alt="image" src="https://user-images.githubusercontent.com/5778047/194116186-d582cede-c778-47f7-8341-fc13a69a2358.png">
Although one of the datasets has been produced, there is no data in the `Latest Update` column of the modal.
In the above example, both datasets have been produced > 1 time.
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116368-ceff241f-a623-4893-beb7-637b821c4b53.png">
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116410-19045f5a-8400-47b0-afcb-9fbbffca26ee.png">
### What you think should happen instead
The `Latest Update` column should be populated with the latest update timestamp for each dataset required to schedule a downstream, dataset consuming DAG.
Ideally there would be some form of highlighting on the "missing" datasets for quick visual feedback when DAGs have a large number of datasets required for scheduling.
### How to reproduce
1. Create a DAG (or 2 individual DAGs) that produces 2 datasets
2. Produce both datasets
3. Then produce _only one_ dataset
4. Check the modal by clicking from the home screen on the `x of y datasets updated` button.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26892 | https://github.com/apache/airflow/pull/29441 | 0604033829787ebed59b9982bf08c1a68d93b120 | 6f9efbd0537944102cd4a1cfef06e11fe0a3d03d | "2022-10-05T16:51:49Z" | python | "2023-02-20T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,875 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping Json value in CSV | ### Description
If output format is `CSV`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a dict object.
### Use case/motivation
Currently, if `export_format` is `CSV` and a data column in Postgres is defined with the `json` or `jsonb` data type, the `stringify_dict` param passed to the abstract method `convert_type` is hardcoded to `False`, which means `stringify_dict` in a subclass such as `PostgresToGCSOperator` cannot be customized.
Function `convert_types` in base class `BaseSQLToGCSOperator`:
```python
def convert_types(self, schema, col_type_dict, row, stringify_dict=False) -> list:
    """Convert values from DBAPI to output-friendly formats."""
    return [
        self.convert_type(value, col_type_dict.get(name), stringify_dict=stringify_dict)
        for name, value in zip(schema, row)
    ]
```
Function `convert_type` in subclass `PostgresToGCSOperator`:
```python
def convert_type(self, value, schema_type, stringify_dict=True):
    """
    Takes a value from Postgres, and converts it to a value that's safe for
    JSON/Google Cloud Storage/BigQuery.

    Timezone aware Datetime are converted to UTC seconds.
    Unaware Datetime, Date and Time are converted to ISO formatted strings.
    Decimals are converted to floats.

    :param value: Postgres column value.
    :param schema_type: BigQuery data type.
    :param stringify_dict: Specify whether to convert dict to string.
    """
    if isinstance(value, datetime.datetime):
        iso_format_value = value.isoformat()
        if value.tzinfo is None:
            return iso_format_value
        return pendulum.parse(iso_format_value).float_timestamp
    if isinstance(value, datetime.date):
        return value.isoformat()
    if isinstance(value, datetime.time):
        formatted_time = time.strptime(str(value), "%H:%M:%S")
        time_delta = datetime.timedelta(
            hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec
        )
        return str(time_delta)
    if stringify_dict and isinstance(value, dict):
        return json.dumps(value)
    if isinstance(value, Decimal):
        return float(value)
    return value
```
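As a sketch of one possible fix (not the actual implementation), the subclass could override `convert_types` so that its own default of `stringify_dict=True` also takes effect when the base operator serializes CSV rows:
```python
# Hypothetical override on PostgresToGCSOperator: default stringify_dict to
# True so json/jsonb values are dumped to strings for CSV exports as well.
def convert_types(self, schema, col_type_dict, row, stringify_dict=True) -> list:
    return [
        self.convert_type(value, col_type_dict.get(name), stringify_dict=stringify_dict)
        for name, value in zip(schema, row)
    ]
```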
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26875 | https://github.com/apache/airflow/pull/26876 | bab6dbec3883084e5872123b515c2a8491c32380 | a67bcf3ecaabdff80c551cff1f987523211e7af4 | "2022-10-04T23:21:37Z" | python | "2022-10-06T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,812 | ["chart/values.schema.json", "tests/charts/test_webserver.py"] | Add NodePort Option to the values schema | ### Official Helm Chart version
1.6.0 (latest released)
### Apache Airflow version
2.3.4
### Kubernetes Version
1.23
### Helm Chart configuration
```yaml
# shortened values.yaml file
webserver:
  service:
    type: NodePort
    ports:
      - name: airflow-ui
        port: 80
        targetPort: airflow-ui
        nodePort: 8081  # Note: this line does not work; this is what'd be nice to have for defining nodePort
```
### Docker Image customisations
_No response_
### What happened
Supplying nodePort like in the above Helm Chart Configuration example fails with an error saying a value of nodePort is not supported.
### What you think should happen instead
It'd be nice if we could define the nodePort we want the airflow-webserver service to listen on at launch. As it currently stands, supplying nodePort like in the above values.yaml example will fail, saying nodePort cannot be supplied. The workaround is to manually edit the webserver service post-deployment and specify the desired port for nodePort.
I looked at the way [the webserver service template file](https://github.com/apache/airflow/blob/main/chart/templates/webserver/webserver-service.yaml#L44-L50) is set up, and the logic there should allow this, but I believe the missing definition in the [schema.json](https://github.com/apache/airflow/blob/main/chart/values.schema.json#L3446-L3472) file is causing this to error out.
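If the template logic already supports `nodePort`, the fix is presumably just a matter of allowing it in the schema. A rough sketch of the missing entry (the exact placement and wording are my assumption, not the actual chart code):
```json
"nodePort": {
  "type": "integer",
  "description": "The node port on which the webserver service is exposed when type is NodePort."
}
```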
### How to reproduce
Attempt to install the Airflow Helm chart using the above values.yaml config
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26812 | https://github.com/apache/airflow/pull/26945 | be8a62e596a0dc0f935114a9d585007b497312a2 | cc571e8e0e27420d179870c8ddc7274c4f47ba44 | "2022-09-30T22:31:13Z" | python | "2022-11-14T00:57:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,802 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | pdb no longer works with airflow test command since 2.3.3 | Converted back to issue as I've reproduced it and traced the issue back to https://github.com/apache/airflow/pull/24362
### Discussed in https://github.com/apache/airflow/discussions/26352
<div type='discussions-op-text'>
<sup>Originally posted by **GuruComposer** September 12, 2022</sup>
### Apache Airflow version
2.3.4
### What happened
I used to be able to use ipdb to debug DAGs by running `airflow tasks test <dag_id> <task_id>` and hitting an ipdb breakpoint (`ipdb.set_trace()`).
This no longer works. I get a strange type error:
```
  File "/usr/local/lib/python3.9/bdb.py", line 88, in trace_dispatch
    return self.dispatch_line(frame)
  File "/usr/local/lib/python3.9/bdb.py", line 112, in dispatch_line
    self.user_line(frame)
  File "/usr/local/lib/python3.9/pdb.py", line 262, in user_line
    self.interaction(frame, None)
  File "/home/astro/.local/lib/python3.9/site-packages/IPython/core/debugger.py", line 336, in interaction
    OldPdb.interaction(self, frame, traceback)
  File "/usr/local/lib/python3.9/pdb.py", line 357, in interaction
    self._cmdloop()
  File "/usr/local/lib/python3.9/pdb.py", line 322, in _cmdloop
    self.cmdloop()
  File "/usr/local/lib/python3.9/cmd.py", line 126, in cmdloop
    line = input(self.prompt)
TypeError: an integer is required (got type NoneType)
```
### What you think should happen instead
I should get the ipdb shell.
### How to reproduce
1. Add an ipdb breakpoint anywhere in an Airflow task: `import ipdb; ipdb.set_trace()`
2. Run that task with `airflow tasks test <dag_id> <task_id>` (a minimal hypothetical reproduction is sketched below).
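For reference, a minimal hypothetical reproduction (DAG and task names are invented):
```python
import ipdb
import pendulum
from airflow.decorators import dag, task


@dag(start_date=pendulum.datetime(2022, 1, 1), schedule_interval=None)
def debug_demo():
    @task
    def broken():
        ipdb.set_trace()  # the prompt fails with the TypeError above

    broken()


debug_demo()
```
Running `airflow tasks test debug_demo broken` then hits the traceback above instead of dropping into the ipdb shell.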
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
2.3.4 | https://github.com/apache/airflow/issues/26802 | https://github.com/apache/airflow/pull/26806 | 677df102542ab85aab4efbbceb6318a3c7965e2b | 029ebacd9cbbb5e307a03530bdaf111c2c3d4f51 | "2022-09-30T13:51:53Z" | python | "2022-09-30T17:46:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,796 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py"] | Incorrect await_container_completion in KubernetesPodOperator | ### Apache Airflow version
2.3.4
### What happened
The [`await_container_completion`](https://github.com/apache/airflow/blob/2.4.0/airflow/providers/cncf/kubernetes/utils/pod_manager.py#L259) function has a negated condition
```python
while not self.container_is_running(pod=pod, container_name=container_name):
```
that causes our Airflow tasks running for less than 1 s to never be marked as completed, resulting in an infinite loop.
I see this was addressed and released in https://github.com/apache/airflow/pull/23883, but later reverted in https://github.com/apache/airflow/pull/24474. How come it was reverted? The thread on that revert PR with comments from @jedcunningham and @potiuk didn't really address why the fix was reverted.
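To make the inversion concrete, here is a sketch of the two loops as I read them (`time.sleep` stands in for the real polling and timeout logic):
```python
# As written: waits while the container is NOT running, i.e. waits for it to
# START. A container that finished before the first poll is never "running",
# so this loops forever.
while not self.container_is_running(pod=pod, container_name=container_name):
    time.sleep(1)

# What "await completion" presumably intends: wait while it IS still running.
while self.container_is_running(pod=pod, container_name=container_name):
    time.sleep(1)
```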
### What you think should happen instead
Pods finishing within 1s should be properly handled.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==4.3.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26796 | https://github.com/apache/airflow/pull/28771 | 4f603364d364586a2062b061ddac18c4b58596d2 | ce677862be4a7de777208ba9ba9e62bcd0415393 | "2022-09-30T07:47:04Z" | python | "2023-01-07T18:17:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,785 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | BigQueryHook Requires Optional Field When Parsing Results Schema | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google 8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When querying BigQuery using the BigQueryHook, sometimes the following error is returned:
```
[2022-09-29, 19:04:57 UTC] {{taskinstance.py:1902}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 171, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 189, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/src/app/dags/test.py", line 23, in curried_bigquery
cursor.execute(UPDATE_HIGHWATER_MARK_QUERY)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2700, in execute
self.description = _format_schema_for_description(query_results["schema"])
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 3009, in _format_schema_for_description
field["mode"] == "NULLABLE",
KeyError: 'mode'
```
### What you think should happen instead
The schema should be returned without error.
### How to reproduce
_No response_
### Anything else
According to the official docs, only `name` and `type` are required to be present. `mode` is listed as optional.
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#TableFieldSchema
The code currently expects `mode` to be present.
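A minimal defensive fix could treat a missing `mode` as `NULLABLE`, matching the REST docs' default. This is a sketch; the surrounding field loop in `_format_schema_for_description` is elided:
```python
# "mode" is optional in TableFieldSchema and defaults to NULLABLE when absent.
nullable = field.get("mode", "NULLABLE") == "NULLABLE"
```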
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26785 | https://github.com/apache/airflow/pull/26786 | b7203cd36eef20de583df3e708f49073d689ac84 | cee610ae5cf14c117527cdfc9ac2ef0ddb5dcf3b | "2022-09-29T19:05:19Z" | python | "2022-10-01T13:47:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,774 | ["airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | Trino and Presto hooks do not properly execute statements other than SELECT | ### Apache Airflow Provider(s)
presto, trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.1
apache-airflow-providers-presto==4.0.1
### Apache Airflow version
2.4.0
### Operating System
macOS 12.6
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When using the TrinoHook (PrestoHook also applies), only the `get_records()` and `get_first()` methods work as expected, the `run()` and `insert_rows()` do not.
The SQL statements sent by the problematic methods reach the database (visible in logs and UI), but they don't get executed.
The issue is caused by the hook not making the required subsequent requests to the Trino HTTP endpoints after the first request. More info [here](https://trino.io/docs/current/develop/client-protocol.html#overview-of-query-processing):
> If the JSON document returned by the POST to /v1/statement does not contain a nextUri link, the query has completed, either successfully or unsuccessfully, and no additional requests need to be made. If the nextUri link is present in the document, there are more query results to be fetched. The client should loop executing a GET request to the nextUri returned in the QueryResults response object until nextUri is absent from the response.
SQL statements made by methods like `get_records()` do get executed because internally they call `fetchone()` or `fetchmany()` on the cursor, which do make the subsequent requests.
### What you think should happen instead
The Hook is able to execute SQL statements other than SELECT.
### How to reproduce
Connect to a Trino or Presto instance and execute any SQL statement (INSERT or CREATE TABLE) using `TrinoHook.run()`; the statements will reach the API but they won't get executed.
Then, provide a dummy handler function like this:
`TrinoHook.run(..., handler=lambda cur: cur.description)`
The `description` property internally iterates over the cursor's requests, causing the statement to get executed.
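To make the workaround concrete (the connection id and SQL are illustrative), any handler that drains the cursor forces the client to follow `nextUri` until the statement completes:
```python
from airflow.providers.trino.hooks.trino import TrinoHook

hook = TrinoHook(trino_conn_id="trino_default")  # illustrative conn id
# Draining the result set makes the client poll nextUri, so the statement
# actually runs to completion server-side.
hook.run("CREATE TABLE memory.default.t (x int)", handler=lambda cur: cur.fetchall())
```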
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26774 | https://github.com/apache/airflow/pull/27168 | e361be74cd800efe1df9fa5b00a0ad0df88fcbfb | 09c045f081feeeea09e4517d05904b38660f525c | "2022-09-29T11:32:29Z" | python | "2022-10-26T03:13:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,767 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | MaxID logic for GCSToBigQueryOperator Causes XCom Serialization Error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google 8.4.0rc1
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The max ID parameter, when used, causes an XCom serialization failure when trying to retrieve the value back out of XCom.
### What you think should happen instead
The max ID value should be returned from the XCom call.
### How to reproduce
Set `max_id_key="column"` on the operator, then check the operator's XCom after it runs.
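A minimal reproduction might look like this (bucket, object, and table names are placeholders):
```python
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load = GCSToBigQueryOperator(
    task_id="gcs_to_bq_max_id",
    bucket="my-bucket",  # placeholder
    source_objects=["exports/part-*.csv"],  # placeholder
    destination_project_dataset_table="proj.ds.table",  # placeholder
    max_id_key="id",  # with this set, reading the task's XCom fails to deserialize
)
```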
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26767 | https://github.com/apache/airflow/pull/26768 | 9a6fc73aba75a03b0dd6c700f0f8932f6a474ff7 | b7203cd36eef20de583df3e708f49073d689ac84 | "2022-09-29T03:03:25Z" | python | "2022-10-01T13:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,754 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "airflow/providers/amazon/aws/utils/connection_wrapper.py", "tests/providers/amazon/aws/utils/test_connection_wrapper.py"] | Cannot retrieve config from alternative secrets backend. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
Providers included in apache/airflow:2.4.0 docker image:
```
apache-airflow==2.4.0
apache-airflow-providers-amazon==5.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-elasticsearch==4.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.3.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.2.0
apache-airflow-providers-mysql==3.2.0
apache-airflow-providers-odbc==3.1.1
apache-airflow-providers-postgres==5.2.1
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.1.0
```
### Apache Airflow version
2.4
### Operating System
AWS Fargate
### Deployment
Docker-Compose
### Deployment details
We have configured the alternative secrets backend to use the AWS SSM Parameter Store:
```ini
[secrets]
backend = airflow.providers.amazon.aws.secrets.systems_manager.SystemsManagerParameterStoreBackend
backend_kwargs = {"config_prefix": "/airflow2/config", "connections_prefix": "/airflow2/connections", "variables_prefix": "/airflow2/variables"}
```
### What happened
All processes fail with:

```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 178, in _get_secret
response = self.client.get_parameter(Name=ssm_path, WithDecryption=True)
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
ImportError: cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 122, in _get_config_value_from_secret_backend
return secrets_client.get_config(config_key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 167, in get_config
return self._get_secret(self.config_prefix, key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 180, in _get_secret
except self.client.exceptions.ParameterNotFound:
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
ImportError: cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 178, in _get_secret
response = self.client.get_parameter(Name=ssm_path, WithDecryption=True)
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 49, in <module>
from airflow.models.connection import Connection
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/connection.py", line 32, in <module>
from airflow.models.base import ID_LEN, Base
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/base.py", line 76, in <module>
COLLATION_ARGS = get_id_collation_args()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/base.py", line 70, in get_id_collation_args
conn = conf.get('database', 'sql_alchemy_conn', fallback='')
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 557, in get
option = self._get_option_from_secrets(deprecated_key, deprecated_section, key, section)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 577, in _get_option_from_secrets
option = self._get_secret_option(section, key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 502, in _get_secret_option
return _get_config_value_from_secret_backend(secrets_path)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 125, in _get_config_value_from_secret_backend
'Cannot retrieve config from alternative secrets backend. '
airflow.exceptions.AirflowConfigException: Cannot retrieve config from alternative secrets backend. Make sure it is configured properly and that the Backend is accessible.
cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
```
### What you think should happen instead
Airflow 2.3.4 was using amazon provider 5.0.0 and everything was working fine. Looking at the `SystemsManagerParameterStoreBackend` class, the `client` method changed in amazon 5.1.0 (which ships with Airflow 2.4). There used to be a simple `boto3.session` call. The new code calls for an import of `SessionFactory`. I do not understand why this import fails, though.
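Judging from the traceback, this looks like a circular bootstrap: `SystemsManagerParameterStoreBackend.client` imports `SessionFactory` from `base_aws`, which imports `Connection`, which imports `airflow.models.base`, which reads `sql_alchemy_conn` from the config; with `[secrets] backend` set, that config read re-enters the secrets backend before the import has finished. A sketch of one possible mitigation (purely illustrative, not the actual fix) is to build the SSM client with boto3 directly so no `airflow.models` import happens during config bootstrap:
```python
import boto3


# Hypothetical: construct the SSM client without importing SessionFactory,
# avoiding the airflow.models -> configuration -> secrets-backend cycle.
def _plain_ssm_client(region_name=None):
    return boto3.session.Session(region_name=region_name).client("ssm")
```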
### How to reproduce
I assume anyone that sets up the parameter store as backend and try to use the docker image (FROM apache/airflow:2.4.0) will run into this issue.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26754 | https://github.com/apache/airflow/pull/26784 | 8898db999c88c98b71f4a5999462e6858aab10eb | 8a1bbcfcb31c1adf5c0ea2dff03b507f584ad1f3 | "2022-09-28T15:37:01Z" | python | "2022-10-06T16:38:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,709 | ["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_cleanup_pods.py"] | Cleanup Job Pods host remove history after completing the job | We currently do not have possibility of confioguring:
* spec.successfulJobsHistoryLimit
* spec.failedJobsHistoryLimit
For cleanup jobs. Creaeted it to add a feature for that.
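A sketch of what the desired chart values could look like once the fields are exposed (the key names are my assumption, mirrored from the CronJob spec fields above):
```yaml
cleanup:
  enabled: true
  schedule: "*/15 * * * *"
  # Assumed keys, mapped onto spec.successfulJobsHistoryLimit /
  # spec.failedJobsHistoryLimit of the cleanup CronJob:
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
```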
### Discussed in https://github.com/apache/airflow/discussions/26547
<div type='discussions-op-text'>
<sup>Originally posted by **alionar** September 21, 2022</sup>
<img width="919" alt="Screen Shot 2022-09-21 at 10 36 29" src="https://user-images.githubusercontent.com/18596757/191408780-fa3e0aba-2faf-45d2-b0ca-6c8c8db458d2.png">
Airflow Version: 2.2.2
Kubernetes Version: 1.22.12-gke.500
Helm Chart version: 1.6.0
Hi, i found that completed cleanup job pods just stayed in nodes after finished and make GKE auto scaller triggered to add new nodes everytime cleanup job pods executed.
It made us to delete all completed job pods and drain the unused nodes manually everyday.
I found in k8s docs that:
```
The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional.
These fields specify how many completed and failed jobs should be kept.
By default, they are set to 3 and 1 respectively
```
But when I tried to set this history limit to 0 in the cleanup Helm chart values, the chart didn't recognize it:

Is it possible to add a job history limit setting to the cleanup chart values?
Current values.yaml for cleanup:
```yaml
cleanup:
  enabled: true
  schedule: "*/15 * * * *"
```
</div> | https://github.com/apache/airflow/issues/26709 | https://github.com/apache/airflow/pull/26838 | 6b75be43171eafc45825d043ef051638aa103ccd | e5c704d12f017c55c93284a5c57cb52fdfbaa571 | "2022-09-27T14:35:35Z" | python | "2022-10-05T13:45:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,607 | ["airflow/www/views.py"] | resolve web ui views warning re DISTINCT ON | ### Body
Got this warning in the webserver output when loading the home page:
```
/Users/dstandish/code/airflow/airflow/www/views.py:710 SADeprecationWarning: DISTINCT ON is currently supported only by the PostgreSQL dialect. Use of DISTINCT ON for other backends is currently silently ignored, however this usage is deprecated, and will raise CompileError in a future release for all backends that do not support this syntax.
```
looks like it's this line:
```python
dagtags = session.query(DagTag.name).distinct(DagTag.name).all()
```
We may be able to change it to `func.distinct`, e.g. (untested sketch below):
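```python
from sqlalchemy import func

dagtags = session.query(func.distinct(DagTag.name)).all()
```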
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26607 | https://github.com/apache/airflow/pull/26608 | 7179eba69e9cb40c4122f3589c5977e536469b13 | 55d11464c047d2e74f34cdde75d90b633a231df2 | "2022-09-22T21:05:43Z" | python | "2022-09-23T08:52:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,581 | ["airflow/utils/db_cleanup.py", "tests/utils/test_db_cleanup.py"] | airflow db clean is unable to delete the table rendered_task_instance_fields | ### Apache Airflow version
Other Airflow 2 version
### What happened
Hi All,
When I run the below command in Airflow 2.3.4:
`airflow db clean --clean-before-timestamp '2022-09-18T00:00:00+05:30' --yes`
I receive an error within a warning which says
`[2022-09-20 10:33:30,971] {db_cleanup.py:302} WARNING - Encountered error when attempting to clean table 'rendered_task_instance_fields'.`
All other tables like `log`, `dag`, and `xcom` get deleted properly. On my analysis, `rendered_task_instance_fields` was the 5th largest table by rows in the DB, so the impact of its data size is significant.
On analyzing the table itself on a Postgres 13 DB, I found that the table `rendered_task_instance_fields` has no timestamp column that records when an entry was inserted.
https://imgur.com/a/Qys2uwD
Thus, there would be no way the code can filter out older records and delete them.
### What you think should happen instead
A timestamp field needs to be added to the table `rendered_task_instance_fields`, on the basis of which older records can be deleted.
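For illustration, such a column could be added with a migration along these lines (a sketch; the column name and server default are assumptions):
```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    # Record insertion time so `airflow db clean` has something to filter on.
    op.add_column(
        "rendered_task_instance_fields",
        sa.Column("created_at", sa.TIMESTAMP(timezone=True), server_default=sa.func.now()),
    )
```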
### How to reproduce
Run the below command in airflow v2.3.4 and check the output.
`airflow db clean --clean-before-timestamp '2022-09-18T00:00:00+05:30' --yes`
### Operating System
Ubuntu 20.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==5.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.3.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-mongo==3.0.0
apache-airflow-providers-mysql==3.2.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.2.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26581 | https://github.com/apache/airflow/pull/28243 | 20a1c4db9882a223ae08de1a46e9bdf993698865 | 171ca66142887f59b1808fcdd6b19e7141a08d17 | "2022-09-22T04:17:29Z" | python | "2022-12-09T19:11:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,571 | ["airflow/providers/amazon/aws/secrets/secrets_manager.py", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-json.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-uri.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager.png", "docs/apache-airflow-providers-amazon/secrets-backends/aws-secrets-manager.rst", "tests/providers/amazon/aws/secrets/test_secrets_manager.py"] | Migrate Amazon Provider Package's `SecretsManagerBackend`'s `full_url_mode=False` implementation. | # Objective
I am following up on all the changes I've made in PR #25432 and which were originally discussed in issue #25104.
The objective of the deprecations introduced in #25432 is to flag and remove "odd" behaviors in the `SecretsManagerBackend`.
The objective of _this issue_ being opened is to discuss them, and hopefully reach a consensus on how to move forward implementing the changes.
I realize that a lot of the changes I made and their philosophy were under-discussed, so I will place the discussion here.
## What does it mean for a behavior to be "odd"?
You can think of the behaviors of `SecretsManagerBackend`, and which secret encodings it supports, as a Venn diagram.
Ideally, `SecretsManagerBackend` supports _at least_ everything the base API supports. This is a pretty straightforward "principle of least astonishment" requirement.
For example, it would be "astonishing" if copy+pasting a secret that works with the base API did not work in the `SecretsManagerBackend`.
To be clear, it would also be "astonishing" if the reverse were not true-- i.e. copy+pasting a valid secret from `SecretsManagerBackend` doesn't work with, say, environment variables. That said, adding new functionality is less astonishing than when a subclass doesn't inherit 100% of the supported behaviors of what it is subclassing. So although adding support for new secret encodings is arguably not desirable (all else equal), I think we can all agree it's not as bad as the reverse.
## Examples
I will cover two examples where we can see the "Venn diagram" nature of the secrets backend, and how some behaviors that are supported in one implementation are not supported in another:
### Example 1
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT='{
"conn_type": "postgres",
"login": "usr",
"password": "not%url@encoded",
"host": "myhost"
}'
```
Prior to #25432, this was _**not**_ a secret that worked with the `SecretsManagerBackend`, even though it did work with base Airflow's `EnvironmentVariablesBackend` (as of 2.3.0), due to the values not being URL-encoded.
Additionally, the `EnvironmentVariablesBackend` is smart enough to choose whether a secret should be treated as a JSON or a URI _without having to be explicitly told_. In a sense, this is also an incompatibility-- why should the `EnvironmentVariablesBackend` be "smarter" than the `SecretsManagerBackend` when it comes to discerning JSONs from URIs, and supporting both at the same time rather than needing secrets to be always one type of serialization?
### Example 2
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT="{
    'conn_type': 'postgres',
    'user': 'usr',
    'pass': 'is%20url%20encoded',
    'host': 'myhost'
}"
```
This is _not_ a valid secret in Airflow's base `EnvironmentVariablesBackend` implementation, although it _is_ a valid secret in `SecretsManagerBackend`.
There are two things that make it invalid in the `EnvironmentVariablesBackend` but valid in `SecretsManagerBackend`:
- `ast.litera_eval` in `SecretsManagerBackend` means that a Python dict repr is allowed to be read in.
- `user` and `pass` are invalid field names in the base API; these should be `login` and `password`, respectively. But the `_standardize_secret_keys()` method in the `SecretsManagerBackend` implementation makes it valid.
Additionally, note that this secret also parses differently in the `SecretsManagerBackend` than the `EnvironmentVariablesBackend`: the password `"is%20url%20encoded"` renders as `"is url encoded"` in the `SecretsManagerBackend`, but is left untouched by the base API when using a JSON.
## List of odd behaviors
Prior to #25432, the following behaviors were a part of the `SecretsManagerBackend` specification that I would consider "odd" because they are not part of the base API implementation:
1. `full_url_mode=False` still required URL-encoded parameters for JSON values.
2. `ast.literal_eval` was used instead of `json.loads`, which means that the `SecretsManagerBackend` on `full_url_mode=False` was supporting Python dict reprs and other non-JSONs.
3. The Airflow config required setting `full_url_mode=False` to determine what is a JSON or URI.
4. `get_conn_value()` always must return a URI.
5. The `SecretsManagerBackend` allowed for atypical / flexible field names (such as `user` instead of `login`) via the `_standardize_secret_keys()` method.
We also introduced a new odd behavior in order to assist with the migration effort, which is:
6. New kwarg called `are_secret_values_urlencoded` to support secrets whose encodings are "non-idempotent".
In the below sections, I discuss each behavior in more detail, and I've added my own opinions about how odd these behaiors are and the estimated adverse impact of removing the behaviors.
### Behavior 1: URL-encoding JSON values
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|High|
This was the original behavior that frustrated me and motivated me to open issues + submit PRs.
With the "idempotency" check, I've done my best to smooth out the transition away from URL-encoded JSON values.
The requirement is now _mostly_ removed, to the extent that the removal of this behavior can be backwards compatible and as smooth as possible:
- Users whose secrets do not contain special characters will not have even noticed a change took place.
- Users who _do_ have secrets with special characters hopefully are checking their logs and will have seen a deprecation warning telling them to remove the URL-encoding.
- In a _select few rare cases_ where a user has a secret with a "non-idempotent" encoding, the user will have to reconfigure their `backend_kwargs` to have `are_secret_values_urlencoded` set.
I will admit that I was surprised at how smooth we could make the developer experience around migrating this behavior for the majority of use cases.
When you consider...
- How smooth migrating is (just remove the URL-encoding! In most cases you don't need to do anything else!), and
- How disruptive full removal of URL-encoding is (to people who have not migrated yet),
.. it makes me almost want to hold off on fully removing this behavior for a little while longer, just to make sure developers read their logs and see the deprecation warning.
### Behavior 2: `ast.literal_eval` for deserializing JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|Low|
It is hard to feel bad for anyone who is adversely impacted by this removal:
- This behavior should never have been introduced
- This behavior was never a documented behavior
- A reasonable and educated user will have known better than to rely on non-JSONs.
Providing a `DeprecationWarning` for this behavior was already going above and beyond, and we can definitely remove this soon.
### Behavior 3: `full_url_mode=False` is required for JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|Low|
This behavior is odd because the base API does not require such a thing-- whether it is a JSON or a URI can be inferred by checking whether the first character of the string is `{`.
Because it is possible to modify this behavior without introducing breaking changes, moving from _lack_ of optionality for the `full_url_mode` kwarg can be considered a feature addition.
Ultimately, we would want to switch from `full_url_mode: bool = True` to `full_url_mode: bool | None = None`.
In the proposed implementation, when `full_url_mode is None`, we just use whether the value starts with `{` to check if it is a JSON. _Only when_ `full_url_mode` is a `bool` would we explicitly raise errors e.g. if a JSON was given when `full_url_mode=True`, or a URI was given when `full_url_mode=False`.
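A sketch of that inference (illustrative only):
```python
def _looks_like_json(secret_value: str) -> bool:
    return secret_value.strip().startswith("{")


# With full_url_mode: Optional[bool] = None, infer when unset, enforce when set.
if full_url_mode is None:
    full_url_mode = not _looks_like_json(value)
```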
### Behavior 4: `get_conn_value()` must return URI
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated + Active (until at least October 11th)|Low|Medium|
The idea that the callback invoked by `get_connection()` (now called `get_conn_value()`, but previously called `get_conn_uri()`) can return a JSON is a new Airflow 2.3.0 behavior.
This behavior _**cannot**_ change until at least October 11th, because it is required for `2.2.0` backwards compatibility. Via Airflow's `README.md`:
> [...] by default we upgrade the minimum version of Airflow supported by providers to 2.3.0 in the first Provider's release after 11th of October 2022 (11th of October 2021 is the date when the first PATCHLEVEL of 2.2 (2.2.0) has been released.
Changing this behavior _after_ October 11th is just a matter of whether maintainers are OK with introduce a breaking change to the `2.2.x` folks who are relying on JSON secrets.
Note that right now, `get_conn_value()` is avoided _entirely_ when `full_url_mode=False` and `get_connection()` is called. So although there is a deprecation warning, it is almost never hit.
```python
if self.full_url_mode:
    return self._get_secret(self.connections_prefix, conn_id)
else:
    warnings.warn(
        f'In future versions, `{type(self).__name__}.get_conn_value` will return a JSON string when'
        ' full_url_mode is False, not a URI.',
        DeprecationWarning,
    )
### Behavior 5: Flexible field names via `_standardize_secret_keys()`
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|High|
This is one of those things that is very hard to remove. Removing it can be quite disruptive!
It is also a low priority to remove because unlike some other behaviors, it does not detract from `SecretsManagerBackend` being a "strict superset" with the base API.
Maybe it will just be the case that `SecretsManagerBackend` has this incompatibility with the base API going forward indefinitely?
Even still, we should consider the two following proposals:
1. The default field name of `user` should probably be switched to `login`, both in the `dict[str, list[str]]` used to implement the find+replace, and also in the documentation. I do not foresee any issues with doing this.
2. Remove documentation for this feature if we think it is "odd" enough to warrant discouraging users from seeking it out.
I think # 1 should be uncontroversial, but # 2 may be more controversial. I do not want to detract from my other points by staking out too firm an opinion on this one, so the best solution may simply be to not touch this for now. In fact, not touching this is exactly what I did with the original PR.
### Behavior 6: `are_secret_values_urlencoded` kwarg
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Pending Deprecation|Medium|Medium|
In the original discussion #25104, @potiuk told me to add something like this. I tried my best to avoid users needing to do this, hence the "idempotency" check. So only a few users actually need to specify this to assist in the migration of their secrets.
This was introduced as a "pending" deprecation because frankly, it is an odd behavior to have ever been URL-encoding these JSON values, and it only exists as a necessity to aid in migrating secrets. In our ideal end state, this doesn't exist.
Eventually when it comes time, removing this will not be _all_ that disruptive:
- This only impacts users who have `full_url_mode=False`
- This only impacts users with secrets that have non-idempotent encodings.
- `are_secret_values_urlencoded` should be set to `False`. Users should never be manually setting to `True`!
So we're talking about a small percent of a small minority of users who will ever see or need to set this `are_secret_values_urlencoded` kwarg. And even then, they should have set `are_secret_values_urlencoded` to `False` to assist in migrating.
# Proposal for Next Steps
All three steps require breaking changes.
## Proposed Step 1
- Remove: **Behavior 2: `ast.literal_eval` for deserializing JSON secrets**
- Remove: **Behavior 3: `full_url_mode=False` is required for JSON secrets**
- Remove: **Behavior 4: `get_conn_value()` must return URI**
- Note: Must wait until at least October 11th!
Right now the code is frankly a mess. I take some blame for that, as I did introduce the mess. But the mess is all inservice of backwards compatibility.
Removing Behavior 4 _**vastly**_ simplifies the code, and means we don't need to continue overriding the `get_connection()` method.
Removing Behavior 2 also simplifies the code, and is a fairly straightforward change.
Removing Behavior 3 is fully backwards compatible (so no deprecation warnings required) and provides a much nicer user experience overall.
The main thing blocking "Proposed Step 1" is the requirement that `2.2.x` be supported until at least October 11th.
### Alternative to Proposed Step 1
It _is_ possible to remove Behavior 2 and Behavior 3 without removing Behavior 4, and do so in a way that keeps `2.2.x` backwards compatibility.
It will still however leave a mess of the code.
I am not sure how eager the Amazon Provider Package maintainers are to keep backwards compatibility here. Between the 1 year window, plus the fact that this can only possibly impact people using both the `SecretsManagerBackend` _and_ who have `full_url_mode=False` turned on, it seems like not an incredibly huge deal to just scrap support for `2.2.x` here when the time comes. But it is not appropriate for me to decide this, so I must be clear and say that we _can_ start trimming away some of the odd behaviors _without_ breaking Airflow `2.2.x` backwards compatibility, and that the main benefit of breaking backwards compatibility is the source code becoming way simpler.
## Proposed Step 2
- Remove: **Behavior 1: URL-encoding JSON values**
- Switch status from Pending Deprecation to Deprecation: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Personally, I don't think we should rush on this. The reason I think we should take our time here is because the current way this works is surprisingly non-disruptive (i.e. no config changes required to migrate for most users), but fully removing the behavior may be pretty disruptive, especially to people who don't read their logs carefully.
### Alternative to Proposed Step 2
The alternative to this step is to combine this step with step 1, instead of holding off for a future date.
The main arguments in favor of combining with step 1 are:
- Reducing the number of version bumps that introduce breaking changes by simply combining all breaking changes into one step. It's unclear how many users even use `full_url_mode` and it is arguable that we're being too delicate with what was arguably a semi-experimental and odd feature to begin with; it's only become less experimental by the stroke of luck that Airflow 2.3.0 supports JSON-encoded secrets in the base API.
- A sort of "rip the BandAid" ethos, or a "get it done and over with" ethos. I don't think this is very nice to users, but I see the appeal of not keeping odd code around for a while.
## Proposed Step 3
- Remove: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Once URL-encoding is no longer happening for JSON secrets, and all non-idempotent secrets have been cast or explicitly handled, and we've deprecated everything appropriately, we can finally remove `are_secret_values_urlencoded`.
# Conclusion
The original deprecations introduced were under-discussed, but hopefully now you both know where I was coming from, and also agree with the changes I made.
If you _disagree_ with the deprecations that I introduced, I would also like to hear about that and why, and we can see about rolling any of them back.
Please let me know what you think about the proposed steps for changes to the code base.
Please also let me know what you think an appropriate schedule is for introducing the changes, and whether you think I should consider one of the alternatives (both considered and otherwise) to the steps I outlined in the penultimate section.
# Other stuff
### Use case/motivation
(See above)
### Related issues
- #25432
- #25104
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26571 | https://github.com/apache/airflow/pull/27920 | c8e348dcb0bae27e98d68545b59388c9f91fc382 | 8f0265d0d9079a8abbd7b895ada418908d8b9909 | "2022-09-21T18:31:22Z" | python | "2022-12-05T19:21:54Z" |