status (stringclasses, 1 value) | repo_name (string, 9-24 chars) | repo_url (string, 28-43 chars) | issue_id (int64, 1-104k) | updated_files (string, 8-1.76k chars) | title (string, 4-369 chars) | body (string, 0-254k chars, nullable) | issue_url (string, 37-56 chars) | pull_url (string, 37-54 chars) | before_fix_sha (string, 40 chars) | after_fix_sha (string, 40 chars) | report_datetime (timestamp[ns, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC]) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 17,437 | ["docs/apache-airflow/faq.rst"] | It's too slow to recognize new dag file when there are a log of dags files |
**Description**
There are 5000+ dag files in our production env. It takes almost 10 minutes to recognize a new dag file when one is added.
There are 16 CPU cores in the scheduler machine. The airflow version is 2.1.0.
**Use case / motivation**
I think there should be a feature to support recognizing new dag files or recently modified dag files faster.
**Are you willing to submit a PR?**
Maybe will.
**Related Issues**
| https://github.com/apache/airflow/issues/17437 | https://github.com/apache/airflow/pull/17519 | 82229b363d53db344f40d79c173421b4c986150c | 7dfc52068c75b01a309bf07be3696ad1f7f9b9e2 | 2021-08-05T09:45:52Z | python | 2021-08-10T10:05:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,422 | ["airflow/hooks/dbapi.py", "airflow/providers/postgres/hooks/postgres.py", "tests/hooks/test_dbapi.py"] | AttributeError: 'PostgresHook' object has no attribute 'schema' |
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):1.21
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):Linux workspace
- **Install tools**:
- **Others**:
**What happened**:
Running PostgresOperator errors out with 'PostgresHook' object has no attribute 'schema'.
I tested it as well with the code from PostgresOperator tutorial in https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html
This has been happening since the upgrade of apache-airflow-providers-postgres to version 2.1.0.
```
*** Reading local file: /tmp/logs/postgres_operator_dag/create_pet_table/2021-08-04T15:32:40.520243+00:00/2.log
[2021-08-04 15:57:12,429] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: postgres_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [queued]>
[2021-08-04 15:57:12,440] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: postgres_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [queued]>
[2021-08-04 15:57:12,440] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-08-04 15:57:12,440] {taskinstance.py:1068} INFO - Starting attempt 2 of 2
[2021-08-04 15:57:12,440] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-08-04 15:57:12,457] {taskinstance.py:1087} INFO - Executing <Task(PostgresOperator): create_pet_table> on 2021-08-04T15:32:40.520243+00:00
[2021-08-04 15:57:12,461] {standard_task_runner.py:52} INFO - Started process 4692 to run task
[2021-08-04 15:57:12,466] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', '***_operator_dag', 'create_pet_table', '2021-08-04T15:32:40.520243+00:00', '--job-id', '6', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/test_dag.py', '--cfg-path', '/tmp/tmp2mez286k', '--error-file', '/tmp/tmpgvc_s17j']
[2021-08-04 15:57:12,468] {standard_task_runner.py:77} INFO - Job 6: Subtask create_pet_table
[2021-08-04 15:57:12,520] {logging_mixin.py:104} INFO - Running <TaskInstance: ***_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [running]> on host 5995a11eafd1
[2021-08-04 15:57:12,591] {taskinstance.py:1280} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=***_operator_dag
AIRFLOW_CTX_TASK_ID=create_pet_table
AIRFLOW_CTX_EXECUTION_DATE=2021-08-04T15:32:40.520243+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-08-04T15:32:40.520243+00:00
[2021-08-04 15:57:12,591] {postgres.py:68} INFO - Executing:
CREATE TABLE IF NOT EXISTS pet (
pet_id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL,
pet_type VARCHAR NOT NULL,
birth_date DATE NOT NULL,
OWNER VARCHAR NOT NULL);
[2021-08-04 15:57:12,608] {base.py:69} INFO - Using connection to: id: ***_default.
[2021-08-04 15:57:12,610] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/postgres/operators/postgres.py", line 70, in execute
self.hook.run(self.sql, self.autocommit, parameters=self.parameters)
File "/usr/local/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 177, in run
with closing(self.get_conn()) as conn:
File "/usr/local/lib/python3.8/site-packages/airflow/providers/postgres/hooks/postgres.py", line 97, in get_conn
dbname=self.schema or conn.schema,
AttributeError: 'PostgresHook' object has no attribute 'schema'
[2021-08-04 15:57:12,612] {taskinstance.py:1524} INFO - Marking task as FAILED. dag_id=***_operator_dag, task_id=create_pet_table, execution_date=20210804T153240, start_date=20210804T155712, end_date=20210804T155712
[2021-08-04 15:57:12,677] {local_task_job.py:151} INFO - Task exited with return code 1
```
**What you expected to happen**:
**How to reproduce it**:
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/17422 | https://github.com/apache/airflow/pull/17423 | d884a3d4aa65f65aca2a62f42012e844080a31a3 | 04b6559f8a06363a24e70f6638df59afe43ea163 | 2021-08-04T16:06:47Z | python | 2021-08-07T17:51:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,381 | ["airflow/executors/celery_executor.py", "tests/executors/test_celery_executor.py"] | Bug in the logic of Celery executor for checking stalled adopted tasks? | **Apache Airflow version**: 2.1.1
**What happened**:
The `_check_for_stalled_adopted_tasks` method breaks on the first item which satisfies the condition:
https://github.com/apache/airflow/blob/2.1.1/airflow/executors/celery_executor.py#L353
From the comment/logic, it looks like the idea is to optimize this piece of code; however, it is not evident that the `self.adopted_task_timeouts` object maintains its entries sorted by timestamp. This results in unstable scheduler behaviour, which means it sometimes may not resend tasks to Celery (because it skips them).
Confirmed with Airflow 2.1.1
**What you expected to happen**:
Deterministic behaviour of scheduler in this case
**How to reproduce it**:
These are the steps to reproduce adoption of tasks. To reproduce the unstable behaviour, you may need to trigger some additional DAGs in the process.
- set [core]parallelism to 30
- trigger a DAG with concurrency==100 and 30 tasks, each running for 30 minutes (e.g. sleep 1800)
- 18 of them will be running, others will be in queued state
- restart scheduler
- observe "Adopted tasks were still pending after 0:10:00, assuming they never made it to celery and clearing:"
- tasks will be failed and marked as "up for retry"
The important thing is that scheduler has to restart once tasks get to the queue, so that it will adopt queued tasks. | https://github.com/apache/airflow/issues/17381 | https://github.com/apache/airflow/pull/18208 | e7925d8255e836abd8912783322d61b3a9ff657a | 9a7243adb8ec4d3d9185bad74da22e861582ffbe | 2021-08-02T14:33:22Z | python | 2021-09-15T13:36:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,373 | ["airflow/cli/cli_parser.py", "airflow/executors/executor_loader.py", "tests/cli/conftest.py", "tests/cli/test_cli_parser.py"] | Allow using default celery commands for custom Celery executors subclassed from existing | **Description**
Allow custom executors subclassed from existing (CeleryExecutor, CeleryKubernetesExecutor, etc.) to use default CLI commands to start workers or flower monitoring.
**Use case / motivation**
Currently, users who decide to roll their own custom Celery-based executor cannot use default commands (i.e. `airflow celery worker`) even though it's built on top of existing CeleryExecutor. If they try to, they'll receive the following error: `airflow command error: argument GROUP_OR_COMMAND: celery subcommand works only with CeleryExecutor, your current executor: custom_package.CustomCeleryExecutor, see help above.`
One workaround for this is to create a custom entrypoint script for the worker/flower containers/processes, which still uses the same Celery app as CeleryExecutor. This leads to unnecessary maintenance of this entrypoint script.
I'd suggest two ways of fixing that:
- Check if custom executor is subclassed from Celery executor (which might lead to errors, if custom executor is used to access other celery app, which might be a proper reason for rolling custom executor)
- Store `app` as attribute of Celery-based executors and match the one provided by custom executor with the default one
**Related Issues**
N/A | https://github.com/apache/airflow/issues/17373 | https://github.com/apache/airflow/pull/18189 | d3f445636394743b9298cae99c174cb4ac1fc30c | d0cea6d849ccf11e2b1e55d3280fcca59948eb53 | 2021-08-02T08:46:59Z | python | 2021-12-04T15:19:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,368 | ["airflow/providers/slack/example_dags/__init__.py", "airflow/providers/slack/example_dags/example_slack.py", "airflow/providers/slack/operators/slack.py", "docs/apache-airflow-providers-slack/index.rst", "tests/providers/slack/operators/test.csv", "tests/providers/slack/operators/test_slack.py"] | Add example DAG for SlackAPIFileOperator | The SlackAPIFileOperator is not straight forward and it might be better to add an example DAG to demonstrate the usage.
| https://github.com/apache/airflow/issues/17368 | https://github.com/apache/airflow/pull/17400 | c645d7ac2d367fd5324660c616618e76e6b84729 | 2935be19901467c645bce9d134e28335f2aee7d8 | 2021-08-01T23:15:40Z | python | 2021-08-16T16:16:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,348 | ["airflow/providers/google/cloud/example_dags/example_mlengine.py", "airflow/providers/google/cloud/operators/mlengine.py", "tests/providers/google/cloud/operators/test_mlengine.py"] | Add support for hyperparameter tuning on GCP Cloud AI | @darshan-majithiya had opened #15429 to add the hyperparameter tuning PR but it's gone stale. I'm adding this issue to see if they want to pick it back up, or if not, if someone wants to pick up where they left off in the spirit of open source 😄 | https://github.com/apache/airflow/issues/17348 | https://github.com/apache/airflow/pull/17790 | 87769db98f963338855f59cfc440aacf68e008c9 | aa5952e58c58cab65f49b9e2db2adf66f17e7599 | 2021-07-30T18:50:32Z | python | 2021-08-27T18:12:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,340 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "tests/providers/apache/livy/operators/test_livy.py"] | Retrieve session logs when using Livy Operator | **Description**
The Airflow logs generated by the Livy operator currently only state the status of the submitted batch. To view the logs from the job itself, one must go separately to the session logs. I think that Airflow should have the option (possibly on by default) that retrieves the session logs after the batch reaches a terminal state if a `polling_interval` has been set.
**Use case / motivation**
When debugging a task submitted via Livy, the session logs are the first place to check. For most other tasks, including SparkSubmitOperator, viewing the first-check logs can be done in the Airflow UI, but for Livy you must go externally or write a separate task to retrieve them.
**Are you willing to submit a PR?**
I don't yet have a good sense of how challenging this will be to set up and test. I can try but if anyone else wants to go for it, don't let my attempt stop you.
**Related Issues**
None I could find
| https://github.com/apache/airflow/issues/17340 | https://github.com/apache/airflow/pull/17393 | d04aa135268b8e0230be3af6598a3b18e8614c3c | 02a33b55d1ef4d5e0466230370e999e8f1226b30 | 2021-07-30T13:54:00Z | python | 2021-08-20T21:49:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,326 | ["tests/jobs/test_local_task_job.py"] | `TestLocalTaskJob.test_mark_success_no_kill` fails consistently on MSSQL | **Apache Airflow version**: main
**Environment**: CI
**What happened**:
The `TestLocalTaskJob.test_mark_success_no_kill` test no longer passes on MSSQL. I initially thought it was a race condition, but even after 5 minutes the TI wasn't running.
https://github.com/apache/airflow/blob/36bdfe8d0ef7e5fc428434f8313cf390ee9acc8f/tests/jobs/test_local_task_job.py#L301-L306
I've tracked down that the issue was introduced with #16301 (cc @ephraimbuddy), but I haven't really dug into why.
**How to reproduce it**:
`./breeze --backend mssql tests tests/jobs/test_local_task_job.py`
```
_____________________________________________________________________________________ TestLocalTaskJob.test_mark_success_no_kill _____________________________________________________________________________________
self = <tests.jobs.test_local_task_job.TestLocalTaskJob object at 0x7f54652abf10>
def test_mark_success_no_kill(self):
"""
Test that ensures that mark_success in the UI doesn't cause
the task to fail, and that the task exits
"""
dagbag = DagBag(
dag_folder=TEST_DAG_FOLDER,
include_examples=False,
)
dag = dagbag.dags.get('test_mark_success')
task = dag.get_task('task1')
session = settings.Session()
dag.clear()
dag.create_dagrun(
run_id="test",
state=State.RUNNING,
execution_date=DEFAULT_DATE,
start_date=DEFAULT_DATE,
session=session,
)
ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
ti.refresh_from_db()
job1 = LocalTaskJob(task_instance=ti, ignore_ti_state=True)
process = multiprocessing.Process(target=job1.run)
process.start()
for _ in range(0, 50):
if ti.state == State.RUNNING:
break
time.sleep(0.1)
ti.refresh_from_db()
> assert State.RUNNING == ti.state
E AssertionError: assert <TaskInstanceState.RUNNING: 'running'> == None
E + where <TaskInstanceState.RUNNING: 'running'> = State.RUNNING
E + and None = <TaskInstance: test_mark_success.task1 2016-01-01 00:00:00+00:00 [None]>.state
tests/jobs/test_local_task_job.py:306: AssertionError
------------------------------------------------------------------------------------------------ Captured stderr call ------------------------------------------------------------------------------------------------
INFO [airflow.models.dagbag.DagBag] Filling up the DagBag from /opt/airflow/tests/dags
INFO [root] class_instance type: <class 'unusual_prefix_5d280a9b385120fec3c40cfe5be04e2f41b6b5e8_test_task_view_type_check.CallableClass'>
INFO [airflow.models.dagbag.DagBag] File /opt/airflow/tests/dags/test_zip.zip:file_no_airflow_dag.py assumed to contain no DAGs. Skipping.
Process Process-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 364, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 809, in _checkout
result = pool._dialect.do_ping(fairy.connection)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 575, in do_ping
cursor.execute(self._dialect_specific_select_one)
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The server failed to resume the transaction. Desc:3400000012. (3971) (SQLExecDirectW)')
(truncated)
``` | https://github.com/apache/airflow/issues/17326 | https://github.com/apache/airflow/pull/17334 | 0f97b92c1ad15bd6d0a90c8dee8287886641d7d9 | 7bff44fba83933de1b420fbb4fc3655f28769bd0 | 2021-07-29T22:15:54Z | python | 2021-07-30T14:38:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,316 | ["scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py"] | Docs validation function - Add meaningful errors | The following function just prints right/left error which is not very meaningful and very difficult to troubleshoot.
```python
def check_doc_files(yaml_files: Dict[str, Dict]):
print("Checking doc files")
current_doc_urls = []
current_logo_urls = []
for provider in yaml_files.values():
if 'integrations' in provider:
current_doc_urls.extend(
guide
for guides in provider['integrations']
if 'how-to-guide' in guides
for guide in guides['how-to-guide']
)
current_logo_urls.extend(
integration['logo'] for integration in provider['integrations'] if 'logo' in integration
)
if 'transfers' in provider:
current_doc_urls.extend(
op['how-to-guide'] for op in provider['transfers'] if 'how-to-guide' in op
)
expected_doc_urls = {
"/docs/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators/**/*.rst", recursive=True)
if not f.endswith("/index.rst") and '/_partials' not in f
}
expected_doc_urls |= {
"/docs/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators.rst", recursive=True)
}
expected_logo_urls = {
"/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/integration-logos/**/*", recursive=True)
if os.path.isfile(f)
}
try:
assert_sets_equal(set(expected_doc_urls), set(current_doc_urls))
assert_sets_equal(set(expected_logo_urls), set(current_logo_urls))
except AssertionError as ex:
print(ex)
sys.exit(1)
``` | https://github.com/apache/airflow/issues/17316 | https://github.com/apache/airflow/pull/17322 | 213e337f57ef2ef9a47003214f40da21f4536b07 | 76e6315473671b87f3d5fe64e4c35a79658789d3 | 2021-07-29T14:58:56Z | python | 2021-07-30T19:18:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,291 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Quarantine test_retry_still_in_executor | The `TestSchedulerJob.test_retry_still_in_executor` fails occasionally and should be quarantined.
```
________________ TestSchedulerJob.test_retry_still_in_executor _________________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob object at 0x7f4c9031f128>
def test_retry_still_in_executor(self):
"""
Checks if the scheduler does not put a task in limbo, when a task is retried
but is still present in the executor.
"""
executor = MockExecutor(do_update=False)
dagbag = DagBag(dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"), include_examples=False)
dagbag.dags.clear()
dag = DAG(dag_id='test_retry_still_in_executor', start_date=DEFAULT_DATE, schedule_interval="@once")
dag_task1 = BashOperator(
task_id='test_retry_handling_op', bash_command='exit 1', retries=1, dag=dag, owner='airflow'
)
dag.clear()
dag.is_subdag = False
with create_session() as session:
orm_dag = DagModel(dag_id=dag.dag_id)
orm_dag.is_paused = False
session.merge(orm_dag)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
@mock.patch('airflow.dag_processing.processor.DagBag', return_value=dagbag)
def do_schedule(mock_dagbag):
# Use a empty file since the above mock will return the
# expected DAGs. Also specify only a single file so that it doesn't
# try to schedule the above DAG repeatedly.
self.scheduler_job = SchedulerJob(
num_runs=1, executor=executor, subdir=os.path.join(settings.DAGS_FOLDER, "no_dags.py")
)
self.scheduler_job.heartrate = 0
self.scheduler_job.run()
do_schedule()
with create_session() as session:
ti = (
session.query(TaskInstance)
.filter(
TaskInstance.dag_id == 'test_retry_still_in_executor',
TaskInstance.task_id == 'test_retry_handling_op',
)
.first()
)
> ti.task = dag_task1
E AttributeError: 'NoneType' object has no attribute 'task'
tests/jobs/test_scheduler_job.py:2514: AttributeError
``` | https://github.com/apache/airflow/issues/17291 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | 2021-07-28T17:40:09Z | python | 2021-12-13T17:55:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,276 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | Make platform version as independent parameter of ECSOperator | Currently `ECSOperator` propagates the `platform_version` parameter either when `launch_type` is `FARGATE` or when a `capacity_provider_strategy` parameter is provided. The case with `capacity_provider_strategy` is wrong: a capacity provider strategy can contain a reference to an EC2 capacity provider, and if it is an EC2 capacity provider, then `platform_version` should not be propagated to the `boto3` API call. It is not possible to do so with the current logic of `ECSOperator`, because `platform_version` is always propagated in that case and `boto3` doesn't accept `platform_version` as `None`. So, in order to fix that, `platform_version` should be an independent parameter, propagated only when it is specified, regardless of which `launch_type` or `capacity_provider_strategy` is specified. That should also simplify the logic of `ECSOperator`.
I will prepare a PR to fix that. | https://github.com/apache/airflow/issues/17276 | https://github.com/apache/airflow/pull/17281 | 5c1e09cafacea922b9281e901db7da7cadb3e9be | 71088986f12be3806d48e7abc722c3f338f01301 | 2021-07-27T20:44:50Z | python | 2021-08-02T08:05:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,255 | ["airflow/www/extensions/init_security.py"] | x_frame_enabled logic broken in Airflow 2 | **Apache Airflow version**: 2.1.0
**Environment**:
- **Cloud provider or hardware configuration**: Azure
- **OS** (e.g. from /etc/os-release): RHEL8.3
- **Kernel** (e.g. `uname -a`): Linux 7db15dac176b 5.10.25-linuxkit # 1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
**What happened**:
When x_frame_enabled is default or set to true, embedding Airflow is not working since X-Frame-Options is set to "DENY"
**What you expected to happen**:
When x_frame_enabled is default or set to true, embedding Airflow is working
**How to reproduce it**:
leave x_frame_enabled to default or set it to true and try to embed it in an iFrame for instance and it will not work.
Setting it to "False" is the current workaround since the if condition does not seem to be correct.
**Anything else we need to know**:
broken code in Airflow 2:
https://github.com/apache/airflow/blob/080132254b06127a6e2e8a2e23ceed6a7859d498/airflow/www/extensions/init_security.py#L26
if x_frame_enabled is enabled it will apply the DENY header, which was not the case in Airflow 1:
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/www_rbac/app.py#L274
since if was only setting the header in case it is NOT enabled. | https://github.com/apache/airflow/issues/17255 | https://github.com/apache/airflow/pull/19491 | d5cafc901158ec4d10f86f6d0c5a4faba23bc41e | 084079f446570ba43114857ea1a54df896201419 | 2021-07-27T10:21:53Z | python | 2022-01-22T23:09:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,249 | ["dev/README_RELEASE_HELM_CHART.md", "dev/chart/build_changelog_annotations.py"] | Add Changelog Annotation to Helm Chart | Our Helm Chart misses changelog annotation:
https://artifacthub.io/docs/topics/annotations/helm/

| https://github.com/apache/airflow/issues/17249 | https://github.com/apache/airflow/pull/20555 | 485ff6cc64d8f6a15d8d6a0be50661fe6d04b2d9 | c56835304318f0695c79ac42df7a97ad05ccd21e | 2021-07-27T07:24:22Z | python | 2021-12-29T21:24:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,240 | ["airflow/operators/bash.py", "tests/operators/test_bash.py"] | bash operator overrides environment variables instead of updating them | **Apache Airflow version**: 1.10.15
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
`Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-eks-7737de", GitCommit:"7737de131e58a68dda49cdd0ad821b4cb3665ae8", GitTreeState:"clean", BuildDate:"2021-03-10T21:33:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}`
**Environment**:
- **Cloud provider or hardware configuration**: EKS
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-web-84bf9f8865-s5xxg 4.14.209-160.335.amzn2.x86_64 #1 SMP Wed Dec 2 23:31:46 UTC 2020 x86_64
- **Install tools**: helm (fork of the official chart)
- **Others**:
**What happened**:
We started using the `env` argument of the built-in bash_operator. The goal was to add a single variable to be used as part of the command, but once the argument was used, it caused all of the other environment variables to be ignored.
**What you expected to happen**:
We expected that any environment variable we add via this operator would be added or updated.
Our expectations of this argument were wrong.
**How to reproduce it**:
```python
import os
os.environ["foo"] = "bar"
from datetime import datetime
from airflow import DAG
from airflow.models import TaskInstance
from airflow.operators.bash_operator import BashOperator

dag = DAG(dag_id='anydag', start_date=datetime.now())

# unsuccessful example:
task = BashOperator(bash_command='if [ -z "$foo" ]; then exit 1; fi', env={"foo1": "bar1"}, dag=dag, task_id='test')
ti = TaskInstance(task=task, execution_date=datetime.now())
result = task.execute(ti.get_template_context())

# successful example:
task = BashOperator(bash_command='if [ -z "$foo" ]; then exit 1; fi', dag=dag, task_id='test1')
ti = TaskInstance(task=task, execution_date=datetime.now())
result = task.execute(ti.get_template_context())
```
**Anything else we need to know**:
this happens every time it runs.
| https://github.com/apache/airflow/issues/17240 | https://github.com/apache/airflow/pull/18944 | b2045d6d1d4d2424c02d7d9b40520440aa4e5070 | d4a3d2b1e7cf273caaf94463cbfcbcdb77bfc338 | 2021-07-26T21:10:14Z | python | 2021-10-13T19:28:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,235 | ["airflow/www/views.py", "tests/www/views/test_views_connection.py"] | Connection inputs in Extra field are overwritten by custom form widget fields | **Apache Airflow version**:
2.1.0
**What happened**:
There are several hooks that still use optional parameters from the classic `Extra` field. However, when creating the connection the `Extra` field is overwritten with values from the custom fields that are included in the form. Because the `Extra` field is overwritten, these optional parameters cannot be used by the hook.
For example, in the `AzureDataFactoryHook`, if `resource_group_name` or `factory_name` is not provided when initializing the hook, it defaults to the value specified in the connection extras. Using the Azure Data Factory connection form, here is the initial connection submission:

After saving the connection, the `Extra` field is overwritten with the custom fields that use "extras" under the hood:

**What you expected to happen**:
Wavering slightly but I would have initially expected that the `Extra` field wasn't overwritten but updated with both the custom field "extras" plus the originally configured values in the `Extra` field. However, a better UX would be that the values used in the `Extra` field should be separate custom fields for these hooks and the `Extra` field is hidden. Perhaps it's even both?
**How to reproduce it**:
Install either the Microsoft Azure or Snowflake providers, attempt to create a connection for either the Snowflake, Azure Data Factory, Azure Container Volume, or Azure types with the `Extra` field populated prior to saving the form.
**Anything else we need to know**:
Happy to submit PRs to fix this issue. 🚀
| https://github.com/apache/airflow/issues/17235 | https://github.com/apache/airflow/pull/17269 | 76e6315473671b87f3d5fe64e4c35a79658789d3 | 1941f9486e72b9c70654ea9aa285d566239f6ba1 | 2021-07-26T17:16:23Z | python | 2021-07-31T05:35:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,224 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [QUARANTINE] The test_scheduler_verify_pool_full test is quarantined | The test fails occasionally with the below stacktrace, so I am marking this as Quarantined.
```
_______________ TestSchedulerJob.test_scheduler_verify_pool_full _______________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob object at 0x7fbaaaba0f40>
def test_scheduler_verify_pool_full(self):
"""
Test task instances not queued when pool is full
"""
dag = DAG(dag_id='test_scheduler_verify_pool_full', start_date=DEFAULT_DATE)
BashOperator(
task_id='dummy',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_pool_full',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_pool_full', slots=1)
session.add(pool)
session.flush()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
SerializedDagModel.write_dag(dag)
self.scheduler_job = SchedulerJob(executor=self.null_exec)
self.scheduler_job.processor_agent = mock.MagicMock()
# Create 2 dagruns, which will create 2 task instances.
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=DEFAULT_DATE,
state=State.RUNNING,
)
> self.scheduler_job._schedule_dag_run(dr, session)
tests/jobs/test_scheduler_job.py:2108:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/jobs/scheduler_job.py:1020: in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
airflow/utils/session.py:67: in wrapper
return func(*args, **kwargs)
airflow/models/dagbag.py:186: in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <airflow.models.dagbag.DagBag object at 0x7fbaaab95100>
dag_id = 'test_scheduler_verify_pool_full'
session = <sqlalchemy.orm.session.Session object at 0x7fbaaad9d0a0>
def _add_dag_from_db(self, dag_id: str, session: Session):
"""Add DAG to DagBag from DB"""
from airflow.models.serialized_dag import SerializedDagModel
row = SerializedDagModel.get(dag_id, session)
if not row:
> raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
E airflow.exceptions.SerializedDagNotFound: DAG 'test_scheduler_verify_pool_full' not found in serialized_dag table
``` | https://github.com/apache/airflow/issues/17224 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | 2021-07-26T09:29:03Z | python | 2021-12-13T17:55:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,223 | ["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"] | run_as_user shows none even when default_impersonation set in the config | **Apache Airflow version**:
2.1.2
**Environment**:
- **Cloud provider or hardware configuration**: AWS ECS
- **OS** (e.g. from /etc/os-release): debian
- **Kernel** (e.g. `uname -a`): Linux
- **Install tools**:
- **Others**:
**What happened**:
When setting default_impersonation in airflow.cfg to `testuser` and not passing run_as_user in the default params or operator parameters, the UI shows run_as_user as None (it is using testuser to execute the dag though):

It is also throwing errors in the logs, and some of the dags are failing:
```
[2021-07-26 09:53:41,929: ERROR/ForkPoolWorker-7] Failed to execute task PID of job runner does not match.
--
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/executors/celery_executor.py", line 117, in _execute_in_fork
args.func(args)
File "/usr/local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/usr/local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "/usr/local/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/usr/local/lib/python3.7/site-packages/airflow/jobs/local_task_job.py", line 131, in _execute
self.heartbeat()
File "/usr/local/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 226, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/jobs/local_task_job.py", line 195, in heartbeat_callback
raise AirflowException("PID of job runner does not match")
airflow.exceptions.AirflowException: PID of job runner does not match
```
**What you expected to happen**:
Dags not to fail and logs not to show these error messages.
**How to reproduce it**:
Set default_impersonation in airflow.cfg and don't pass run_as_user in the task.
Run a dag with multiple tasks (more than 1) that run for more than 10 seconds.
**Anything else we need to know**:
I suspect that the if-else statement [here](https://github.com/apache/airflow/blob/main/airflow/jobs/local_task_job.py#L196) is causing the issue
Also, I am trying to set run_as_user for all dags to be `testuser` using a cluster policy; doing that also gives the same error:
```
def task_policy(task):
task.run_as_user = 'testuser'
```
| https://github.com/apache/airflow/issues/17223 | https://github.com/apache/airflow/pull/17229 | 4e2a94c6d1bde5ddf2aa0251190c318ac22f3b17 | 40419dd371c7be53e6c8017b0c4d1bc7f75d0fb6 | 2021-07-26T09:23:01Z | python | 2021-07-28T15:00:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,198 | ["airflow/providers/google/cloud/transfers/bigquery_to_mysql.py", "tests/providers/google/cloud/transfers/test_bigquery_to_mysql.py"] | BigQueryToMySqlOperator uses deprecated method and doesn't use keyword arguments | **Apache Airflow version**: 2.0+
**Apache Airflow provider and version**: apache-airflow-providers-google==2.2.0
**What happened**:
My `BigQueryToMySqlOperator` task always fail with the following error message.
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1113, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1287, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1317, in _execute_task
result = task_copy.execute(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py", line 166, in execute
for rows in self._bq_get_data():
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py", line 138, in _bq_get_data
response = cursor.get_tabledata(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2508, in get_tabledata
return self.hook.get_tabledata(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 1284, in get_tabledata
rows = self.list_rows(dataset_id, table_id, max_results, selected_fields, page_token, start_index)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 412, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: You must use keyword arguments in this methods rather than positional
```
**What you expected to happen**:
I expect the task to move data from BigQuery to MySql.
**How to reproduce it**:
You will need a working gcp connection and mysql connection as well as some sample data to test it out.
An example of the BigQueryToMySqlOperator can be pull from the [Operator description](https://github.com/apache/airflow/blob/2.1.2/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py
).
```python
transfer_data = BigQueryToMySqlOperator(
task_id='task_id',
dataset_table='origin_bq_table',
mysql_table='dest_table_name',
replace=True,
)
```
**Anything else we need to know**:
The operator is having this issue because the [cursor](https://github.com/apache/airflow/blob/providers-google/2.2.0/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py#L138) in BigQueryToMySqlOperator calls [get_tabledata](https://github.com/apache/airflow/blob/providers-google/2.2.0/airflow/providers/google/cloud/hooks/bigquery.py#L1260) which [calls list_rows with positional arguments](https://github.com/apache/airflow/blob/providers-google/2.2.0/airflow/providers/google/cloud/hooks/bigquery.py#L1284).
Calling list_rows with positional arguments triggers the function wrapper [fallback_to_default_project_id](https://github.com/apache/airflow/blob/providers-google/2.2.0/airflow/providers/google/common/hooks/base_google.py#L398), which does **NOT** allow for positional arguements.
| https://github.com/apache/airflow/issues/17198 | https://github.com/apache/airflow/pull/18073 | 2767781b880b0fb03d46950c06e1e44902c25a7c | cfb602a33dc1904e2f51d74fa711722c8b702726 | 2021-07-23T21:08:14Z | python | 2021-09-09T23:21:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,193 | ["airflow/providers_manager.py"] | Custom connection cannot use SelectField | When you have Custom Connection and use SelectField, Connection screen becomes unusable (see #17064)
We should detect the situation and throw an exception when Provider Info is initialized. | https://github.com/apache/airflow/issues/17193 | https://github.com/apache/airflow/pull/17194 | 504294e4c231c4fe5b81c37d0a04c0832ce95503 | 8e94c1c64902b97be146cdcfe8b721fced0a283b | 2021-07-23T15:59:47Z | python | 2021-07-23T18:26:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,192 | ["airflow/providers/salesforce/hooks/salesforce.py", "docs/apache-airflow-providers-salesforce/connections/salesforce.rst", "tests/providers/salesforce/hooks/test_salesforce.py"] | Adding additional login support for SalesforceHook | **Description**
Currently the `SalesforceHook` only supports authentication via username, password, and security token. The Salesforce API used under the hood supports a few other authentication types:
- Direct access via a session ID
- IP filtering
- JWT access
**Use case / motivation**
The `SalesforceHook` should support all authentication types supported by the underlying API.
**Are you willing to submit a PR?**
Yes 🚀
**Related Issues**
#8766
| https://github.com/apache/airflow/issues/17192 | https://github.com/apache/airflow/pull/17399 | bb52098cd685497385801419a1e0a59d6a0d7283 | 5c0e98cc770b4f055dbd1c0b60ccbd69f3166da7 | 2021-07-23T15:27:10Z | python | 2021-08-06T13:20:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,186 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Fix template_ext processing for Kubernetes Pod Operator | The "template_ext" mechanism is useful for automatically loading and jinja-processing files which are specified in parameters of Operators. However this might lead to certain problems for example (from slack conversation):
```
The templated_fields in KubernetesPodOperator seems cause the error airflow jinja2.exceptions.TemplateNotFound when some character in the column, e.g / in env_vars.
The code got error.
env_vars=[
k8s.V1EnvVar(
name="GOOGLE_APPLICATION_CREDENTIALS",
value="/var/secrets/google/service-account.json",
),
```
I believe the behaviour changed compared to the not-so-distant past. Some of the changes around processing parameters with Jinja recursively caused this template behaviour to also be applied to nested parameters like the one above.
There were also several discussions and user confusion with this behaviour: #15942, #16922
There are two ways we could improve the situation:
1) limit the "template_extension" resolution to only direct string kwargs passed to the operator (I think this is no brainer and we should do it)
2) propose some "escaping" mechanism, where you could either disable template_extension processing entirely or somehow mark the parameters that should not be treated as templates.
Here I have several proposals:
a) we could add "skip_template_ext_processing" or similar parameter in BaseOperator <- I do not like it as many operators rely on this behaviour for a good reason
b) we could add "template_ext" parameter in the operator that could override the original class-level-field <- I like this one a lot
c) we could add "disable_template_ext_pattern" (str) parameter where we could specify list of regexp's where we could filter out only specific values <- this one will allow to disable template_ext much more "selectively" - only for certain parameters.
UPDATE: It only affects the Kubernetes Pod Operator due to its recursive behaviour and should be fixed there.
| https://github.com/apache/airflow/issues/17186 | https://github.com/apache/airflow/pull/17760 | 3b99225a4d3dc9d11f8bd80923e68368128adf19 | 73d2b720e0c79323a29741882a07eb8962256762 | 2021-07-23T07:55:07Z | python | 2021-08-21T13:57:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,168 | ["airflow/providers/amazon/aws/example_dags/example_local_to_s3.py", "airflow/providers/amazon/aws/transfers/local_to_s3.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"] | Add LocalFilesystemtoS3Operator | **Description**
Currently, an S3Hook exists that allows transfer of files to S3 via `load_file()`, however there is no operator associated with it. The S3Load Operator would wrap the S3 Hook, so it is not used directly.
**Use case / motivation**
Since uploading a local file to S3 using the S3 Hook requires writing a Python task with the same functionality anyway, this operator could reduce a lot of redundant boilerplate code and standardize the local-file-to-S3 load process.
**Are you willing to submit a PR?**
Yes
**Related Issues**
Not that I could find.
| https://github.com/apache/airflow/issues/17168 | https://github.com/apache/airflow/pull/17382 | 721d4e7c60cbccfd064572f16c3941f41ff8ab3a | 1632c9f519510ff218656bbc1554c80cb158e85a | 2021-07-22T18:02:17Z | python | 2021-08-14T15:26:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,155 | ["airflow/contrib/operators/gcs_to_bq.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | Typo in GoogleCloudStorageToBigQueryOperator deprecation message | **Apache Airflow version**: latest (main branch)
**What happened**:
There is a little typo in this line of code: https://github.com/apache/airflow/blob/6172b0e6c796d1d757ac1806b671ee168c031b1e/airflow/contrib/operators/gcs_to_bq.py#L40
**What you expected to happen**:
It should be *gcs_to_bigquery* instead of *gcs_to_bq*:
```
Please use `airflow.providers.google.cloud.transfers.gcs_to_bigquery.GCSToBigQueryOperator`.""",
```
| https://github.com/apache/airflow/issues/17155 | https://github.com/apache/airflow/pull/17159 | 759c76d7a5d23cc6f6ef4f724a1a322d2445bbd2 | 88b535729309cfc29c38f36ad2fad42fa72f7443 | 2021-07-22T07:53:18Z | python | 2021-07-22T22:31:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,135 | ["airflow/providers/exasol/hooks/exasol.py"] | ExasolHook get_pandas_df does not return pandas dataframe but None |
When calling the Exasol hook's get_pandas_df function (https://github.com/apache/airflow/blob/main/airflow/providers/exasol/hooks/exasol.py) I noticed that it does not return a pandas dataframe; it returns None. In fact, the function definition's type hint explicitly states that None is returned, but the name of the function suggests otherwise. The name get_pandas_df implies that it should return a dataframe and not None.
I think that it would make more sense if get_pandas_df did indeed return a dataframe, as the name is alluding to. So the code should be like this:
```python
def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> pd.DataFrame:
    # ... some code ...
    with closing(self.get_conn()) as conn:
        df = conn.export_to_pandas(sql, query_params=parameters, **kwargs)
        return df
```
INSTEAD OF:
```python
def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:
    # ... some code ...
    with closing(self.get_conn()) as conn:
        conn.export_to_pandas(sql, query_params=parameters, **kwargs)
```
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not using Kubernetes
**Environment**: Official Airflow-Docker Image
- **Cloud provider or hardware configuration**: no cloud - docker host (DELL Server with 48 Cores, 512GB RAM and many TB storage)
- **OS** (e.g. from /etc/os-release):Official Airflow-Docker Image on CentOS 7 Host
- **Kernel** (e.g. `uname -a`): Linux cad18b35be00 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC 2021 x86_64 GNU/Linux
- **Install tools**: only docker
- **Others**:
**What happened**:
You can replicate the findings with the following dag file:
```python
import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.providers.exasol.operators.exasol import ExasolHook
import pandas as pd

default_args = {"owner": "airflow"}

def call_exasol_hook(**kwargs):
    # Make connection to Exasol
    hook = ExasolHook(exasol_conn_id='Exasol QA')
    sql = 'select 42;'
    df = hook.get_pandas_df(sql=sql)
    return df

with DAG(
    dag_id="exasol_hook_problem",
    start_date=datetime.datetime(2021, 5, 5),
    schedule_interval="@once",
    default_args=default_args,
    catchup=False,
) as dag:
    set_variable = PythonOperator(
        task_id='call_exasol_hook',
        python_callable=call_exasol_hook
    )
```
Sorry for the strange code formatting. I do not know how to fix this in the github UI form.
Sorry also in case I missed something.
When testing or executing the task via CLI:
` airflow tasks test exasol_hook_problem call_exasol_hook 2021-07-20`
the logs show:
`[2021-07-21 12:53:19,775] {python.py:151} INFO - Done. Returned value was: None`
None was returned - although get_pandas_df was called. A pandas df should have been returned instead.
| https://github.com/apache/airflow/issues/17135 | https://github.com/apache/airflow/pull/17850 | 890bd4310e12a0a4fadfaec1f9b36d2aaae6119e | 997c31cd19e08706ff17486bed2a4e398d192757 | 2021-07-21T13:07:48Z | python | 2021-08-28T01:40:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,120 | ["airflow/cli/commands/scheduler_command.py"] | [Scheduler error] psycopg2.OperationalError: SSL SYSCALL error: Socket operation on non-socket | Hi Airflow Team,
I am running Airflow on an EC2 instance, where it is installed by conda-forge during the Code Deploy.
After upgrading the Airflow version from 2.0.2 to >=2.1.0, I am facing an error every time I try to start the scheduler in daemon mode using this command: ```airflow scheduler --daemon```
I took a look at a similar issue, #11456, and tried to fix it with Python 3.8.10 and `python-daemon` 2.3.0, but it still doesn't work.
The webserver is working fine, but it can't detect the scheduler.
```
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
return fn()
File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect
return _ConnectionFairy._checkout(self)
File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 809, in _checkout
result = pool._dialect.do_ping(fairy.connection)
File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 575, in do_ping
cursor.execute(self._dialect_specific_select_one)
psycopg2.OperationalError: SSL SYSCALL error: Socket operation on non-socket
```
Relevant package versions:
`sqlalchemy`=1.3.23
`psycopg2`=2.8.6
`python-daemon`=2.3.0
`apache-airflow-providers-http`=2.0.0
`apache-airflow-providers-elasticsearch`=2.0.2
| https://github.com/apache/airflow/issues/17120 | https://github.com/apache/airflow/pull/17157 | b8abf1425004410ba8ca37385d294c650a2a7e06 | e8fc3acfd9884312669c1d85b71f42a9aab29cf8 | 2021-07-21T01:48:46Z | python | 2021-08-01T18:45:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,111 | ["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/ads/hooks/ads.py", "docs/apache-airflow-providers-google/index.rst", "setup.py", "tests/providers/google/ads/operators/test_ads.py"] | apache-airflow-providers-google: google-ads-12.0.0 | Hey team, I was looking to use the Google Ads hook, but it seems like the google-ads package is a bit out of date, with the hook only taking "v5", "v4", "v3", "v2" (https://developers.google.com/google-ads/api/docs/release-notes) and all of those being deprecated. Is there any chance the provider can be upgraded to include this? Here is the release note of Google's 12.0.0 release, which also deprecated v5: https://github.com/googleads/google-ads-python/releases/tag/12.0.0
**Apache Airflow version**: 2.0.1
**What happened**:
deprecated API endpoint, need to update google ads to version 12.0.0
**What you expected to happen**:
return query data, instead, I get an error returned from the google ads v5 API:
**How to reproduce it**:
attempt to hit the v5 API endpoint
**Anything else we need to know**:
error is below
```
Response
-------
Headers: {
"google.ads.googleads.v5.errors.googleadsfailure-bin": "\nJ\n\u0002\b\u0001\u0012D Version v5 is deprecated. Requests to this version will be blocked.",
"grpc-status-details-bin": "\b\u0003\u0012%Request contains an invalid argument.\u001a\u0001\nCtype.googleapis.com/google.ads.googleads.v5.errors.GoogleAdsFailure\u0012L\nJ\n\u0002\b\u0001\u0012D Version v5 is deprecated. Requests to this version will be blocked.",
"request-id": "JyFZ9zysaqJbiCr_PX8SLA"
}
Fault: errors {
error_code {
request_error: UNKNOWN
}
message: " Version v5 is deprecated. Requests to this version will be blocked."
}
```
| https://github.com/apache/airflow/issues/17111 | https://github.com/apache/airflow/pull/17160 | 966b2501995279b7b5f2e1d0bf1c63a511dd382e | 5d2224795b3548516311025d5549094a9b168f3b | 2021-07-20T15:47:13Z | python | 2021-07-25T20:55:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,083 | ["airflow/models/baseoperator.py", "docs/spelling_wordlist.txt", "tests/models/test_baseoperator.py"] | Update chain() to support Labels | **Description**
The `airflow.models.baseoperator.chain()` is a very useful and convenient way to add sequential task dependencies in DAGs. This function has [recently been updated](https://github.com/apache/airflow/issues/16635) to support `BaseOperator` and `XComArgs` but should also be able to support `Labels` as well.
**Use case / motivation**
Users who create tasks via the `@task` decorator will not be able to use the `chain()` function to apply sequential dependencies that do not share an `XComArg` implicit dependency with a `Label`. This use case can occur when attempting to chain multiple branch labels and the next sequential task.
With the new update (yet to be released), users will receive the following exception when attempting to chain an `XComArg` and `Label`:
```bash
TypeError: Chain not supported between instances of <class 'airflow.utils.edgemodifier.EdgeModifier'> and <class 'airflow.models.xcom_arg.XComArg'>
```
**Are you willing to submit a PR?**
Absolutely. 🚀
**Related Issues**
None
| https://github.com/apache/airflow/issues/17083 | https://github.com/apache/airflow/pull/17099 | 01a0aca249eeaf71d182bf537b9d04121257ac09 | 29d8e7f50b6e946a6b6561cad99620e00a2c8360 | 2021-07-19T14:23:55Z | python | 2021-07-25T16:20:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,079 | ["airflow/models/dag.py", "airflow/utils/dag_cycle_tester.py"] | dag.cli should detect DAG cycles | **Description**
I wish `dag.cli()` reported cycles in a task graph.
**Use case / motivation**
We use Airflow (now 2.1.1), with about 40 DAGs authored by many people, with daily changes, and put our DAGs into a custom docker image that we deploy with flux.
However, I noticed that a lot of commits from our developers are small fixes, because it is tricky to test DAGs locally (especially if one uses plugins, which we don't anymore).
So I wrote a script that imports every dag file, runs it, and calls `dag.cli()`; I then list all tasks and run a test --dry_run on each task. That proved to be a super useful script that can detect a lot of issues (malformed imports, syntax errors, typos in jinja2 templates, uses of uninitialized variables, task id name collisions, and so on) before the change is even committed to our git repo and the docker image is built and deployed, thus making iteration speed faster.
However, I noticed that `dag.cli()` does not detect cycles in a task graph.
Example:
```python3
from pprint import pprint
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
def print_context(ds, **kwargs):
"""Print the Airflow context and ds variable from the context."""
pprint(kwargs)
print(ds)
return 'Whatever you return gets printed in the logs'
with DAG(
dag_id="test_dag1",
description="Testing",
schedule_interval="@daily",
catchup=False,
start_date=days_ago(2),
) as dag:
a = PythonOperator(
task_id='print_the_context1',
python_callable=print_context,
)
b = PythonOperator(
task_id='print_the_context2',
python_callable=print_context,
)
a >> b
b >> a
if __name__ == '__main__':
dag.cli()
```
Now running:
```
$ python3 dags/primary/examples/tutorial_cycles.py tasks list
print_the_context1
print_the_context2
$
```
```
$ python3 dags/primary/examples/tutorial_cycles.py tasks test --dry-run print_the_context2 '2021-07-19T00:00:00+0000'
[2021-07-19 10:37:27,513] {baseoperator.py:1263} INFO - Dry run
$
```
No warnings.
When running a dag using a scheduler, it eventually detects a cycle (not sure if on load, or only when executing it, or reaching a specific task), but that is a bit too late.
I wonder if it is possible to make `dag.cli()` detect cycles? It might also be possible to detect cycles even earlier, when adding DAG edges, but that might be too slow to do on every call. However, I am pretty sure dag.cli() could do it efficiently, as it does have a full graph available. (There are well known linear algorithms based on DFS that detect cycles).
Just now, I noticed that there is a method `dag.topological_sort()`, which is quite handy and will detect cycles, so if I add:
```python3
if __name__ == '__main__':
dag.topological_sort()
dag.cli()
```
It does detect a cycle:
```
Traceback (most recent call last):
File "/home/witek/code/airflow/dags/primary/examples/tutorial_cycles.py", line 33, in <module>
print(dag.topological_sort())
File "/home/witek/airflow-testing/venv/lib/python3.9/site-packages/airflow/models/dag.py", line 1119, in topological_sort
raise AirflowException(f"A cyclic dependency occurred in dag: {self.dag_id}")
airflow.exceptions.AirflowException: A cyclic dependency occurred in dag: test_dag1
```
I think it might be useful to make `topological_sort` (and `tree_view`) accessible via `dag.cli()`, so the external script can easily detect cycles this way.
I also noticed that calling `dag.tree_view()` does not detect the cycle. In fact it does not print anything when there is a cycle.
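In the meantime, here is a rough sketch of a standalone check that such a wrapper script could run before `dag.cli()`. It only relies on `dag.task_dict` and `downstream_task_ids`, and it is a hand-rolled DFS, not the built-in Airflow cycle tester:

```python
# Add at the bottom of the DAG file, alongside dag.cli().
def find_cycle(dag):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {task_id: WHITE for task_id in dag.task_dict}

    def visit(task_id, path):
        color[task_id] = GRAY
        for downstream in dag.task_dict[task_id].downstream_task_ids:
            if color[downstream] == GRAY:
                return path + [downstream]  # back edge -> cycle
            if color[downstream] == WHITE:
                cycle = visit(downstream, path + [downstream])
                if cycle:
                    return cycle
        color[task_id] = BLACK
        return None

    for task_id in dag.task_dict:
        if color[task_id] == WHITE:
            cycle = visit(task_id, [task_id])
            if cycle:
                return cycle
    return None


if __name__ == '__main__':
    cycle = find_cycle(dag)
    if cycle:
        raise SystemExit(f"Cycle detected: {' -> '.join(cycle)}")
    dag.cli()
```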
| https://github.com/apache/airflow/issues/17079 | https://github.com/apache/airflow/pull/17105 | 3939e841616d70ea2d930f55e6a5f73a2a99be07 | 9b3bfe67019f4aebd37c49b10c97b20fa0581be1 | 2021-07-19T09:59:24Z | python | 2021-07-20T13:04:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,047 | ["airflow/www/static/js/dag_code.js"] | Toggle Wrap on DAG code page is broken | **Apache Airflow version**: `apache/airflow:2.1.2-python3.9` and `2.1.0-python3.8`
**Environment**:
- **Cloud provider or hardware configuration**: Docker for Windows, AWS ECS
- **OS** (e.g. from /etc/os-release): Windows 10, AWS ECS Fargate
- **Install tools**: docker compose, ECS
- **Others**: Web browsers: tested this on Chrome and Brave.
**What happened**:
The `Toggle Wrap` button on the DAG code page is not working.
**What you expected to happen**:
It should toggle between wrapped/unwrapped code blocks.
**How to reproduce it**:
1. Spin up an airflow environment using the official docker compose file with DAG examples enabled.
2. Open code page for any DAG that uses the [START xyz] [END xyz] blocks in its source code.
3. Click on the `Toggle Wrap` button in the top right corner of the code.

**Additional remarks**
This feature seems to be working totally fine on the TI logs, and by looking at the code they are re-using the same function. | https://github.com/apache/airflow/issues/17047 | https://github.com/apache/airflow/pull/19211 | eace4102b68e4964b47f2d8c555f65ceaf0a3690 | a1632edac783878cb82d9099f4f973c9a10b0d0f | 2021-07-16T13:51:45Z | python | 2021-11-03T14:19:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,038 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | ECSOperator returns last logs when ECS task fails | **Description**
Currently, when the ECSOperator fails because the ECS task is not in a 'success' state, it returns a generic message like the one below in Airflow alerts, which doesn't have much value when we want to debug things quickly.
`This task is not in success state {<huge JSON from AWS containing all the ECS task details>}`
**Use case / motivation**
This is to make it faster for people to fix an issue when a task running ECSOperator fails.
**Proposal**
The idea would be to return the last lines of logs from CloudWatch instead (they are already printed above in the Airflow logs), so that when we receive the alert we know what failed in the ECS task without having to go to the Airflow logs to find it. I think this feature would involve changes around these lines (a rough sketch follows the links below):
- https://github.com/apache/airflow/blob/2ce6e8de53adc98dd3ae80fa7c74b38eb352bc3a/airflow/providers/amazon/aws/operators/ecs.py#L354
- https://github.com/apache/airflow/blob/2ce6e8de53adc98dd3ae80fa7c74b38eb352bc3a/airflow/providers/amazon/aws/operators/ecs.py#L375
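A rough sketch of what that could look like, pulling the log tail straight from CloudWatch with boto3; the helper names, the log group/stream arguments, and where exactly this would hook into `ecs.py` are assumptions for illustration:

```python
import boto3
from airflow.exceptions import AirflowException


def last_log_messages(log_group, log_stream, number_of_lines=10):
    # Fetch the tail of the CloudWatch log stream the ECS task wrote to.
    response = boto3.client("logs").get_log_events(
        logGroupName=log_group,
        logStreamName=log_stream,
        limit=number_of_lines,
        startFromHead=False,
    )
    return [event["message"] for event in response["events"]]


def raise_with_log_tail(task_arn, log_group, log_stream):
    # Instead of dumping the whole describe_tasks response into the alert,
    # surface only the last few log lines of the failed task.
    tail = last_log_messages(log_group, log_stream)
    raise AirflowException(
        f"ECS task {task_arn} did not reach a success state; last log lines:\n"
        + "\n".join(tail)
    )
```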
| https://github.com/apache/airflow/issues/17038 | https://github.com/apache/airflow/pull/17209 | a8970764d98f33a54be0e880df27f86b311038ac | e6cb2f7beb4c6ea4ad4a965f9c0f2b8f6978129c | 2021-07-15T18:45:02Z | python | 2021-09-09T23:43:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,037 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Status of testing Providers that were prepared on July 15, 2021 | I have a kind request for all the contributors to the latest provider packages release.
Could you help us test the RC versions of the providers and let us know in a comment
if the issue is addressed there?
## Providers that need testing
Those are providers that require testing as there were some substantial changes introduced:
### Provider [amazon: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-amazon/2.1.0rc1)
- [ ] [Allow attaching to previously launched task in ECSOperator (#16685)](https://github.com/apache/airflow/pull/16685): @pmalafosse
- [ ] [Update AWS Base hook to use refreshable credentials (#16770) (#16771)](https://github.com/apache/airflow/pull/16771): @baolsen
- [x] [Added select_query to the templated fields in RedshiftToS3Operator (#16767)](https://github.com/apache/airflow/pull/16767): @hewe
- [ ] [AWS Hook - allow IDP HTTP retry (#12639) (#16612)](https://github.com/apache/airflow/pull/16612): @baolsen
- [ ] [Update Boto3 API calls in ECSOperator (#16050)](https://github.com/apache/airflow/pull/16050): @scottypate
- [ ] [AWS DataSync Operator does not cancel task on Exception (#11011)](https://github.com/apache/airflow/issues/11011): @baolsen
- [ ] [Fix wrong template_fields_renderers for AWS operators (#16820)](https://github.com/apache/airflow/pull/16820): @codenamestif
- [ ] [AWS DataSync cancel task on exception (#11011) (#16589)](https://github.com/apache/airflow/pull/16589): @baolsen
### Provider [apache.hive: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-apache-hive/2.0.1rc1)
- [ ] [Add python 3.9 (#15515)](https://github.com/apache/airflow/pull/15515): @potiuk
### Provider [apache.sqoop: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-apache-sqoop/2.0.1rc1)
- [x] [Fix Minor Bugs in Apache Sqoop Hook and Operator (#16350)](https://github.com/apache/airflow/pull/16350): @ciancolo
### Provider [cncf.kubernetes: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-cncf-kubernetes/2.0.1rc1)
- [x] [BugFix: Using `json` string in template_field fails with K8s Operators (#16930)](https://github.com/apache/airflow/pull/16930): @kaxil
### ~Provider [docker: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-docker/2.1.0rc1)~
~- [ ] [Adds option to disable mounting temporary folder in DockerOperator (#16932)](https://github.com/apache/airflow/pull/16932): @potiuk: bug found.~
### Provider [google: 4.1.0rc1](https://pypi.org/project/apache-airflow-providers-google/4.1.0rc1)
- [ ] [Standardise dataproc location param to region (#16034)](https://github.com/apache/airflow/pull/16034): @Daniel-Han-Yang
- [ ] [Update alias for field_mask in Google Memmcache (#16975)](https://github.com/apache/airflow/pull/16975): @potiuk
### Provider [jenkins: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-jenkins/2.0.1rc1)
- [ ] [Fixed to check number key from jenkins response (#16963)](https://github.com/apache/airflow/pull/16963): @namjals
### Provider [microsoft.azure: 3.1.0rc1](https://pypi.org/project/apache-airflow-providers-microsoft-azure/3.1.0rc1)
- [ ] [Add support for managed identity in WASB hook (#16628)](https://github.com/apache/airflow/pull/16628): @malthe
- [ ] [WASB hook: reduce log messages for happy path (#16626)](https://github.com/apache/airflow/pull/16626): @malthe
- [ ] [Fix multiple issues in Microsoft AzureContainerInstancesOperator (#15634)](https://github.com/apache/airflow/pull/15634): @BKronenbitter
### ~Provider [mysql: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-mysql/2.1.0rc1): Marking for RC2 release~
~- [ ] [Added template_fields_renderers for MySQL Operator (#16914)](https://github.com/apache/airflow/pull/16914): @oyarushe~
~- [ ] [Extended template_fields_renderers for MySQL provider (#16987)](https://github.com/apache/airflow/pull/16987):~ @oyarushe
### Provider [postgres: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-postgres/2.1.0rc1)
- [ ] [Add schema override in DbApiHook (#16521)](https://github.com/apache/airflow/pull/16521): @LukeHong
### Provider [sftp: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-sftp/2.1.0rc1)
- [ ] [Add support for non-RSA type client host key (#16314)](https://github.com/apache/airflow/pull/16314): @malthe
### Provider [snowflake: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-snowflake/2.1.0rc1)
- [x] [Adding: Snowflake Role in snowflake provider hook (#16735)](https://github.com/apache/airflow/pull/16735): @saurasingh
### Provider [ssh: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-ssh/2.1.0rc1)
- [ ] [Add support for non-RSA type client host key (#16314)](https://github.com/apache/airflow/pull/16314): @malthe
- [ ] [SSHHook: Using correct hostname for host_key when using non-default ssh port (#15964)](https://github.com/apache/airflow/pull/15964): @freget
- [ ] [Correctly load openssh-gerenated private keys in SSHHook (#16756)](https://github.com/apache/airflow/pull/16756): @ashb
### Provider [tableau: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-tableau/2.1.0rc1)
- [ ] [Allow disable SSL for TableauHook (#16365)](https://github.com/apache/airflow/pull/16365): @ciancolo
- [ ] [Deprecate Tableau personal token authentication (#16916)](https://github.com/apache/airflow/pull/16916): @samgans
## New Providers
- [x] [apache.drill: 1.0.0rc1](https://pypi.org/project/apache-airflow-providers-apache-drill/1.0.0rc1) @dzamo | https://github.com/apache/airflow/issues/17037 | https://github.com/apache/airflow/pull/17061 | 16564cad6f2956ecb842455d9d6a6255f8d3d817 | b076ac5925e1a316dd6e9ad8ee4d1a2223e376ca | 2021-07-15T18:10:18Z | python | 2021-07-18T13:15:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,032 | ["airflow/providers/google/cloud/operators/bigquery.py", "airflow/www/views.py", "docs/apache-airflow/howto/custom-operator.rst", "docs/apache-airflow/img/template_field_renderer_path.png", "tests/www/views/test_views.py"] | Improved SQL rendering within BigQueryInsertJobOperator | **Description**
`BigQueryInsertJobOperator` requires the submission of a `configuration` parameter in the form of a dict. Unfortunately, if this contains a large SQL query - especially one that is formatted with new lines - then this cannot currently be rendered very nicely in the UI.
<img width="1670" alt="Screenshot 2021-07-15 at 15 39 33" src="https://user-images.githubusercontent.com/967119/125806943-839d57a9-d4a0-492d-b130-06432b095239.png">
**Use case / motivation**
The issue with this is that it's impossible to copy the rendered query out of the Airflow UI, paste it into the BigQuery console, and run it without lots of manual edits, which is time wasted when troubleshooting problems.
**Are you willing to submit a PR?**
Yes. My current thought process around this would be to add an optional SQL parameter to the operator which, if provided, would be added into the configuration and could therefore have its own template field and SQL renderer.
e.g.
<img width="1570" alt="Screenshot 2021-07-14 at 14 18 09" src="https://user-images.githubusercontent.com/967119/125808200-5f30b8f4-4def-48a7-8223-82afdc65c973.png">
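For illustration, one way this idea could be sketched is as a thin subclass; the class name and the exact wiring are assumptions for this sketch, not the implemented fix:

```python
from typing import Any, Dict, Optional

from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator


class BigQueryInsertQueryJobOperator(BigQueryInsertJobOperator):
    # Accepts a plain `sql` string and folds it into `configuration`, so the
    # query gets its own templated field with the SQL renderer for display.
    template_fields = tuple(BigQueryInsertJobOperator.template_fields) + ("sql",)
    template_fields_renderers = {**BigQueryInsertJobOperator.template_fields_renderers, "sql": "sql"}

    def __init__(self, *, sql: str, configuration: Optional[Dict[str, Any]] = None, **kwargs) -> None:
        configuration = dict(configuration or {})
        query = configuration.setdefault("query", {})
        query["query"] = sql
        query.setdefault("useLegacySql", False)
        super().__init__(configuration=configuration, **kwargs)
        self.sql = sql
```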
| https://github.com/apache/airflow/issues/17032 | https://github.com/apache/airflow/pull/17321 | 97428efc41e5902183827fb9e4e56d067ca771df | 67cbb0f181f806edb16ca12fb7a2638b5f31eb58 | 2021-07-15T14:49:02Z | python | 2021-08-02T14:44:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,031 | ["airflow/providers/yandex/example_dags/example_yandexcloud_dataproc.py", "airflow/providers/yandex/hooks/yandex.py", "airflow/providers/yandex/hooks/yandexcloud_dataproc.py", "airflow/providers/yandex/operators/yandexcloud_dataproc.py", "docs/spelling_wordlist.txt", "setup.py", "tests/providers/yandex/hooks/test_yandexcloud_dataproc.py", "tests/providers/yandex/operators/test_yandexcloud_dataproc.py"] | Add autoscaling support to yandexcloud operator | * and stop setting default values in python operator code, so defaults can be set at the server side.
This issue is just for the PR and questions from maintainers. | https://github.com/apache/airflow/issues/17031 | https://github.com/apache/airflow/pull/17033 | 0e6e04e5f80eaf186d28ac62d4178e971ccf32bc | e3089dd5d045cf6daf8f15033a4cc879db0df5b5 | 2021-07-15T14:44:00Z | python | 2021-08-02T11:06:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,014 | ["airflow/models/baseoperator.py", "tests/models/test_baseoperator.py"] | Changes to BaseOperatorMeta breaks __init_subclass__ | **Apache Airflow version**: 2.1.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19.8
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Amazon Linux EKS
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Helm (community chart)
- **Others**:
**What happened**:
The addition of `__new__` to the `BaseOperatorMeta` class breaks our usage of operators that allow configuration through `__init_subclass__` arguments.
Relevant python bug: https://bugs.python.org/issue29581
**What you expected to happen**:
We should be able to use `__init_subclass__` in operators as we used to be able to. We relied on this behavior to customize some of our subclasses. This is working in 2.0.2 (though it might not strictly be considered a regression).
**How to reproduce it**:
This is a broken example
```py3
import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
class PrefixedBashOperator(BashOperator):
def __init_subclass__(cls, command: str = None, **kwargs):
if command is not None:
cls._command = command
super().__init_subclass__(**kwargs)
def __init__(self, bash_command, **kwargs):
super().__init__(bash_command=self._command + ' ' + bash_command, **kwargs)
class EchoOperator(PrefixedBashOperator, command='echo'):
pass
with DAG(dag_id='foo', start_date=datetime.datetime(2021, 7, 1)) as dag:
EchoOperator(task_id='foo', bash_command='-e "from airflow"', dag=dag)
```
This results in error:
```
TypeError: __new__() got an unexpected keyword argument 'command'
```
**Anything else we need to know**:
This example works, which shows that all that is needed to fix the issue is to add `**kwargs` to `__new__`:
```py3
import datetime
from abc import ABCMeta
from airflow import DAG
from airflow.models.baseoperator import BaseOperatorMeta
from airflow.operators.bash import BashOperator
class NewBaseOperatorMeta(BaseOperatorMeta):
def __new__(cls, name, bases, namespace, **kwargs):
new_cls = super(ABCMeta, cls).__new__(cls, name, bases, namespace, **kwargs)
new_cls.__init__ = cls._apply_defaults(new_cls.__init__) # type: ignore
return new_cls
class PrefixedBashOperator(BashOperator, metaclass=NewBaseOperatorMeta):
def __init_subclass__(cls, command: str = None, **kwargs):
if command is not None:
cls._command = command
super().__init_subclass__(**kwargs)
def __init__(self, bash_command, **kwargs):
super().__init__(bash_command=self._command + ' ' + bash_command, **kwargs)
class EchoOperator(PrefixedBashOperator, command='echo'):
pass
with DAG(dag_id='foo', start_date=datetime.datetime(2021, 7, 1)) as dag:
EchoOperator(task_id='foo', bash_command='-e "from airflow"', dag=dag)
```
| https://github.com/apache/airflow/issues/17014 | https://github.com/apache/airflow/pull/17027 | 34478c26d7de1328797e03bbf96d8261796fccbb | 901513203f287d4f8152f028e9070a2dec73ad74 | 2021-07-15T05:27:46Z | python | 2021-07-22T22:23:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 17,005 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | `retry_exponential_backoff` algorithm does not account for case when `retry_delay` is zero | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Apache Airflow version**: 2.1.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
When `retry_exponential_backoff` is enabled and `retry_delay` is inadvertently set to zero, a divide-by-zero error occurs in the `modded_hash` calculation of the exponential backoff algorithm, causing the scheduler to crash.
**What you expected to happen**:
Scheduler should treat it as a task with `retry_delay` of zero.
**How to reproduce it**:
Create a task with `retry_delay=timedelta()` and `retry_exponential_backoff=True`.
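A minimal DAG that exercises this; the dag and task ids are made up:

```python
from datetime import timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

with DAG(
    dag_id="zero_retry_delay_backoff",
    start_date=days_ago(1),
    schedule_interval=None,
) as dag:
    # Fails on purpose so the scheduler has to compute the next retry time;
    # a zero retry_delay combined with exponential backoff is what trips the bug.
    BashOperator(
        task_id="always_fails",
        bash_command="exit 1",
        retries=3,
        retry_delay=timedelta(),  # zero delay
        retry_exponential_backoff=True,
    )
```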
**Anything else we need to know**:
Willing to submit a PR; opened #17003 (WIP) with possible fix.
| https://github.com/apache/airflow/issues/17005 | https://github.com/apache/airflow/pull/17003 | 0199c5d51aa7d34b7e3e8e6aad73ab80b6018e8b | 6e2a3174dfff2e396c38be0415df55cfe0d76f45 | 2021-07-14T20:23:09Z | python | 2021-09-30T07:32:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,992 | ["airflow/kubernetes/kubernetes_helper_functions.py", "tests/executors/test_kubernetes_executor.py"] | Pod fails to run when task or dag name contains non ASCII characters (k8s executor) | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.20.5
**Environment**:
- **Cloud provider or hardware configuration**: Azure AKS
**What happened**:
When a task or DAG name contains a non-ASCII character, pod creation fails with a kubernetes.client.rest.ApiException: (422)
(the task remains in scheduled status)
This is because the name given to the pod is built from the DAG and task names...
**What you expected to happen**:
Create the pod and run the task
**How to reproduce it**:
Run a task named "campaña" on the K8s executor.
Error log:
```
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"dagnamecampaña.0fb696e661e347968216df454b41b56f\" is invalid: metadata.name: Invalid value: \"dagnamecampaña.0fb696e661e347968216df454b41b56f\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","reason":"Invalid"
```
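For illustration, roughly what a name sanitiser could do before the pod name reaches the Kubernetes API (a sketch only, not the helper Airflow actually added):

```python
import re
import uuid


def make_safe_pod_id(dag_id: str, task_id: str, max_length: int = 253) -> str:
    # Keep only characters allowed by RFC 1123 and append a random suffix so
    # the name stays unique even after stripping.
    base = f"{dag_id}-{task_id}".lower()
    base = re.sub(r"[^a-z0-9.-]+", "-", base).strip("-.")
    suffix = uuid.uuid4().hex
    if not base:
        return suffix
    return f"{base[:max_length - len(suffix) - 1]}-{suffix}"


print(make_safe_pod_id("dag_name_campaña", "my_task"))
# e.g. dag-name-campa-a-my-task-<random hex>
```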
| https://github.com/apache/airflow/issues/16992 | https://github.com/apache/airflow/pull/17057 | 1a0730a08f2d72cd71447b6d6549ec10d266dd6a | a4af964c1ad2c419ef51cd9d717f5aac7ed60b39 | 2021-07-14T15:13:59Z | python | 2021-07-19T19:45:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,982 | ["airflow/models/taskinstance.py"] | Tasks fail and do not log due to backend DB (dead?)lock | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Apache Airflow version**: 2.1.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18
**Environment**:
- **Cloud provider or hardware configuration**: AWS hosting a Kube cluster
- **OS**: Ubuntu 19.10
- **Kernel**: 4.14.225-169.362.amzn2.x86_64
- **Install tools**:
- **Others**: MySQL 8.0.23 on RDS
**What happened**:
In an unpredictable fashion, some tasks are unable to start. They do not retry and they do not write to the shared log directory, but if I run `kubectl logs <worker pod>` while it sits in Error state afterward, I can see:
```
[2021-07-12 23:30:21,713] {dagbag.py:496} INFO - Filling up the DagBag from /usr/local/airflow/dags/foo/bar.py
Running <TaskInstance: foo_bar.my_task 2021-07-12T22:30:00+00:00 [queued]> on host <WORKER POD>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1205, 'Lock wait timeout exceeded; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/local_task_job.py", line 96, in _execute
pool=self.pool,
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 1023, in check_and_change_state_before_execution
self.refresh_from_db(session=session, lock_for_update=True)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 623, in refresh_from_db
ti = qry.with_for_update().first()
<SQLALCHEMY TRACE OMITTED FOR BREVITY>
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction')
[SQL: SELECT task_instance.try_number AS task_instance_try_number, task_instance.task_id AS task_instance_task_id, task_instance.dag_id AS task_instance_dag_id, task_instance.execution_date AS task_instance_execution_date, task_instance.start_date AS task_instance_start_date, task_instance.end_date AS task_instance_end_date, task_instance.duration AS task_instance_duration, task_instance.state AS task_instance_state, task_instance.max_tries AS task_instance_max_tries, task_instance.hostname AS task_instance_hostname, task_instance.unixname AS task_instance_unixname, task_instance.job_id AS task_instance_job_id, task_instance.pool AS task_instance_pool, task_instance.pool_slots AS task_instance_pool_slots, task_instance.queue AS task_instance_queue, task_instance.priority_weight AS task_instance_priority_weight, task_instance.operator AS task_instance_operator, task_instance.queued_dttm AS task_instance_queued_dttm, task_instance.queued_by_job_id AS task_instance_queued_by_job_id, task_instance.pid AS task_instance_pid, task_instance.executor_config AS task_instance_executor_config, task_instance.external_executor_id AS task_instance_external_executor_id
FROM task_instance
WHERE task_instance.dag_id = %s AND task_instance.task_id = %s AND task_instance.execution_date = %s
LIMIT %s FOR UPDATE]
[parameters: ('foobar', 'my_task', datetime.datetime(2021, 7, 12, 22, 30), 1)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
```
<details>
<summary>Full length log</summary>
```
[2021-07-12 23:30:21,713] {dagbag.py:496} INFO - Filling up the DagBag from /usr/local/airflow/dags/foo/bar.py
Running <TaskInstance: foo_bar.my_task 2021-07-12T22:30:00+00:00 [queued]> on host <WORKER POD>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1205, 'Lock wait timeout exceeded; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/local_task_job.py", line 96, in _execute
pool=self.pool,
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 1023, in check_and_change_state_before_execution
self.refresh_from_db(session=session, lock_for_update=True)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 623, in refresh_from_db
ti = qry.with_for_update().first()
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3429, in first
ret = list(self[0:1])
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3203, in __getitem__
return list(res)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction')
[SQL: SELECT task_instance.try_number AS task_instance_try_number, task_instance.task_id AS task_instance_task_id, task_instance.dag_id AS task_instance_dag_id, task_instance.execution_date AS task_instance_execution_date, task_instance.start_date AS task_instance_start_date, task_instance.end_date AS task_instance_end_date, task_instance.duration AS task_instance_duration, task_instance.state AS task_instance_state, task_instance.max_tries AS task_instance_max_tries, task_instance.hostname AS task_instance_hostname, task_instance.unixname AS task_instance_unixname, task_instance.job_id AS task_instance_job_id, task_instance.pool AS task_instance_pool, task_instance.pool_slots AS task_instance_pool_slots, task_instance.queue AS task_instance_queue, task_instance.priority_weight AS task_instance_priority_weight, task_instance.operator AS task_instance_operator, task_instance.queued_dttm AS task_instance_queued_dttm, task_instance.queued_by_job_id AS task_instance_queued_by_job_id, task_instance.pid AS task_instance_pid, task_instance.executor_config AS task_instance_executor_config, task_instance.external_executor_id AS task_instance_external_executor_id
FROM task_instance
WHERE task_instance.dag_id = %s AND task_instance.task_id = %s AND task_instance.execution_date = %s
LIMIT %s FOR UPDATE]
[parameters: ('foobar', 'my_task', datetime.datetime(2021, 7, 12, 22, 30), 1)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
```
</details>
Afterward, the task is marked as Failed. The issue is transient, and tasks can be manually rerun to try again.
**What you expected to happen**:
If a lock cannot be obtained, it should exit more gracefully and reschedule.
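One shape such a graceful path could take (a sketch only; `refresh_from_db` is the call from the traceback above, while the retry policy itself is invented here):

```python
import time

from sqlalchemy.exc import OperationalError


def refresh_with_retry(ti, session, attempts=3, wait_seconds=5):
    # Retry the SELECT ... FOR UPDATE a few times instead of letting the lock
    # wait timeout bubble up and fail the task outright.
    for attempt in range(1, attempts + 1):
        try:
            ti.refresh_from_db(session=session, lock_for_update=True)
            return
        except OperationalError:
            session.rollback()
            if attempt == attempts:
                raise
            time.sleep(wait_seconds)
```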
**How to reproduce it**:
You can trigger the non-graceful task failure by manually locking the row and then trying to run the task -- it should work on any task.
1. Connect to the MySQL instance backing Airflow
2. `SET autocommit = OFF;`
3. `START TRANSACTION;`
4. Lock the row
```
SELECT task_instance.try_number AS task_instance_try_number, task_instance.task_id AS task_instance_task_id, task_instance.dag_id AS task_instance_dag_id, task_instance.execution_date AS task_instance_execution_date, task_instance.start_date AS task_instance_start_date, task_instance.end_date AS task_instance_end_date, task_instance.duration AS task_instance_duration, task_instance.state AS task_instance_state, task_instance.max_tries AS task_instance_max_tries, task_instance.hostname AS task_instance_hostname, task_instance.unixname AS task_instance_unixname, task_instance.job_id AS task_instance_job_id, task_instance.pool AS task_instance_pool, task_instance.pool_slots AS task_instance_pool_slots, task_instance.queue AS task_instance_queue, task_instance.priority_weight AS task_instance_priority_weight, task_instance.operator AS task_instance_operator, task_instance.queued_dttm AS task_instance_queued_dttm, task_instance.queued_by_job_id AS task_instance_queued_by_job_id, task_instance.pid AS task_instance_pid, task_instance.executor_config AS task_instance_executor_config, task_instance.external_executor_id AS task_instance_external_executor_id
FROM task_instance
WHERE task_instance.dag_id = 'foobar'
AND task_instance.task_id = 'my_task'
AND task_instance.execution_date = '2021-07-12 00:00:00.000000'
LIMIT 1 FOR UPDATE;
```
5. Try to run the task via the UI.
**Anything else we need to know**:
Ideally deadlock doesn't ever occur and the task executes normally, however the deadlocks are seemingly random and I cannot replicate them. I hypothesized that somehow the scheduler was spinning up two worker pods at the same time, but if that were the case I would see two dead workers in `Error` state by performing `kubectl get pods`. Deadlock itself seems to occur on <1% of tasks, but it seems that deadlock itself consistently fails the task without retry. | https://github.com/apache/airflow/issues/16982 | https://github.com/apache/airflow/pull/21362 | 7a38ec2ad3b3bd6fda5e1ee9fe9e644ccb8b4c12 | 6d110b565a505505351d1ff19592626fb24e4516 | 2021-07-14T05:08:01Z | python | 2022-02-07T19:12:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,976 | ["airflow/www/views.py"] | Rendered Templates - py renderer doesn't work for list or dict | **Apache Airflow version**: 2.1.1
**What happened**:
The rendered templates screen doesn't show the operator arguments for TaskFlow API operators.

**What you expected to happen**:
Should show the arguments after any templates have been rendered.
**How to reproduce it**:
Will happen for any `@task` decorated operator.
**Anything else we need to know**:
This issue appears on a number of operators. A possible solution is to modify `get_python_source` in utils/code_utils.py to handle list and dict. This would cause any operator that uses py as the renderer to handle lists and dicts. Possibly something like:
```
if isinstance(x, list):
return [str(v) for v in x]
if isinstance(x, dict):
return {k: str(v) for k, v in x.items()}
```
Converting the values to strings seems necessary to avoid errors when the result is passed to the pygments lexer.

| https://github.com/apache/airflow/issues/16976 | https://github.com/apache/airflow/pull/17082 | 636625fdb99e6b7beb1375c5df52b06c09e6bafb | 1a0730a08f2d72cd71447b6d6549ec10d266dd6a | 2021-07-13T17:02:00Z | python | 2021-07-19T19:22:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,972 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_base_aws.py"] | AWS Hooks fail when assuming role and connection id contains forward slashes | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T18:49:28Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-eks-7737de", GitCommit:"7737de131e58a68dda49cdd0ad821b4cb3665ae8", GitTreeState:"clean", BuildDate:"2021-03-10T21:33:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
**Environment**: Local/Development
- **Cloud provider or hardware configuration**: Docker container
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux 243e98509628 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
* Using AWS Secrets Manager secrets backend
* Using S3Hook with aws_conn_id="foo/bar/baz" (example, but the slashes are important)
* Secret value is: `aws://?role_arn=arn%3Aaws%3Aiam%3A%3A<account_id>%3Arole%2F<role_name>®ion_name=us-east-1`
* Get the following error: `botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the AssumeRole operation: 1 validation error detected: Value 'Airflow_data/foo/bar/baz' at 'roleSessionName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*`
**What you expected to happen**:
No error and for boto to attempt to assume the role in the connection URI.
The _SessionFactory._assume_role class method is setting the role session name to `f"Airflow_{self.conn.conn_id}"` with no encoding.
**How to reproduce it**:
* Create an AWS connection with forward slashes in the name/id
** Use a role_arn in the connection string (e.g. `aws://?role_arn=...`)
* Create a test DAG using an AWS hook. Example below:
```python
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from datetime import datetime, timedelta
with DAG(
dag_id='test_assume_role',
start_date=datetime(2021, 6, 1),
schedule_interval=None, # no schedule, triggered manually/ad-hoc
tags=['test'],
) as dag:
def write_to_s3(**kwargs):
s3_hook = S3Hook(aws_conn_id='aws/test')
s3_hook.load_string(string_data='test', bucket_name='test_bucket', key='test/{{ execution_date }}')
write_test_object = PythonOperator(task_id='write_test_object', python_callable=write_to_s3)
```
**Anything else we need to know**:
This is a redacted log from my actual test while using AWS Secrets Manager. Should get a similar result *without* Secrets Manager though.
<details>
<summary>1.log</summary>
[2021-07-13 12:38:10,271] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: test_assume_role.write_test_object 2021-07-13T12:35:02.576772+00:00 [queued]>
[2021-07-13 12:38:10,288] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: test_assume_role.write_test_object 2021-07-13T12:35:02.576772+00:00 [queued]>
[2021-07-13 12:38:10,288] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-07-13 12:38:10,289] {taskinstance.py:1068} INFO - Starting attempt 1 of 1
[2021-07-13 12:38:10,289] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-07-13 12:38:10,299] {taskinstance.py:1087} INFO - Executing <Task(PythonOperator): write_test_object> on 2021-07-13T12:35:02.576772+00:00
[2021-07-13 12:38:10,305] {standard_task_runner.py:52} INFO - Started process 38974 to run task
[2021-07-13 12:38:10,309] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'test_assume_role', 'write_test_object', '2021-07-13T12:35:02.576772+00:00', '--job-id', '2376', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/test_assume_role.py', '--cfg-path', '/tmp/tmprusuo0ys', '--error-file', '/tmp/tmp8ytd9bk8']
[2021-07-13 12:38:10,311] {standard_task_runner.py:77} INFO - Job 2376: Subtask write_test_object
[2021-07-13 12:38:10,331] {logging_mixin.py:104} INFO - Running <TaskInstance: test_assume_role.write_test_object 2021-07-13T12:35:02.576772+00:00 [running]> on host 243e98509628
[2021-07-13 12:38:10,392] {taskinstance.py:1282} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=test_assume_role
AIRFLOW_CTX_TASK_ID=write_test_object
AIRFLOW_CTX_EXECUTION_DATE=2021-07-13T12:35:02.576772+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-07-13T12:35:02.576772+00:00
[2021-07-13 12:38:10,419] {base_aws.py:362} INFO - Airflow Connection: aws_conn_id=foo/bar/baz
[2021-07-13 12:38:10,444] {credentials.py:1087} INFO - Found credentials in environment variables.
[2021-07-13 12:38:11,079] {base_aws.py:173} INFO - No credentials retrieved from Connection
[2021-07-13 12:38:11,079] {base_aws.py:76} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2021-07-13 12:38:11,079] {base_aws.py:81} INFO - Creating session with aws_access_key_id=None region_name=us-east-1
[2021-07-13 12:38:11,096] {base_aws.py:151} INFO - role_arn is arn:aws:iam::<account_id>:role/<role_name>
[2021-07-13 12:38:11,096] {base_aws.py:97} INFO - assume_role_method=None
[2021-07-13 12:38:11,098] {credentials.py:1087} INFO - Found credentials in environment variables.
[2021-07-13 12:38:11,119] {base_aws.py:185} INFO - Doing sts_client.assume_role to role_arn=arn:aws:iam::<account_id>:role/<role_name> (role_session_name=Airflow_data/foo/bar/baz)
[2021-07-13 12:38:11,407] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 150, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 161, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/local/airflow/dags/test_assume_role.py", line 49, in write_to_s3
key='test/{{ execution_date }}'
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 571, in load_string
self._upload_file_obj(file_obj, key, bucket_name, replace, encrypt, acl_policy)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 652, in _upload_file_obj
if not replace and self.check_for_key(key, bucket_name):
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 328, in check_for_key
raise e
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 322, in check_for_key
self.get_conn().head_object(Bucket=bucket_name, Key=key)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 455, in get_conn
return self.conn
File "/usr/local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 437, in conn
return self.get_client_type(self.client_type, region_name=self.region_name)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 403, in get_client_type
session, endpoint_url = self._get_credentials(region_name)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 379, in _get_credentials
conn=connection_object, region_name=region_name, config=self.config
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 69, in create_session
return self._impersonate_to_role(role_arn=role_arn, session=session, session_kwargs=session_kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 101, in _impersonate_to_role
sts_client=sts_client, role_arn=role_arn, assume_role_kwargs=assume_role_kwargs
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 188, in _assume_role
RoleArn=role_arn, RoleSessionName=role_session_name, **assume_role_kwargs
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the AssumeRole operation: 1 validation error detected: Value 'Airflow_data/foo/bar/baz' at 'roleSessionName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*
[2021-07-13 12:38:11,417] {taskinstance.py:1531} INFO - Marking task as FAILED. dag_id=test_assume_role, task_id=write_test_object, execution_date=20210713T123502, start_date=20210713T123810, end_date=20210713T123811
[2021-07-13 12:38:11,486] {local_task_job.py:151} INFO - Task exited with return code 1
</details> | https://github.com/apache/airflow/issues/16972 | https://github.com/apache/airflow/pull/17210 | b2ee9b7bb762b613ac2d108a49448ceaa25253ec | 80fc80ace69982882dd0ac5c70eeedc714658941 | 2021-07-13T13:38:52Z | python | 2021-08-01T22:34:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,967 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCStoGCS operator not working when replace=False and source_object != destination_object | The GCStoGCS operator using the _replace=False_ flag and specifying _source_object != destination_object_, will not work properly, because it is comparing [apples](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_gcs.py#L351) with [pears](https://github.com/apache/airflow/blob/0dd70de9492444d52e0afaadd7a510fc8a95369c/airflow/providers/google/cloud/transfers/gcs_to_gcs.py#L357).
Cause: the _prefix_ in the _pears_ case needs to be calculated from the _destination_object_; however, it is coming from the _source_object_.
| https://github.com/apache/airflow/issues/16967 | https://github.com/apache/airflow/pull/16991 | e7bd82acdf3cf7628d5f3cdf223cf9cf01874c25 | 966b2501995279b7b5f2e1d0bf1c63a511dd382e | 2021-07-13T10:19:13Z | python | 2021-07-25T20:54:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,951 | ["airflow/models/baseoperator.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "airflow/utils/trigger_rule.py", "docs/apache-airflow/concepts/dags.rst", "tests/ti_deps/deps/test_trigger_rule_dep.py", "tests/utils/test_trigger_rule.py"] | add all_skipped trigger rule | I have use cases where I want to run tasks if all direct upstream tasks are skipped. The `all_done` trigger rule isn't enough for this use case. | https://github.com/apache/airflow/issues/16951 | https://github.com/apache/airflow/pull/21662 | f0b6398dd642dfb75c1393e8c3c88682794d152c | 537c24433014d3d991713202df9c907e0f114d5d | 2021-07-12T18:32:17Z | python | 2022-02-26T21:42:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,944 | ["Dockerfile"] | Include `Authlib` in Airflow Docker images | **Description**
`Authlib` is required for FAB authentication. Currently, it's not included in the Airflow images and must be pip installed separately. It's a small package supporting core functionality (Webserver UI authentication), hence would make sense to include. | https://github.com/apache/airflow/issues/16944 | https://github.com/apache/airflow/pull/17093 | d268016a7a6ff4b65079f1dea080ead02aea99bb | 3234527284ce01db67ba22c544f71ddaf28fa27e | 2021-07-12T15:40:58Z | python | 2021-07-19T23:12:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,939 | ["Dockerfile", "scripts/docker/compile_www_assets.sh"] | UI is broken for `breeze kind-cluster deploy` | Using `breeze kind-cluster deploy` to deploy airflow in Kubernetes cluster for development results in unusable UI
**Apache Airflow version**: main
**How to reproduce it**:
Start kind cluster with `./breeze kind-cluster start`
Deploy airflow with `./breeze kind-cluster deploy`
Check the UI and see that it's broken:

**Anything else we need to know**:
This is likely a result of https://github.com/apache/airflow/pull/16577
| https://github.com/apache/airflow/issues/16939 | https://github.com/apache/airflow/pull/17086 | bb1d79cb81c5a5a80f97ab4fecfa7db7a52c7b4b | 660027f65d5333368aad7f16d3c927b9615e60ac | 2021-07-12T10:11:53Z | python | 2021-07-19T17:52:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,922 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py"] | Using the string ".json" in a dag makes KubernetesPodOperator worker unable to trigger the pod | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Apache Airflow version**: 2.1.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:11:29Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.18) exceeds the supported minor version skew of +/-1
```
**Environment**:
- **Cloud provider or hardware configuration**: Scaleway Kubernetes Kapsule
- **OS** (e.g. from /etc/os-release): macOS
- **Kernel** (e.g. `uname -a`): Darwin Louisons-MacBook-Pro.local 20.5.0 Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64 x86_64
- **Install tools**:
- **Others**:
**What happened**:
While trying to write a simple DAG with the KubernetesPodOperator, I noticed that in certain cases the pod is launched, but not always. By investigating a bit more, I found that when the string `".json"` is present in the parameters of the KubernetesPodOperator, it will not work.
I tried to set up a minimal example to reproduce the bug.
I managed to reproduce the bug on my Kubernetes cluster and my Airflow instance (if it can help):
```python
import datetime
import airflow
from airflow.utils.dates import days_ago
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import \
KubernetesPodOperator
DAG_NAME = "trigger_test"
default_args = {
"owner": "Rapsodie Data",
"depends_on_past": False,
"wait_for_downstream": False,
"email": [""],
"email_on_failure": False,
"email_on_retry": False,
"retries": 0,
"retry_delay": datetime.timedelta(minutes=20),
}
with airflow.DAG(
"michel",
catchup=False,
default_args=default_args,
start_date=days_ago(1),
schedule_interval="*/10 * * * *",
) as dag:
kubernetes_min_pod_json = KubernetesPodOperator(
# The ID specified for the task.
task_id='pod-ex-minimum_json',
name='pod-ex-minimum_json',
cmds=['echo'],
namespace='default',
arguments=["vivi.json"],
image='gcr.io/gcp-runtimes/ubuntu_18_0_4'
)
kubernetes_min_pod_txt = KubernetesPodOperator(
# The ID specified for the task.
task_id='pod-ex-minimum_txt',
name='pod-ex-minimum_txt',
cmds=['echo'],
namespace='default',
arguments=["vivi.txt"],
image='gcr.io/gcp-runtimes/ubuntu_18_0_4'
)
kubernetes_min_pod_json
kubernetes_min_pod_txt
```
No error message or log to give here.
Here are the logs of the scheduler while trying to execute one run:
```
[2021-07-10 14:30:49,356] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d had an event of type MODIFIED
[2021-07-10 14:30:49,356] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d Succeeded
[2021-07-10 14:30:49,996] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_txt', 'execution_date': '2021-07-10T14:20:00+00:00', 'try_number': '1'}
[2021-07-10 14:30:50,004] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 20, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d', 'default', '56653001583') to None
[2021-07-10 14:30:50,006] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:20:00+00:00 exited with status None for try_number 1
[2021-07-10 14:31:00,478] {scheduler_job.py:964} INFO - 2 tasks up for execution:
<TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [scheduled]>
<TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [scheduled]>
[2021-07-10 14:31:00,483] {scheduler_job.py:993} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128 open slots and 2 task instances ready to be queued
[2021-07-10 14:31:00,483] {scheduler_job.py:1021} INFO - DAG michel has 0/16 running and queued tasks
[2021-07-10 14:31:00,484] {scheduler_job.py:1021} INFO - DAG michel has 1/16 running and queued tasks
[2021-07-10 14:31:00,484] {scheduler_job.py:1086} INFO - Setting the following tasks to queued state:
<TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [scheduled]>
<TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [scheduled]>
[2021-07-10 14:31:00,492] {scheduler_job.py:1128} INFO - Sending TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 1 and queue default
[2021-07-10 14:31:00,492] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py']
[2021-07-10 14:31:00,493] {scheduler_job.py:1128} INFO - Sending TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 1 and queue default
[2021-07-10 14:31:00,493] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py']
[2021-07-10 14:31:00,498] {kubernetes_executor.py:504} INFO - Add task TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) with command ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] with executor_config {}
[2021-07-10 14:31:00,500] {kubernetes_executor.py:504} INFO - Add task TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) with command ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] with executor_config {}
[2021-07-10 14:31:00,503] {kubernetes_executor.py:292} INFO - Kubernetes job is (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1), ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'], None, None)
[2021-07-10 14:31:00,558] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type ADDED
[2021-07-10 14:31:00,558] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:30:59.199174+00:00 exited with status queued for try_number 1
[2021-07-10 14:31:00,559] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending
[2021-07-10 14:31:00,559] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_json execution_date=2021-07-10 14:30:59.199174+00:00 exited with status queued for try_number 1
[2021-07-10 14:31:00,563] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:00,563] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending
[2021-07-10 14:31:00,576] {scheduler_job.py:1249} INFO - Setting external_id for <TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [queued]> to 1
[2021-07-10 14:31:00,577] {scheduler_job.py:1249} INFO - Setting external_id for <TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [queued]> to 1
[2021-07-10 14:31:00,621] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:00,622] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending
[2021-07-10 14:31:00,719] {kubernetes_executor.py:292} INFO - Kubernetes job is (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1), ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'], None, None)
[2021-07-10 14:31:00,752] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type ADDED
[2021-07-10 14:31:00,752] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending
[2021-07-10 14:31:00,769] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:00,770] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending
[2021-07-10 14:31:00,870] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:00,871] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending
[2021-07-10 14:31:03,961] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:03,961] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending
[2021-07-10 14:31:05,538] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:05,542] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending
[2021-07-10 14:31:07,092] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:07,092] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending
[2021-07-10 14:31:08,163] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:08,164] {kubernetes_executor.py:208} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b is Running
[2021-07-10 14:31:08,818] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:08,820] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending
[2021-07-10 14:31:09,924] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:09,925] {kubernetes_executor.py:208} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 is Running
[2021-07-10 14:31:28,861] {dagrun.py:429} ERROR - Marking run <DagRun michel @ 2021-07-10 14:30:59.199174+00:00: manual__2021-07-10T14:30:59.199174+00:00, externally triggered: True> failed
[2021-07-10 14:31:45,227] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED
[2021-07-10 14:31:45,227] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Succeeded
[2021-07-10 14:31:45,454] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_json', 'execution_date': '2021-07-10T14:30:59.199174+00:00', 'try_number': '1'}
[2021-07-10 14:31:45,459] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1', 'default', '56653030468') to None
[2021-07-10 14:31:45,463] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_json execution_date=2021-07-10 14:30:59.199174+00:00 exited with status None for try_number 1
[2021-07-10 14:31:47,817] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED
[2021-07-10 14:31:47,818] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Succeeded
[2021-07-10 14:31:48,373] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_txt', 'execution_date': '2021-07-10T14:30:59.199174+00:00', 'try_number': '1'}
[2021-07-10 14:31:48,376] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b', 'default', '56653031774') to None
[2021-07-10 14:31:48,378] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:30:59.199174+00:00 exited with status None for try_number 1
```
Don't hesitate to ask me if you need more info
| https://github.com/apache/airflow/issues/16922 | https://github.com/apache/airflow/pull/16930 | d3f300fba8c252cac79a1654fddb91532f44c656 | b2c66e45b7c27d187491ec6a1dd5cc92ac7a1e32 | 2021-07-10T14:35:22Z | python | 2021-07-11T17:35:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,921 | ["airflow/providers/salesforce/operators/bulk.py", "airflow/providers/salesforce/provider.yaml", "docs/apache-airflow-providers-salesforce/operators/bulk.rst", "docs/apache-airflow-providers-salesforce/operators/index.rst", "tests/providers/salesforce/operators/test_bulk.py", "tests/system/providers/salesforce/example_bulk.py"] | Add support for Salesforce Bulk API | **Description**
The Salesforce Bulk API is a very popular way to retrieve/push data to Salesforce; a maximum of 10k records can be pushed per Bulk API batch. Add a separate hook, `SalesforceBulkApiHook`, to support the Bulk API.
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm
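Until such a hook exists, one way to reach the Bulk API today is through the connection object of the existing `SalesforceHook` (a rough sketch only; the connection id and the `Account` object are placeholders, and it assumes the `simple_salesforce` client returned by `get_conn()`):
```python
from airflow.providers.salesforce.hooks.salesforce import SalesforceHook

def bulk_insert_accounts(records):
    """Push a batch of records (up to 10k) through the Salesforce Bulk API."""
    sf = SalesforceHook("salesforce_default").get_conn()  # simple_salesforce.Salesforce client
    # simple-salesforce exposes the Bulk API per sObject, e.g. sf.bulk.Account
    return sf.bulk.Account.insert(records)
```
A dedicated `SalesforceBulkApiHook` could wrap exactly this, plus batching and job status handling.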
**Use case / motivation**
In a lot of organizations this might be useful for performing ETL from Bigquery or data storage platforms to Salesforce using Bulk API.
**Are you willing to submit a PR?**
Yes
**Related Issues**
| https://github.com/apache/airflow/issues/16921 | https://github.com/apache/airflow/pull/24473 | 34b2ed4066794368f9bcf96b7ccd5a70ee342639 | b6a27594174c888af31d3fc71ea5f8b589883a12 | 2021-07-10T11:42:53Z | python | 2022-07-05T05:17:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,919 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py"] | error when using mysql_to_s3 (TypeError: cannot safely cast non-equivalent object to int64) |
**Apache Airflow version**:
**Environment**:
- **Cloud provider or hardware configuration**: aws
**What happened**:
**What you expected to happen**: when reading data with the `mysql_to_s3` transfer operator, the following exception happens:
```
[2021-07-10 03:24:04,051] {{mysql_to_s3.py:120}} INFO - Data from MySQL obtained
[2021-07-10 03:24:04,137] {{taskinstance.py:1482}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib64/python3.7/site-packages/pandas/core/arrays/integer.py", line 155, in safe_cast
return values.astype(dtype, casting="safe", copy=copy)
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/mysql_to_s3.py", line 122, in execute
self._fix_int_dtypes(data_df)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/mysql_to_s3.py", line 114, in _fix_int_dtypes
df[col] = df[col].astype(pd.Int64Dtype())
File "/usr/local/lib64/python3.7/site-packages/pandas/core/generic.py", line 5877, in astype
new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/internals/managers.py", line 631, in astype
return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/internals/managers.py", line 427, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/internals/blocks.py", line 673, in astype
values = astype_nansafe(vals1d, dtype, copy=True)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/dtypes/cast.py", line 1019, in astype_nansafe
return dtype.construct_array_type()._from_sequence(arr, dtype=dtype, copy=copy)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/arrays/integer.py", line 363, in _from_sequence
return integer_array(scalars, dtype=dtype, copy=copy)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/arrays/integer.py", line 143, in integer_array
values, mask = coerce_to_array(values, dtype=dtype, copy=copy)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/arrays/integer.py", line 258, in coerce_to_array
values = safe_cast(values, dtype, copy=False)
File "/usr/local/lib64/python3.7/site-packages/pandas/core/arrays/integer.py", line 164, in safe_cast
) from err
TypeError: cannot safely cast non-equivalent object to int64
```
**How to reproduce it**:
Create a table like the following in a MySQL database and use `mysql_to_s3` to load data from this table into S3:
```
create table test_data(id int, some_decimal decimal(10, 2))
insert into test_data (id, some_decimal) values(1, 99999999.99), (2, null)
```
**Anything else we need to know**:
The following code is the problem: it looks for an occurrence of the float data type in the column's dtype name, and then, instead of using `pd.Float64Dtype()`, it uses `pd.Int64Dtype()`. Since there can be floating-point values in the array, this causes the exception when safely casting the array to that dtype.
```
def _fix_int_dtypes(self, df: pd.DataFrame) -> None:
"""Mutate DataFrame to set dtypes for int columns containing NaN values."""
for col in df:
if "float" in df[col].dtype.name and df[col].hasnans:
# inspect values to determine if dtype of non-null values is int or float
notna_series = df[col].dropna().values
if np.isclose(notna_series, notna_series.astype(int)).all():
# set to dtype that retains integers and supports NaNs
df[col] = np.where(df[col].isnull(), None, df[col])
df[col] = df[col].astype(pd.Int64Dtype())
```
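To illustrate why the `isclose` check lets these values through, here is a standalone sketch (not the operator code itself) that mirrors the same mutation with plain pandas/numpy:
```python
import numpy as np
import pandas as pd

# Values as they come back from the DECIMAL(10, 2) column created above.
col = pd.Series([99999999.99, np.nan])

notna = col.dropna().values
# Passes: np.isclose uses a *relative* tolerance (rtol=1e-05 by default), which at
# this magnitude allows a difference of ~1000, so 99999999.99 counts as "close" to 99999999.
print(np.isclose(notna, notna.astype(int)).all())  # True

col = pd.Series(np.where(col.isnull(), None, col))  # same mutation as in _fix_int_dtypes
col.astype(pd.Int64Dtype())  # raises: cannot safely cast non-equivalent object to int64
```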
Moreover, I don't know why we use `isclose` to inspect whether the values would be close enough if cast to integer, when we have the option to cast to `Float64Dtype`.
```isclose``` here destroys the perception of the data because it is not an equal evaluation of the sets to determine if the type is float or int. It will approximately check which is the root cause of the exception that follows. | https://github.com/apache/airflow/issues/16919 | https://github.com/apache/airflow/pull/21277 | 0bcca55f4881bacc3fbe86f69e71981f5552b398 | 0a6ea572fb5340a904e9cefaa656ac0127b15216 | 2021-07-10T05:13:39Z | python | 2022-02-06T19:07:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,911 | ["UPDATING.md", "airflow/providers/google/cloud/example_dags/example_dataproc.py", "docs/apache-airflow-providers-google/operators/cloud/dataproc.rst"] | Error in passing metadata to DataprocClusterCreateOperator |
Hi,
I am facing some issues while installing pip packages in a Dataproc cluster using an initialization script.
I am trying to upgrade to Airflow 2.0 from 1.10.12 (where this code works fine):
```
[2021-07-09 11:35:37,587] {taskinstance.py:1454} ERROR - metadata was invalid: [('PIP_PACKAGES', 'pyyaml requests pandas openpyxl'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.35.0 gax/1.26.0 gccl/airflow_v2.0.0+astro.3')
```
```python
path = f"gs://goog-dataproc-initialization-actions-{self.cfg.get('region')}/python/pip-install.sh"
return DataprocClusterCreateOperator(
........
init_actions_uris=[path],
metadata=[('PIP_PACKAGES', 'pyyaml requests pandas openpyxl')],
............
)
```
**Apache Airflow version**:
airflow_v2.0.0
**What happened**:
I am trying to migrate our codebase from Airflow v1.10.12. On deeper analysis, I found that as part of the refactoring in PR #6371 below, we can no longer pass **metadata** to DataprocClusterCreateOperator(), because it is not being passed on to the ClusterGenerator() method.
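If that is the case, a possible workaround (a sketch only; project/region/cluster values are placeholders, and it assumes `ClusterGenerator` still accepts the GCE cluster `metadata` the way the 1.10 operator did, so the expected type should be checked against the installed provider's signature) would be to build the cluster config explicitly and hand it to the operator:
```python
from airflow.providers.google.cloud.operators.dataproc import (
    ClusterGenerator,
    DataprocCreateClusterOperator,
)

region = "us-central1"  # placeholder
path = f"gs://goog-dataproc-initialization-actions-{region}/python/pip-install.sh"

# Build the config with ClusterGenerator so init_actions_uris/metadata actually reach it.
cluster_config = ClusterGenerator(
    project_id="my-project",  # placeholder
    init_actions_uris=[path],
    # GCE instance metadata consumed by pip-install.sh -- not the gRPC call metadata.
    metadata={'PIP_PACKAGES': 'pyyaml requests pandas openpyxl'},
).make()

create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id="my-project",   # placeholder
    region=region,
    cluster_name="my-cluster",  # placeholder
    cluster_config=cluster_config,
)
```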
**What you expected to happen**:
Operator should work as before.
| https://github.com/apache/airflow/issues/16911 | https://github.com/apache/airflow/pull/19446 | 0c9ce547594bad6451d9139676d0a5039d3ec182 | 48f228cf9ef7602df9bea6ce20d663ac0c4393e1 | 2021-07-09T13:03:48Z | python | 2021-11-15T21:39:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,907 | ["docs/apache-airflow/concepts/scheduler.rst"] | Add tests suite for MariaDB 10.6+ and fix incompatibilities | It seems that MariaDB 10.6 has added support for SKIP...LOCKED, so we could theoretically easily officially support MariaDB database (possibly with fixing some small issues resulting for test suite execution).
It would require to add `mariadb` backend similarly as we added `mssql` backend in those three PRs: #9973, #16103, #16134 | https://github.com/apache/airflow/issues/16907 | https://github.com/apache/airflow/pull/17287 | 9cd5a97654fa82f1d4d8f599e8eb81957b3f7286 | 6c9eab3ea0697b82f11acf79656129604ec0e8f7 | 2021-07-09T08:03:08Z | python | 2021-07-28T16:56:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,891 | ["CONTRIBUTING.rst", "airflow/providers/amazon/aws/example_dags/example_salesforce_to_s3.py", "airflow/providers/amazon/aws/transfers/salesforce_to_s3.py", "airflow/providers/amazon/provider.yaml", "airflow/providers/dependencies.json", "docs/apache-airflow-providers-amazon/operators/salesforce_to_s3.rst", "tests/providers/amazon/aws/transfers/test_salesforce_to_s3.py"] | Add a SalesforceToS3Operator to push Salesforce data to an S3 bucket | **Description**
Currently an operator exists to copy Salesforce data to Google Cloud Storage (`SalesforceToGcsOperator`) but a similar operator for an S3 destination is absent. Since S3 is widely used as part of general storage/data lakes within data pipelines as well as Salesforce to manage a slew of marketing, customer, and sales data, this operator seems beneficial.
**Use case / motivation**
Undoubtedly there are use cases to extract Salesforce into an S3 bucket, perhaps augmenting a data warehouse with said data downstream, or ensuring the data is sync'd to a data lake as part of a modern data architecture. I imagine users are currently building custom operators to do so in Airflow (if not taking advantage of an external service to handle the copy/sync). Ideally this functionality would be included within the AWS provider as well as help provide some feature parity with Google Cloud in Airflow.
**Are you willing to submit a PR?**
Yes 🚀
**Related Issues**
None that I can find.
| https://github.com/apache/airflow/issues/16891 | https://github.com/apache/airflow/pull/17094 | 038b87ecfa4466a405bcf7243872ef927800b582 | 32582b5bf1432e7c7603b959a675cf7edd76c9e6 | 2021-07-08T17:50:50Z | python | 2021-07-21T16:31:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,887 | ["airflow/www/static/js/graph.js"] | Show duration of a task group | **Description**
Show the duration of the task group in the Airflow UI.

**Use case / motivation**
The task groups are a nice way to encapsulate multiple tasks. However, in the Graph view, the duration of the grouped tasks isn't visible. You need to expand the group to view them.
It's possible to view the task durations in the Task Duration view, but that isn't as convenient if you want to zoom in on a particular section of the pipeline.
**Are you willing to submit a PR?**
Yes, but I'm not familiar with the codebase. If it a relatively easy fix, I would appreciate some guidance on which files to touch.
**Related Issues**
None | https://github.com/apache/airflow/issues/16887 | https://github.com/apache/airflow/pull/18406 | c1f34bdb9fefe1b0bc8ce2a69244c956724f4c48 | deb01dd806fac67e71e706cd2c00a7a8681c512a | 2021-07-08T15:03:39Z | python | 2021-09-23T11:56:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,881 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Re-deploy scheduler tasks failing with SIGTERM on K8s executor | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.18.17-gke.1901
**Environment**:
- **Cloud provider or hardware configuration**: Google Cloud
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-scheduler-7697b66974-m6mct 5.4.89+ #1 SMP Sat Feb 13 19:45:14 PST 2021 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
When the `scheduler` is restarted, the currently running tasks fail with a SIGTERM error.
Every time the `scheduler` is restarted or re-deployed, the current `scheduler` is terminated and a new `scheduler` is created. If there are tasks running during this process, the new `scheduler` will terminate these tasks with `complete` status, and new tasks will be created to continue the work of the terminated ones. After a few seconds the new tasks are terminated with `error` status and a SIGTERM error.
<details><summary>Error log</summary> [2021-07-07 14:59:49,024] {cursor.py:661} INFO - query execution done
[2021-07-07 14:59:49,025] {arrow_result.pyx:0} INFO - fetching data done
[2021-07-07 15:00:07,361] {local_task_job.py:196} WARNING - State of this instance has been externally set to failed. Terminating instance.
[2021-07-07 15:00:07,363] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 150
[2021-07-07 15:00:12,845] {taskinstance.py:1264} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-07-07 15:00:12,907] {process_utils.py:66} INFO - Process psutil.Process(pid=150, status='terminated', exitcode=0, started='14:59:46') (150) terminated with exit code 0 </details>
**What you expected to happen**:
The tasks currently running should be allowed to finish their work, or the substitute tasks should complete their work successfully.
The new `scheduler` should not interfere with the running tasks.
**How to reproduce it**:
To reproduce, it is necessary to start a DAG that has some task(s) that take a few minutes to complete (see the sketch below). While these task(s) are processing, a new deploy of the `scheduler` should be executed. During the re-deploy, the current `scheduler` will be terminated and a new one will be created. The current task(s) will be marked completed (without finishing their processing) and substituted by new ones that fail within seconds.
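For example, something along these lines should exercise it (a sketch; the scheduler deployment name is whatever your Helm chart uses):
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

with DAG("long_running_repro", start_date=days_ago(1), schedule_interval=None) as dag:
    # A task that takes long enough to still be running when the scheduler is re-deployed.
    BashOperator(task_id="sleep_ten_minutes", bash_command="sleep 600")

# While the task is running, re-deploy the scheduler, e.g.:
#   kubectl rollout restart deployment/<your-scheduler-deployment>
```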
**Anything else we need to know**:
The problem was not happening with Airflow 1.10.15 and it started to happens after the upgrade to Airflow 2.1.0. | https://github.com/apache/airflow/issues/16881 | https://github.com/apache/airflow/pull/19375 | e57c74263884ad5827a5bb9973eb698f0c269cc8 | 38d329bd112e8be891f077b4e3300182930cf74d | 2021-07-08T08:43:07Z | python | 2021-11-03T06:45:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,880 | ["airflow/providers/amazon/aws/sensors/sqs.py", "setup.py", "tests/providers/amazon/aws/sensors/test_sqs.py"] | Improve AWS SQS Sensor | **Description**
Improve the AWS SQS Sensor as follows:
+ Add optional visibility_timeout parameter
+ Add a customisable / overrideable filter capability so we can filter/ignore irrelevant messages
[Not needed, see below conversation]
--- Check the HTTP status code in AWS response and raise Exception if not 200 - best practice
**Use case / motivation**
I'd like to make the SQS sensor more flexible to enable the following use case:
+ A single queue can be used as a channel for messages from multiple event sources and/or multiple targets
+ We need a way to filter and ignore messages not relevant to us, which other processes are looking for
**Are you willing to submit a PR?**
Yes, please assign to me
**Related Issues**
None | https://github.com/apache/airflow/issues/16880 | https://github.com/apache/airflow/pull/16904 | 2c1880a90712aa79dd7c16c78a93b343cd312268 | d28efbfb7780afd1ff13a258dc5dc3e3381ddabd | 2021-07-08T08:11:56Z | python | 2021-08-02T20:47:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,877 | ["airflow/www/static/js/tree.js"] | Cleared task instances in manual runs should have borders | **Description**
Task instances in manual runs do not display with a border, except when the task instance is non-existent (after adding tasks to an existing DAG). Hence, when an _existing_ task instance is cleared, it is displayed without a border, causing it to disappear into the background. To be consistent, existing task instances that are cleared should also be drawn with borders.
Here, `task2a` and `task2b` are newly-added tasks and have `no_status`. They are displayed with borders:

Afterwards, `task1a` and `task1b` are cleared and lose their borders:

**Use case / motivation**
To prevent the task instances from disappearing into the background.
**Are you willing to submit a PR?**
Yes, but would need ramp-up time as I am new to front-end.
**Related Issues**
Split from #16824.
| https://github.com/apache/airflow/issues/16877 | https://github.com/apache/airflow/pull/18033 | a8184e42ce9d9b7f6b409f07c1e2da0138352ef3 | d856b79a1ddab030ab3e873ae2245738b949c30a | 2021-07-08T05:01:33Z | python | 2021-09-21T13:32:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,844 | ["airflow/api_connexion/openapi/v1.yaml"] | Rest API: allow filtering DagRuns by state. | **Add state filter to the dag runs list API endpoint**
One feature available in the "Browse / Dag Runs" page but not in the current REST API is the ability to filter runs by specific state(s).
Example use-case: this would let a client efficiently fetch the number of "queued" and "running" runs, or look at recent failed runs.
Ideally, the current `/dags/{dag_id}/dagRuns` and `/dags/~/dagRuns/list` endpoints would each get updated to support an additional parameter called `state`. This parameter could be given multiple times and act as a logical "OR" (just like `dag_ids` in the POST endpoint, or like `state` in the task instances endpoint).
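For illustration, the kind of request this would enable (the repeated `state` parameter is the proposal here, not something the current API accepts; host, DAG id and credentials are placeholders):
```python
import requests

# Repeating the parameter acts as a logical OR, like dag_ids in the POST endpoint.
resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns",
    params=[("state", "queued"), ("state", "running")],
    auth=("admin", "admin"),
)
print(resp.json()["total_entries"])
```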
The web UI page offers more fancy filters like "includes", but for something with a finite number of values like `state`, it doesn't feel necessary for the API. | https://github.com/apache/airflow/issues/16844 | https://github.com/apache/airflow/pull/20697 | 4fa9cfd7de13cd79956fbb68f8416a5a019465a4 | 376da6a969a3bb13a06382a38ab467a92fee0179 | 2021-07-07T00:03:32Z | python | 2022-01-06T10:10:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,836 | ["airflow/www/templates/airflow/model_list.html"] | WebUI broke when enable_xcom_pickling Airflow2.1 |
**Apache Airflow version**: 2.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
AWS EKS, AWS ECS
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
I have a PythonOperator whose python_callable just returns a DataFrame, and I was getting this error:
```
[2021-07-06 15:02:08,889] {xcom.py:229} ERROR - Could not serialize the XCom value into JSON. If you are using pickle instead of JSON for XCom, then you need to enable pickle support for XCom in your airflow config.
[2021-07-06 15:02:08,890] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1344, in _execute_task
self.xcom_push(key=XCOM_RETURN_KEY, value=result)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1925, in xcom_push
session=session,
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/xcom.py", line 79, in set
value = XCom.serialize_value(value)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/xcom.py", line 226, in serialize_value
return json.dumps(value).encode('UTF-8')
File "/usr/local/lib/python3.7/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type DataFrame is not JSON serializable
```
Then I set `enable_xcom_pickling: True` in airflow.cfg, as suggested in the error. That worked and the DAG was successful, but then the XCom UI broke with the following error:
```
Something bad has happened.
Please consider letting us know by creating a bug report using GitHub.
Python version: 3.7.10
Airflow version: 2.1.0
Node: airflow-webserver-7d7f64fbc4-zl8zk
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/views.py", line 553, in list
self.list_template, title=self.list_title, widgets=widgets
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 288, in render_template
template, **dict(list(kwargs.items()) + list(self.extra_args.items()))
File "/home/airflow/.local/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/home/airflow/.local/lib/python3.7/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/general/model/list.html", line 2, in top-level template code
{% import 'appbuilder/general/lib.html' as lib %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/base.html", line 1, in top-level template code
{% extends base_template %}
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/templates/airflow/main.html", line 20, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 37, in top-level template code
{% block body %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 19, in block "body"
{% block content %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/general/model/list.html", line 13, in block "content"
{% block list_list scoped %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/general/model/list.html", line 15, in block "list_list"
{{ widgets.get('list')()|safe }}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/widgets.py", line 37, in __call__
return template.render(args)
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/airflow/.local/lib/python3.7/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/templates/airflow/model_list.html", line 21, in top-level template code
{% extends 'appbuilder/general/widgets/base_list.html' %}
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/templates/appbuilder/general/widgets/base_list.html", line 23, in top-level template code
{% block begin_loop_values %}
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/templates/airflow/model_list.html", line 80, in block "begin_loop_values"
{% elif item[value] != None %}
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/generic.py", line 1443, in __nonzero__
f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
**What you expected to happen**:
I expected to see the XCom return_value in the XCom list, as it used to appear in previous versions.
**How to reproduce it**:
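A minimal DAG along these lines should reproduce it (my own sketch based on the description above, with `enable_xcom_pickling` turned on):
```python
import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago

def return_dataframe():
    # The returned DataFrame is pushed to XCom; with pickling enabled the push
    # succeeds, but rendering the value in the XCom list view then fails as above.
    return pd.DataFrame({"a": [1, 2, 3]})

with DAG("xcom_pickling_repro", start_date=days_ago(1), schedule_interval=None) as dag:
    PythonOperator(task_id="return_dataframe", python_callable=return_dataframe)
```
Then trigger the DAG and open the XCom list in the UI.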
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/16836 | https://github.com/apache/airflow/pull/16893 | 2fea4cdceaa12b3ac13f24eeb383af624aacb2e7 | dcc7fb56708773e929f742c7c8463fb8e91e7340 | 2021-07-06T16:06:48Z | python | 2021-07-12T14:23:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,834 | ["airflow/utils/log/file_task_handler.py"] | Airflow dashboard cannot load logs containing emoji | **Apache Airflow version**:
2.1.0
**What happened**:
When printing emoji to a DAG log, the Airflow dashboard fails to display the entire log.
When checking the output log in the Airflow dashboard, the following error message appears:
> *** Failed to load local log file: /tmp/dag_name/task_name/2021-07-06T10:49:18.136953+00:00/1.log
> *** 'ascii' codec can't decode byte 0xf0 in position 3424: ordinal not in range(128)
**What you expected to happen**:
The log should be displayed.
**How to reproduce it**:
Insert the following into any Python DAG, then run it.
`print("💼")`
**How often does this problem occur?**
Every log with an emoji in it prints an error.
**Why would anyone even want to print emoji in their logs?**
When they're part of the dataset you're processing. | https://github.com/apache/airflow/issues/16834 | https://github.com/apache/airflow/pull/17965 | 02397761af7ed77b0e7c4f4d8de34d8a861c5b40 | 2f1ed34a7ec699bd027004d1ada847ed15f4aa4b | 2021-07-06T12:39:54Z | python | 2021-09-12T17:57:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,833 | ["docs/helm-chart/customizing-workers.rst", "docs/helm-chart/index.rst"] | Chart: Add docs on using custom pod-template | Currently, we allow users to use their own `podTemplate` yaml using the `podTemplate` key in `values.yaml`. Some users have passed the name of the file in `podTemplate` instead of YAML string.
We should have a dedicated page on how a user can do that, and add an example in the `values.yaml` file itself (see the sketch below).
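For illustration, the kind of example that could sit next to the key in `values.yaml` (a sketch; the pod spec fields are only illustrative, the point being that `podTemplate` takes the YAML itself as a string, not a file name):
```yaml
# values.yaml
podTemplate: |
  apiVersion: v1
  kind: Pod
  metadata:
    name: placeholder-name
  spec:
    containers:
      - name: base
        image: apache/airflow:2.1.2
    restartPolicy: Never
```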
https://airflow.apache.org/docs/helm-chart/stable/parameters-ref.html
https://github.com/apache/airflow/blob/81fde5844de37e90917deaaff9576914cb2637ee/chart/values.yaml#L1123-L1125
https://github.com/apache/airflow/blob/81fde5844de37e90917deaaff9576914cb2637ee/chart/templates/configmaps/configmap.yaml#L59-L65 | https://github.com/apache/airflow/issues/16833 | https://github.com/apache/airflow/pull/20331 | e148bf6b99b9b62415a7dd9fbfa594e0f5759390 | 8192a801f3090c4da19427819d551405c58d37e5 | 2021-07-06T12:37:35Z | python | 2021-12-16T17:19:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,828 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/providers/elasticsearch/log/es_task_handler.py", "tests/providers/elasticsearch/log/elasticmock/fake_elasticsearch.py", "tests/providers/elasticsearch/log/test_es_task_handler.py"] | Cannot Set Index Pattern on Elasticsearch as a Log Handler | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): ```Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.8-aliyun.1", GitCommit:"94f1dc8", GitTreeState:"", BuildDate:"2021-01-10T02:57:47Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}```
**Environment**: -
- **Cloud provider or hardware configuration**: Alibaba Cloud
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): `Linux airflow-webserver-fb89b7f8b-fgzvv 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 GNU/Linux`
- **Install tools**: Helm (Custom)
- **Others**: None
**What happened**:
My Airflow deployment uses fluent-bit to capture the stdout logs from the airflow containers and then sends the log messages to Elasticsearch on a remote machine. That works well; I can see the logs through Kibana. But Airflow cannot display the logs, because of an error:
```
ERROR - Exception on /get_logs_with_metadata [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1054, in get_logs_with_metadata
logs, metadata = task_log_reader.read_log_chunks(ti, try_number, metadata)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/log_reader.py", line 58, in read_log_chunks
logs, metadatas = self.log_handler.read(ti, try_number, metadata=metadata)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 217, in read
log, metadata = self._read(task_instance, try_number_element, metadata)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 160, in _read
logs = self.es_read(log_id, offset, metadata)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 233, in es_read
max_log_line = search.count()
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch_dsl/search.py", line 701, in count
return es.count(index=self._index, body=d, **self._params)["count"]
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch/client/utils.py", line 84, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch/client/__init__.py", line 528, in count
return self.transport.perform_request(
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch/transport.py", line 351, in perform_request
status, headers_response, data = connection.perform_request(
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch/connection/http_urllib3.py", line 261, in perform_request
self._raise_error(response.status, raw_data)
File "/home/airflow/.local/lib/python3.8/site-packages/elasticsearch/connection/base.py", line 181, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'security_exception', 'no permissions for [indices:data/read/search] and User [name=airflow, backend_roles=[], request
```
but when I debug and use this code, I can see the logs:
```
es = elasticsearch.Elasticsearch(['...'], **es_kwargs)
es.search(index="airflow-*", body=dsl)
```
and when I look into the source code of the elasticsearch provider, there is no definition of an index pattern there:
https://github.com/apache/airflow/blob/88199eefccb4c805f8d6527bab5bf600b397c35e/airflow/providers/elasticsearch/log/es_task_handler.py#L216
so I assume the issue is insufficient permission to scan all the indices. Therefore, how can I set the index pattern so that Airflow only reads certain indices?
Thank you!
**What you expected to happen**: The Airflow configuration should have an option to set an Elasticsearch index pattern so that Airflow only queries certain indices, instead of querying all indexes on the Elasticsearch server
**How to reproduce it**: Click the log button on the task popup modal to open the logs page
**Anything else we need to know**: Every time etc | https://github.com/apache/airflow/issues/16828 | https://github.com/apache/airflow/pull/23888 | 68217f5df872a0098496cf75937dbf3d994d0549 | 99bbcd3780dd08a0794ba99eb69006c106dcf5d2 | 2021-07-06T04:22:18Z | python | 2022-12-08T02:23:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,806 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Error mounting /tmp/airflowtmp... with remote docker |
**Apache Airflow version**: v2.1.0
**Environment**:
- **Cloud provider or hardware configuration**: ec2 t3a.medium
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): 5.4.0-1051-aws
- **Install tools**: sudo pip3 install apache-airflow[mysql,ssh,docker,amazon]
- **Others**: python 3.6.9
**What happened**:
Task fails with error:
```none
docker.errors.APIError: 400 Client Error for http://192.168.1.50:2375/v1.41/containers/create:
Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp7naq_r53")
```
**How to reproduce it**:
Create a separate EC2 instance and forward the Docker daemon:
```shell
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/options.conf
echo -e """
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
""" >> /etc/systemd/system/docker.service.d/options.conf
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Create a DAG with the DockerOperator:
```python
DockerOperator(
task_id="run_image",
docker_url="tcp://192.168.1.50:2375",
image="ubuntu:latest",
dag=dag,
)
```
Run the DAG.
**Anything else we need to know**:
To me it looks like the DockerOperator is creating a temporary directory locally and then tries to bind-mount it into the container. However, as the container runs on a remote Docker host, that directory doesn't exist there. Here is the code part:
```python
class DockerOperator(BaseOperator):
...
def _run_image(self) -> Optional[str]:
"""Run a Docker container with the provided image"""
self.log.info('Starting docker container from image %s', self.image)
with TemporaryDirectory(prefix='airflowtmp', dir=self.host_tmp_dir) as host_tmp_dir:
if not self.cli:
raise Exception("The 'cli' should be initialized before!")
tmp_mount = Mount(self.tmp_dir, host_tmp_dir, "bind")
self.container = self.cli.create_container(
command=self.format_command(self.command),
name=self.container_name,
environment={**self.environment, **self._private_environment},
host_config=self.cli.create_host_config(
auto_remove=False,
mounts=self.mounts + [tmp_mount],
network_mode=self.network_mode,
shm_size=self.shm_size,
dns=self.dns,
dns_search=self.dns_search,
cpu_shares=int(round(self.cpus * 1024)),
mem_limit=self.mem_limit,
cap_add=self.cap_add,
extra_hosts=self.extra_hosts,
privileged=self.privileged,
),
image=self.image,
user=self.user,
entrypoint=self.format_command(self.entrypoint),
working_dir=self.working_dir,
tty=self.tty,
)
```
I see no way of disabling this behavior without some major patching.
How are you guys using remote docker daemons? Is this a use case? Would it be possible to implement something to allow that? | https://github.com/apache/airflow/issues/16806 | https://github.com/apache/airflow/pull/16932 | fc0250f1d5c43784f353dbdf4a34089aa96c28e5 | bc004151ed6924ee7bec5d9d047aedb4873806da | 2021-07-05T08:35:47Z | python | 2021-07-15T04:35:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,803 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | DockerOperator not working from containerized Airflow not recognizing `/var/run/docker.sock` |
**Apache Airflow version**: 2.1.1
**Docker Image:** `apache/airflow:2.1.1-python3.8`
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not running on k8s.
**Environment**:
- **Cloud provider or hardware configuration**: DigitalOcean
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`): `Linux airflow 5.11.0-22-generic #23-Ubuntu SMP Thu Jun 17 00:34:23 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux`
- **Install tools**: Installed via Docker & Docker Compose following instructions from [official docker-compose installation docs](https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#docker-compose-yaml).
- **Others**: N/A
**What happened**:
**What you expected to happen**:
**How to reproduce it**:
Running with the `DockerOperator` causes the following error:
```
*** Reading local file: /opt/airflow/logs/docker_eod_us_equities/docker_command_sleep/2021-07-04T23:19:48.431208+00:00/2.log
[2021-07-04 23:49:53,886] {taskinstance.py:896} INFO - Dependencies all met for <TaskInstance: docker_eod_us_equities.docker_command_sleep 2021-07-04T23:19:48.431208+00:00 [queued]>
[2021-07-04 23:49:53,902] {taskinstance.py:896} INFO - Dependencies all met for <TaskInstance: docker_eod_us_equities.docker_command_sleep 2021-07-04T23:19:48.431208+00:00 [queued]>
[2021-07-04 23:49:53,903] {taskinstance.py:1087} INFO -
--------------------------------------------------------------------------------
[2021-07-04 23:49:53,903] {taskinstance.py:1088} INFO - Starting attempt 2 of 2
[2021-07-04 23:49:53,903] {taskinstance.py:1089} INFO -
--------------------------------------------------------------------------------
[2021-07-04 23:49:53,911] {taskinstance.py:1107} INFO - Executing <Task(DockerOperator): docker_command_sleep> on 2021-07-04T23:19:48.431208+00:00
[2021-07-04 23:49:53,914] {standard_task_runner.py:52} INFO - Started process 3583 to run task
[2021-07-04 23:49:53,920] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'docker_eod_us_equities', 'docker_command_sleep', '2021-07-04T23:19:48.431208+00:00', '--job-id', '52', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/eod_us_equities.py', '--cfg-path', '/tmp/tmp5iz14yg9', '--error-file', '/tmp/tmpleaymjfa']
[2021-07-04 23:49:53,920] {standard_task_runner.py:77} INFO - Job 52: Subtask docker_command_sleep
[2021-07-04 23:49:54,016] {logging_mixin.py:104} INFO - Running <TaskInstance: docker_eod_us_equities.docker_command_sleep 2021-07-04T23:19:48.431208+00:00 [running]> on host 0dff7922cb76
[2021-07-04 23:49:54,172] {taskinstance.py:1300} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=docker_eod_us_equities
AIRFLOW_CTX_TASK_ID=docker_command_sleep
AIRFLOW_CTX_EXECUTION_DATE=2021-07-04T23:19:48.431208+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-07-04T23:19:48.431208+00:00
[2021-07-04 23:49:54,205] {docker.py:231} INFO - Starting docker container from image alpine
[2021-07-04 23:49:54,216] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.30/containers/create?name=docker_command_sleep
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 237, in _run_image
self.container = self.cli.create_container(
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.30/containers/create?name=docker_command_sleep: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmpw5gvv6dj")
[2021-07-04 23:49:54,222] {taskinstance.py:1544} INFO - Marking task as FAILED. dag_id=docker_eod_us_equities, task_id=docker_command_sleep, execution_date=20210704T231948, start_date=20210704T234953, end_date=20210704T234954
[2021-07-04 23:49:54,297] {local_task_job.py:151} INFO - Task exited with return code 1
```
I have even tried with Docker API v1.41 (latest) and hit the same issue. I have bind-mounted `/var/run/docker.sock` into the container.
**Docker Compose:**
```yaml
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL.
#
# WARNING: This configuration is for local development. Do not use it in a production deployment.
#
# This configuration supports basic configuration using environment variables or an .env file
# The following variables are supported:
#
# AIRFLOW_IMAGE_NAME - Docker image name used to run Airflow.
# Default: apache/airflow:master-python3.8
# AIRFLOW_UID - User ID in Airflow containers
# Default: 50000
# AIRFLOW_GID - Group ID in Airflow containers
# Default: 50000
#
# Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode
#
# _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account (if requested).
# Default: airflow
# _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account (if requested).
# Default: airflow
# _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
# Default: ''
#
# Feel free to modify this file to suit your needs.
---
version: '3'
x-airflow-common: &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.1-python3.8}
  environment: &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: 300 # Just to have a fast load in the front-end. Do not use it in production with those configurations.
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
    AIRFLOW__CORE__ENABLE_XCOM_PICKLING: 'true' # "_run_image of the DockerOperator returns now a python string, not a byte string" Ref: https://github.com/apache/airflow/issues/13487
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
    - '/var/run/docker.sock:/var/run/docker.sock' # We will pass the Docker Deamon as a volume to allow the webserver containers start docker images. Ref: https://stackoverflow.com/q/51342810/7024760
  user: '${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}'
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'airflow']
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - 80:8080
    healthcheck:
      test: ['CMD', 'curl', '--fail', 'http://localhost:80/health']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"',
        ]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - 'CMD-SHELL'
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    healthcheck:
      test: ['CMD', 'curl', '--fail', 'http://localhost:5555/']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
```
**DAG:**
```python
from datetime import datetime, timedelta
import pendulum
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.providers.docker.operators.docker import DockerOperator
from airflow.operators.dummy_operator import DummyOperator
AMERICA_NEW_YORK_TIMEZONE = pendulum.timezone('US/Eastern')
default_args = {
    'owner': 'airflow',
    'description': 'Docker testing',
    'depend_on_past': False,
    'start_date': datetime(2021, 5, 1, tzinfo=AMERICA_NEW_YORK_TIMEZONE),
    'retries': 1,
    'retry_delay': timedelta(minutes=30),
}
with DAG(
    'docker_test',
    default_args=default_args,
    schedule_interval="15 20 * * *",
    catchup=False,
) as dag:
    start_dag = DummyOperator(task_id='start_dag')
    end_dag = DummyOperator(task_id='end_dag')
    t1 = BashOperator(task_id='print_current_date', bash_command='date')
    t2 = DockerOperator(
        task_id='docker_command_sleep',
        image='alpine',
        container_name='docker_command_sleep',
        api_version='1.30',
        auto_remove=True,
        command="/bin/sleep 3",
        docker_url="unix://var/run/docker.sock",
        network_mode="bridge",
        do_xcom_push=True,
    )
    start_dag >> t1
    t1 >> t2
    t2 >> end_dag
```
**Anything else we need to know**: The problem happens any time `DockerOperator` is used. I am not entirely sure why this is happening, given that the docker sock is fully permissive (has `777`) and is bind-mounted into the container. When I test via the *docker-py* client in a Python shell under the `airflow` user inside the container, all docker-py operations like listing running containers work perfectly fine, confirming the mounted docker UNIX socket is available and working. However, even with the `docker_url` pointing to the docker socket in the above DAG, I am getting the error thrown in the above trace.
For whatever strange reason the logs say it's trying to connect over `http+docker://localhost/v1.30/containers/create` instead of the UNIX docker socket that's bind mounted and explicitly specified via `docker_url`.
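As a side note, docker-py renders requests that go over a unix socket as `http+docker://localhost/...`, so that URL alone does not prove the socket was bypassed. Below is a rough sketch of the kind of in-container docker-py check mentioned above; the exact calls are illustrative:
```python
# Quick in-container sanity check (run as the airflow user); values are illustrative.
import docker

client = docker.DockerClient(base_url="unix://var/run/docker.sock")
print(client.version()["ApiVersion"])              # the daemon is reachable through the socket
print([c.name for c in client.containers.list()])  # listing containers works fine
```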
| https://github.com/apache/airflow/issues/16803 | https://github.com/apache/airflow/pull/16932 | fc0250f1d5c43784f353dbdf4a34089aa96c28e5 | bc004151ed6924ee7bec5d9d047aedb4873806da | 2021-07-05T01:05:27Z | python | 2021-07-15T04:35:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,783 | ["airflow/www/auth.py", "airflow/www/templates/airflow/no_roles.html", "airflow/www/views.py", "tests/www/views/test_views_acl.py"] | Airflow 2.1.0 Oauth for google Too Many Redirects b/c Google User does not have Role | The issue is similar to tickets [16587](https://github.com/apache/airflow/issues/16587) and [14829](https://github.com/apache/airflow/issues/14829); however, I have a newer Airflow version AND newer packages than the ones suggested there, and I am still getting the same outcome. When using Google auth in Airflow and attempting to sign in, we get an ERR_TOO_MANY_REDIRECTS. I know what causes the symptom of this, but I am hoping to find a way to keep a Role in place and avoid the redirects.
- **Apache Airflow version**:
Version: v2.1.0
Git Version: .release:2.1.0+304e174674ff6921cb7ed79c0158949b50eff8fe
- **Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1600", GitCommit:"7b8e568a7fb4c9d199c2ba29a5f7d76f6b4341c2", GitTreeState:"clean", BuildDate:"2021-05-07T09:18:53Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"}
- **Environment**: Staging
- **Cloud provider or hardware configuration**: GKE on
- **OS** (e.g. from /etc/os-release):
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
- **Kernel** (e.g. `uname -a`):
Linux margins-scheduler-97b6fb867-fth8p 5.4.89+ #1 SMP Sat Feb 13 19:45:14 PST 2021 x86_64 GNU/Linux
- **Install tools**: pip freeze below
**What happened**:
When using google auth in airflow and attempting to sign in, we get an ERR_TOO_MANY_REDIRECTS.
**What you expected to happen**:
I expect to log in as my user and have it assign a default Role of Viewer at the very least, OR use our mappings in the webserver config Python file. But the Role is blank in the database.
We realized that we get stuck in the loop because the user will be in the users table in Airflow but without a Role (it's literally empty). Therefore it goes from /login to /home to /login to /home over and over again.
**How to reproduce it**:
I add the Admin role in the database for my user, and the page that has the redirects refreshes and lets me into the Airflow UI. However, when I sign out and sign in again, my user's Role is erased and the redirect cycle starts again.
As you can see there is no Role (this happens when I attempt to login)
```
id | username | email | first_name | last_name | roles
===+==============================+=========================+============+===========+======
1 | admin | [email protected] | admin | admin | Admin
2 | google_############ | [email protected] | Cat | Says |
```
I run the command: `airflow users add-role -r Admin -u google_#################`
Then the page takes me to the UI and the table now looks like this:
```
id | username | email | first_name | last_name | roles
===+==============================+=========================+============+===========+======
1 | admin | [email protected] | admin | admin | Admin
2 | google_############ | [email protected] | Cat | Says | Admin
```
This occurs all the time.
Here is the webserver_config.py
```
import os
from flask_appbuilder.security.manager import AUTH_OAUTH
AUTH_TYPE = AUTH_OAUTH
AUTH_ROLE_ADMIN="Admin"
AUTH_USER_REGISTRATION = False
AUTH_USER_REGISTRATION_ROLE = "Admin"
OIDC_COOKIE_SECURE = False
CSRF_ENABLED = False
WTF_CSRF_ENABLED = True
AUTH_ROLES_MAPPING = {"Engineering": ["Ops"],"Admins": ["Admin"]}
AUTH_ROLES_SYNC_AT_LOGIN = True
OAUTH_PROVIDERS = [
{
'name': 'google', 'icon': 'fa-google',
'token_key': 'access_token',
'remote_app': {
'client_id': '#####################.apps.googleusercontent.com',
'client_secret': '######################',
'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
'whitelist': ['@company.com'], # optional
'client_kwargs': {
'scope': 'email profile'
},
'request_token_url': None,
'access_token_url': 'https://accounts.google.com/o/oauth2/token',
'authorize_url': 'https://accounts.google.com/o/oauth2/auth'},
}
]
```
Here is the pip freeze:
```
adal==1.2.7
alembic==1.6.2
amqp==2.6.1
anyio==3.2.1
apache-airflow==2.1.0
apache-airflow-providers-amazon==1.4.0
apache-airflow-providers-celery==1.0.1
apache-airflow-providers-cncf-kubernetes==1.2.0
apache-airflow-providers-docker==1.2.0
apache-airflow-providers-elasticsearch==1.0.4
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-google==3.0.0
apache-airflow-providers-grpc==1.1.0
apache-airflow-providers-hashicorp==1.0.2
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-microsoft-azure==2.0.0
apache-airflow-providers-mysql==1.1.0
apache-airflow-providers-postgres==1.0.2
apache-airflow-providers-redis==1.0.1
apache-airflow-providers-sendgrid==1.0.2
apache-airflow-providers-sftp==1.2.0
apache-airflow-providers-slack==3.0.0
apache-airflow-providers-sqlite==1.0.2
apache-airflow-providers-ssh==1.3.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.3
async-generator==1.10
attrs==20.3.0
azure-batch==10.0.0
azure-common==1.1.27
azure-core==1.13.0
azure-cosmos==3.2.0
azure-datalake-store==0.0.52
azure-identity==1.5.0
azure-keyvault==4.1.0
azure-keyvault-certificates==4.2.1
azure-keyvault-keys==4.3.1
azure-keyvault-secrets==4.2.0
azure-kusto-data==0.0.45
azure-mgmt-containerinstance==1.5.0
azure-mgmt-core==1.2.2
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==16.1.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
Babel==2.9.1
bcrypt==3.2.0
billiard==3.6.4.0
blinker==1.4
boto3==1.17.71
botocore==1.20.71
cached-property==1.5.2
cachetools==4.2.2
cattrs==1.0.0
celery==4.4.7
certifi==2020.12.5
cffi==1.14.5
chardet==3.0.4
click==7.1.2
clickclick==20.10.2
cloudpickle==1.4.1
colorama==0.4.4
colorlog==5.0.1
commonmark==0.9.1
contextvars==2.4
croniter==1.0.13
cryptography==3.4.7
dask==2021.3.0
dataclasses==0.7
defusedxml==0.7.1
dill==0.3.1.1
distlib==0.3.1
distributed==2.19.0
dnspython==1.16.0
docker==3.7.3
docker-pycreds==0.4.0
docutils==0.17.1
elasticsearch==7.5.1
elasticsearch-dbapi==0.1.0
elasticsearch-dsl==7.3.0
email-validator==1.1.2
eventlet==0.31.0
filelock==3.0.12
Flask==1.1.2
Flask-AppBuilder==3.3.0
Flask-Babel==1.0.0
Flask-Caching==1.10.1
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
flower==0.9.7
gevent==21.1.2
google-ads==4.0.0
google-api-core==1.26.3
google-api-python-client==1.12.8
google-auth==1.30.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.4
google-cloud-automl==2.3.0
google-cloud-bigquery==2.16.0
google-cloud-bigquery-datatransfer==3.1.1
google-cloud-bigquery-storage==2.4.0
google-cloud-bigtable==1.7.0
google-cloud-container==1.0.1
google-cloud-core==1.6.0
google-cloud-datacatalog==3.1.1
google-cloud-dataproc==2.3.1
google-cloud-dlp==1.0.0
google-cloud-kms==2.2.0
google-cloud-language==1.3.0
google-cloud-logging==2.3.1
google-cloud-memcache==0.3.0
google-cloud-monitoring==2.2.1
google-cloud-os-login==2.1.0
google-cloud-pubsub==2.4.2
google-cloud-redis==2.1.0
google-cloud-secret-manager==1.0.0
google-cloud-spanner==1.19.1
google-cloud-speech==1.3.2
google-cloud-storage==1.38.0
google-cloud-tasks==2.2.0
google-cloud-texttospeech==1.0.1
google-cloud-translate==1.7.0
google-cloud-videointelligence==1.16.1
google-cloud-vision==1.0.0
google-cloud-workflows==0.3.0
google-crc32c==1.1.2
google-resumable-media==1.2.0
googleapis-common-protos==1.53.0
graphviz==0.16
greenlet==1.1.0
grpc-google-iam-v1==0.12.3
grpcio==1.37.1
grpcio-gcp==0.2.2
gunicorn==20.1.0
h11==0.12.0
HeapDict==1.0.1
httpcore==0.13.6
httplib2==0.17.4
httpx==0.18.2
humanize==3.5.0
hvac==0.10.11
idna==2.10
immutables==0.15
importlib-metadata==1.7.0
importlib-resources==1.5.0
inflection==0.5.1
iso8601==0.1.14
isodate==0.6.0
itsdangerous==1.1.0
Jinja2==2.11.3
jmespath==0.10.0
json-merge-patch==0.2
jsonschema==3.2.0
kombu==4.6.11
kubernetes==11.0.0
lazy-object-proxy==1.4.3
ldap3==2.9
libcst==0.3.18
lockfile==0.12.2
Mako==1.1.4
Markdown==3.3.4
MarkupSafe==1.1.1
marshmallow==3.12.1
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.1.0
marshmallow-sqlalchemy==0.23.1
msal==1.11.0
msal-extensions==0.3.0
msgpack==1.0.2
msrest==0.6.21
msrestazure==0.6.4
mypy-extensions==0.4.3
mysql-connector-python==8.0.22
mysqlclient==2.0.3
numpy==1.19.5
oauthlib==2.1.0
openapi-schema-validator==0.1.5
openapi-spec-validator==0.3.0
packaging==20.9
pandas==1.1.5
pandas-gbq==0.14.1
paramiko==2.7.2
pendulum==2.1.2
pep562==1.0
plyvel==1.3.0
portalocker==1.7.1
prison==0.1.3
prometheus-client==0.8.0
proto-plus==1.18.1
protobuf==3.16.0
psutil==5.8.0
psycopg2-binary==2.8.6
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pydata-google-auth==1.2.0
Pygments==2.9.0
PyJWT==1.7.1
PyNaCl==1.4.0
pyOpenSSL==19.1.0
pyparsing==2.4.7
pyrsistent==0.17.3
pysftp==0.2.9
python-daemon==2.3.0
python-dateutil==2.8.1
python-editor==1.0.4
python-http-client==3.3.2
python-ldap==3.3.1
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2021.1
pytzdata==2020.1
PyYAML==5.4.1
redis==3.5.3
requests==2.25.1
requests-oauthlib==1.1.0
rfc3986==1.5.0
rich==9.2.0
rsa==4.7.2
s3transfer==0.4.2
sendgrid==6.7.0
setproctitle==1.2.2
six==1.16.0
slack-sdk==3.5.1
sniffio==1.2.0
sortedcontainers==2.3.0
SQLAlchemy==1.3.24
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.37.2
sshtunnel==0.1.5
starkbank-ecdsa==1.1.0
statsd==3.3.0
swagger-ui-bundle==0.0.8
tabulate==0.8.9
tblib==1.7.0
tenacity==6.2.0
termcolor==1.1.0
text-unidecode==1.3
toolz==0.11.1
tornado==6.1
typing==3.7.4.3
typing-extensions==3.7.4.3
typing-inspect==0.6.0
unicodecsv==0.14.1
uritemplate==3.0.1
urllib3==1.25.11
vine==1.3.0
virtualenv==20.4.6
watchtower==0.7.3
websocket-client==0.59.0
Werkzeug==1.0.1
WTForms==2.3.3
zict==2.0.0
zipp==3.4.1
zope.event==4.5.0
zope.interface==5.4.0
```
Thanks in advance. | https://github.com/apache/airflow/issues/16783 | https://github.com/apache/airflow/pull/17613 | d8c0cfea5ff679dc2de55220f8fc500fadef1093 | 6868ca48b29915aae8c131d694ea851cff1717de | 2021-07-02T21:26:19Z | python | 2021-08-18T11:56:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,770 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_base_aws.py"] | AWS hook should automatically refresh credentials when using temporary credentials | **Apache Airflow version**: 1.10.8 (Patched with latest AWS Hook)
**Environment**:
- **Cloud provider or hardware configuration**: 4 VCPU 8GB RAM VM
- **OS** (e.g. from /etc/os-release): RHEL 7.7
- **Kernel** (e.g. `uname -a`): Linux 3.10.0-957.el7.x86_64
- **Install tools**:
- **Others**:
The AWS Hook functionality for AssumeRoleWithSAML is not available in this version, we manually added it via patching the hook file.
**What happened**:
We've been using this hook for a while and keep hitting this issue: `sts.assume_role` and `sts.assume_role_with_saml` return temporary credentials that are only valid for e.g. 1 hour by default. Eventually, with long-running operators / hooks / sensors, some of them fail because the credentials have expired.
Example error messages:
- `An error occurred (ExpiredTokenException) when calling the AssumeRole operation: Response has expired`
- `An error occurred (ExpiredTokenException) when calling the AssumeRoleWithSAML operation: Response has expired`
- `botocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the <any operation here> operation: The security token included in the request is expired`
**What you expected to happen**:
AWS hook should be updated to use boto3 RefreshableCredentials when temporary credentials are in use.
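For illustration, a minimal sketch of a botocore refreshable session; the role ARN, session name, and refresh helper are placeholders rather than the hook's actual code:
```python
# Illustrative sketch of a self-refreshing boto3 session; values are placeholders.
import boto3
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session


def _assume_role_metadata():
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/example-role",  # placeholder
        RoleSessionName="airflow-hook",
    )["Credentials"]
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expiry_time": creds["Expiration"].isoformat(),
    }


refreshable = RefreshableCredentials.create_from_metadata(
    metadata=_assume_role_metadata(),
    refresh_using=_assume_role_metadata,
    method="sts-assume-role",
)
botocore_session = get_session()
botocore_session._credentials = refreshable  # private attribute; commonly used workaround
session = boto3.Session(botocore_session=botocore_session)  # re-assumes the role on expiry
```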
**How to reproduce it**:
Use any of the assume role methods with the AWS Hook, create a session, wait 1 hour (or whatever expiry period applies to your role), and try and use the hook again.
**Anything else we need to know**:
I have a solution, please self-assign this. | https://github.com/apache/airflow/issues/16770 | https://github.com/apache/airflow/pull/16771 | 44210237cc59d463cd13983dd6d1593e3bcb8b87 | f0df184e4db940f7e1b9248b5f5843d494034112 | 2021-07-02T08:50:30Z | python | 2021-07-06T22:10:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,763 | ["airflow/providers/amazon/aws/hooks/sagemaker.py", "airflow/providers/amazon/aws/operators/sagemaker_processing.py", "tests/providers/amazon/aws/hooks/test_sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_processing.py"] | SagemakerProcessingOperator ThrottlingException | **Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA
**Environment**: MWAA and Locally
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): NA
- **Kernel** (e.g. `uname -a`): NA
- **Install tools**: NA
- **Others**: NA
**What happened**:
When calling the SagemakerProcessingOperator we sometimes get "botocore.exceptions.ClientError: An error occurred (ThrottlingException)" due to excessive ListProcessingJobs operations.
**What you expected to happen**:
The job should have started without timing out. I believe one fix would be to use the `NameContains` functionality of boto3 [list_processing_jobs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.list_processing_jobs) so you don't have to paginate as is occurring [here](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/sagemaker.py#L916).
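For reference, a rough sketch of what that server-side filtering looks like with boto3; the job-name prefix and result handling are placeholders:
```python
# Hedged sketch: filter jobs server-side instead of paginating through history.
import boto3

client = boto3.client("sagemaker")
response = client.list_processing_jobs(
    NameContains="my-processing-job",  # placeholder: prefix/substring of the job being checked
    MaxResults=30,
)
matching_jobs = [job["ProcessingJobName"] for job in response["ProcessingJobSummaries"]]
print(matching_jobs)
```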
**How to reproduce it**:
If you incrementally create Sagemaker Processing jobs you will eventually see the Throttling as the pagination increases.
**Anything else we need to know**:
This looks like it is happening when the account already has a lot of former Sagemaker Processing jobs. | https://github.com/apache/airflow/issues/16763 | https://github.com/apache/airflow/pull/19195 | eb12bb2f0418120be31cbcd8e8722528af9eb344 | 96dd70348ad7e31cfeae6d21af70671b41551fe9 | 2021-07-01T21:55:26Z | python | 2021-11-04T06:49:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,762 | ["airflow/models/baseoperator.py", "tests/models/test_baseoperator.py"] | Scheduler crash on invalid priority_weight | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: Various (GCP & local Python)
- **OS** (e.g. from /etc/os-release): Various (linux, OSX)
**What happened**:
I, a certifiable idiot, accidentally passed a string into a task's `priority_weight` parameter in production.
There was no error at DAG evaluation time. However, upon __running__ the task, the scheduler immediately crashed. Because it was a scheduled run, the scheduler continued to restart and immediately crash until the offending DAG was paused.
The stack trace for my local repro is:
```
Traceback (most recent call last):
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/www/views.py", line 1459, in trigger
dag.create_dagrun(
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/models/dag.py", line 1787, in create_dagrun
run.verify_integrity(session=session)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/models/dagrun.py", line 663, in verify_integrity
ti = TI(task, self.execution_date)
File "<string>", line 4, in __init__
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 286, in __init__
self.refresh_from_task(task)
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 619, in refresh_from_task
self.priority_weight = task.priority_weight_total
File "/Users/tomyedwab/.venv/khanflow/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 751, in priority_weight_total
return self.priority_weight + sum(
TypeError: can only concatenate str (not "int") to str
```
**What you expected to happen**:
I would hope that simple mistakes like this wouldn't be able to take down the Airflow scheduler. Ideally, this type of exception would cause a task to fail and trigger the task failure logic rather than relying on monitoring uptime for the scheduler process.
Separately, it would be nice to have a validation check in BaseOperator that the priority_weights are integers so we get quicker feedback if we supply an invalid value as soon as the DAG is deployed, rather than when it is supposed to run.
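Something along these lines could surface the mistake at DAG parse time; this is purely an illustrative sketch, not the actual BaseOperator code:
```python
# Purely illustrative validation; not the BaseOperator implementation.
def validate_priority_weight(priority_weight):
    if isinstance(priority_weight, bool) or not isinstance(priority_weight, int):
        raise TypeError(
            f"priority_weight must be an int, got {type(priority_weight).__name__}: {priority_weight!r}"
        )
    return priority_weight


validate_priority_weight(1)  # ok
try:
    validate_priority_weight("X")  # would fail at parse time instead of crashing the scheduler
except TypeError as err:
    print(err)
```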
**How to reproduce it**:
I can reproduce this easily by adding a bad `priority_weight` parameter to any task, i.e.:
```
PythonOperator(task_id='hello', python_callable=_print_hello, priority_weight="X")
```
This problem occurs every time.
| https://github.com/apache/airflow/issues/16762 | https://github.com/apache/airflow/pull/16765 | 6b7f3f0b84adf44393bc7923cd59279f128520ac | 9d170279a60d9d4ed513bae1c35999926f042662 | 2021-07-01T21:16:42Z | python | 2021-07-02T00:52:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,753 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | Realtime ECS logging | **Description**
Currently when `ECSOperator` is run, the logs of the ECS task are fetched only when the task is done. That's not so convenient, especially when the task takes a good amount of time. In order to understand what's happening with the task, I need to go to CloudWatch and search for the task's logs. It would be good to have some parallel process that could fetch ECS task logs from CloudWatch and make them visible in real time.
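For illustration, a hedged sketch of the kind of CloudWatch polling that could surface logs while the task runs; the log group/stream names and polling policy are placeholders:
```python
# Illustrative CloudWatch Logs tailing loop; names and limits are placeholders.
import time
import boto3


def tail_ecs_task_logs(log_group="/ecs/my-task", log_stream="ecs/my-container/abc123", polls=60):
    """Poll CloudWatch Logs and print new events while the ECS task is running.

    A real implementation would derive the group/stream from the task definition
    and stop once the task finishes.
    """
    logs = boto3.client("logs")
    next_token = None
    for _ in range(polls):
        kwargs = {"logGroupName": log_group, "logStreamName": log_stream, "startFromHead": True}
        if next_token:
            kwargs["nextToken"] = next_token
        response = logs.get_log_events(**kwargs)
        for event in response["events"]:
            print(event["message"])
        next_token = response["nextForwardToken"]
        time.sleep(5)
```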
**Are you willing to submit a PR?**
I can try, but I need to be guided.
| https://github.com/apache/airflow/issues/16753 | https://github.com/apache/airflow/pull/17626 | 27088c4533199a19e6f810abc4e565bc8e107cf0 | 4cd190c9bcbe4229de3c8527d0e3480dea3be42f | 2021-07-01T10:49:05Z | python | 2021-09-18T18:25:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,740 | ["airflow/operators/email.py", "airflow/utils/email.py", "tests/operators/test_email.py", "tests/utils/test_email.py"] | Custom headers not passible to send_email | Issue:
`custom_headers` is available as an optional keyword argument in `build_mime_message` but it is not exposed in `send_email` or `send_email_smtp`. This prevents using custom headers with the email operator when using the default smtp backend.
https://github.com/apache/airflow/blob/8e2a0bc2e39aeaf15b409bbfa8ac0c85aa873815/airflow/utils/email.py#L105-L116
Expected:
`send_email` should take an optional keyword argument `custom_headers` and pass it to `send_email_smtp` if that is the selected backend.
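To make the proposal concrete, here is a standalone sketch of the plumbing; the names mirror Airflow's helpers but the bodies are simplified illustrations, not the real implementations:
```python
# Standalone illustration of threading custom_headers down to MIME construction.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from typing import Dict, Optional


def build_mime_message(mail_from: str, to: str, subject: str, html_content: str,
                       custom_headers: Optional[Dict[str, str]] = None) -> MIMEMultipart:
    msg = MIMEMultipart("mixed")
    msg["From"], msg["To"], msg["Subject"] = mail_from, to, subject
    msg.attach(MIMEText(html_content, "html"))
    for key, value in (custom_headers or {}).items():
        msg[key] = value  # e.g. {"Reply-To": "noreply@example.com"}
    return msg


def send_email(to: str, subject: str, html_content: str,
               custom_headers: Optional[Dict[str, str]] = None) -> MIMEMultipart:
    # The real send_email would dispatch to the configured backend (SMTP by
    # default) and forward custom_headers instead of dropping it.
    return build_mime_message("airflow@example.com", to, subject, html_content, custom_headers)


print(send_email("user@example.com", "test", "<b>hello</b>", {"X-Priority": "1"}).as_string())
```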
| https://github.com/apache/airflow/issues/16740 | https://github.com/apache/airflow/pull/19009 | deaa9a36242aabf958ce1d78776d2f29bfc416f4 | c95205eed6b01cb14939c65c95d78882bd8efbd2 | 2021-06-30T22:05:28Z | python | 2021-11-11T09:09:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,739 | ["airflow/executors/dask_executor.py", "docs/apache-airflow/executor/dask.rst", "tests/executors/test_dask_executor.py"] | Queue support for DaskExecutor using Dask Worker Resources | Currently airflow's DaskExecutor does not support specifying queues for tasks, due to dask's lack of an explicit queue specification feature. However, this can be reliably mimicked using dask resources ([details here](https://distributed.dask.org/en/latest/resources.html)). So the set up would look something like this:
```
# starting dask worker that can service airflow tasks submitted with queue=queue_name_1 or queue_name_2
$ dask-worker <address> --resources "queue_name_1=inf, queue_name_2=inf"
```
~~(Unfortunately as far as I know you need to provide a finite resource limit for the workers, so you'd need to provide an arbitrarily large limit, but I think it's worth the minor inconvenience to allow a queue functionality in the dask executor.)~~
```
# airflow/executors/dask_executor.py
def execute_async(
    self,
    key: TaskInstanceKey,
    command: CommandType,
    queue: Optional[str] = None,
    executor_config: Optional[Any] = None,
) -> None:
    self.validate_command(command)

    def airflow_run():
        return subprocess.check_call(command, close_fds=True)

    if not self.client:
        raise AirflowException(NOT_STARTED_MESSAGE)

    ################ change made here #################
    resources = None
    if queue:
        resources = {queue: 1}

    future = self.client.submit(airflow_run, pure=False, resources=resources)
    self.futures[future] = key  # type: ignore
```
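With workers started as in the first snippet, the routing can also be exercised directly with a `distributed.Client`; the scheduler address below is a placeholder:
```python
# Small end-to-end check of resource-based routing; the address is a placeholder.
from distributed import Client

client = Client("tcp://dask-scheduler:8786")


def airflow_run():
    return "ran on a worker advertising queue_name_1"


# Only workers started with --resources "queue_name_1=..." are eligible to run this.
future = client.submit(airflow_run, pure=False, resources={"queue_name_1": 1})
print(future.result())
```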
| https://github.com/apache/airflow/issues/16739 | https://github.com/apache/airflow/pull/16829 | 601f22cbf1e9c90b94eda676cedd596afc118254 | 226a2b8c33d28cd391717191efb4593951d1f90c | 2021-06-30T22:01:54Z | python | 2021-09-01T09:32:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,738 | ["airflow/providers/ssh/hooks/ssh.py", "tests/providers/ssh/hooks/test_ssh.py"] | SSHHook will not work if `extra.private_key` is a RSA key |
**Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
```
ssh-keygen -t rsa -P "" -f test_rsa
cat test_rsa | python -c 'import sys;import json; print(json.dumps(sys.stdin.read()))' # gives the private key encoded as JSON string to be pasted in the connection extra private_key
```
I created an Airflow Connection
* type: ssh
* extra:
```
{
"look_for_keys": "false",
"no_host_key_check": "true",
"private_key": "-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn\nNhAA........W4tTGFndW5hLU1hY0Jvb2stUHJvAQI=\n-----END OPENSSH PRIVATE KEY-----\n",
"private_key_passphrase": ""
}
```
When this SSH connection is used in SFTPToS3Operator for example it will incorrectly parse that `private_key` as a `paramiko.dsskey.DSSKey` instead of the correct `paramiko.rsakey.RSAKey`.
The code responsible for processing the `private_key` is **not deterministic** (I don't think `.values()` returns items in any particular order), but in my case it will always try `paramiko.dsskey.DSSKey` before it tries `paramiko.rsakey.RSAKey`:
https://github.com/apache/airflow/blob/8e2a0bc2e39aeaf15b409bbfa8ac0c85aa873815/airflow/providers/ssh/hooks/ssh.py#L363-L369
This incorrectly parsed private key will cause a very confusing error later when it's actually used
```
[2021-06-30 23:33:14,604] {transport.py:1819} INFO - Connected (version 2.0, client AWS_SFTP_1.0)
[2021-06-30 23:33:14,732] {transport.py:1819} ERROR - Unknown exception: q must be exactly 160, 224, or 256 bits long
[2021-06-30 23:33:14,737] {transport.py:1817} ERROR - Traceback (most recent call last):
[2021-06-30 23:33:14,737] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/paramiko/transport.py", line 2109, in run
[2021-06-30 23:33:14,737] {transport.py:1817} ERROR - handler(self.auth_handler, m)
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/paramiko/auth_handler.py", line 298, in _parse_service_accept
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - sig = self.private_key.sign_ssh_data(blob)
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/paramiko/dsskey.py", line 108, in sign_ssh_data
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - key = dsa.DSAPrivateNumbers(
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py", line 244, in private_key
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - return backend.load_dsa_private_numbers(self)
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 826, in load_dsa_private_numbers
[2021-06-30 23:33:14,738] {transport.py:1817} ERROR - dsa._check_dsa_private_numbers(numbers)
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py", line 282, in _check_dsa_private_numbers
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR - _check_dsa_parameters(parameters)
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR - File "/Users/rubelagu/git/apache-airflow-providers-tdh/venv/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py", line 274, in _check_dsa_parameters
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR - raise ValueError("q must be exactly 160, 224, or 256 bits long")
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR - ValueError: q must be exactly 160, 224, or 256 bits long
[2021-06-30 23:33:14,739] {transport.py:1817} ERROR -
```
**What you expected to happen**:
I expected to parse the `private_key` as a RSAKey.
I did my own test and `paramiko.dsskey.DSSKey.from_private_key(StringIO(private_key), password=passphrase)` will happily parse (incorrectly) a RSA key. The current code assumes that it will raise an exception but it won't.
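For what it's worth, a hedged sketch of a more deterministic approach; the ordering and error handling are illustrative, not the provider's actual code:
```python
# Illustrative only: load the key with a fixed, RSA-first ordering instead of
# iterating over an unordered mapping of key classes.
from io import StringIO
import paramiko


def load_private_key(private_key: str, passphrase: str = None):
    key_classes = (paramiko.RSAKey, paramiko.Ed25519Key, paramiko.ECDSAKey, paramiko.DSSKey)
    for key_class in key_classes:
        try:
            return key_class.from_private_key(StringIO(private_key), password=passphrase)
        except paramiko.SSHException:
            continue
    raise ValueError("Unsupported or malformed private key")
```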
**Anything else we need to know**:
For me it happens every time. I don't think the order of `.values()` is deterministic, but on my laptop it will always try DSSKey before RSAKey.
| https://github.com/apache/airflow/issues/16738 | https://github.com/apache/airflow/pull/16756 | 2285ee9f71a004d5c013119271873182fb431d8f | 7777d4f2fd0a63758c34769f8aa0438c8b4c6d83 | 2021-06-30T21:59:52Z | python | 2021-07-01T18:32:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,736 | ["BREEZE.rst", "breeze-complete", "scripts/ci/libraries/_initialization.sh", "scripts/ci/libraries/_kind.sh"] | The Helm Chart tests often timeout at installation recently in CI | Example here: https://github.com/apache/airflow/runs/2954825449#step:8:1950
| https://github.com/apache/airflow/issues/16736 | https://github.com/apache/airflow/pull/16750 | fa811057a6ae0fc6c5e4bff1e18971c262a42a4c | e40c5a268d8dc24d1e6b00744308ef705224cb66 | 2021-06-30T18:10:18Z | python | 2021-07-01T12:29:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,730 | ["airflow/providers/amazon/aws/example_dags/example_s3_to_sftp.py", "airflow/providers/amazon/aws/example_dags/example_sftp_to_s3.py", "airflow/providers/amazon/aws/transfers/s3_to_sftp.py", "airflow/providers/amazon/aws/transfers/sftp_to_s3.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/transfer/s3_to_sftp.rst", "docs/apache-airflow-providers-amazon/operators/transfer/sftp_to_s3.rst"] | SFTPToS3Operator is not mentioned in the apache-airflow-providers-amazon > operators documentation |
**Apache Airflow version**: 2.2.0 , apache-airflow-providers-amazon == 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What you expected to happen**:
I would expect to find the documentation for
`airflow.providers.amazon.aws.transfers.sftp_to_s3.SFTPToS3Operator`
in one these locations
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/index.html
or
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/transfer/index.html
| https://github.com/apache/airflow/issues/16730 | https://github.com/apache/airflow/pull/16964 | d1e9d8c88441dce5e2f64a9c7594368d662a8d95 | cda78333b4ce9304abe315ab1afe41efe17fd2da | 2021-06-30T08:20:24Z | python | 2021-07-18T17:21:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,725 | ["airflow/sensors/filesystem.py", "tests/sensors/test_filesystem.py"] | filesensor wildcard matching does not recognize directories | **Apache Airflow version**: 2.1.0
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**: FileSensor does not recognize directories with wildcard glob matching.
**What you expected to happen**: FileSensor would sense a directory that contains files if it matches with the wild card option.
**How to reproduce it**: Create a directory with a pattern that matches a wild card using glob
**Anything else we need to know**: Code from FileSensor source that I believe to cause the issue:
```
for path in glob(full_path):
    if os.path.isfile(path):
        mod_time = os.path.getmtime(path)
        mod_time = datetime.datetime.fromtimestamp(mod_time).strftime('%Y%m%d%H%M%S')
        self.log.info('Found File %s last modified: %s', str(path), str(mod_time))
        return True
    for _, _, files in os.walk(full_path):
        if len(files) > 0:
            return True
return False
```
I believe to resolve the issue `full_path` in os.walk should be `path` instead.
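In other words, something like the following (the quoted excerpt with that one-word change applied; a sketch, not the committed fix):
```python
# Sketch of the suggested correction: walk each glob match rather than the raw pattern.
for path in glob(full_path):
    if os.path.isfile(path):
        mod_time = os.path.getmtime(path)
        mod_time = datetime.datetime.fromtimestamp(mod_time).strftime('%Y%m%d%H%M%S')
        self.log.info('Found File %s last modified: %s', str(path), str(mod_time))
        return True
    # Walk the matched path itself so wildcard-matched directories are detected.
    for _, _, files in os.walk(path):
        if len(files) > 0:
            return True
return False
```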
| https://github.com/apache/airflow/issues/16725 | https://github.com/apache/airflow/pull/16894 | 83cb237031dfe5b7cb5238cc1409ce71fd9507b7 | 789e0eaee8fa9dc35b27c49cc50a62ea4f635978 | 2021-06-30T01:55:11Z | python | 2021-07-12T21:23:36Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,705 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | Ability to add multiple task_ids in the ExternalTaskSensor | **Description**
In its current shape the ExternalTaskSensor accepts either a single task_id or None to poll for the completion of a dag run. We have a use case where a dependent dag should poll for only certain list of tasks in the upstream dag. One option is to add N ExternalTaskSensor nodes if there are N nodes to be dependent on but those will be too many Sensor Nodes in the dag and can be avoided if the ExternalTaskSensor can accept a list of task_ids to poll for.
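For illustration, a hypothetical usage sketch; the `external_task_ids` parameter does not exist today and is purely an assumption about what such an API could look like:
```python
# Hypothetical API sketch only: `external_task_ids` is not a real parameter here.
from airflow.sensors.external_task import ExternalTaskSensor

wait_for_tables = ExternalTaskSensor(
    task_id="wait_for_upstream_tables",
    external_dag_id="upstream_hive_dag",                                  # placeholder dag id
    external_task_ids=["load_table_a", "load_table_b", "load_table_c"],   # proposed list form
    poke_interval=300,
)
```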
**Use case / motivation**
We have an upstream dag that updates a list of hive tables that is further used by a lot of downstream dags. This dag updates 100s of hive tables, but some of the downstream dags depend only upon 10-20 of these tables. There are multiple dags, each of which depends upon a different list of hive tables from the upstream dag.
**Are you willing to submit a PR?**
Yes, we are willing to submit a PR for this
**Related Issues**
Not that I am aware of. I did a search on the issue list.
| https://github.com/apache/airflow/issues/16705 | https://github.com/apache/airflow/pull/17339 | 32475facce68a17d3e14d07762f63438e1527476 | 6040125faf9c6cbadce0a4f5113f1c5c3e584d66 | 2021-06-29T13:11:12Z | python | 2021-08-19T01:28:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,703 | ["airflow/www/package.json", "airflow/www/yarn.lock"] | Workers silently crash after memory build up | **Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.15
**Environment**:
- **Cloud provider or hardware configuration**: AWS, ec2 servers deployed by kops
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04
- **Kernel** (e.g. `uname -a`): Linux 5.4.0-1024-aws # 24-Ubuntu
- **Install tools**: Dockerfile
- **Others**: Custom Dockerfile (not official airflow image from dockerhub)
Celery Workers
**What happened**:
Memory usage builds up on our celery worker pods until they silently crash. Resource usage flat lines and no logs are created by the worker. The process is still running and Celery (verified via ping and flower) thinks the workers are up and running.
No tasks are finished by Airflow, the schedulers are running fine and still logging appropriately but the workers are doing nothing. Workers do not accept any tasks and inflight jobs hang.
They do not log an error message and the pod is not restarted as the process hasn't crashed.
Our workers do not all crash at the same time, it happens over a couple of hours even if they were all restarted at the same time, so it seems to be related to how many jobs the worker has done/logs/other-non-time event.
I believe this is related to the logs generated by the workers, Airflow appears to be reading in the existing log files to memory. Memory usage drops massively when the log files are deleted and then resume to build up again.
There doesn't appear to be a definite upper limit of memory that the pod hits when it crashes, but its around the 8 or 10GB mark (there is 14 available to the pods but they dont hit that).
Log size on disk correlates to more memory usage by a worker pod than one with smaller log size on disk.
**What you expected to happen**:
If the worker has crashed/ceased functioning it should either be able to log an appropriate message if the process is up or crash cleanly and be able to be restarted.
Existing log files should not contribute to the memory usage of the airflow process either.
Celery should also be able to detect that the worker is no longer functional.
**How to reproduce it**:
Run an airflow cluster with 40+ DAGs with several hundred tasks in total in an environment that has observable metrics, we use k8s with Prometheus.
We have 5x worker pods.
Monitor the memory usage of the worker containers/pods over time as well as the size of the airflow task logs. The trend should only increase.
**Anything else we need to know**:
This problem occurs constantly, after a clean deployment and in multiple environments.
The official Airflow Docker image contains a [log-cleaner](https://github.com/apache/airflow/blob/main/scripts/in_container/prod/clean-logs.sh), so it's possible this has been avoided there, but the default of 15 days would in general be far too long. Our workers crash within 2 or 3 days.
Resorting to an aggressive log cleaning script has mitigated the problem for us, but without proper error logs or a reason for the crash it is hard to be certain that we are safe.
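For context, our cleanup is roughly equivalent to the following sketch; the paths and retention are placeholders, and this is a mitigation, not a fix:
```python
# Rough sketch of an aggressive log cleanup run as a sidecar/cron job.
import os
import time

BASE_LOG_FOLDER = "/usr/local/airflow/logs"
MAX_AGE_SECONDS = 6 * 60 * 60  # keep only the last few hours of task logs

now = time.time()
for root, _dirs, files in os.walk(BASE_LOG_FOLDER):
    for name in files:
        path = os.path.join(root, name)
        if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)
```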
This is our airflow.cfg logging config, we aren't doing anything radical just storing in a bucket.
```
[logging]
# Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search.
# Users must supply an Airflow connection id that provides access to the storage
# location. If remote_logging is set to true, see UPDATING.md for additional
# configuration requirements.
# remote_logging = $ENABLE_REMOTE_LOGGING
# remote_log_conn_id = s3conn
# remote_base_log_folder = $LOGS_S3_BUCKET
# encrypt_s3_logs = False
remote_logging = True
remote_log_conn_id = s3conn
remote_base_log_folder = $AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER
encrypt_s3_logs = False
# Log format
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# Logging level
logging_level = INFO
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
logging_config_class =
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /usr/local/airflow/logs
# Name of handler to read task instance logs.
# Default to use file task handler.
# task_log_reader = file.task
task_log_reader = task
```
Here is a memory usage graph of a crashed worker pod, the flat line is when it is in a crashed state and then restarted. There is also a big cliff on the right of the graph at about 0900 on June 29th where I manually cleaned the log files from the disk.

The last few log lines before it crashed:
```
Jun 25, 2021 @ 04:28:01.831 | [2021-06-25 03:28:01,830: INFO/MainProcess] Received task: airflow.executors.celery_executor.execute_command[5f802ffb-d5af-40ae-9e99-5e0501bf7d1c]
Jun 25, 2021 @ 04:27:36.769 | [2021-06-25 03:27:36,769: INFO/MainProcess] Received task: airflow.executors.celery_executor.execute_command[737d4310-c6ae-450f-889a-ffee53e94d33]
Jun 25, 2021 @ 04:27:25.565 | [2021-06-25 03:27:25,564: WARNING/ForkPoolWorker-13] Running <TaskInstance: a_task_name 2021-06-25T02:18:00+00:00 [queued]> on host airflow-worker-3.airflow-worker.airflow.svc.cluster.local
Jun 25, 2021 @ 04:27:25.403 | [2021-06-25 03:27:25,402: INFO/ForkPoolWorker-13] Filling up the DagBag from /usr/local/airflow/dags/a_dag.py
Jun 25, 2021 @ 04:27:25.337 | [2021-06-25 03:27:25,337: INFO/ForkPoolWorker-13] Executing command in Celery: ['airflow', 'tasks', 'run', 'task_name_redacted', 'task, '2021-06-25T02:18:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/usr/local/airflow/dags/a_dag.py']
Jun 25, 2021 @ 04:27:25.327 | [2021-06-25 03:27:25,326: INFO/ForkPoolWorker-13] Task airflow.executors.celery_executor.execute_command[4d9ee684-4ae3-41d2-8a00-e8071179a1b1] succeeded in 5.212706514168531s: None
Jun 25, 2021 @ 04:27:24.980 | [2021-06-25 03:27:24,979: INFO/ForkPoolWorker-13] role_arn is None
Jun 25, 2021 @ 04:27:24.968 | [2021-06-25 03:27:24,968: INFO/ForkPoolWorker-13] No credentials retrieved from Connection
Jun 25, 2021 @ 04:27:24.968 | [2021-06-25 03:27:24,968: INFO/ForkPoolWorker-13] Creating session with aws_access_key_id=None region_name=None
Jun 25, 2021 @ 04:27:24.954 | [2021-06-25 03:27:24,953: INFO/ForkPoolWorker-13] Airflow Connection: aws_conn_id=s3conn
Jun 25, 2021 @ 04:27:20.610 | [2021-06-25 03:27:20,610: WARNING/ForkPoolWorker-13] Running <TaskInstance: task_name_redacted 2021-06-25T03:10:00+00:00 [queued]> on host airflow-worker-3.airflow-worker.airflow.svc.cluster.local
```
| https://github.com/apache/airflow/issues/16703 | https://github.com/apache/airflow/pull/30112 | 869c1e3581fa163bbaad11a2d5ddaf8cf433296d | e09d00e6ab444ec323805386c2056c1f8a0ae6e7 | 2021-06-29T09:48:11Z | python | 2023-03-17T15:08:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,699 | ["airflow/providers/ssh/hooks/ssh.py", "setup.py", "tests/providers/ssh/hooks/test_ssh.py"] | ssh provider limits to an old version of sshtunnel | The ssh provider pins sshtunnel to < 0.2:
https://github.com/apache/airflow/blob/9d6ae609b60449bd274c2f96e72486d73ad2b8f9/setup.py#L451
I don't know why; sshtunnel has already released 0.4.
I didn't open a PR because I don't know this provider. I just tried to install it and couldn't due to conflicts with other requirements in my system, so eventually I used a virtual env to solve my problem, but I'm logging this issue so that someone who knows this code can handle it. | https://github.com/apache/airflow/issues/16699 | https://github.com/apache/airflow/pull/18684 | 016f55c41370b5c2ab7612c9fc1fa7a7ee1a0fa5 | 537963f24d83b08c546112bac33bf0f44d95fe1c | 2021-06-28T19:23:04Z | python | 2021-10-05T12:20:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,684 | ["chart/templates/_helpers.yaml", "chart/tests/test_airflow_common.py", "chart/values.schema.json", "chart/values.yaml", "docs/helm-chart/production-guide.rst"] | Helm Chart Allow for Empty data settings | The helm-chart should not force the settings of Resultsbackend, Metadata and Broker. Either through an additional configuration option or implicitly by not providing values, the helm-chart should refrain from populating those Airflow-Env configurations.
**Use case / motivation**
The current Helm chart forces the configuration of certain values such as `brokerUrl` (https://github.com/apache/airflow/blob/main/chart/values.yaml#L236)
Which will then be populated as Airflow ENV config.
Due to Airflows configuration precedence (https://airflow.apache.org/docs/apache-airflow/stable/howto/set-config.html) are these settings taking highest priority in Airflow configuration.
With the suggested change, users can dynamically provide Airflow configurations through Airflow's `__CMD` environment variables.
**In other words; provide users with the option to utilize Airflow's `__CMD` ENV configuration.**
_I am willing to submit a PR._
| https://github.com/apache/airflow/issues/16684 | https://github.com/apache/airflow/pull/18974 | 3ccbd4f4ee4f27c08ab39aa61aa0cf1e631bd154 | 5e46c1770fc0e76556669dc60bd20553b986667b | 2021-06-28T02:47:41Z | python | 2021-12-20T21:29:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,669 | ["airflow/providers/tableau/hooks/tableau.py", "airflow/providers/tableau/operators/tableau_refresh_workbook.py", "airflow/providers/tableau/sensors/tableau_job_status.py", "docs/apache-airflow-providers-tableau/connections/tableau.rst", "tests/providers/tableau/operators/test_tableau_refresh_workbook.py"] | TableauRefreshWorkbookOperator fails when using personal access token (Tableau authentication method) | **Apache Airflow version**: 2.0.1
**What happened**:
The operator fails at the last step, after successfully refreshing the workbook with this error:
```
tableauserverclient.server.endpoint.exceptions.ServerResponseError:
401002: Unauthorized Access
Invalid authentication credentials were provided.
```
**What you expected to happen**:
It should not fail, like when we use the username/password authentication method (instead of personal_access_token)
<!-- What do you think went wrong? -->
Tableau server does not allow concurrent connections when using personal_access_token https://github.com/tableau/server-client-python/issues/717
The solution would be to completely redesign the operator so that it only calls the hook once.
My quick fix was to edit this in TableauHook:
```
def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None:
    pass
```
**How to reproduce it**:
Run this operator TableauRefreshWorkbookOperator using Tableau personal_access_token authentication (token_name, personal_access_token).
| https://github.com/apache/airflow/issues/16669 | https://github.com/apache/airflow/pull/16916 | cc33d7e513e0f66a94a6e6277d6d30c08de94d64 | 53246ebef716933f71a28901e19367d84b0daa81 | 2021-06-25T23:09:23Z | python | 2021-07-15T06:29:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,651 | ["airflow/providers/apache/hdfs/hooks/webhdfs.py", "docs/apache-airflow-providers-apache-hdfs/connections.rst", "tests/providers/apache/hdfs/hooks/test_webhdfs.py"] | WebHDFS : why force http ? | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **Cloud provider or hardware configuration**: /
- **OS** (e.g. from /etc/os-release): /
- **Kernel** (e.g. `uname -a`): /
- **Install tools**: /
- **Others**: /
**What happened**:
I'm trying to make the provider WebHDFS work...
I don't understand why "http" is not overridable by https: https://github.com/apache/airflow/blob/main/airflow/providers/apache/hdfs/hooks/webhdfs.py#L95
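For context, a manual workaround sketch that bypasses the hook entirely and builds the `hdfs` (HdfsCLI) client yourself with an https URL; the host, port and user below are hypothetical:
```python
# Workaround sketch: talk to a TLS-enabled WebHDFS endpoint directly,
# instead of going through WebHDFSHook (which hardcodes http).
from hdfs import InsecureClient

client = InsecureClient("https://namenode.example.com:9871", user="airflow")
print(client.list("/tmp"))  # any WebHDFS call now goes over https
```
Having the hook accept an https scheme (or a flag in the connection extras) would make this unnecessary.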
Is there anyone using WebHDFS with https? | https://github.com/apache/airflow/issues/16651 | https://github.com/apache/airflow/pull/17637 | 9922287a4f9f70b57635b04436ddc4cfca0e84d2 | 0016007b86c6dd4a6c6900fa71137ed065acfe88 | 2021-06-25T07:22:56Z | python | 2021-08-18T21:51:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,646 | ["airflow/www/static/js/tree.js"] | Tree view - Skipped tasks showing Duration |
**Apache Airflow version**: 2.1.0
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
- **Kernel** (e.g. `uname -a`):
Linux airflow-scheduler-647d744f9-4zx2n 4.14.138-rancher #1 SMP Sat Aug 10 11:25:46 UTC 2019 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
When using BranchPythonOperator, skipped tasks show a `Duration` value (even if the DAG is already completed). In comparison, the same task shows no Duration in the Graph view. Example:


Actually, the Duration keeps increasing when checking the same task instance again:

55 min 50 sec vs 1 hour 4 min
**What you expected to happen**:
Duration value should be empty (like in Graph view)
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/16646 | https://github.com/apache/airflow/pull/16695 | 98c12d49f37f6879e3e9fd926853f57a15ab761b | f0b3345ddc489627d73d190a1401804e7b0d9c4e | 2021-06-25T02:06:10Z | python | 2021-06-28T15:23:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,635 | ["airflow/models/baseoperator.py", "docs/spelling_wordlist.txt", "tests/models/test_baseoperator.py"] | Update `airflow.models.baseoperator.chain()` function to support XComArgs | **Description**
The `airflow.models.baseoperator.chain()` is a very useful and convenient way to add sequential task dependencies in DAGs but the function only supports tasks of a `BaseOperator` type.
**Use case / motivation**
Users who create tasks via the `@task` decorator will not be able to use the `chain()` function to apply sequential dependencies that do not share an `XComArg` implicit dependency.
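To illustrate, this is a sketch of the usage this request would enable (the DAG id and task names are only illustrative):
```python
from datetime import datetime

from airflow.decorators import task
from airflow.models import DAG
from airflow.models.baseoperator import chain

with DAG("chain_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:

    @task
    def extract():
        return "raw"

    @task
    def transform():
        return "clean"

    @task
    def load():
        return "done"

    # Desired behaviour: chain() accepts the XComArg objects returned by the
    # decorated tasks and wires them sequentially: extract >> transform >> load.
    chain(extract(), transform(), load())
```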
**Are you willing to submit a PR?**
Absolutely. 🚀
**Related Issues**
None
| https://github.com/apache/airflow/issues/16635 | https://github.com/apache/airflow/pull/16732 | 9f8f81f27d367fcde171173596f1f30a3a7069f8 | 7529546939250266ccf404c2eea98b298365ef46 | 2021-06-24T15:30:09Z | python | 2021-07-14T07:43:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,627 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | add more filter options to list_keys of S3Hook | The hook has [list_keys](https://github.com/apache/airflow/blob/c8a628abf484f0bd9805f44dd37e284d2b5ee7db/airflow/providers/amazon/aws/hooks/s3.py#L265), which allows filtering by prefix; it would be nice to also be able to filter by the file creation date or last-modified date. Ideally, the function could support any kind of filter that boto3 allows.
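For context, a sketch of the client-side workaround that is needed today (the bucket, prefix and cutoff date are hypothetical), using the boto3 client that S3Hook already exposes; a native filter argument on `list_keys()` would avoid this:
```python
from datetime import datetime, timezone

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

hook = S3Hook(aws_conn_id="aws_default")
cutoff = datetime(2021, 6, 1, tzinfo=timezone.utc)

paginator = hook.get_conn().get_paginator("list_objects_v2")
recent_keys = [
    obj["Key"]
    for page in paginator.paginate(Bucket="my-bucket", Prefix="data/")
    for obj in page.get("Contents", [])
    if obj["LastModified"] >= cutoff  # filter client-side on last-modified time
]
```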
The use case is that, for the moment, if you want to get all files that were modified after date X you need to list all the files and check them one by one for their last-modified date; this is not efficient. | https://github.com/apache/airflow/issues/16627 | https://github.com/apache/airflow/pull/22231 | b00fc786723c4356de93792c32c85f62b2e36ed9 | 926f6d1894ce9d097ef2256d14a99968638da9c0 | 2021-06-24T08:53:49Z | python | 2022-03-15T18:20:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,625 | ["airflow/jobs/scheduler_job.py", "airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py"] | Task is not retried when worker pod fails to start | **Apache Airflow version**: 2.0.2
**Kubernetes version**:
```
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-gke.4900", GitCommit:"2812f9fb0003709fc44fc34166701b377020f1c9", GitTreeState:"clean", BuildDate:"2021-03-19T09:19:27Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
```
- **Cloud provider or hardware configuration**: GKE
**What happened**:
After the worker pod for the task failed to start, the task is marked as failed with the error message `Executor reports task instance <TaskInstance: datalake_db_cdc_data_integrity.check_integrity_core_prod_my_industries 2021-06-14 00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?`. The task should have been reattempted as it still has retries left.
```
{kubernetes_executor.py:147} INFO - Event: datalakedbcdcdataintegritycheckintegritycoreprodmyindustries.17f690ef0328488fadeba2dd00f8175d had an event of type MODIFIED
{kubernetes_executor.py:202} ERROR - Event: datalakedbcdcdataintegritycheckintegritycoreprodmyindustries.17f690ef0328488fadeba2dd00f8175d Failed
{kubernetes_executor.py:352} INFO - Attempting to finish pod; pod_id: datalakedbcdcdataintegritycheckintegritycoreprodmyindustries.17f690ef0328488fadeba2dd00f8175d; state: failed; annotations: {'dag_id': 'datalake_db_cdc_data_integrity', 'task_id': 'check_integrity_core_prod_my_industries', 'execution_date': '2021-06-14T00:00:00+00:00', 'try_number': '1'}
{kubernetes_executor.py:532} INFO - Changing state of (TaskInstanceKey(dag_id='datalake_db_cdc_data_integrity', task_id='check_integrity_core_prod_my_industries', execution_date=datetime.datetime(2021, 6, 14, 0, 0, tzinfo=tzlocal()), try_number=1), 'failed', 'datalakedbcdcdataintegritycheckintegritycoreprodmyindustries.17f690ef0328488fadeba2dd00f8175d', 'prod', '1510796520') to failed
{scheduler_job.py:1210} INFO - Executor reports execution of datalake_db_cdc_data_integrity.check_integrity_core_prod_my_industries execution_date=2021-06-14 00:00:00+00:00 exited with status failed for try_number 1
{scheduler_job.py:1239} ERROR - Executor reports task instance <TaskInstance: datalake_db_cdc_data_integrity.check_integrity_core_prod_my_industries 2021-06-14 00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
```
**What you expected to happen**:
The task status should have been set as `up_for_retry` instead of failing immediately.
**Anything else we need to know**:
This error has occurred 6 times over the past 2 months, and to seemingly random tasks in different DAGs. We run 60 DAGs with 50-100 tasks each every 30 minutes. The affected tasks are a mix of PythonOperator and SparkSubmitOperator. The first time we saw it was in mid Apr, and we were on Airflow version 2.0.1. We upgraded to Airflow version 2.0.2 in early May, and the error has occurred 3 more times since then.
Also, the issue where the worker pod cannot start is a common error that we frequently encounter, but in most cases these tasks are correctly marked as `up_for_retry` and reattempted.
This is currently not a big issue for us since it's so rare, but we have to manually clear the tasks that failed to get them to rerun because the tasks are not retrying. They have all succeeded on the first try after clearing.
Also, I'm not sure if this issue is related to #10790 or #16285, so I just created a new one. It's not quite the same as #10790 because the tasks affected are not ExternalTaskSensors, and also #16285 because the offending lines pointed out there are not in 2.0.2.
Thanks!
| https://github.com/apache/airflow/issues/16625 | https://github.com/apache/airflow/pull/17819 | 5f07675db75024513e1f0042107838b64e5c777f | 44f601e51cd7d369292728b7a7460c56528dbf27 | 2021-06-24T07:37:53Z | python | 2021-09-21T20:16:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,621 | ["BREEZE.rst", "Dockerfile.ci", "breeze", "scripts/ci/docker-compose/base.yml", "scripts/ci/libraries/_initialization.sh"] | Open up breeze container ssh access to support IDE remote development | **Description**
The breeze container is the preferred way of development setup since it provides a full-fledged and isolated environment. However, breeze container is not very suitable for development with IDE.
**Use case / motivation**
My IDEs actually support remote development and the only additional requirement is for a host to support ssh connection.
We can make breeze container support ssh connection by adding two changes:
1. Add a password to user airflow for the CI Docker image.
2. Add a portforward to 22 from host.
**Are you willing to submit a PR?**
Yes a PR is on the way
**Setup with Pycharm pro**
1. Add an ssh configuration with user airflow, password airflow and the default ssh port 12322 (12322 will be forwarded to 22).
2. Add a remote ssh interpreter, (airflow@localhost:12322/usr/local/bin/python)
3. Add two source mappings from local source to container: \<source root\>/airflow -> /opt/airflow/airflow; \<source root\>/tests -> /opt/airflow/tests.
4. Set the unit test framework for this project to pytest.
After those steps, one will be able to set breakpoints and debug through the Python code with PyCharm Pro.
| https://github.com/apache/airflow/issues/16621 | https://github.com/apache/airflow/pull/16639 | 88ee2aa7ddf91799f25add9c57e1ea128de2b7aa | ab2db0a40e9d2360682176dcbf6e818486e9f804 | 2021-06-23T22:52:38Z | python | 2021-06-25T08:02:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,614 | ["airflow/www/package.json", "airflow/www/yarn.lock"] | Connection password not being masked in default logging | ```
from airflow.hooks.base_hook import BaseHook
BaseHook.get_connection('my_connection_id')
```
The second line prints out my connection details including the connection password in Airflow logs. Earlier connection passwords were masked by default.
https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/hooks/base.html
The above statement is run for logging. Is there a way to disable this logging so that it does not print the connection password in my logs? | https://github.com/apache/airflow/issues/16614 | https://github.com/apache/airflow/pull/30112 | 869c1e3581fa163bbaad11a2d5ddaf8cf433296d | e09d00e6ab444ec323805386c2056c1f8a0ae6e7 | 2021-06-23T12:17:39Z | python | 2023-03-17T15:08:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,611 | ["airflow/kubernetes/pod_generator.py", "tests/kubernetes/models/test_secret.py", "tests/kubernetes/test_pod_generator.py"] | Pod name with period is causing issues for some apps in k8s | **Apache Airflow version**: 2.0.0+
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): All versions affected
**Environment**:
It affects all possible host configurations. The issue impacts KubernetesPodOperator.
What scripts will actually be affected inside of KubernetesPodOperator container - big question. In my scenario it was locally executed **Apache Spark**.
**What happened**:
This issue is consequence of change that was introduced in this commit/line:
https://github.com/apache/airflow/commit/862443f6d3669411abfb83082c29c2fad7fcf12d#diff-01764c9ba7b2270764a59e7ff281c95809071b1f801170ee75a02481a8a730aaR475
The pod operator generates a pod name that has a period in it. Whatever pod name is picked gets inherited by the container itself and, as a result, becomes its hostname. The problem with hostnames in Linux is that if a hostname contains a period, it is immediately assumed to be a valid domain that DNS should be able to resolve. The md5 digest in this weird case gets treated as a first-level "domain". Obviously, some libraries have no idea what to do with a DNS domain like `airflow-pod-operator.9b702530e25f40f2b1cf6220842280c`, so they throw exceptions (Unknown host, Unable to resolve hostname, or similar).
In my use case, the component that was barking was **Apache Spark** in local mode.
Error line is referring to Spark URL:
> 21/05/21 11:20:01 ERROR SparkApp$: org.apache.spark.SparkException: Invalid Spark URL: spark://HeartbeatReceiver@airflow-pod-operator.9b702530e25e30f2b1cf1622082280c:38031
**What you expected to happen**:
Apache Spark just works without issues and is able to resolve itself by hostname without any code changes.
**How to reproduce it**:
As I'm not certain about full list of affected applications, I would for now assume anything that tries to resolve "current pod's" hostname.
In my scenario I was running Wordcount of Apache Spark in local mode in KubernetesPodOperator. Perhaps, there might be easier ways to replicate it.
**Anything else we need to know**:
Having this kind of unresolvable real domain vs hostname confusion in my opinion is very very bad, and should be discouraged.
The way for me to mitigate this issue right now was to build my own subclass of KubernetesPodOperator that overrides the `create_pod_request_obj` method to call the older way of generating unique pod-name identifiers, which used a - (hyphen) instead of the . (period) notation in the name. | https://github.com/apache/airflow/issues/16611 | https://github.com/apache/airflow/pull/19036 | 121e1f197ac580ea4712b7a0e72b02cf7ed9b27a | 563315857d1f54f0c56059ff38dc6aa9af4f08b7 | 2021-06-23T03:48:01Z | python | 2021-11-30T05:00:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,610 | ["airflow/www/static/js/dag_dependencies.js"] | Dag Dependency page not showing anything | **Apache Airflow version**: 2.1.
**Environment**: Ubuntu 20.04
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): UBUNTU 20.04 LTS
- **Kernel** (e.g. `uname -a`): Linux 20.04.1-Ubuntu SMP Tue Jun 1 09:54:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**: python and pip
**What happened**: After performing the upgrade from 2.0.2 to 2.1.0 using the guide available in the documentation, Airflow upgraded successfully, but the DAG dependency page isn't working as expected.
The DAG dependency page doesn't show the dependency graph.
**What you expected to happen**: I expected the dag dependency page to show the dags and their dependency in a Graph view
**How to reproduce it**: It's reproduced by opening the page; it happens every time.

How often does this problem occur? Once? Every time etc?
Every time
Any relevant logs to include? Put them here inside a detail tag:
<details><summary>Upgrade Check Log</summary>
/home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:346 DeprecationWarning: The hide_sensitive_variable_fields option in [admin] has been moved to the hide_sensitive_var_conn_fields option in [core] - the old setting has been used, but please update your config.
/home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:346 DeprecationWarning: The default_queue option in [celery] has been moved to the default_queue option in [operators] - the old setting has been used, but please update your config.
/home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/plugins_manager.py:239 DeprecationWarning: This decorator is deprecated. In previous versions, all subclasses of BaseOperator must use apply_default decorator for the `default_args` feature to work properly. In current version, it is optional. The decorator is applied automatically using the metaclass.
/home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:346 DeprecationWarning: The default_queue option in [celery] has been moved to the default_queue option in [operators] - the old setting has been used, but please update your config.
</details>
| https://github.com/apache/airflow/issues/16610 | https://github.com/apache/airflow/pull/24166 | 7e56bf662915cd58849626d7a029a4ba70cdda4d | 3e51d8029ba34d3a76b3afe53e257f1fb5fb9da1 | 2021-06-23T03:42:06Z | python | 2022-06-07T11:25:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,608 | ["airflow/models/dagbag.py", "airflow/utils/dag_cycle_tester.py", "tests/utils/test_dag_cycle.py"] | Please consider renaming function airflow.utils.dag_cycle_tester.test_cycle to check_cycle | **Description**
function test_cycle will be collected by pytest if this function is imported in a user-defined test.
Such an error will be emitted as pytest will treat it as a test.
```
test setup failed
file /opt/conda/lib/python3.8/site-packages/airflow/utils/dag_cycle_tester.py, line 27
def test_cycle(dag):
E fixture 'dag' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
```
**Use case / motivation**
This can be simply "fixed" by renaming the function to check_cycle, although the user can also do the renaming upon importing the function.
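For example, a minimal sketch of that user-side rename-on-import (the test name and DagBag usage are just illustrative):
```python
from airflow.models import DagBag
from airflow.utils.dag_cycle_tester import test_cycle as check_cycle  # rename so pytest won't collect it

def test_my_dags_have_no_cycles():
    for dag in DagBag(include_examples=False).dags.values():
        check_cycle(dag)
```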
This change by airflow will reduce the amount of surprise received by the end-user, and save a large number of debugging hours.
**Are you willing to submit a PR?**
The changeset is too small to be worth the trouble of a PR. | https://github.com/apache/airflow/issues/16608 | https://github.com/apache/airflow/pull/16617 | 86d0a96bf796fd767cf50a7224be060efa402d94 | fd7e6e1f6ed039000d8095d6657546b5782418e1 | 2021-06-23T01:31:41Z | python | 2021-06-24T18:05:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,597 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Airflow logging secrets masker assumes dict_key is type `str` | **Apache Airflow version**: 2.1.0
**What happened**:
Airflow logging assumes dict keys are of type `str`.
```
logging.info("Dictionary where key is int type: %s", modified_table_mapping)
File "/usr/lib64/python3.6/logging/__init__.py", line 1902, in info
root.info(msg, *args, **kwargs)
File "/usr/lib64/python3.6/logging/__init__.py", line 1308, in info
self._log(INFO, msg, args, **kwargs)
File "/usr/lib64/python3.6/logging/__init__.py", line 1444, in _log
self.handle(record)
File "/usr/lib64/python3.6/logging/__init__.py", line 1453, in handle
if (not self.disabled) and self.filter(record):
File "/usr/lib64/python3.6/logging/__init__.py", line 720, in filter
result = f.filter(record)
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 157, in filter
record.__dict__[k] = self.redact(v)
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 193, in redact
return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 193, in <dictcomp>
return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 189, in redact
if name and should_hide_value_for_key(name):
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 74, in should_hide_value_for_key
name = name.strip().lower()
AttributeError: 'int' object has no attribute 'strip'
```
**How to reproduce it**:
Define a dictionary whose keys are of type `int` and log it in any Airflow task.
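A minimal sketch of such a reproduction (the function and values are hypothetical, e.g. used as the python_callable of a PythonOperator):
```python
import logging

def my_callable():
    # A dict with int keys: logging it sends the non-str keys through the secrets masker.
    modified_table_mapping = {1: "table_a", 2: "table_b"}
    logging.info("Dictionary where key is int type: %s", modified_table_mapping)
```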
| https://github.com/apache/airflow/issues/16597 | https://github.com/apache/airflow/pull/16601 | 129fc61a06932175387175b4c2d7e57f00163556 | 18cb0bbdbbb24e98ea8a944e97501a5657c88326 | 2021-06-22T17:08:39Z | python | 2021-06-22T20:49:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,591 | ["setup.cfg"] | Support for Jinja2 3.x | **Description**
Currently, Jinja2 is required to be < 2.12.0.
https://github.com/apache/airflow/blob/7af18ac856b470f91e75d419f27e78bc2a0b215b/setup.cfg#L116
However, the latest version of Jinja2 is 3.0.1.
**Use case / motivation**
This causes some build issues in my monorepo, because other libraries depend on Jinja2 3.x but Airflow does not yet support it. I stepped through the git blame, and it doesn't seem like there's a specific reason why Jinja2 3.x is not supported; the upper-bound appears to be there for stability and not incompatibility reasons.
**Are you willing to submit a PR?**
I would be happy to submit a PR, but I would need some guidance on how to test this change.
| https://github.com/apache/airflow/issues/16591 | https://github.com/apache/airflow/pull/16595 | ffb1fcacff21c31d7cacfbd843a84208fca38d1e | 5d5268f5e553a7031ebfb08754c31fca5c13bda7 | 2021-06-22T14:34:59Z | python | 2021-06-24T19:07:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,590 | ["airflow/providers/google/cloud/secrets/secret_manager.py"] | google.api_core.exceptions.Unknown: None Stream removed (Snowflake and GCP Secret Manager) | **Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: Astronomer-based local setup using Docker `quay.io/astronomer/ap-airflow:2.1.0-2-buster-onbuild`
- **OS** (e.g. from /etc/os-release): `Debian GNU/Linux 10 (buster)`
- **Kernel** (e.g. `uname -a`): `Linux 7a92d1fd4406 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 GNU/Linux`
- **Install tools**: apache-airflow-providers-snowflake
- **Others**:
**What happened**:
Having configured Snowflake connection and pointing to GCP Secret Manager backend `AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend` I am getting a pretty consistent error traced all the way down to gRPC
```File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Stream removed"
debug_error_string = "{"created":"@1624370913.481874500","description":"Error received from peer ipv4:172.xxx.xx.xxx:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"Stream removed","grpc_status":2}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 150, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.7/site-packages/airflow/operators/python.py", line 161, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/local/airflow/dags/qe/weekly.py", line 63, in snfk_hook
df = hook.get_pandas_df(sql)
File "/usr/local/lib/python3.7/site-packages/airflow/hooks/dbapi.py", line 116, in get_pandas_df
with closing(self.get_conn()) as conn:
File "/usr/local/lib/python3.7/site-packages/airflow/providers/snowflake/hooks/snowflake.py", line 220, in get_conn
conn_config = self._get_conn_params()
File "/usr/local/lib/python3.7/site-packages/airflow/providers/snowflake/hooks/snowflake.py", line 152, in _get_conn_params
self.snowflake_conn_id # type: ignore[attr-defined] # pylint: disable=no-member
File "/usr/local/lib/python3.7/site-packages/airflow/hooks/base.py", line 67, in get_connection
conn = Connection.get_connection_from_secrets(conn_id)
File "/usr/local/lib/python3.7/site-packages/airflow/models/connection.py", line 376, in get_connection_from_secrets
conn = secrets_backend.get_connection(conn_id=conn_id)
File "/usr/local/lib/python3.7/site-packages/airflow/secrets/base_secrets.py", line 64, in get_connection
conn_uri = self.get_conn_uri(conn_id=conn_id)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/google/cloud/secrets/secret_manager.py", line 134, in get_conn_uri
return self._get_secret(self.connections_prefix, conn_id)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/google/cloud/secrets/secret_manager.py", line 170, in _get_secret
return self.client.get_secret(secret_id=secret_id, project_id=self.project_id)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/google/cloud/_internal_client/secret_manager_client.py", line 86, in get_secret
response = self.client.access_secret_version(name)
File "/usr/local/lib/python3.7/site-packages/google/cloud/secretmanager_v1/gapic/secret_manager_service_client.py", line 968, in access_secret_version
request, retry=retry, timeout=timeout, metadata=metadata
File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.Unknown: None Stream removed
```
**What you expected to happen**:
DAG successfully retrieves a configured connection for Snowflake from GCP Secret Manager and executes a query returning back a result.
**How to reproduce it**:
1. Configure Google Cloud Platform as secrets backend
`AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend`
2. Configure a Snowflake connection (`requirements.txt` has `apache-airflow-providers-snowflake`)
3. Create a DAG which uses SnowflakeHook similar to this:
```python
import logging

import airflow
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.contrib.hooks.snowflake_hook import SnowflakeHook
from airflow.contrib.operators.snowflake_operator import SnowflakeOperator

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

args = {"owner": "Airflow", "start_date": airflow.utils.dates.days_ago(2)}

dag = DAG(
    dag_id="snowflake_automation", default_args=args, schedule_interval=None
)

snowflake_query = [
    """create table public.test_employee (id number, name string);""",
    """insert into public.test_employee values(1, 'Sam'),(2, 'Andy'),(3, 'Gill');""",
]


def get_row_count(**context):
    dwh_hook = SnowflakeHook(snowflake_conn_id="snowflake_conn")
    result = dwh_hook.get_first("select count(*) from public.test_employee")
    logging.info("Number of rows in `public.test_employee` - %s", result[0])


with dag:
    create_insert = SnowflakeOperator(
        task_id="snowfalke_create",
        sql=snowflake_query,
        snowflake_conn_id="snowflake_conn",
    )
    get_count = PythonOperator(task_id="get_count", python_callable=get_row_count)

    create_insert >> get_count
```
**Anything else we need to know**:
I looked around to see if this is an issue with Google's `api-core`, and it seems like somebody has done research into it and pointed out that it might be a downstream implementation issue and not an `api-core` issue: https://stackoverflow.com/questions/67374613/why-does-accessing-this-variable-fail-after-it-is-used-in-a-thread
| https://github.com/apache/airflow/issues/16590 | https://github.com/apache/airflow/pull/17539 | 8bd748d8b44bbf26bb423cdde5519cd0386d70e8 | b06d52860327cc0a52bcfc4f2305344b3f7c2b1d | 2021-06-22T14:33:40Z | python | 2021-08-11T17:51:08Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,587 | ["airflow/www/auth.py", "airflow/www/security.py", "airflow/www/templates/airflow/no_roles_permissions.html", "airflow/www/views.py", "tests/www/test_security.py", "tests/www/views/test_views_acl.py", "tests/www/views/test_views_base.py"] | Users with Guest role stuck in redirect loop upon login | Airflow 2.1.0, Docker
**What happened**:
Users with the Guest role assigned are stuck in a redirect loop once they attempt to login successfully to the web interface.
**What you expected to happen**:
Get minimal access to the dashboard with the appropriate views for a guest role
**How to reproduce it**:
1. Assign a guest role to any user and remove any other roles with the administrator user.
2. Logout from the admin account
3. Login as the guest user
4. You will notice constant HTTP redirects, and the dashboard will not show up.
| https://github.com/apache/airflow/issues/16587 | https://github.com/apache/airflow/pull/17838 | 933d863d6d39198dee40bd100658aa69e95d1895 | e18b6a6d19f9ea0d8fe760ba00adf38810f0e510 | 2021-06-22T12:57:03Z | python | 2021-08-26T20:59:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,573 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | State of this instance has been externally set to up_for_retry. Terminating instance. | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.14
Environment:
Cloud provider or hardware configuration: Azure
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others:
**What happened**:
An occasional airflow tasks fails with the following error
```
[2021-06-21 05:39:48,424] {local_task_job.py:184} WARNING - State of this instance has been externally set to up_for_retry. Terminating instance.
[2021-06-21 05:39:48,425] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 259
[2021-06-21 05:39:48,426] {taskinstance.py:1238} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-06-21 05:39:48,426] {bash.py:185} INFO - Sending SIGTERM signal to bash process group
[2021-06-21 05:39:49,133] {process_utils.py:66} INFO - Process psutil.Process(pid=329, status='terminated', started='04:32:14') (329) terminated with exit code None
[2021-06-21 05:39:50,278] {taskinstance.py:1454} ERROR - Task received SIGTERM signal
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1284, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1309, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/operators/bash.py", line 171, in execute
for raw_line in iter(self.sub_process.stdout.readline, b''):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1240, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
```
There is no indication as to what caused this error. The worker instance is healthy and task did not hit the task timeout.
**What you expected to happen**:
Task to complete successfully. If a task had to fail for an unavoidable reason (like a timeout), it would be helpful to provide the reason for the failure.
**How to reproduce it**:
I'm not able to reproduce it consistently. It happens every now and then with the same error as provided above.
I also wish to know how to debug these failures.
| https://github.com/apache/airflow/issues/16573 | https://github.com/apache/airflow/pull/19375 | e57c74263884ad5827a5bb9973eb698f0c269cc8 | 38d329bd112e8be891f077b4e3300182930cf74d | 2021-06-21T20:28:21Z | python | 2021-11-03T06:45:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,564 | ["airflow/models/taskinstance.py"] | No more SQL Exception in 2.1.0 | **Apache Airflow version**:
2.1.0
**Environment**:
- self hosted docker-compose based stack
**What happened**:
Using JDBCOperator, if the SQL results in an error we only get:
```
[2021-06-21 11:05:55,377] {local_task_job.py:151} INFO - Task exited with return code 1
```
Before upgrading from 2.0.1 we got error details in logs:
```
jaydebeapi.DatabaseError: java.sql.SQLException:... [MapR][DrillJDBCDriver](500165) Query execution error. Details: VALIDATION ERROR:...
```
**What you expected to happen**:
See `SQLException` in logs
**How to reproduce it**:
Perform a generic SQL task with broken SQL.
**Anything else we need to know**:
I think is somehow related to https://github.com/apache/airflow/commit/abcd48731303d9e141bdc94acc2db46d73ccbe12#diff-4fd3febb74d94b2953bf5e9b4a981b617949195f83d96f4a589c3078085959b7R202 | https://github.com/apache/airflow/issues/16564 | https://github.com/apache/airflow/pull/16805 | b5ef3c841f735ea113e5d3639a620c2b63092e43 | f40ade4643966b3e78493589c5459ca2c01db0c2 | 2021-06-21T13:15:55Z | python | 2021-07-06T19:19:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,551 | ["airflow/models/dag.py", "tests/models/test_dag.py", "tests/serialization/test_dag_serialization.py"] | AttributeError: 'datetime.timezone' object has no attribute 'name' |
**Apache Airflow version**: 2.0.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
In a DAG with `datetime(2021, 5, 31, tzinfo=timezone.utc)` it will raise an `AttributeError: 'datetime.timezone' object has no attribute 'name'` in the scheduler.
It seems that airflow relies on the tzinfo object to have a `.name` attribute so the "canonical" `datetime.timezone.utc` does not comply with that requirement.
```
AttributeError: 'datetime.timezone' object has no attribute 'name'
Process DagFileProcessor302-Process:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 184, in _run_file_processor
result: Tuple[int, int] = dag_file_processor.process_file(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 648, in process_file
dagbag.sync_to_db()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 556, in sync_to_db
for attempt in run_with_db_retries(logger=self.log):
File "/home/airflow/.local/lib/python3.8/site-packages/tenacity/__init__.py", line 390, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/airflow/.local/lib/python3.8/site-packages/tenacity/__init__.py", line 356, in iter
return fut.result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 570, in sync_to_db
DAG.bulk_write_to_db(self.dags.values(), session=session)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 1892, in bulk_write_to_db
orm_dag.calculate_dagrun_date_fields(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 2268, in calculate_dagrun_date_fields
self.next_dagrun, self.next_dagrun_create_after = dag.next_dagrun_info(most_recent_dag_run)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 536, in next_dagrun_info
next_execution_date = self.next_dagrun_after_date(date_last_automated_dagrun)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 571, in next_dagrun_after_date
next_start = self.following_schedule(now)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 485, in following_schedule
tz = pendulum.timezone(self.timezone.name)
AttributeError: 'datetime.timezone' object has no attribute 'name'
```
**What you expected to happen**:
If `start_date` or any other input parameter requires a `tzinfo` with a `name` attribute, that should be checked for in the DAG object and a more specific error message produced, not a bare `AttributeError`.
Also I guess this requirement should be explicitly mentioned in https://airflow.apache.org/docs/apache-airflow/stable/timezone.html with a comment like
```
you can't use datetime.timezone.utc because it does not have a name attribute
```
Or, even better, it would be best not to rely on the presence of a `name` attribute in the tzinfo at all.
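For reference, a sketch of the workaround that avoids the error, assuming pendulum (which ships with Airflow) is acceptable here; pendulum timezones do expose a `name` attribute:
```python
import pendulum
from datetime import datetime
from airflow import DAG

dag = DAG(
    dag_id="xxxx",
    start_date=datetime(2021, 5, 31, tzinfo=pendulum.timezone("UTC")),  # instead of timezone.utc
    schedule_interval="0 8 * * *",
    catchup=False,
)
```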
**How to reproduce it**:
```
from datetime import timedelta, datetime, timezone
args = {
"owner": "airflow",
"retries": 3,
}
dag = DAG(
dag_id="xxxx",
default_args=args,
start_date=datetime(2021, 5, 31, tzinfo=timezone.utc),
schedule_interval="0 8 * * *",
max_active_runs=1,
dagrun_timeout=timedelta(minutes=60),
catchup=False,
description="xxxxx",
)
```
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/16551 | https://github.com/apache/airflow/pull/16599 | 1aa5e20fb3cd6e66bc036f3778dfe6e93c9b3d98 | 86c20910aed48f7d5b2ebaa91fa40d47c52d7db3 | 2021-06-20T17:49:13Z | python | 2021-06-23T14:31:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,533 | ["docs/exts/docs_build/fetch_inventories.py"] | Documentation building fails if helm-chart is not being built | If you have never built the `helm-chart` documentation package locally, the intersphinx inventory is missing for it and it cannot be downloaded, as the helm-chart package is never published as a package (I guess).
This breaks, for example, our command to build providers' documentation when you release providers:
```
cd "${AIRFLOW_REPO_ROOT}"
./breeze build-docs -- \
--for-production \
--package-filter apache-airflow-providers \
--package-filter 'apache-airflow-providers-*'
```
Adding `--package-filter 'helm-chart'` helps, but it also builds the helm-chart documentation, which is undesired in this case (and it causes the docs build for most providers to fail on the first pass, until the `helm-chart` documentation is built).
Possibly there is a way to get rid of that dependency?
The error you get:
```
apache-airflow-providers Traceback (most recent call last):
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/cmd/build.py", line 279, in build_main
apache-airflow-providers args.tags, args.verbosity, args.jobs, args.keep_going)
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/application.py", line 278, in __init__
apache-airflow-providers self._init_builder()
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/application.py", line 337, in _init_builder
apache-airflow-providers self.events.emit('builder-inited')
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/events.py", line 110, in emit
apache-airflow-providers results.append(listener.handler(self.app, *args))
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 238, in load_mappings
apache-airflow-providers updated =
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 238, in <listcomp>
apache-airflow-providers updated =
apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 425, in result
apache-airflow-providers return self.__get_result()
apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
apache-airflow-providers raise self._exception
apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
apache-airflow-providers result = self.fn(*self.args, **self.kwargs)
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 224, in fetch_inventory_group
apache-airflow-providers "with the following issues:") + "\n" + issues)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1642, in warning
apache-airflow-providers self.log(WARNING, msg, *args, **kwargs)
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/util/logging.py", line 126, in log
apache-airflow-providers super().log(level, msg, *args, **kwargs)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1674, in log
apache-airflow-providers self.logger.log(level, msg, *args, **kwargs)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1374, in log
apache-airflow-providers self._log(level, msg, args, **kwargs)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1444, in _log
apache-airflow-providers self.handle(record)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1454, in handle
apache-airflow-providers self.callHandlers(record)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1516, in callHandlers
apache-airflow-providers hdlr.handle(record)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 861, in handle
apache-airflow-providers rv = self.filter(record)
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 720, in filter
apache-airflow-providers result = f.filter(record)
apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/util/logging.py", line 422, in filter
apache-airflow-providers raise exc
apache-airflow-providers sphinx.errors.SphinxWarning: failed to reach any of the inventories with the following issues:
apache-airflow-providers intersphinx inventory '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' not fetchable due to <class
'FileNotFoundError'>: [Errno 2] No such file or directory: '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv'
apache-airflow-providers
apache-airflow-providers [91mWarning, treated as error:[39;49;00m
apache-airflow-providers failed to reach any of the inventories with the following issues:
apache-airflow-providers intersphinx inventory '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' not fetchable due to <class
'FileNotFoundError'>: [Errno 2] No such file or directory: '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv'
``` | https://github.com/apache/airflow/issues/16533 | https://github.com/apache/airflow/pull/16535 | 28e285ef9a4702b3babf6ed3c094af07c017581f | 609620a39c79dc410943e5fcce0425f6ef32cd3e | 2021-06-18T18:56:03Z | python | 2021-06-19T01:20:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,520 | ["airflow/hooks/dbapi.py", "airflow/providers/postgres/hooks/postgres.py", "tests/hooks/test_dbapi.py", "tests/providers/postgres/hooks/test_postgres.py"] | DbApiHook.get_uri() doesn't follow PostgresHook schema argument | **Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
`get_uri()` and `get_sqlalchemy_engine()` are not overridden in PostgresHook.
When using `PostgresHook('CONNECTION_NAME', schema='another_schema').get_sqlalchemy_engine()`, it will still use the connection's default schema setting through `get_uri()`, instead of the schema that is passed to `PostgresHook()`.
**What you expected to happen**:
`get_uri()` should honor the schema passed to PostgresHook.
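An interim workaround sketch (the connection id is a placeholder, and `Connection.get_uri()` is assumed to be acceptable here):
```python
from airflow.hooks.base import BaseHook

conn = BaseHook.get_connection("CONNECTION_NAME")
conn.schema = "another_schema"  # override before building the URI
uri = conn.get_uri()            # now reflects the schema we actually want
```
Depending on the conn_type you may still need to normalize the URI scheme before handing it to SQLAlchemy.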
**How to reproduce it**:
`PostgresHook('CONNECTION_NAME', schema='another_schema').get_uri()`
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/16520 | https://github.com/apache/airflow/pull/16521 | 86c20910aed48f7d5b2ebaa91fa40d47c52d7db3 | 3ee916e9e11f0e9d9c794fa41b102161df3f2cd4 | 2021-06-18T03:38:10Z | python | 2021-06-23T18:54:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,502 | ["airflow/providers/amazon/aws/hooks/athena.py", "tests/providers/amazon/aws/hooks/test_athena.py"] | Feature: Method in AWSAthenaHook to get output URI from S3 |
**Description**
Athena is a commonly used service amongst data engineers, and one often needs the location of the CSV result file in S3. This method would return the S3 URI of the CSV result of an Athena query.
**Use case / motivation**
The current implementation of [AWSAthenaHook](https://airflow.apache.org/docs/apache-airflow/1.10.12/_modules/airflow/contrib/hooks/aws_athena_hook.html) has methods to get the result data as a list of dictionaries, which is not always desired. Instead, the S3 URI of the CSV file is more apt when there are many rows in the result. One can use the S3 URI to process the data further (at some other service like Batch).
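A rough sketch of what such a helper could look like (the function name is hypothetical), assuming the provider's AWSAthenaHook and the boto3 Athena client it wraps via `get_conn()`:
```python
from airflow.providers.amazon.aws.hooks.athena import AWSAthenaHook

def get_output_location(hook: AWSAthenaHook, query_execution_id: str) -> str:
    """Return the s3:// URI of the CSV produced by an Athena query."""
    response = hook.get_conn().get_query_execution(QueryExecutionId=query_execution_id)
    return response["QueryExecution"]["ResultConfiguration"]["OutputLocation"]
```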
<!-- What do you want to happen?
Rather than telling us how you might implement this solution, try to take a
step back and describe what you are trying to achieve.
-->
This method shall let the used get S3 URI of result CSV file of athena query.(If there some way to add some method to get S3 file URI in `AWSAthenaOperator` then it can be helpful as well)
**Are you willing to submit a PR?**
Yes
<!--- We accept contributions! -->
**Related Issues**
<!-- Is there currently another issue associated with this? -->
| https://github.com/apache/airflow/issues/16502 | https://github.com/apache/airflow/pull/20124 | 70818319a038f1d17c179c278930b5b85035085d | 0e2a0ccd3087f53222e7859f414daf0ffa50dfbb | 2021-06-17T10:55:48Z | python | 2021-12-08T20:38:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,500 | ["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js"] | Pause/Unpause DAG tooltip doesn't disappear after click |
**Apache Airflow version**:
2.2.0dev
**What happened**:
The on/off toggle shows a tooltip "Pause/Unpause DAG" when hovering over it.
This works as expected. However, if you click the toggle, the tooltip will stick until you click elsewhere on the screen.
**What you expected to happen**:
The tooltip should disappear when the user isn't hovering over the button.
**How to reproduce it**:

| https://github.com/apache/airflow/issues/16500 | https://github.com/apache/airflow/pull/17957 | 9c19f0db7dd39103ac9bc884995d286ba8530c10 | ee93935bab6e5841b48a07028ea701d9aebe0cea | 2021-06-17T08:57:10Z | python | 2021-09-01T12:14:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,493 | ["airflow/www/static/js/connection_form.js"] | UI: Port is not an integer error on Connection Test | When adding the port in the Webserver, it errors because it treats the port as a string instead of an int.

Error in Webserver:
```
[2021-06-16 22:56:33,430] {validation.py:204} ERROR - http://localhost:28080/api/v1/connections/test validation error: '25433' is not of type 'integer' - 'port'
```
cc @msumit | https://github.com/apache/airflow/issues/16493 | https://github.com/apache/airflow/pull/16497 | 1c82b4d015a1785a881bb916ffa0265249c2cde7 | e72e5295fd5e710599bc0ecc9a70b0b3b5728f38 | 2021-06-16T23:22:07Z | python | 2021-06-17T11:39:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,473 | ["airflow/utils/log/secrets_masker.py"] | secrets_masker RecursionError with nested TriggerDagRunOperators | **Apache Airflow version**: 2.1.0
**Environment**: tested on Windows docker-compose envirnoment and on k8s (both with celery executor).
**What happened**:
```
[2021-06-16 07:56:32,682] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/operators/trigger_dagrun.py", line 134, in execute
replace_microseconds=False,
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 123, in trigger_dag
replace_microseconds=replace_microseconds,
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 48, in _trigger_dag
dag = dag_bag.get_dag(dag_id) # prefetch dag if it is stored serialized
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 186, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 252, in _add_dag_from_db
dag = row.dag
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/serialized_dag.py", line 175, in dag
dag = SerializedDAG.from_dict(self.data) # type: Any
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 792, in from_dict
return cls.deserialize_dag(serialized_obj['dag'])
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 716, in deserialize_dag
v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 716, in <dictcomp>
v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 493, in deserialize_operator
op_predefined_extra_links = cls._deserialize_operator_extra_links(v)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 600, in _deserialize_operator_extra_links
if _operator_link_class_path in get_operator_extra_links():
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/serialization/serialized_objects.py", line 86, in get_operator_extra_links
_OPERATOR_EXTRA_LINKS.update(ProvidersManager().extra_links_class_names)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers_manager.py", line 400, in extra_links_class_names
self.initialize_providers_manager()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers_manager.py", line 129, in initialize_providers_manager
self._discover_all_providers_from_packages()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers_manager.py", line 151, in _discover_all_providers_from_packages
log.debug("Loading %s from package %s", entry_point, package_name)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1366, in debug
self._log(DEBUG, msg, args, **kwargs)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1514, in _log
self.handle(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1524, in handle
self.callHandlers(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1586, in callHandlers
hdlr.handle(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 890, in handle
rv = self.filter(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 751, in filter
result = f.filter(record)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 157, in filter
record.__dict__[k] = self.redact(v)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in redact
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in <genexpr>
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in redact
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in <genexpr>
return tuple(self.redact(subval) for subval in item)
....
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in redact
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in <genexpr>
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in redact
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 203, in <genexpr>
return tuple(self.redact(subval) for subval in item)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/log/secrets_masker.py", line 201, in redact
elif isinstance(item, (tuple, set)):
RecursionError: maximum recursion depth exceeded in __instancecheck__
```
**What you expected to happen**:
I think the new secrets masker is not able to handle a `TriggerDagRunOperator` that triggers a DAG which itself contains a `TriggerDagRunOperator`.
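The traceback ends with `secrets_masker.redact` recursing over nested tuples (the repeated `line 203` frames). As a rough illustration only — a simplified stand-in, not Airflow's actual `SecretsMasker` — the failure mode is a recursive redaction walking a container that is nested more deeply than Python's recursion limit:
```python
import sys

def redact(item):
    # Simplified stand-in for SecretsMasker.redact: mask strings, recurse into containers.
    if isinstance(item, str):
        return "***"
    elif isinstance(item, (tuple, set)):
        return tuple(redact(subval) for subval in item)
    return item

# Build a tuple nested more deeply than the default recursion limit (usually 1000).
nested = ("secret",)
for _ in range(sys.getrecursionlimit() + 100):
    nested = (nested,)

redact(nested)  # RecursionError: maximum recursion depth exceeded
```
Something in the provider entry-point objects being logged at debug level apparently produces such a deeply nested (or self-referential) structure, so masking the log arguments never bottoms out.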
**How to reproduce it**:
```python
from datetime import datetime, timedelta

from airflow.models import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator


def pprint(**kwargs):
    print(1)


with DAG("test",
         catchup=False,
         max_active_runs=1,
         start_date=datetime(2021, 1, 1),
         is_paused_upon_creation=False,
         schedule_interval=None) as dag:
    task_observe_pea_data = PythonOperator(
        task_id="test_task",
        python_callable=pprint,
        provide_context=True
    )

with DAG("test_1",
         catchup=False,
         max_active_runs=1,
         start_date=datetime(2021, 1, 1),
         is_paused_upon_creation=False,
         schedule_interval=None) as dag:
    task_observe_pea_data = TriggerDagRunOperator(
        task_id="test_trigger_1",
        trigger_dag_id="test"
    )

with DAG("test_2",
         catchup=False,
         max_active_runs=1,
         start_date=datetime(2021, 1, 1),
         is_paused_upon_creation=False,
         schedule_interval=None) as dag:
    task_observe_pea_data = TriggerDagRunOperator(
        task_id="test_trigger_2",
        trigger_dag_id="test_1"
    )
```
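With those three DAGs loaded, the error shows up when the outermost DAG is triggered (from the UI, the CLI, or the API). As a sketch — assuming the DAGs have already been parsed and serialized, and that this runs with access to the metadata database — the same `_trigger_dag` code path seen at the top of the traceback can be exercised from Python like this:
```python
# Sketch only: the experimental API module is the one visible in the traceback.
from airflow.api.common.experimental.trigger_dag import trigger_dag

# Triggering test_2 runs test_trigger_2, which triggers test_1, which in turn
# triggers test -- the nested TriggerDagRunOperator chain this report is about.
trigger_dag(dag_id="test_2")
```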
**Anything else we need to know**:
How often does this problem occur? Every time.
I have tried `hide_sensitive_var_conn_fields=False`, but the error still occurs. | https://github.com/apache/airflow/issues/16473 | https://github.com/apache/airflow/pull/16491 | 2011da25a50edfcdf7657ec172f57ae6e43ca216 | 7453d3e81039573f4d621d13439bd6bcc97e6fa5 | 2021-06-16T08:08:34Z | python | 2021-06-17T15:38:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 16,468 | ["docs/apache-airflow/howto/email-config.rst"] | SMTP connection type clarifications are needed | **Description**
The documentation on how to set up SMTP is not clear:
https://airflow.apache.org/docs/apache-airflow/stable/howto/email-config.html
It says to create a connection named `smtp_default` but does not say what type of connection to create. There is no connection type named `SMTP`.
It was suggested to me in Airflow Slack to create it as an `HTTP` connection, but this connection type does not contain all the fields necessary to configure SMTP, particularly with TLS.
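For context, the fields such a connection type would need are the standard SMTP parameters: host, port, login, password, and whether STARTTLS/SSL is enabled. A quick way to sanity-check those values outside Airflow — the host, credentials, and addresses below are placeholders, not anything the Airflow docs prescribe — is a short `smtplib` script:
```python
import smtplib
from email.message import EmailMessage

# Placeholder values -- substitute your own SMTP server and credentials.
host, port = "smtp.example.com", 587
user, password = "sender@example.com", "app-password"

msg = EmailMessage()
msg["From"] = user
msg["To"] = "recipient@example.com"
msg["Subject"] = "Airflow SMTP sanity check"
msg.set_content("If this arrives, the SMTP/TLS settings are usable.")

with smtplib.SMTP(host, port, timeout=30) as server:
    server.starttls()  # the TLS step an HTTP-type connection has no field for
    server.login(user, password)
    server.send_message(msg)
```
Whatever values pass this check are the ones that end up split across Airflow's `[smtp]` settings or the `smtp_default` connection, which is why a dedicated connection type with all of these fields would help.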
**Use case / motivation**
I would like the instructions for setting up SMTP in Airflow to be clearer, and it would make sense that there is an `SMTP` connection type with all necessary fields.
| https://github.com/apache/airflow/issues/16468 | https://github.com/apache/airflow/pull/16523 | bbc627a3dab17ba4cf920dd1a26dbed6f5cebfd1 | df1220a420b8fd7c6fcdcacc5345459c284acff2 | 2021-06-15T21:17:14Z | python | 2021-06-18T12:47:15Z |