Dataset schema (column: type, with observed min–max sizes):

- status: stringclasses (1 class)
- repo_name: stringlengths (9–24)
- repo_url: stringlengths (28–43)
- issue_id: int64 (1–104k)
- updated_files: stringlengths (8–1.76k)
- title: stringlengths (4–369)
- body: stringlengths (0–254k)
- issue_url: stringlengths (37–56)
- pull_url: stringlengths (37–54)
- before_fix_sha: stringlengths (40–40)
- after_fix_sha: stringlengths (40–40)
- report_datetime: timestamp[ns, tz=UTC]
- language: stringclasses (5 classes)
- commit_datetime: timestamp[us, tz=UTC]
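One row of this schema can be sketched as a plain Python dataclass; the field names follow the column list above, the class name `IssueFixRecord` is an illustrative choice, and timestamps are simplified to `datetime` objects. The example values are taken from the first record below.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class IssueFixRecord:
    """One row of the issue/fix dataset described above."""
    status: str
    repo_name: str
    repo_url: str
    issue_id: int
    updated_files: List[str]
    title: str
    body: str
    issue_url: str
    pull_url: str
    before_fix_sha: str   # 40-character commit hash
    after_fix_sha: str    # 40-character commit hash
    report_datetime: datetime
    language: str
    commit_datetime: datetime

record = IssueFixRecord(
    status="closed",
    repo_name="apache/airflow",
    repo_url="https://github.com/apache/airflow",
    issue_id=10921,
    updated_files=["airflow/providers/databricks/operators/databricks.py"],
    title="Asynchronous run for DatabricksRunNowOperator",
    body="...",
    issue_url="https://github.com/apache/airflow/issues/10921",
    pull_url="https://github.com/apache/airflow/pull/20536",
    before_fix_sha="a63753764bce26fd2d13c79fc60df7387b98d424",
    after_fix_sha="58afc193776a8e811e9a210a18f93dabebc904d4",
    report_datetime=datetime(2020, 9, 14, 5, 8, 47, tzinfo=timezone.utc),
    language="python",
    commit_datetime=datetime(2021, 12, 28, 17, 13, 17, tzinfo=timezone.utc),
)
```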
status: closed
repo_name: apache/airflow
repo_url: https://github.com/apache/airflow
issue_id: 10921
updated_files: ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"]
title: Asynchronous run for DatabricksRunNowOperator
body:

**Description**

Ability to let `DatabricksRunNowOperator` execute asynchronously based on an optional argument `async=False`.

**Use case / motivation**

Sometimes a Databricks job needs to perform actions without the DAG operator waiting for its completion. This is especially useful when child tasks must periodically interact with the running job, for example through a message queue.

If this is valid, I would like to be assigned to it.

issue_url: https://github.com/apache/airflow/issues/10921
pull_url: https://github.com/apache/airflow/pull/20536
before_fix_sha: a63753764bce26fd2d13c79fc60df7387b98d424
after_fix_sha: 58afc193776a8e811e9a210a18f93dabebc904d4
report_datetime: 2020-09-14T05:08:47Z
language: python
commit_datetime: 2021-12-28T17:13:17Z
status: closed
repo_name: apache/airflow
repo_url: https://github.com/apache/airflow
issue_id: 10894
updated_files: ["breeze"]
title: Running 'breeze initialize-local-virtualenv' results in TypeError
body:

CC: @potiuk

Running `./breeze initialize-local-virtualenv` from inside a brand-new pyenv virtualenv (Python 3.7) causes errors:
```
Exception ignored in: <function _ConnectionRecord.checkout.<locals>.<lambda> at 0x108f35a70>
Traceback (most recent call last):
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 506, in <lambda>
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 714, in _finalize_fairy
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 531, in checkin
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 388, in _return_conn
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 236, in _do_return_conn
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 543, in close
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 645, in __close
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 267, in _close_connection
File "/Users/abhilash1in/.pyenv/versions/3.7.8/lib/python3.7/logging/__init__.py", line 1365, in debug
File "/Users/abhilash1in/.pyenv/versions/3.7.8/lib/python3.7/logging/__init__.py", line 1621, in isEnabledFor
TypeError: 'NoneType' object is not callable
```
Full logs:
```
abhilash1in@Abhilashs-MBP airflow % pyenv activate airflow
pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior.
(airflow) abhilash1in@Abhilashs-MBP airflow % ./breeze initialize-local-virtualenv
Initializing local virtualenv
CI image.
Branch name: master
Docker image: apache/airflow:master-python3.7-ci
Airflow source version: 2.0.0.dev0
Python version: 3.7
DockerHub user: apache
DockerHub repo: airflow
Backend: sqlite
Initializing the virtualenv: /Users/abhilash1in/.pyenv/shims/python!
This will wipe out /Users/abhilash1in/airflow and reset all the databases!
Please confirm Proceeding with the initialization. Are you sure? [y/N/q]
y
The answer is 'yes'. Proceeding with the initialization. This can take some time!
~/Source/airflow ~/Source/airflow ~/Source/airflow
Obtaining file:///Users/abhilash1in/Source/airflow
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... done
Collecting Flask-AppBuilder==3.0.1
Using cached Flask_AppBuilder-3.0.1-py3-none-any.whl (1.7 MB)
Collecting Flask-Babel==1.0.0
Using cached Flask_Babel-1.0.0-py3-none-any.whl (9.5 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/ba/fe/43/659919277d75dff2d7f32ef221eefa4d2fb1559b42600339c2/Flask_Bcrypt-0.7.1-py3-none-any.whl
Collecting Flask-Caching==1.9.0
Using cached Flask_Caching-1.9.0-py2.py3-none-any.whl (33 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/d4/8f/d0/e3c62af58d89cbd80847eea2323da4af633993ca71f5c3ba85/Flask_JWT_Extended-3.24.1-py2.py3-none-any.whl
Processing /Users/abhilash1in/Library/Caches/pip/wheels/39/10/74/d68194e28d5f7a83de5f66e5b2deff5ccbb424fe45e6b0e927/Flask_Login-0.4.1-py2.py3-none-any.whl
Processing /Users/abhilash1in/Library/Caches/pip/wheels/3b/1e/c0/b941aa1a5954f1acabbf1671a64e27adc359c0b0e74b16f8b4/Flask_OpenID-1.2.5-cp37-none-any.whl
Collecting Flask-SQLAlchemy==2.4.4
Using cached Flask_SQLAlchemy-2.4.4-py2.py3-none-any.whl (17 kB)
Collecting Flask-WTF==0.14.3
Using cached Flask_WTF-0.14.3-py2.py3-none-any.whl (13 kB)
Collecting Flask==1.1.2
Using cached Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting GitPython==3.1.7
Using cached GitPython-3.1.7-py3-none-any.whl (158 kB)
Collecting Jinja2==2.11.2
Using cached Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting Markdown==2.6.11
Using cached Markdown-2.6.11-py2.py3-none-any.whl (78 kB)
Collecting MarkupSafe==1.1.1
Using cached MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl (18 kB)
Collecting PyJWT==1.7.1
Using cached PyJWT-1.7.1-py2.py3-none-any.whl (18 kB)
Collecting Pygments==2.6.1
Using cached Pygments-2.6.1-py3-none-any.whl (914 kB)
Collecting SQLAlchemy-JSONField==0.9.0
Using cached SQLAlchemy_JSONField-0.9.0-py2.py3-none-any.whl (10 kB)
Collecting SQLAlchemy-Utils==0.36.8
Using cached SQLAlchemy-Utils-0.36.8.tar.gz (138 kB)
Collecting SQLAlchemy==1.3.19
Using cached SQLAlchemy-1.3.19-cp37-cp37m-macosx_10_14_x86_64.whl (1.2 MB)
Collecting Sphinx==3.2.1
Using cached Sphinx-3.2.1-py3-none-any.whl (2.9 MB)
Collecting WTForms==2.3.3
Using cached WTForms-2.3.3-py2.py3-none-any.whl (169 kB)
Collecting Werkzeug==0.16.1
Using cached Werkzeug-0.16.1-py2.py3-none-any.whl (327 kB)
Collecting alabaster==0.7.12
Using cached alabaster-0.7.12-py2.py3-none-any.whl (14 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/4e/b5/00/f93fe1c90b3d501774e91e2e99987f49d16019e40e4bd3afc3/alembic-1.4.2-py2.py3-none-any.whl
Collecting apispec==3.3.1
Using cached apispec-3.3.1-py2.py3-none-any.whl (26 kB)
Collecting argcomplete==1.12.0
Using cached argcomplete-1.12.0-py2.py3-none-any.whl (38 kB)
Collecting attrs==19.3.0
Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting bcrypt==3.2.0
Using cached bcrypt-3.2.0-cp36-abi3-macosx_10_9_x86_64.whl (31 kB)
Collecting beautifulsoup4==4.7.1
Using cached beautifulsoup4-4.7.1-py3-none-any.whl (94 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/22/f5/18/df711b66eb25b21325c132757d4314db9ac5e8dabeaf196eab/blinker-1.4-py3-none-any.whl
Collecting bowler==0.8.0
Using cached bowler-0.8.0-py3-none-any.whl (34 kB)
Collecting cached-property==1.5.1
Using cached cached_property-1.5.1-py2.py3-none-any.whl (6.0 kB)
Collecting cattrs==1.0.0
Using cached cattrs-1.0.0-py2.py3-none-any.whl (14 kB)
Collecting cffi==1.14.2
Using cached cffi-1.14.2-cp37-cp37m-macosx_10_9_x86_64.whl (176 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/6b/8e/89/f93dc69d26c2bba667b1615ffd0bb0f91b977ec59cf9e19fa8/cgroupspy-0.1.6-py3-none-any.whl
Collecting click==6.7
Using cached click-6.7-py2.py3-none-any.whl (71 kB)
Collecting colorama==0.4.3
Using cached colorama-0.4.3-py2.py3-none-any.whl (15 kB)
Collecting colorlog==4.0.2
Using cached colorlog-4.0.2-py2.py3-none-any.whl (17 kB)
Collecting connexion==2.7.0
Using cached connexion-2.7.0-py2.py3-none-any.whl (77 kB)
Collecting coverage==5.2.1
Using cached coverage-5.2.1-cp37-cp37m-macosx_10_13_x86_64.whl (205 kB)
Collecting croniter==0.3.34
Using cached croniter-0.3.34-py2.py3-none-any.whl (19 kB)
Collecting cryptography==3.0
Using cached cryptography-3.0-cp35-abi3-macosx_10_10_x86_64.whl (1.8 MB)
Collecting dill==0.3.2
Using cached dill-0.3.2.zip (177 kB)
Collecting docutils==0.16
Using cached docutils-0.16-py2.py3-none-any.whl (548 kB)
Collecting email-validator==1.1.1
Using cached email_validator-1.1.1-py2.py3-none-any.whl (17 kB)
Collecting fissix==20.8.0
Using cached fissix-20.8.0-py3-none-any.whl (186 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/3f/f5/68/634cb960bec7e229a1cba75ed2903a398b0491c2c62bccd79d/flake8_colors-0.1.6-py3-none-any.whl
Collecting flake8==3.8.3
Using cached flake8-3.8.3-py2.py3-none-any.whl (72 kB)
Collecting flaky==3.7.0
Using cached flaky-3.7.0-py2.py3-none-any.whl (22 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/81/b7/f4/1b38146234576bec8e9275e8bc49387d33e6af2aaa17f0b099/flask_swagger-0.2.13-cp37-none-any.whl
Collecting freezegun==0.3.15
Using cached freezegun-0.3.15-py2.py3-none-any.whl (14 kB)
Collecting funcsigs==1.0.2
Using cached funcsigs-1.0.2-py2.py3-none-any.whl (17 kB)
Collecting gitdb==4.0.5
Using cached gitdb-4.0.5-py3-none-any.whl (63 kB)
Collecting github3.py==1.3.0
Using cached github3.py-1.3.0-py2.py3-none-any.whl (153 kB)
Collecting graphviz==0.14.1
Using cached graphviz-0.14.1-py2.py3-none-any.whl (18 kB)
Collecting gunicorn==19.10.0
Using cached gunicorn-19.10.0-py2.py3-none-any.whl (113 kB)
Collecting idna==2.10
Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting imagesize==1.2.0
Using cached imagesize-1.2.0-py2.py3-none-any.whl (4.8 kB)
Collecting importlib-metadata==1.7.0
Using cached importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
Collecting inflection==0.5.0
Using cached inflection-0.5.0-py2.py3-none-any.whl (5.8 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/b1/2b/e0/4932698c94c886d9d476e90916b43af63d5e708e146eb8b273/ipdb-0.13.3-py3-none-any.whl
Collecting ipython==7.17.0
Using cached ipython-7.17.0-py3-none-any.whl (786 kB)
Collecting iso8601==0.1.12
Using cached iso8601-0.1.12-py2.py3-none-any.whl (12 kB)
Collecting itsdangerous==1.1.0
Using cached itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting jedi==0.17.2
Using cached jedi-0.17.2-py2.py3-none-any.whl (1.4 MB)
Collecting jira==2.0.0
Using cached jira-2.0.0-py2.py3-none-any.whl (57 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/68/87/a4/14d13a1e45fc800f5d73256bb8444c503c3132df4b9b1c50f9/json_merge_patch-0.2-cp37-none-any.whl
Collecting jsonschema==3.2.0
Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
Collecting jwcrypto==0.7
Using cached jwcrypto-0.7-py2.py3-none-any.whl (78 kB)
Collecting kubernetes==11.0.0
Using cached kubernetes-11.0.0-py3-none-any.whl (1.5 MB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/91/12/e1/4d21a652af35f37931a9537b5c8c91277e41a7efe1e7768587/lazy_object_proxy-1.5.1-cp37-cp37m-macosx_10_15_x86_64.whl
Collecting lockfile==0.12.2
Using cached lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Collecting marshmallow-enum==1.5.1
Using cached marshmallow_enum-1.5.1-py2.py3-none-any.whl (4.2 kB)
Collecting marshmallow-oneofschema==2.0.1
Using cached marshmallow_oneofschema-2.0.1-py2.py3-none-any.whl (5.7 kB)
Collecting marshmallow-sqlalchemy==0.23.1
Using cached marshmallow_sqlalchemy-0.23.1-py2.py3-none-any.whl (18 kB)
Collecting marshmallow==3.7.1
Using cached marshmallow-3.7.1-py2.py3-none-any.whl (45 kB)
Collecting mccabe==0.6.1
Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Collecting mongomock==3.20.0
Using cached mongomock-3.20.0-py2.py3-none-any.whl (51 kB)
Collecting moto==1.3.14
Using cached moto-1.3.14-py2.py3-none-any.whl (730 kB)
Collecting mypy==0.770
Using cached mypy-0.770-cp37-cp37m-macosx_10_6_x86_64.whl (15.4 MB)
Collecting mysql-connector-python==8.0.18
Using cached mysql_connector_python-8.0.18-cp37-cp37m-macosx_10_13_x86_64.whl (4.5 MB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/6d/7d/cb/181963137c414938d4faac9a57c966fb3a6ef675c25641c41a/mysqlclient-1.3.14-cp37-cp37m-macosx_10_14_x86_64.whl
Collecting natsort==7.0.1
Using cached natsort-7.0.1-py3-none-any.whl (33 kB)
Collecting oauthlib==2.1.0
Using cached oauthlib-2.1.0-py2.py3-none-any.whl (121 kB)
Collecting openapi-spec-validator==0.2.9
Using cached openapi_spec_validator-0.2.9-py3-none-any.whl (25 kB)
Collecting packaging==20.4
Using cached packaging-20.4-py2.py3-none-any.whl (37 kB)
Collecting pandas==1.1.0
Using cached pandas-1.1.0-cp37-cp37m-macosx_10_9_x86_64.whl (10.4 MB)
Collecting parameterized==0.7.4
Using cached parameterized-0.7.4-py2.py3-none-any.whl (25 kB)
Collecting paramiko==2.7.1
Using cached paramiko-2.7.1-py2.py3-none-any.whl (206 kB)
Collecting parso==0.7.1
Using cached parso-0.7.1-py2.py3-none-any.whl (109 kB)
Collecting pbr==5.4.5
Using cached pbr-5.4.5-py2.py3-none-any.whl (110 kB)
Collecting pendulum==2.1.2
Using cached pendulum-2.1.2-cp37-cp37m-macosx_10_15_x86_64.whl (124 kB)
Collecting pexpect==4.8.0
Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting pickleshare==0.7.5
Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting pipdeptree==1.0.0
Using cached pipdeptree-1.0.0-py3-none-any.whl (12 kB)
Collecting pre-commit==2.6.0
Using cached pre_commit-2.6.0-py2.py3-none-any.whl (171 kB)
Collecting prison==0.1.3
Using cached prison-0.1.3-py2.py3-none-any.whl (5.8 kB)
Collecting prompt-toolkit==3.0.6
Using cached prompt_toolkit-3.0.6-py3-none-any.whl (354 kB)
Collecting protobuf==3.13.0
Using cached protobuf-3.13.0-cp37-cp37m-macosx_10_9_x86_64.whl (1.3 MB)
Collecting psutil==5.7.2
Using cached psutil-5.7.2.tar.gz (460 kB)
Collecting ptyprocess==0.6.0
Using cached ptyprocess-0.6.0-py2.py3-none-any.whl (39 kB)
Collecting pycodestyle==2.6.0
Using cached pycodestyle-2.6.0-py2.py3-none-any.whl (41 kB)
Collecting pycparser==2.20
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyflakes==2.2.0
Using cached pyflakes-2.2.0-py2.py3-none-any.whl (66 kB)
Collecting pylint==2.5.3
Using cached pylint-2.5.3-py3-none-any.whl (324 kB)
Collecting pyparsing==2.4.7
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Collecting pyrsistent==0.16.0
Using cached pyrsistent-0.16.0.tar.gz (108 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/02/ee/6d/30c335b17af87fd32d14ff0d0b9dea36f0478da5ece9199597/pysftp-0.2.9-py3-none-any.whl
Collecting pytest-cov==2.10.1
Using cached pytest_cov-2.10.1-py2.py3-none-any.whl (19 kB)
Collecting pytest-instafail==0.4.2
Using cached pytest_instafail-0.4.2-py2.py3-none-any.whl (4.2 kB)
Collecting pytest-rerunfailures==9.0
Using cached pytest_rerunfailures-9.0-py3-none-any.whl (7.9 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/06/99/ad/39270e8b0ce8ad0784130fde3cc6392adc89ab92fb108f7b70/pytest_timeouts-1.2.1-py3-none-any.whl
Collecting pytest-xdist==2.0.0
Using cached pytest_xdist-2.0.0-py2.py3-none-any.whl (36 kB)
Collecting pytest==6.0.1
Using cached pytest-6.0.1-py3-none-any.whl (270 kB)
Collecting python-daemon==2.2.4
Using cached python_daemon-2.2.4-py2.py3-none-any.whl (35 kB)
Collecting python-dateutil==2.8.1
Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting python-editor==1.0.4
Using cached python_editor-1.0.4-py3-none-any.whl (4.9 kB)
Collecting python-jose==3.2.0
Using cached python_jose-3.2.0-py2.py3-none-any.whl (26 kB)
Collecting python-nvd3==0.15.0
Using cached python-nvd3-0.15.0.tar.gz (31 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/67/b8/ba/041548f30a6fc058c9b3f79a5b7b6aea925a15dd1e5c4992a4/python_slugify-4.0.1-py2.py3-none-any.whl
Collecting python3-openid==3.2.0
Using cached python3_openid-3.2.0-py3-none-any.whl (133 kB)
Collecting pytz==2020.1
Using cached pytz-2020.1-py2.py3-none-any.whl (510 kB)
Collecting pytzdata==2020.1
Using cached pytzdata-2020.1-py2.py3-none-any.whl (489 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/59/fd/14/c1cfdba3a4fd676c09ce87247cdbec3398b84892967f016e9c/pywinrm-0.4.1-py2.py3-none-any.whl
Processing /Users/abhilash1in/Library/Caches/pip/wheels/ae/6c/2b/6147e9b6d20afd24f0723b49ed70efb23eb8c94b404f0c4117/qds_sdk-1.16.0-py3-none-any.whl
Collecting requests-mock==1.8.0
Using cached requests_mock-1.8.0-py2.py3-none-any.whl (23 kB)
Collecting requests-ntlm==1.1.0
Using cached requests_ntlm-1.1.0-py2.py3-none-any.whl (5.7 kB)
Collecting requests-oauthlib==1.1.0
Using cached requests_oauthlib-1.1.0-py2.py3-none-any.whl (21 kB)
Collecting requests-toolbelt==0.9.1
Using cached requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
Collecting requests==2.24.0
Using cached requests-2.24.0-py2.py3-none-any.whl (61 kB)
Collecting responses==0.10.16
Using cached responses-0.10.16-py2.py3-none-any.whl (15 kB)
Collecting rsa==4.6
Using cached rsa-4.6-py3-none-any.whl (47 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/24/4b/6f/cceb54c29f42b50b5d11b8196eab1de6707c5742beac65f52b/sentinels-1.0.0-py3-none-any.whl
Processing /Users/abhilash1in/Library/Caches/pip/wheels/e6/b1/a6/9719530228e258eba904501fef99d5d85c80d52bd8f14438a3/setproctitle-1.1.10-cp37-cp37m-macosx_10_15_x86_64.whl
Collecting sh==1.13.1
Using cached sh-1.13.1-py2.py3-none-any.whl (40 kB)
Collecting six==1.15.0
Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting smmap==3.0.4
Using cached smmap-3.0.4-py2.py3-none-any.whl (25 kB)
Collecting snowballstemmer==2.0.0
Using cached snowballstemmer-2.0.0-py2.py3-none-any.whl (97 kB)
Collecting soupsieve==2.0.1
Using cached soupsieve-2.0.1-py3-none-any.whl (32 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/4a/4c/12/83c88bdc1bc352d7036de2891b5d6729adcf1bda3bab015560/sphinx_argparse-0.2.5-py3-none-any.whl
Collecting sphinx-autoapi==1.0.0
Using cached sphinx_autoapi-1.0.0-py2.py3-none-any.whl (48 kB)
Collecting sphinx-copybutton==0.3.0
Using cached sphinx_copybutton-0.3.0-py3-none-any.whl (11 kB)
Collecting sphinx-jinja==1.1.1
Using cached sphinx_jinja-1.1.1-py3-none-any.whl (4.8 kB)
Collecting sphinx-rtd-theme==0.5.0
Using cached sphinx_rtd_theme-0.5.0-py2.py3-none-any.whl (10.8 MB)
Collecting sphinxcontrib-applehelp==1.0.2
Using cached sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl (121 kB)
Collecting sphinxcontrib-devhelp==1.0.2
Using cached sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl (84 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/d8/4a/8e/f60726b9b4066d09f33048252c6b3b9922e3f04ab2f11a88fc/sphinxcontrib_dotnetdomain-0.4-py3-none-any.whl
Collecting sphinxcontrib-golangdomain==0.2.0.dev0
Using cached sphinxcontrib_golangdomain-0.2.0.dev0-py3-none-any.whl (7.1 kB)
Collecting sphinxcontrib-htmlhelp==1.0.3
Using cached sphinxcontrib_htmlhelp-1.0.3-py2.py3-none-any.whl (96 kB)
Collecting sphinxcontrib-httpdomain==1.7.0
Using cached sphinxcontrib_httpdomain-1.7.0-py2.py3-none-any.whl (18 kB)
Collecting sphinxcontrib-jsmath==1.0.1
Using cached sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl (5.1 kB)
Collecting sphinxcontrib-qthelp==1.0.3
Using cached sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl (90 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/f7/c6/6b/cd797468204c3d60f18b3bffa41c407b9e125ed535cb17479a/sphinxcontrib_redoc-1.6.0-py3-none-any.whl
Collecting sphinxcontrib-serializinghtml==1.1.4
Using cached sphinxcontrib_serializinghtml-1.1.4-py2.py3-none-any.whl (89 kB)
Collecting sphinxcontrib-spelling==5.2.1
Using cached sphinxcontrib_spelling-5.2.1-py3-none-any.whl (16 kB)
Collecting sshpubkeys==3.1.0
Using cached sshpubkeys-3.1.0-py2.py3-none-any.whl (12 kB)
Collecting swagger-ui-bundle==0.0.8
Using cached swagger_ui_bundle-0.0.8-py3-none-any.whl (3.8 MB)
Collecting tabulate==0.8.7
Using cached tabulate-0.8.7-py3-none-any.whl (24 kB)
Collecting tenacity==5.1.5
Using cached tenacity-5.1.5-py2.py3-none-any.whl (34 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/7c/06/54/bc84598ba1daf8f970247f550b175aaaee85f68b4b0c5ab2c6/termcolor-1.1.0-cp37-none-any.whl
Collecting text-unidecode==1.3
Using cached text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/02/a2/46/689ccfcf40155c23edc7cdbd9de488611c8fdf49ff34b1706e/thrift-0.13.0-cp37-cp37m-macosx_10_15_x86_64.whl
Collecting toml==0.10.1
Using cached toml-0.10.1-py2.py3-none-any.whl (19 kB)
Collecting traitlets==4.3.3
Using cached traitlets-4.3.3-py2.py3-none-any.whl (75 kB)
Collecting typed-ast==1.4.1
Using cached typed_ast-1.4.1-cp37-cp37m-macosx_10_9_x86_64.whl (223 kB)
Collecting typing-extensions==3.7.4.2
Using cached typing_extensions-3.7.4.2-py3-none-any.whl (22 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/15/ae/df/a67bf1ed84e9bf230187d36d8dcfd30072bea0236cb059ed91/tzlocal-1.5.1-cp37-none-any.whl
Processing /Users/abhilash1in/Library/Caches/pip/wheels/a6/09/e9/e800279c98a0a8c94543f3de6c8a562f60e51363ed26e71283/unicodecsv-0.14.1-cp37-none-any.whl
Collecting uritemplate==3.0.1
Using cached uritemplate-3.0.1-py2.py3-none-any.whl (15 kB)
Collecting urllib3==1.25.10
Using cached urllib3-1.25.10-py2.py3-none-any.whl (127 kB)
Collecting virtualenv==20.0.31
Using cached virtualenv-20.0.31-py2.py3-none-any.whl (4.9 MB)
Collecting wcwidth==0.2.5
Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting websocket-client==0.57.0
Using cached websocket_client-0.57.0-py2.py3-none-any.whl (200 kB)
Collecting xmltodict==0.12.0
Using cached xmltodict-0.12.0-py2.py3-none-any.whl (9.2 kB)
Collecting yamllint==1.24.2
Using cached yamllint-1.24.2-py2.py3-none-any.whl (59 kB)
Collecting zipp==3.1.0
Using cached zipp-3.1.0-py3-none-any.whl (4.9 kB)
Collecting wheel; extra == "devel"
Using cached wheel-0.35.1-py2.py3-none-any.whl (33 kB)
Requirement already satisfied: setuptools; extra == "devel" in /Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages (from apache-airflow==2.0.0.dev0) (47.1.0)
Collecting Babel==2.8.0
Using cached Babel-2.8.0-py2.py3-none-any.whl (8.6 MB)
Collecting Mako==1.1.3
Using cached Mako-1.1.3-py2.py3-none-any.whl (75 kB)
Collecting PyYAML==5.3.1
Using cached PyYAML-5.3.1.tar.gz (269 kB)
Collecting clickclick==1.2.2
Using cached clickclick-1.2.2-py2.py3-none-any.whl (9.8 kB)
Collecting dnspython==1.16.0
Using cached dnspython-1.16.0-py2.py3-none-any.whl (188 kB)
Collecting appdirs==1.4.4
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting appnope; sys_platform == "darwin"
Using cached appnope-0.1.0-py2.py3-none-any.whl (4.0 kB)
Collecting backcall==0.2.0
Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting decorator==4.4.2
Using cached decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting defusedxml==0.6.0
Using cached defusedxml-0.6.0-py2.py3-none-any.whl (23 kB)
Collecting certifi==2020.6.20
Using cached certifi-2020.6.20-py2.py3-none-any.whl (156 kB)
Collecting google-auth==1.20.1
Using cached google_auth-1.20.1-py2.py3-none-any.whl (91 kB)
Collecting cfn-lint==0.35.0
Using cached cfn_lint-0.35.0-py3-none-any.whl (3.9 MB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/3c/c6/cf/5fbcd01b053fee453a1c214363bdc65776ad82f219b0c9157c/jsondiff-1.1.2-py3-none-any.whl
Collecting boto3==1.14.44
Using cached boto3-1.14.44-py2.py3-none-any.whl (129 kB)
Collecting aws-xray-sdk==2.6.0
Using cached aws_xray_sdk-2.6.0-py2.py3-none-any.whl (94 kB)
Collecting mock==4.0.2
Using cached mock-4.0.2-py3-none-any.whl (28 kB)
Collecting boto==2.49.0
Using cached boto-2.49.0-py2.py3-none-any.whl (1.4 MB)
Collecting botocore==1.17.44
Using cached botocore-1.17.44-py2.py3-none-any.whl (6.5 MB)
Collecting docker==3.7.3
Using cached docker-3.7.3-py2.py3-none-any.whl (134 kB)
Collecting mypy-extensions==0.4.3
Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Collecting numpy==1.19.1
Using cached numpy-1.19.1-cp37-cp37m-macosx_10_9_x86_64.whl (15.3 MB)
Collecting PyNaCl==1.4.0
Using cached PyNaCl-1.4.0-cp35-abi3-macosx_10_10_x86_64.whl (380 kB)
Requirement already satisfied: pip>=6.0.0 in /Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages (from pipdeptree==1.0.0->-c https://raw.githubusercontent.com/apache/airflow/constraints-master/constraints-3.7.txt (line 261)) (20.1.1)
Collecting cfgv==3.2.0
Using cached cfgv-3.2.0-py2.py3-none-any.whl (7.3 kB)
Collecting nodeenv==1.4.0
Using cached nodeenv-1.4.0-py2.py3-none-any.whl (21 kB)
Collecting identify==1.4.28
Using cached identify-1.4.28-py2.py3-none-any.whl (97 kB)
Collecting astroid==2.4.2
Using cached astroid-2.4.2-py3-none-any.whl (213 kB)
Collecting isort==4.3.21
Using cached isort-4.3.21-py2.py3-none-any.whl (42 kB)
Collecting execnet==1.7.1
Using cached execnet-1.7.1-py2.py3-none-any.whl (39 kB)
Collecting pytest-forked==1.3.0
Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)
Collecting py==1.9.0
Using cached py-1.9.0-py2.py3-none-any.whl (99 kB)
Collecting iniconfig==1.0.1
Using cached iniconfig-1.0.1-py3-none-any.whl (4.2 kB)
Collecting pluggy==0.13.1
Using cached pluggy-0.13.1-py2.py3-none-any.whl (18 kB)
Collecting more-itertools==8.4.0
Using cached more_itertools-8.4.0-py3-none-any.whl (43 kB)
Collecting pyasn1==0.4.8
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting ecdsa==0.15
Using cached ecdsa-0.15-py2.py3-none-any.whl (100 kB)
Collecting ntlm-auth==1.5.0
Using cached ntlm_auth-1.5.0-py2.py3-none-any.whl (29 kB)
Collecting chardet==3.0.4
Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting Unidecode==1.1.1
Using cached Unidecode-1.1.1-py2.py3-none-any.whl (238 kB)
Collecting pyenchant==3.1.1
Using cached pyenchant-3.1.1-py3-none-any.whl (55 kB)
Collecting ipython-genutils==0.2.0
Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting distlib==0.3.1
Using cached distlib-0.3.1-py2.py3-none-any.whl (335 kB)
Collecting filelock==3.0.12
Using cached filelock-3.0.12-py3-none-any.whl (7.6 kB)
Collecting pathspec==0.8.0
Using cached pathspec-0.8.0-py2.py3-none-any.whl (28 kB)
Collecting pyasn1-modules==0.2.8
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting cachetools==4.1.1
Using cached cachetools-4.1.1-py3-none-any.whl (10 kB)
Collecting aws-sam-translator==1.26.0
Using cached aws_sam_translator-1.26.0-py3-none-any.whl (181 kB)
Collecting jsonpatch==1.26
Using cached jsonpatch-1.26-py2.py3-none-any.whl (11 kB)
Collecting junit-xml==1.9
Using cached junit_xml-1.9-py2.py3-none-any.whl (7.1 kB)
Collecting networkx==2.4
Using cached networkx-2.4-py3-none-any.whl (1.6 MB)
Collecting jmespath==0.10.0
Using cached jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting s3transfer==0.3.3
Using cached s3transfer-0.3.3-py2.py3-none-any.whl (69 kB)
Collecting future==0.18.2
Using cached future-0.18.2.tar.gz (829 kB)
Processing /Users/abhilash1in/Library/Caches/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6/wrapt-1.12.1-cp37-cp37m-macosx_10_15_x86_64.whl
Collecting jsonpickle==1.4.1
Using cached jsonpickle-1.4.1-py2.py3-none-any.whl (36 kB)
Collecting docker-pycreds==0.4.0
Using cached docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Collecting apipkg==1.5
Using cached apipkg-1.5-py2.py3-none-any.whl (4.9 kB)
Collecting jsonpointer==2.0
Using cached jsonpointer-2.0-py2.py3-none-any.whl (7.6 kB)
Using legacy setup.py install for PyYAML, since package 'wheel' is not installed.
Using legacy setup.py install for SQLAlchemy-Utils, since package 'wheel' is not installed.
Using legacy setup.py install for dill, since package 'wheel' is not installed.
Using legacy setup.py install for future, since package 'wheel' is not installed.
Using legacy setup.py install for psutil, since package 'wheel' is not installed.
Using legacy setup.py install for pyrsistent, since package 'wheel' is not installed.
Using legacy setup.py install for python-nvd3, since package 'wheel' is not installed.
ERROR: astroid 2.4.2 has requirement lazy-object-proxy==1.4.*, but you'll have lazy-object-proxy 1.5.1 which is incompatible.
ERROR: botocore 1.17.44 has requirement docutils<0.16,>=0.10, but you'll have docutils 0.16 which is incompatible.
ERROR: python-jose 3.2.0 has requirement ecdsa<0.15, but you'll have ecdsa 0.15 which is incompatible.
ERROR: moto 1.3.14 has requirement idna<2.9,>=2.5, but you'll have idna 2.10 which is incompatible.
Installing collected packages: pytz, Babel, marshmallow, marshmallow-enum, itsdangerous, Werkzeug, MarkupSafe, Jinja2, click, Flask, WTForms, Flask-WTF, PyJWT, Flask-Login, idna, dnspython, email-validator, six, pyrsistent, zipp, importlib-metadata, attrs, jsonschema, SQLAlchemy, Flask-SQLAlchemy, SQLAlchemy-Utils, colorama, defusedxml, python3-openid, Flask-OpenID, marshmallow-sqlalchemy, python-dateutil, Flask-Babel, PyYAML, apispec, Flask-JWT-Extended, prison, Flask-AppBuilder, pycparser, cffi, bcrypt, Flask-Bcrypt, Flask-Caching, smmap, gitdb, GitPython, Mako, Markdown, PyNaCl, Pygments, SQLAlchemy-JSONField, sphinxcontrib-applehelp, sphinxcontrib-devhelp, sphinxcontrib-qthelp, imagesize, sphinxcontrib-htmlhelp, pyparsing, packaging, sphinxcontrib-jsmath, docutils, urllib3, certifi, chardet, requests, alabaster, sphinxcontrib-serializinghtml, snowballstemmer, Sphinx, Unidecode, python-editor, alembic, apipkg, appdirs, argcomplete, lazy-object-proxy, typed-ast, wrapt, astroid, jmespath, botocore, s3transfer, boto3, aws-sam-translator, future, jsonpickle, aws-xray-sdk, backcall, soupsieve, beautifulsoup4, blinker, boto, fissix, sh, bowler, cached-property, cachetools, cattrs, cfgv, jsonpointer, jsonpatch, junit-xml, decorator, networkx, cfn-lint, cgroupspy, clickclick, colorlog, openapi-spec-validator, inflection, swagger-ui-bundle, connexion, coverage, natsort, croniter, cryptography, dill, distlib, docker-pycreds, websocket-client, docker, ecdsa, execnet, filelock, pycodestyle, pyflakes, mccabe, flake8, flake8-colors, flaky, flask-swagger, freezegun, funcsigs, jwcrypto, uritemplate, github3.py, pyasn1, rsa, pyasn1-modules, google-auth, graphviz, gunicorn, identify, iniconfig, appnope, pickleshare, parso, jedi, ptyprocess, pexpect, wcwidth, prompt-toolkit, ipython-genutils, traitlets, ipython, ipdb, iso8601, isort, oauthlib, requests-oauthlib, pbr, requests-toolbelt, jira, json-merge-patch, jsondiff, kubernetes, lockfile, marshmallow-oneofschema, mock, 
sentinels, mongomock, more-itertools, sshpubkeys, python-jose, xmltodict, responses, moto, mypy-extensions, typing-extensions, mypy, protobuf, mysql-connector-python, mysqlclient, nodeenv, ntlm-auth, numpy, pandas, parameterized, paramiko, pathspec, pytzdata, pendulum, pipdeptree, pluggy, toml, virtualenv, pre-commit, psutil, py, pyenchant, pylint, pysftp, pytest, pytest-cov, pytest-forked, pytest-instafail, pytest-rerunfailures, pytest-timeouts, pytest-xdist, python-daemon, text-unidecode, python-slugify, python-nvd3, requests-ntlm, pywinrm, qds-sdk, requests-mock, setproctitle, sphinx-argparse, sphinxcontrib-dotnetdomain, sphinxcontrib-golangdomain, sphinx-autoapi, sphinx-copybutton, sphinx-jinja, sphinx-rtd-theme, sphinxcontrib-httpdomain, sphinxcontrib-redoc, sphinxcontrib-spelling, tabulate, tenacity, termcolor, thrift, tzlocal, unicodecsv, yamllint, wheel, apache-airflow
Running setup.py install for pyrsistent ... done
Running setup.py install for SQLAlchemy-Utils ... done
Running setup.py install for PyYAML ... done
Running setup.py install for future ... done
Running setup.py install for dill ... done
Running setup.py install for psutil ... done
Running setup.py install for python-nvd3 ... done
Running setup.py develop for apache-airflow
Successfully installed Babel-2.8.0 Flask-1.1.2 Flask-AppBuilder-3.0.1 Flask-Babel-1.0.0 Flask-Bcrypt-0.7.1 Flask-Caching-1.9.0 Flask-JWT-Extended-3.24.1 Flask-Login-0.4.1 Flask-OpenID-1.2.5 Flask-SQLAlchemy-2.4.4 Flask-WTF-0.14.3 GitPython-3.1.7 Jinja2-2.11.2 Mako-1.1.3 Markdown-2.6.11 MarkupSafe-1.1.1 PyJWT-1.7.1 PyNaCl-1.4.0 PyYAML-5.3.1 Pygments-2.6.1 SQLAlchemy-1.3.19 SQLAlchemy-JSONField-0.9.0 SQLAlchemy-Utils-0.36.8 Sphinx-3.2.1 Unidecode-1.1.1 WTForms-2.3.3 Werkzeug-0.16.1 alabaster-0.7.12 alembic-1.4.2 apache-airflow apipkg-1.5 apispec-3.3.1 appdirs-1.4.4 appnope-0.1.0 argcomplete-1.12.0 astroid-2.4.2 attrs-19.3.0 aws-sam-translator-1.26.0 aws-xray-sdk-2.6.0 backcall-0.2.0 bcrypt-3.2.0 beautifulsoup4-4.7.1 blinker-1.4 boto-2.49.0 boto3-1.14.44 botocore-1.17.44 bowler-0.8.0 cached-property-1.5.1 cachetools-4.1.1 cattrs-1.0.0 certifi-2020.6.20 cffi-1.14.2 cfgv-3.2.0 cfn-lint-0.35.0 cgroupspy-0.1.6 chardet-3.0.4 click-6.7 clickclick-1.2.2 colorama-0.4.3 colorlog-4.0.2 connexion-2.7.0 coverage-5.2.1 croniter-0.3.34 cryptography-3.0 decorator-4.4.2 defusedxml-0.6.0 dill-0.3.2 distlib-0.3.1 dnspython-1.16.0 docker-3.7.3 docker-pycreds-0.4.0 docutils-0.16 ecdsa-0.15 email-validator-1.1.1 execnet-1.7.1 filelock-3.0.12 fissix-20.8.0 flake8-3.8.3 flake8-colors-0.1.6 flaky-3.7.0 flask-swagger-0.2.13 freezegun-0.3.15 funcsigs-1.0.2 future-0.18.2 gitdb-4.0.5 github3.py-1.3.0 google-auth-1.20.1 graphviz-0.14.1 gunicorn-19.10.0 identify-1.4.28 idna-2.10 imagesize-1.2.0 importlib-metadata-1.7.0 inflection-0.5.0 iniconfig-1.0.1 ipdb-0.13.3 ipython-7.17.0 ipython-genutils-0.2.0 iso8601-0.1.12 isort-4.3.21 itsdangerous-1.1.0 jedi-0.17.2 jira-2.0.0 jmespath-0.10.0 json-merge-patch-0.2 jsondiff-1.1.2 jsonpatch-1.26 jsonpickle-1.4.1 jsonpointer-2.0 jsonschema-3.2.0 junit-xml-1.9 jwcrypto-0.7 kubernetes-11.0.0 lazy-object-proxy-1.5.1 lockfile-0.12.2 marshmallow-3.7.1 marshmallow-enum-1.5.1 marshmallow-oneofschema-2.0.1 marshmallow-sqlalchemy-0.23.1 mccabe-0.6.1 mock-4.0.2 
mongomock-3.20.0 more-itertools-8.4.0 moto-1.3.14 mypy-0.770 mypy-extensions-0.4.3 mysql-connector-python-8.0.18 mysqlclient-1.3.14 natsort-7.0.1 networkx-2.4 nodeenv-1.4.0 ntlm-auth-1.5.0 numpy-1.19.1 oauthlib-2.1.0 openapi-spec-validator-0.2.9 packaging-20.4 pandas-1.1.0 parameterized-0.7.4 paramiko-2.7.1 parso-0.7.1 pathspec-0.8.0 pbr-5.4.5 pendulum-2.1.2 pexpect-4.8.0 pickleshare-0.7.5 pipdeptree-1.0.0 pluggy-0.13.1 pre-commit-2.6.0 prison-0.1.3 prompt-toolkit-3.0.6 protobuf-3.13.0 psutil-5.7.2 ptyprocess-0.6.0 py-1.9.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycodestyle-2.6.0 pycparser-2.20 pyenchant-3.1.1 pyflakes-2.2.0 pylint-2.5.3 pyparsing-2.4.7 pyrsistent-0.16.0 pysftp-0.2.9 pytest-6.0.1 pytest-cov-2.10.1 pytest-forked-1.3.0 pytest-instafail-0.4.2 pytest-rerunfailures-9.0 pytest-timeouts-1.2.1 pytest-xdist-2.0.0 python-daemon-2.2.4 python-dateutil-2.8.1 python-editor-1.0.4 python-jose-3.2.0 python-nvd3-0.15.0 python-slugify-4.0.1 python3-openid-3.2.0 pytz-2020.1 pytzdata-2020.1 pywinrm-0.4.1 qds-sdk-1.16.0 requests-2.24.0 requests-mock-1.8.0 requests-ntlm-1.1.0 requests-oauthlib-1.1.0 requests-toolbelt-0.9.1 responses-0.10.16 rsa-4.6 s3transfer-0.3.3 sentinels-1.0.0 setproctitle-1.1.10 sh-1.13.1 six-1.15.0 smmap-3.0.4 snowballstemmer-2.0.0 soupsieve-2.0.1 sphinx-argparse-0.2.5 sphinx-autoapi-1.0.0 sphinx-copybutton-0.3.0 sphinx-jinja-1.1.1 sphinx-rtd-theme-0.5.0 sphinxcontrib-applehelp-1.0.2 sphinxcontrib-devhelp-1.0.2 sphinxcontrib-dotnetdomain-0.4 sphinxcontrib-golangdomain-0.2.0.dev0 sphinxcontrib-htmlhelp-1.0.3 sphinxcontrib-httpdomain-1.7.0 sphinxcontrib-jsmath-1.0.1 sphinxcontrib-qthelp-1.0.3 sphinxcontrib-redoc-1.6.0 sphinxcontrib-serializinghtml-1.1.4 sphinxcontrib-spelling-5.2.1 sshpubkeys-3.1.0 swagger-ui-bundle-0.0.8 tabulate-0.8.7 tenacity-5.1.5 termcolor-1.1.0 text-unidecode-1.3 thrift-0.13.0 toml-0.10.1 traitlets-4.3.3 typed-ast-1.4.1 typing-extensions-3.7.4.2 tzlocal-1.5.1 unicodecsv-0.14.1 uritemplate-3.0.1 urllib3-1.25.10 virtualenv-20.0.31 
wcwidth-0.2.5 websocket-client-0.57.0 wheel-0.35.1 wrapt-1.12.1 xmltodict-0.12.0 yamllint-1.24.2 zipp-3.1.0
WARNING: You are using pip version 20.1.1; however, version 20.2.3 is available.
You should consider upgrading via the '/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/bin/python3.7 -m pip install --upgrade pip' command.
~/Source/airflow ~/Source/airflow
Wiping and recreating /Users/abhilash1in/airflow
/Users/abhilash1in/airflow/unittests.cfg
/Users/abhilash1in/airflow/webserver_config.py
/Users/abhilash1in/airflow/airflow.cfg
/Users/abhilash1in/airflow/airflow.db
/Users/abhilash1in/airflow/logs/scheduler/latest
/Users/abhilash1in/airflow/logs/scheduler/2020-09-10
/Users/abhilash1in/airflow/logs/scheduler
/Users/abhilash1in/airflow/logs
/Users/abhilash1in/airflow/unittests.db
/Users/abhilash1in/airflow
Resetting AIRFLOW sqlite database
DB: sqlite:////Users/abhilash1in/airflow/airflow.db
[2020-09-10 13:50:33,692] {db.py:629} INFO - Dropping tables that exist
[2020-09-10 13:50:33,796] {migration.py:155} INFO - Context impl SQLiteImpl.
[2020-09-10 13:50:33,796] {migration.py:162} INFO - Will assume non-transactional DDL.
[2020-09-10 13:50:34,100] {db.py:616} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> e3a246e0dc1, current schema
INFO [alembic.runtime.migration] Running upgrade e3a246e0dc1 -> 1507a7289a2f, create is_encrypted
/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/alembic/ddl/sqlite.py:41: UserWarning: Skipping unsupported ALTER for creation of implicit constraintPlease refer to the batch mode feature which allows for SQLite migrations using a copy-and-move strategy.
"Skipping unsupported ALTER for "
INFO [alembic.runtime.migration] Running upgrade 1507a7289a2f -> 13eb55f81627, maintain history for compatibility with earlier migrations
INFO [alembic.runtime.migration] Running upgrade 13eb55f81627 -> 338e90f54d61, More logging into task_instance
INFO [alembic.runtime.migration] Running upgrade 338e90f54d61 -> 52d714495f0, job_id indices
INFO [alembic.runtime.migration] Running upgrade 52d714495f0 -> 502898887f84, Adding extra to Log
INFO [alembic.runtime.migration] Running upgrade 502898887f84 -> 1b38cef5b76e, add dagrun
INFO [alembic.runtime.migration] Running upgrade 1b38cef5b76e -> 2e541a1dcfed, task_duration
INFO [alembic.runtime.migration] Running upgrade 2e541a1dcfed -> 40e67319e3a9, dagrun_config
INFO [alembic.runtime.migration] Running upgrade 40e67319e3a9 -> 561833c1c74b, add password column to user
INFO [alembic.runtime.migration] Running upgrade 561833c1c74b -> 4446e08588, dagrun start end
INFO [alembic.runtime.migration] Running upgrade 4446e08588 -> bbc73705a13e, Add notification_sent column to sla_miss
INFO [alembic.runtime.migration] Running upgrade bbc73705a13e -> bba5a7cfc896, Add a column to track the encryption state of the 'Extra' field in connection
INFO [alembic.runtime.migration] Running upgrade bba5a7cfc896 -> 1968acfc09e3, add is_encrypted column to variable table
INFO [alembic.runtime.migration] Running upgrade 1968acfc09e3 -> 2e82aab8ef20, rename user table
INFO [alembic.runtime.migration] Running upgrade 2e82aab8ef20 -> 211e584da130, add TI state index
INFO [alembic.runtime.migration] Running upgrade 211e584da130 -> 64de9cddf6c9, add task fails journal table
INFO [alembic.runtime.migration] Running upgrade 64de9cddf6c9 -> f2ca10b85618, add dag_stats table
INFO [alembic.runtime.migration] Running upgrade f2ca10b85618 -> 4addfa1236f1, Add fractional seconds to mysql tables
INFO [alembic.runtime.migration] Running upgrade 4addfa1236f1 -> 8504051e801b, xcom dag task indices
INFO [alembic.runtime.migration] Running upgrade 8504051e801b -> 5e7d17757c7a, add pid field to TaskInstance
INFO [alembic.runtime.migration] Running upgrade 5e7d17757c7a -> 127d2bf2dfa7, Add dag_id/state index on dag_run table
INFO [alembic.runtime.migration] Running upgrade 127d2bf2dfa7 -> cc1e65623dc7, add max tries column to task instance
INFO [alembic.runtime.migration] Running upgrade cc1e65623dc7 -> bdaa763e6c56, Make xcom value column a large binary
INFO [alembic.runtime.migration] Running upgrade bdaa763e6c56 -> 947454bf1dff, add ti job_id index
INFO [alembic.runtime.migration] Running upgrade 947454bf1dff -> d2ae31099d61, Increase text size for MySQL (not relevant for other DBs' text types)
INFO [alembic.runtime.migration] Running upgrade d2ae31099d61 -> 0e2a74e0fc9f, Add time zone awareness
INFO [alembic.runtime.migration] Running upgrade d2ae31099d61 -> 33ae817a1ff4, kubernetes_resource_checkpointing
INFO [alembic.runtime.migration] Running upgrade 33ae817a1ff4 -> 27c6a30d7c24, kubernetes_resource_checkpointing
INFO [alembic.runtime.migration] Running upgrade 27c6a30d7c24 -> 86770d1215c0, add kubernetes scheduler uniqueness
INFO [alembic.runtime.migration] Running upgrade 86770d1215c0, 0e2a74e0fc9f -> 05f30312d566, merge heads
INFO [alembic.runtime.migration] Running upgrade 05f30312d566 -> f23433877c24, fix mysql not null constraint
INFO [alembic.runtime.migration] Running upgrade f23433877c24 -> 856955da8476, fix sqlite foreign key
INFO [alembic.runtime.migration] Running upgrade 856955da8476 -> 9635ae0956e7, index-faskfail
INFO [alembic.runtime.migration] Running upgrade 9635ae0956e7 -> dd25f486b8ea, add idx_log_dag
INFO [alembic.runtime.migration] Running upgrade dd25f486b8ea -> bf00311e1990, add index to taskinstance
INFO [alembic.runtime.migration] Running upgrade 9635ae0956e7 -> 0a2a5b66e19d, add task_reschedule table
INFO [alembic.runtime.migration] Running upgrade 0a2a5b66e19d, bf00311e1990 -> 03bc53e68815, merge_heads_2
INFO [alembic.runtime.migration] Running upgrade 03bc53e68815 -> 41f5f12752f8, add superuser field
INFO [alembic.runtime.migration] Running upgrade 41f5f12752f8 -> c8ffec048a3b, add fields to dag
INFO [alembic.runtime.migration] Running upgrade c8ffec048a3b -> dd4ecb8fbee3, Add schedule interval to dag
INFO [alembic.runtime.migration] Running upgrade dd4ecb8fbee3 -> 939bb1e647c8, task reschedule fk on cascade delete
INFO [alembic.runtime.migration] Running upgrade 939bb1e647c8 -> 6e96a59344a4, Make TaskInstance.pool not nullable
INFO [alembic.runtime.migration] Running upgrade 6e96a59344a4 -> d38e04c12aa2, add serialized_dag table
INFO [alembic.runtime.migration] Running upgrade d38e04c12aa2 -> b3b105409875, add root_dag_id to DAG
INFO [alembic.runtime.migration] Running upgrade 6e96a59344a4 -> 74effc47d867, change datetime to datetime2(6) on MSSQL tables
INFO [alembic.runtime.migration] Running upgrade 939bb1e647c8 -> 004c1210f153, increase queue name size limit
INFO [alembic.runtime.migration] Running upgrade c8ffec048a3b -> a56c9515abdc, Remove dag_stat table
INFO [alembic.runtime.migration] Running upgrade a56c9515abdc, 004c1210f153, 74effc47d867, b3b105409875 -> 08364691d074, Merge the four heads back together
INFO [alembic.runtime.migration] Running upgrade 08364691d074 -> fe461863935f, increase_length_for_connection_password
INFO [alembic.runtime.migration] Running upgrade fe461863935f -> 7939bcff74ba, Add DagTags table
INFO [alembic.runtime.migration] Running upgrade 7939bcff74ba -> a4c2fd67d16b, add pool_slots field to task_instance
INFO [alembic.runtime.migration] Running upgrade a4c2fd67d16b -> 852ae6c715af, Add RenderedTaskInstanceFields table
INFO [alembic.runtime.migration] Running upgrade 852ae6c715af -> 952da73b5eff, add dag_code table
INFO [alembic.runtime.migration] Running upgrade 952da73b5eff -> a66efa278eea, Add Precision to execution_date in RenderedTaskInstanceFields table
INFO [alembic.runtime.migration] Running upgrade a66efa278eea -> cf5dc11e79ad, drop_user_and_chart
INFO [alembic.runtime.migration] Running upgrade cf5dc11e79ad -> bbf4a7ad0465, Remove id column from xcom
INFO [alembic.runtime.migration] Running upgrade bbf4a7ad0465 -> b25a55525161, Increase length of pool name
INFO [alembic.runtime.migration] Running upgrade b25a55525161 -> 3c20cacc0044, Add DagRun run_type
INFO [alembic.runtime.migration] Running upgrade 3c20cacc0044 -> 8f966b9c467a, Set conn_type as non-nullable
INFO [alembic.runtime.migration] Running upgrade 8f966b9c467a -> 8d48763f6d53, add unique constraint to conn_id
INFO [alembic.runtime.migration] Running upgrade 8d48763f6d53 -> da3f683c3a5a, Add dag_hash Column to serialized_dag table
INFO [alembic.runtime.migration] Running upgrade da3f683c3a5a -> e38be357a868, Add sensor_instance table
INFO [alembic.runtime.migration] Running upgrade e38be357a868 -> b247b1e3d1ed, Add queued by Job ID to TI
Resetting AIRFLOW sqlite unit test database
DB: sqlite:////Users/abhilash1in/airflow/unittests.db
[2020-09-10 13:50:36,898] {db.py:629} INFO - Dropping tables that exist
[2020-09-10 13:50:36,967] {migration.py:155} INFO - Context impl SQLiteImpl.
[2020-09-10 13:50:36,967] {migration.py:162} INFO - Will assume non-transactional DDL.
[2020-09-10 13:50:37,125] {db.py:616} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> e3a246e0dc1, current schema
INFO [alembic.runtime.migration] Running upgrade e3a246e0dc1 -> 1507a7289a2f, create is_encrypted
/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/alembic/ddl/sqlite.py:41: UserWarning: Skipping unsupported ALTER for creation of implicit constraintPlease refer to the batch mode feature which allows for SQLite migrations using a copy-and-move strategy.
"Skipping unsupported ALTER for "
INFO [alembic.runtime.migration] Running upgrade 1507a7289a2f -> 13eb55f81627, maintain history for compatibility with earlier migrations
INFO [alembic.runtime.migration] Running upgrade 13eb55f81627 -> 338e90f54d61, More logging into task_instance
INFO [alembic.runtime.migration] Running upgrade 338e90f54d61 -> 52d714495f0, job_id indices
INFO [alembic.runtime.migration] Running upgrade 52d714495f0 -> 502898887f84, Adding extra to Log
INFO [alembic.runtime.migration] Running upgrade 502898887f84 -> 1b38cef5b76e, add dagrun
INFO [alembic.runtime.migration] Running upgrade 1b38cef5b76e -> 2e541a1dcfed, task_duration
INFO [alembic.runtime.migration] Running upgrade 2e541a1dcfed -> 40e67319e3a9, dagrun_config
INFO [alembic.runtime.migration] Running upgrade 40e67319e3a9 -> 561833c1c74b, add password column to user
INFO [alembic.runtime.migration] Running upgrade 561833c1c74b -> 4446e08588, dagrun start end
INFO [alembic.runtime.migration] Running upgrade 4446e08588 -> bbc73705a13e, Add notification_sent column to sla_miss
INFO [alembic.runtime.migration] Running upgrade bbc73705a13e -> bba5a7cfc896, Add a column to track the encryption state of the 'Extra' field in connection
INFO [alembic.runtime.migration] Running upgrade bba5a7cfc896 -> 1968acfc09e3, add is_encrypted column to variable table
INFO [alembic.runtime.migration] Running upgrade 1968acfc09e3 -> 2e82aab8ef20, rename user table
INFO [alembic.runtime.migration] Running upgrade 2e82aab8ef20 -> 211e584da130, add TI state index
INFO [alembic.runtime.migration] Running upgrade 211e584da130 -> 64de9cddf6c9, add task fails journal table
INFO [alembic.runtime.migration] Running upgrade 64de9cddf6c9 -> f2ca10b85618, add dag_stats table
INFO [alembic.runtime.migration] Running upgrade f2ca10b85618 -> 4addfa1236f1, Add fractional seconds to mysql tables
INFO [alembic.runtime.migration] Running upgrade 4addfa1236f1 -> 8504051e801b, xcom dag task indices
INFO [alembic.runtime.migration] Running upgrade 8504051e801b -> 5e7d17757c7a, add pid field to TaskInstance
INFO [alembic.runtime.migration] Running upgrade 5e7d17757c7a -> 127d2bf2dfa7, Add dag_id/state index on dag_run table
INFO [alembic.runtime.migration] Running upgrade 127d2bf2dfa7 -> cc1e65623dc7, add max tries column to task instance
INFO [alembic.runtime.migration] Running upgrade cc1e65623dc7 -> bdaa763e6c56, Make xcom value column a large binary
INFO [alembic.runtime.migration] Running upgrade bdaa763e6c56 -> 947454bf1dff, add ti job_id index
INFO [alembic.runtime.migration] Running upgrade 947454bf1dff -> d2ae31099d61, Increase text size for MySQL (not relevant for other DBs' text types)
INFO [alembic.runtime.migration] Running upgrade d2ae31099d61 -> 0e2a74e0fc9f, Add time zone awareness
INFO [alembic.runtime.migration] Running upgrade d2ae31099d61 -> 33ae817a1ff4, kubernetes_resource_checkpointing
INFO [alembic.runtime.migration] Running upgrade 33ae817a1ff4 -> 27c6a30d7c24, kubernetes_resource_checkpointing
INFO [alembic.runtime.migration] Running upgrade 27c6a30d7c24 -> 86770d1215c0, add kubernetes scheduler uniqueness
INFO [alembic.runtime.migration] Running upgrade 86770d1215c0, 0e2a74e0fc9f -> 05f30312d566, merge heads
INFO [alembic.runtime.migration] Running upgrade 05f30312d566 -> f23433877c24, fix mysql not null constraint
INFO [alembic.runtime.migration] Running upgrade f23433877c24 -> 856955da8476, fix sqlite foreign key
INFO [alembic.runtime.migration] Running upgrade 856955da8476 -> 9635ae0956e7, index-faskfail
INFO [alembic.runtime.migration] Running upgrade 9635ae0956e7 -> dd25f486b8ea, add idx_log_dag
INFO [alembic.runtime.migration] Running upgrade dd25f486b8ea -> bf00311e1990, add index to taskinstance
INFO [alembic.runtime.migration] Running upgrade 9635ae0956e7 -> 0a2a5b66e19d, add task_reschedule table
INFO [alembic.runtime.migration] Running upgrade 0a2a5b66e19d, bf00311e1990 -> 03bc53e68815, merge_heads_2
INFO [alembic.runtime.migration] Running upgrade 03bc53e68815 -> 41f5f12752f8, add superuser field
/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/alembic/ddl/sqlite.py:41: UserWarning: Skipping unsupported ALTER for creation of implicit constraintPlease refer to the batch mode feature which allows for SQLite migrations using a copy-and-move strategy.
"Skipping unsupported ALTER for "
INFO [alembic.runtime.migration] Running upgrade 41f5f12752f8 -> c8ffec048a3b, add fields to dag
INFO [alembic.runtime.migration] Running upgrade c8ffec048a3b -> dd4ecb8fbee3, Add schedule interval to dag
INFO [alembic.runtime.migration] Running upgrade dd4ecb8fbee3 -> 939bb1e647c8, task reschedule fk on cascade delete
INFO [alembic.runtime.migration] Running upgrade 939bb1e647c8 -> 6e96a59344a4, Make TaskInstance.pool not nullable
INFO [alembic.runtime.migration] Running upgrade 6e96a59344a4 -> d38e04c12aa2, add serialized_dag table
INFO [alembic.runtime.migration] Running upgrade d38e04c12aa2 -> b3b105409875, add root_dag_id to DAG
INFO [alembic.runtime.migration] Running upgrade 6e96a59344a4 -> 74effc47d867, change datetime to datetime2(6) on MSSQL tables
INFO [alembic.runtime.migration] Running upgrade 939bb1e647c8 -> 004c1210f153, increase queue name size limit
INFO [alembic.runtime.migration] Running upgrade c8ffec048a3b -> a56c9515abdc, Remove dag_stat table
INFO [alembic.runtime.migration] Running upgrade a56c9515abdc, 004c1210f153, 74effc47d867, b3b105409875 -> 08364691d074, Merge the four heads back together
INFO [alembic.runtime.migration] Running upgrade 08364691d074 -> fe461863935f, increase_length_for_connection_password
INFO [alembic.runtime.migration] Running upgrade fe461863935f -> 7939bcff74ba, Add DagTags table
INFO [alembic.runtime.migration] Running upgrade 7939bcff74ba -> a4c2fd67d16b, add pool_slots field to task_instance
INFO [alembic.runtime.migration] Running upgrade a4c2fd67d16b -> 852ae6c715af, Add RenderedTaskInstanceFields table
INFO [alembic.runtime.migration] Running upgrade 852ae6c715af -> 952da73b5eff, add dag_code table
INFO [alembic.runtime.migration] Running upgrade 952da73b5eff -> a66efa278eea, Add Precision to execution_date in RenderedTaskInstanceFields table
INFO [alembic.runtime.migration] Running upgrade a66efa278eea -> cf5dc11e79ad, drop_user_and_chart
INFO [alembic.runtime.migration] Running upgrade cf5dc11e79ad -> bbf4a7ad0465, Remove id column from xcom
INFO [alembic.runtime.migration] Running upgrade bbf4a7ad0465 -> b25a55525161, Increase length of pool name
INFO [alembic.runtime.migration] Running upgrade b25a55525161 -> 3c20cacc0044, Add DagRun run_type
INFO [alembic.runtime.migration] Running upgrade 3c20cacc0044 -> 8f966b9c467a, Set conn_type as non-nullable
INFO [alembic.runtime.migration] Running upgrade 8f966b9c467a -> 8d48763f6d53, add unique constraint to conn_id
INFO [alembic.runtime.migration] Running upgrade 8d48763f6d53 -> da3f683c3a5a, Add dag_hash Column to serialized_dag table
INFO [alembic.runtime.migration] Running upgrade da3f683c3a5a -> e38be357a868, Add sensor_instance table
INFO [alembic.runtime.migration] Running upgrade e38be357a868 -> b247b1e3d1ed, Add queued by Job ID to TI
Exception ignored in: <function _ConnectionRecord.checkout.<locals>.<lambda> at 0x108f35a70>
Traceback (most recent call last):
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 506, in <lambda>
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 714, in _finalize_fairy
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 531, in checkin
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 388, in _return_conn
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 236, in _do_return_conn
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 543, in close
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 645, in __close
File "/Users/abhilash1in/.pyenv/versions/3.7.8/envs/airflow/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 267, in _close_connection
File "/Users/abhilash1in/.pyenv/versions/3.7.8/lib/python3.7/logging/__init__.py", line 1365, in debug
File "/Users/abhilash1in/.pyenv/versions/3.7.8/lib/python3.7/logging/__init__.py", line 1621, in isEnabledFor
TypeError: 'NoneType' object is not callable
```
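The `Exception ignored in ... _finalize_fairy` block in the traceback above is a classic interpreter-teardown failure: a SQLAlchemy pool finalizer tries to log after `logging.shutdown()` has already run and the `logging` module's globals have been cleared, so an internal call resolves to `None`. A general mitigation pattern (illustrative only — not the actual breeze fix, and using a hypothetical toy pool rather than SQLAlchemy) is to dispose such resources from an explicit `atexit` hook instead of relying on garbage-collection-time finalizers:

```python
import atexit
import logging


class ToyPool:
    """Toy stand-in (hypothetical) for a connection pool whose cleanup logs."""

    def __init__(self):
        self.closed = False

    def dispose(self):
        # Idempotent cleanup that still needs a working logging module.
        if not self.closed:
            logging.getLogger(__name__).debug("closing pooled connections")
            self.closed = True


pool = ToyPool()
# atexit callbacks run LIFO, before module globals are torn down, so this
# dispose() can still log safely; a weakref/GC finalizer firing later during
# interpreter shutdown may instead find logging's internals already set to None.
atexit.register(pool.dispose)
```

Because `logging` registers its own `atexit` shutdown hook at import time, a hook registered afterwards runs earlier in the LIFO order, before handlers are closed.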
**Apache Airflow version**: 2.0.0dev
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): macOS Catalina
- **Kernel** (e.g. `uname -a`): `Darwin Abhilashs-MBP 19.6.0 Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64 x86_64`
- **Install tools**:
- **Others**:
**What happened**:
Running `breeze initialize-local-virtualenv` inside a brand new pyenv virtualenv (3.7) resulted in an error.
**What you expected to happen**:
Expected `breeze initialize-local-virtualenv` inside a brand new pyenv virtualenv (3.7) to complete successfully.
**How to reproduce it**:
Run `breeze initialize-local-virtualenv` inside a brand new pyenv virtualenv (3.7).
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/10894 | https://github.com/apache/airflow/pull/10896 | 2e8b4ece36b2edf20e50331fbc55269033755954 | b9f868b5ec30d028e52f49b5deafb5214320f521 | 2020-09-12T05:13:13Z | python | 2020-09-12T12:39:34Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,882 | ["airflow/utils/log/logging_mixin.py", "tests/utils/test_logging_mixin.py"] | StreamLogWriter has no close method, clashes with abseil logging (Tensorflow) | **Apache Airflow version**: 1.10.12
**What happened**:
If any Python code run in an operator has imported `absl.logging` (directly or indirectly), `airflow run` on that task ends with `AttributeError: 'StreamLogWriter' object has no attribute 'close'`. This is not normally seen, as it only happens in the `--raw` inner stage. The task is marked as successful, but the task process exits with exit code 1.
The full traceback is:
```python
Traceback (most recent call last):
File "/.../bin/airflow", line 37, in <module>
args.func(args)
File "/.../lib/python3.6/site-packages/airflow/utils/cli.py", line 76, in wrapper
return f(*args, **kwargs)
File "/.../lib/python3.6/site-packages/airflow/bin/cli.py", line 588, in run
logging.shutdown()
File "/.../lib/python3.6/logging/__init__.py", line 1946, in shutdown
h.close()
File "/.../lib/python3.6/site-packages/absl/logging/__init__.py", line 864, in close
self.stream.close()
AttributeError: 'StreamLogWriter' object has no attribute 'close'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/.../lib/python3.6/logging/__init__.py", line 1946, in shutdown
h.close()
File "/.../lib/python3.6/site-packages/absl/logging/__init__.py", line 864, in close
self.stream.close()
AttributeError: 'StreamLogWriter' object has no attribute 'close'
```
Abseil is Google's utility package, and is used in Tensorflow. The same issue would be seen if you used `import tensorflow`.
**What you expected to happen**:
I expected the task to exit with exit code 0, without issues at exit.
**How to reproduce it**:
- Install abseil: `pip install absl-py`
- Create a test dag with a single operator that only uses `import absl.logging`
- Trigger a dagrun (no scheduler needs to be running)
- execute `airflow run [dagid] [taskid] [execution-date] --raw`
**Anything else we need to know**:
What happens is that `absl.logging` sets up a logger with a custom handler (`absl.logging.ABSLHandler()`, which is a proxy for `absl.logging.PythonHandler()`), and that proxy will call `.close()` on its `stream`. By default that stream is `sys.stderr`. However, airflow has swapped out `sys.stderr` for a `StreamLogWriter` object when it runs the task under the `airflow.utils.log.logging_mixin.redirect_stderr` context manager. When the context manager exits, the logger is still holding on to the surrogate stderr object.
Normally the abseil handler would handle such cases: it deliberately won't close a stream that is the same object as `sys.stderr` or `sys.__stderr__` (see [their source code](https://github.com/abseil/abseil-py/blob/06edd9c20592cec39178b94240b5e86f32e19768/absl/logging/__init__.py#L852-L870)). But *at exit time* that's no longer the case. At that point `logging.shutdown()` is called, and that leads to the above exception.
Since `sys.stderr` has a close method, the best fix would be for `StreamLogWriter` to also have one. | https://github.com/apache/airflow/issues/10882 | https://github.com/apache/airflow/pull/10884 | 3ee618623be6079ed177da793b490cb7436d5cb6 | 26ae8e93e8b8075105faec18dc2e6348fa9efc72 | 2020-09-11T14:28:17Z | python | 2020-10-20T08:20:39Z |
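A minimal sketch of that fix (a hypothetical simplification, not Airflow's actual `StreamLogWriter` implementation): give the surrogate a no-op `close()` that just flushes, mirroring the file-object interface of `sys.stderr`, so any handler still holding it at `logging.shutdown()` time can call `close()` safely:

```python
import logging


class StreamLogWriter:
    """Simplified stand-in for Airflow's stderr surrogate (illustrative only)."""

    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
        self._buffer = ""

    def write(self, message):
        # Forward each complete line to the task logger.
        self._buffer += message
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            if line:
                self.logger.log(self.level, line)

    def flush(self):
        if self._buffer:
            self.logger.log(self.level, self._buffer)
            self._buffer = ""

    def close(self):
        # sys.stderr exposes close(); giving the surrogate one means a handler
        # that still holds this object at shutdown gets a safe no-op instead
        # of raising AttributeError.
        self.flush()


writer = StreamLogWriter(logging.getLogger("airflow.task"), logging.INFO)
writer.write("task output\n")
writer.close()  # previously: AttributeError at logging.shutdown()
```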
closed | apache/airflow | https://github.com/apache/airflow | 10,874 | ["airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHHook get_conn() does not re-use client | **Apache Airflow version**: 1.10.8
**Environment**:
- **Cloud provider or hardware configuration**: 4 VCPU 8GB RAM VM
- **OS** (e.g. from /etc/os-release): RHEL 7.7
- **Kernel** (e.g. `uname -a`): `Linux 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
- **Install tools**:
- **Others**:
**What happened**:
Sub-classing the SSHOperator and calling its execute() repeatedly will create a new SSH connection each time to run the command.
Not sure if this is a bug or an enhancement / feature. I can re-log as a feature request if needed.
**What you expected to happen**:
SSH client / connection should be re-used if it was already established.
**How to reproduce it**:
Sub-class the SSHOperator.
In your subclass's execute() method, call super().execute() a few times.
Observe in the logs how a new SSH connection is created each time.
**Anything else we need to know**:
The SSHHook.get_conn() method creates a new Paramiko SSH client each time. Despite storing the client on self.client before returning, the hook's get_conn() method does not actually use self.client the next time, so a new connection is created.
I think this is because the SSH Operator uses a context manager to operate on the Paramiko client, so the Hook needs to create a new client if a previous context manager had closed the last one.
Fixing this would mean changing the SSH Operator's execute() to not use ssh_hook.get_conn() as a context manager, since that opens and closes the session each time. Perhaps the connection could be closed in the operator's post_execute method rather than in execute().
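A sketch of the reuse idea (hypothetical, not Airflow's actual hook code): cache the client on the hook and hand it back while its transport is still active, shown here with toy stand-ins for the Paramiko client and transport:

```python
class CachingSSHHook:
    """Hypothetical hook sketch that reuses an open client across get_conn() calls."""

    def __init__(self, connect):
        self._connect = connect  # factory that opens a fresh client
        self.client = None

    def get_conn(self):
        if self.client is not None:
            transport = self.client.get_transport()
            if transport is not None and transport.is_active():
                return self.client  # reuse the live connection
        self.client = self._connect()
        return self.client


# Toy stand-ins for paramiko.SSHClient / Transport, to demonstrate the behaviour.
class _FakeTransport:
    def is_active(self):
        return True


class _FakeClient:
    def get_transport(self):
        return _FakeTransport()


opened = []
hook = CachingSSHHook(lambda: opened.append(1) or _FakeClient())
first = hook.get_conn()
second = hook.get_conn()  # served from the cache; no second session is opened
```

With this shape, the operator could dispose of the connection once in post_execute rather than opening and closing a session per command.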
***Example logs***
```
[2020-09-11 07:04:37,960] {ssh_operator.py:89} INFO - ssh_conn_id is ignored when ssh_hook is provided.
[2020-09-11 07:04:37,960] {logging_mixin.py:112} INFO - [2020-09-11 07:04:37,960] {ssh_hook.py:166} WARNING - Remote Identification Change is not verified. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:37,961] {logging_mixin.py:112} INFO - [2020-09-11 07:04:37,961] {ssh_hook.py:170} WARNING - No Host Key Verification. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:37,976] {logging_mixin.py:112} INFO - [2020-09-11 07:04:37,975] {transport.py:1819} INFO - Connected (version 2.0, client OpenSSH_7.4)
[2020-09-11 07:04:38,161] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,161] {transport.py:1819} INFO - Auth banner: b'Authorized uses only. All activity may be monitored and reported.\n'
[2020-09-11 07:04:38,161] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,161] {transport.py:1819} INFO - Authentication (publickey) successful!
[2020-09-11 07:04:38,161] {ssh_operator.py:109} INFO - Running command: [REDACTED COMMAND 1]
...
[2020-09-11 07:04:38,383] {ssh_operator.py:89} INFO - ssh_conn_id is ignored when ssh_hook is provided.
[2020-09-11 07:04:38,383] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,383] {ssh_hook.py:166} WARNING - Remote Identification Change is not verified. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:38,383] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,383] {ssh_hook.py:170} WARNING - No Host Key Verification. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:38,399] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,399] {transport.py:1819} INFO - Connected (version 2.0, client OpenSSH_7.4)
[2020-09-11 07:04:38,545] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,545] {transport.py:1819} INFO - Auth banner: b'Authorized uses only. All activity may be monitored and reported.\n'
[2020-09-11 07:04:38,546] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,546] {transport.py:1819} INFO - Authentication (publickey) successful!
[2020-09-11 07:04:38,546] {ssh_operator.py:109} INFO - Running command: [REDACTED COMMAND 2]
....
[2020-09-11 07:04:38,722] {ssh_operator.py:89} INFO - ssh_conn_id is ignored when ssh_hook is provided.
[2020-09-11 07:04:38,722] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,722] {ssh_hook.py:166} WARNING - Remote Identification Change is not verified. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:38,723] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,723] {ssh_hook.py:170} WARNING - No Host Key Verification. This wont protect against Man-In-The-Middle attacks
[2020-09-11 07:04:38,734] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,734] {transport.py:1819} INFO - Connected (version 2.0, client OpenSSH_7.4)
[2020-09-11 07:04:38,867] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,867] {transport.py:1819} INFO - Auth banner: b'Authorized uses only. All activity may be monitored and reported.\n'
[2020-09-11 07:04:38,868] {logging_mixin.py:112} INFO - [2020-09-11 07:04:38,867] {transport.py:1819} INFO - Authentication (publickey) successful!
[2020-09-11 07:04:38,868] {ssh_operator.py:109} INFO - Running command: [REDACTED COMMAND 3]
``` | https://github.com/apache/airflow/issues/10874 | https://github.com/apache/airflow/pull/17378 | 306d0601246b43a4fcf1f21c6e30a917e6d18c28 | 73fcbb0e4e151c9965fd69ba08de59462bbbe6dc | 2020-09-11T05:31:27Z | python | 2021-10-13T20:14:54Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,868 | ["airflow/plugins_manager.py", "tests/plugins/test_plugin.py", "tests/plugins/test_plugins_manager.py"] | on_load method is not working for plugins located in folder | **Apache Airflow version**: 1.10.12
**Environment**:
- **Cloud provider or hardware configuration**: Google Cloud, custom-8-16384
- **OS** (e.g. from /etc/os-release): CentOS Linux release 7.7.1908 (Core)
- **Kernel** (e.g. `uname -a`): 3.10.0-1062.18.1.el7.x86_64 #1 SMP Tue Mar 17 23:49:17 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
**What happened**:
Class method [on_load](https://github.com/apache/airflow/blob/ce66bc944d246aa3b51cce6e2fc13cd25da08d6e/airflow/plugins_manager.py#L102) of the AirflowPlugin class doesn't run for plugins located in the plugins folder, only for plugins installed through pip.
**What you expected to happen**:
Method 'on_load' should run on initialization of plugins that are located in the plugins folder
**How to reproduce it**:
1) Create plugin with next code and place it in plugins folder:
```
from airflow.exceptions import AirflowPluginException
from airflow.plugins_manager import AirflowPlugin
from flask_admin.base import MenuLink

class TestPlugin(AirflowPlugin):
    name = "test_plugin"  # a plugin name is required
    menu_links = [MenuLink(category='Google', name='Google', url='google.com')]

    @classmethod
    def on_load(cls, *args, **kwargs):
        raise AirflowPluginException("Test On Load Exception")
```
2) Reload webserver.
_Result_: Webserver logs are empty, new menu appeared in UI.
_Expected result_: Webserver logs should contain information with raised exception, new menu shouldn't appear in UI.
_Reason_: on_load method is called only for airflow.plugins.* entry_points
https://github.com/apache/airflow/blob/ce66bc944d246aa3b51cce6e2fc13cd25da08d6e/airflow/plugins_manager.py#L152
**Additional information**: It is not clear what the reason is for skipping this method call for plugins contained in the folder. At the same time, plugin validation successfully runs both for plugins from a folder and for those installed via a package. Validation uses the validate method in the same AirflowPlugin class and can be extended by the user.
https://github.com/apache/airflow/blob/b9dc3c51ba2cba1c61d327488cecf2623d6445b3/airflow/plugins_manager.py#L96 | https://github.com/apache/airflow/issues/10868 | https://github.com/apache/airflow/pull/15208 | 042be2e4e06b988f5ba2dc146f53774dabc8b76b | 97b7780df48b412e104ff4adeecbe715264f00eb | 2020-09-10T19:58:49Z | python | 2021-04-06T21:48:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,856 | ["BREEZE.rst", "Dockerfile", "Dockerfile.ci", "IMAGES.rst", "breeze", "breeze-complete", "docs/production-deployment.rst", "scripts/ci/libraries/_build_images.sh", "scripts/ci/libraries/_initialization.sh"] | Add better "extensions" model for "build image" part | There should be an easy way to add various build time steps and dependencies in the "build image" segment and it should be quite obvious how to do it.
Example of what kind of extensions should be supported is described here: https://github.com/apache/airflow/issues/8605#issuecomment-690065621 | https://github.com/apache/airflow/issues/10856 | https://github.com/apache/airflow/pull/11176 | 17c810ec36a61ca2e285ccf44de27a598cca15f5 | ebd71508627e68f6c35f1aff2d03b4569de80f4b | 2020-09-10T08:30:58Z | python | 2020-09-29T13:30:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,816 | ["BREEZE.rst", "breeze", "docs/apache-airflow/concepts.rst"] | Docs around different execution modes for Sensor | **Description**
It would be good to add docs to explain different modes for Sensors:
1. Poke mode
1. Reschedule mode
1. Smart Sensor
and to explain the advantages of one over the other | https://github.com/apache/airflow/issues/10816 | https://github.com/apache/airflow/pull/12803 | e9b2ff57b81b12cfbf559d957a370d497015acc2 | df9493c288f33c8798d9b02331f01b3a285c03a9 | 2020-09-08T22:51:29Z | python | 2020-12-05T20:08:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,815 | ["docs/smart-sensor.rst"] | Mark "Smart Sensor" as an early-access feature | **Description**
Based on our discussion during Airflow 2.0 dev call, the consensus was that **Smart Sensors** ([PR](https://github.com/apache/airflow/pull/5499)) will be included in Airflow 2.0 as an **early-access** feature with a clear note that this feature might potentially change in future Airflow version with breaking changes.
Also, make it clear that Airbnb has been running it in PROD for ~6-7 months, to give our users confidence
| https://github.com/apache/airflow/issues/10815 | https://github.com/apache/airflow/pull/11499 | f43d8559fec91e473aa4f67ea262325462de0b5f | e3e8fd896bb28c7902fda917d5b5ceda93d6ac0b | 2020-09-08T22:42:53Z | python | 2020-10-13T15:11:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,804 | ["airflow/providers/amazon/aws/transfers/gcs_to_s3.py", "tests/providers/amazon/aws/transfers/test_gcs_to_s3.py"] | Add acl_policy into GCSToS3Operator | **Description**
The goal of this feature is to add the `acl_policy` field to the `GCSToS3Operator`
**Use case / motivation**
The `acl_policy` field has been added to the `S3Hook` but not in the `GCSToS3Operator`
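A hedged sketch of the plumbing this feature would add (the `s3_acl_policy` name is illustrative, not an existing parameter; `load_bytes` mirrors the `S3Hook` method that already accepts an `acl_policy` argument):

```python
def copy_files_to_s3(files, dest_bucket, load_bytes, s3_acl_policy=None):
    """Upload each (key, data) pair, threading the ACL policy through to the hook."""
    for key, data in files.items():
        load_bytes(data, key=key, bucket_name=dest_bucket, acl_policy=s3_acl_policy)
```

This just shows how the operator would forward the new field to the hook call it already makes.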
| https://github.com/apache/airflow/issues/10804 | https://github.com/apache/airflow/pull/10829 | 03ff067152ed3202b7d4beb0fe9b371a0ef51058 | dd98b21494ff6036242b63268140abe1294b3657 | 2020-09-08T15:28:23Z | python | 2020-10-06T11:09:01Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,794 | ["airflow/models/taskinstance.py", "airflow/sensors/external_task_sensor.py", "tests/models/test_taskinstance.py", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskMarker don't work with store_serialized_dags | **Apache Airflow version**: 1.10.10
**Kubernetes version (if you are using kubernetes)**: 1.14.10
**Environment**:
- **Cloud provider or hardware configuration**: Any, GCP
- **OS** (e.g. from /etc/os-release): Ubuntu 1.10.10
- **Kernel** (e.g. `uname -a`): Any
- **Install tools**: Any
- **Others**: NA
**What happened**:
DAGs with an ExternalTaskMarker don't clear the external task after the second use of Clear on the whole DAG
**What you expected to happen**:
All external tasks should be cleared
**How to reproduce it**:
enable serialization `store_serialized_dags = True`
create example DAGs:
```
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.sensors.external_task_sensor import ExternalTaskSensor
# SerializableExternalTaskMarker is defined in the snippet at the end of this issue

default_args = {'owner': 'airflow',
                'start_date': datetime(2018, 1, 1)}


def hello_world_py(*args):
    print('Hello World')
    print('This DAG is dep')


schedule = '@daily'
dag_id = 'dep_dag'
with DAG(dag_id=dag_id,
         schedule_interval=schedule,
         default_args=default_args) as dag:
    t1 = PythonOperator(task_id='hello_world',
                        python_callable=hello_world_py)
    dep_1 = ExternalTaskSensor(task_id='child_task1',
                               external_dag_id='hello_world_2',
                               external_task_id='parent_task',
                               mode='reschedule')
    dep_1 >> t1


def create_dag(dag_id, schedule, dag_number, default_args):
    dag = DAG(dag_id, schedule_interval=schedule,
              default_args=default_args)
    with dag:
        t1 = PythonOperator(task_id='hello_world',
                            python_callable=hello_world_py)
        parent_task = SerializableExternalTaskMarker(task_id='parent_task',
                                                     external_dag_id='dep_dag',
                                                     external_task_id='child_task1')
        t1 >> parent_task
    return dag


for n in range(1, 4):
    dag_id = 'hello_world_{}'.format(str(n))
    default_args = {'owner': 'airflow',
                    'start_date': datetime(2018, 1, 1)}
    schedule = '@daily'
    dag_number = n
    globals()[dag_id] = create_dag(dag_id, schedule, dag_number, default_args)
```
1. Run both DAGs
2. Wait until the first few dagruns have completed
3. Clear the first dagrun in the DAG with the marker
4. Check that the external dag was cleared on this date
5. Mark this date as success in each DAG, or wait until complete
6. Clear the DAG with the marker a second time on the same date
7. ExternalTaskMarker doesn't work
**Anything else we need to know**:
I think ExternalTaskMarker doesn't work because of serialization: after serialization, each task instance gets an operator field equal to 'SerializedBaseOperator', so the marker logic doesn't work [here](https://github.com/apache/airflow/blob/master/airflow/models/dag.py#L1072)
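Roughly what the linked `dag.py` check amounts to — an illustrative reduction, not the real code — is a name-based comparison, which can no longer match once deserialized tasks all report the generic class name:

```python
def is_external_task_marker(task):
    # After deserialization every task reports the generic class name,
    # so a name-based check like this no longer matches.
    return getattr(task, "operator", None) == "ExternalTaskMarker"


class FakeSerializedTask:
    operator = "SerializedBaseOperator"
```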
To test ExternalTaskMarker with serialization you can use:
<details>
```
from typing import FrozenSet, Optional

from airflow.sensors.external_task_sensor import ExternalTaskMarker


class FakeName(type):
    def __new__(metacls, name, bases, namespace, **kw):
        name = namespace.get("__name__", name)
        return super().__new__(metacls, name, bases, namespace, **kw)


class SerializableExternalTaskMarker(ExternalTaskMarker, metaclass=FakeName):
    # The _serialized_fields are lazily loaded when get_serialized_fields() is called
    __serialized_fields = None  # type: Optional[FrozenSet[str]]
    __name__ = "ExternalTaskMarker"

    @classmethod
    def get_serialized_fields(cls):
        """Serialized ExternalTaskMarker contains exactly these fields."""
        if not cls.__serialized_fields:
            cls.__serialized_fields = frozenset(
                ExternalTaskMarker.get_serialized_fields() | {
                    "recursion_depth", "external_dag_id", "external_task_id", "execution_date"
                }
            )
        return cls.__serialized_fields
```
</details>
@kaxil | https://github.com/apache/airflow/issues/10794 | https://github.com/apache/airflow/pull/10924 | ce19657ec685abff5871df80c8d47f8585eeed99 | f7da7d94b4ac6dc59fb50a4f4abba69776aac798 | 2020-09-08T07:46:31Z | python | 2020-09-15T22:40:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,793 | ["chart/files/pod-template-file.kubernetes-helm-yaml"] | Mounting DAGS from an externally populated PVC doesn't work in K8 Executor | **Apache Airflow version**: 1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18
**What happened**: Mounting DAGS from an externally populated PVC doesn't work:
```
--set dags.persistence.enabled=true \
--set dags.persistence.existingClaim=my-volume-claim
--set dags.gitSync.enabled=false
```
Environment variables from a K8s Executor worker:
> Environment:
>   AIRFLOW_HOME: /opt/airflow
>   AIRFLOW__CORE__DAGS_FOLDER: /opt/airflow/dags/repo/
>   AIRFLOW__CORE__DAG_CONCURRENCY: 5
>   AIRFLOW__CORE__EXECUTOR: LocalExecutor
>   AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-fernet-key'> Optional: false
>   AIRFLOW__CORE__PARALLELISM: 5
>   AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-metadata'> Optional: false
>   AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH: repo/
<!-- (please include exact error messages if you can) -->
**What you expected to happen**: Dags mounted in workers from PVC
<!-- What do you think went wrong? -->
**How to reproduce it**: Use chart from master and set variables as above
| https://github.com/apache/airflow/issues/10793 | https://github.com/apache/airflow/pull/13686 | 7ec858c4523b24e7a3d6dd1d49e3813e6eee7dff | 8af5a33950cfe59a38931a8a605394ef0cbc3c08 | 2020-09-08T07:39:29Z | python | 2021-01-17T12:53:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,792 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Allow labels in KubernetesPodOperator to be templated | **Description**
It would be useful to have labels being a templated field in KubernetesPodOperator, in order for example, to be able to identify pods by run_id when there are multiple concurrent dag runs. | https://github.com/apache/airflow/issues/10792 | https://github.com/apache/airflow/pull/10796 | fd682fd70a97a1f937786a1a136f0fa929c8fb80 | b93b6c5be3ab60960f650d0d4ee6c91271ac7909 | 2020-09-08T07:35:49Z | python | 2020-10-05T08:05:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,788 | ["airflow/www/extensions/init_views.py", "docs/apache-airflow/plugins.rst", "tests/plugins/test_plugin.py", "tests/plugins/test_plugins_manager.py"] | Airflow Plugins should have support for views without menu | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Description**
<!-- A short description of your feature -->
This feature requests a way to support custom views in plugins without any menu. Right now, all the views listed in `AirflowPlugin.appbuilder_views` are added to the appbuilder with a menu entry.
**Use case / motivation**
In a custom plugin I built, I need to add a distinct view for the details of a custom operator. I then use `BaseOperator.operator_extra_links` to link this new UI view with the task links.
However, this view does not need to appear in the Airflow menu; rather, it should be shown in the UI similarly to `views.DagModelView`. That is, the view should be added to the Flask appbuilder via an `appbuilder.add_view_no_menu` call, but right now all the views in `AirflowPlugin.appbuilder_views` are added by calling `appbuilder.add_view`
**What do you want to happen?**
<!-- What do you want to happen?
Rather than telling us how you might implement this solution, try to take a
step back and describe what you are trying to achieve.
-->
Maybe if "name" is missing from a dict in the `AirflowPlugin.appbuilder_views` list, then when integrating the plugin into the Flask app context, it would just call:
```python
appbuilder.add_view_no_menu(v["view"])
```
otherwise the default behavior.
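A minimal, hypothetical sketch of the dispatch this would imply — the `StubAppBuilder` here is a stand-in for flask_appbuilder's AppBuilder, and none of this is actual Airflow code:

```python
class StubAppBuilder:
    """Stand-in for flask_appbuilder's AppBuilder, for illustration only."""

    def __init__(self):
        self.menu_views = []
        self.hidden_views = []

    def add_view(self, view, name, category=""):
        self.menu_views.append((name, category, view))

    def add_view_no_menu(self, view):
        self.hidden_views.append(view)


def register_plugin_views(appbuilder, appbuilder_views):
    """Register each view dict; entries without a "name" get no menu entry."""
    for v in appbuilder_views:
        if "name" in v:
            appbuilder.add_view(v["view"], v["name"], category=v.get("category", ""))
        else:
            appbuilder.add_view_no_menu(v["view"])
```

With this rule, a dict like `{"view": MyOperatorDetailsView()}` would still be reachable from operator extra links without cluttering the menu.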
| https://github.com/apache/airflow/issues/10788 | https://github.com/apache/airflow/pull/11742 | 6ef23aff802032e85ec42dabda83907bfd812b2c | 429e54c19217f9e78cba2297b3ab25fa098eb819 | 2020-09-08T06:23:57Z | python | 2021-01-04T06:49:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,786 | ["airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators.rst", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksRunNowOperator missing jar_params as a kwarg | <!--
-->
**Description**
DatabricksRunNowOperator is missing the option to take the keyword argument _jar_params_; it can already take the other ones (notebook_params, python_params, spark_submit_params). https://docs.databricks.com/dev-tools/api/latest/jobs.html#run-now
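A hedged sketch of the missing plumbing — `build_run_now_payload` is an illustrative helper, not actual operator code; it mirrors how the existing kwargs end up in the run-now request body:

```python
def build_run_now_payload(job_id, notebook_params=None, python_params=None,
                          spark_submit_params=None, jar_params=None):
    """Assemble a Databricks run-now request body, including the proposed jar_params."""
    payload = {"job_id": job_id}
    if notebook_params is not None:
        payload["notebook_params"] = notebook_params
    if python_params is not None:
        payload["python_params"] = python_params
    if spark_submit_params is not None:
        payload["spark_submit_params"] = spark_submit_params
    if jar_params is not None:  # the new field this issue asks for
        payload["jar_params"] = jar_params
    return payload
```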
**Use case / motivation**
Provide parity with the other options
| https://github.com/apache/airflow/issues/10786 | https://github.com/apache/airflow/pull/19443 | 854b70b9048c4bbe97abde2252b3992892a4aab0 | 3a0c4558558689d7498fe2fc171ad9a8e132119e | 2020-09-08T00:22:32Z | python | 2021-11-07T19:10:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,743 | ["CONTRIBUTING.rst"] | [Docs] Update node installation command | **Environment**:
- **OS** (e.g. from /etc/os-release): macOS 10.15.6
- **Install tools**: brew
**What happened**:
In CONTRIBUTING.rst, we have `brew install node --without-npm` for installing node in macOS. The `--without-npm` flag is outdated and running this command will throw `Error: invalid option: --without-npm`.
Also, references can be found here:
https://discourse.brew.sh/t/brew-install-node-without-npm/4755
and lots of users' comments in this gist here (search keyword `without-npm`):
https://gist.github.com/DanHerbert/9520689
| https://github.com/apache/airflow/issues/10743 | https://github.com/apache/airflow/pull/10744 | 079d7b59464921f7fd7d615b6c74195a9c2f831f | d84b62d7e17dd559041754634bf299274f54e83f | 2020-09-05T04:53:30Z | python | 2020-09-05T06:50:29Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,730 | ["BREEZE.rst", "Dockerfile", "Dockerfile.ci", "IMAGES.rst", "breeze", "breeze-complete", "docs/production-deployment.rst", "scripts/ci/libraries/_build_images.sh", "scripts/ci/libraries/_initialization.sh"] | Dockerfile - Enable gpg to receive keys behind a proxy | <!--
-->
**Description**
Allow build of airflow docker image behind a corporate firewall.
**Use case / motivation**
I would like to manually build an image by specifying the http_proxy argument. Almost all of the Dockerfile instructions work fine with `--build-arg http_proxy` / `https_proxy`; however, the gpg command is not able to use it correctly. gpg requires `--keyserver-options "http-proxy=${http_proxy}"` to be set explicitly:
`gpg --keyserver "${KEYSERVER}" --keyserver-options "http-proxy=${http_proxy}" --recv-keys "${KEY}"`
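A hedged sketch of how the Dockerfile step could branch on the proxy — the helper name and the `KEYSERVER`/`KEY` variables are illustrative placeholders for the values the Dockerfile already defines:

```shell
# Build the gpg argument list, adding the keyserver proxy option only when
# an http_proxy build-arg was actually supplied.
build_gpg_args() {
  args="--keyserver ${KEYSERVER}"
  if [ -n "${http_proxy:-}" ]; then
    args="${args} --keyserver-options http-proxy=${http_proxy}"
  fi
  printf '%s\n' "${args} --recv-keys ${KEY}"
}
```

The `RUN` instruction would then do `gpg $(build_gpg_args)` (or inline the same `if`), so images built without a proxy are unaffected.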
**Related Issues**
<!-- Is there currently another issue associated with this? -->
| https://github.com/apache/airflow/issues/10730 | https://github.com/apache/airflow/pull/11176 | 17c810ec36a61ca2e285ccf44de27a598cca15f5 | ebd71508627e68f6c35f1aff2d03b4569de80f4b | 2020-09-04T17:50:34Z | python | 2020-09-29T13:30:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,726 | ["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"] | databricks host name pulling out from extra json data when token is used | When the token is used in the databricks connection, the host name is being pulled from extra json data
https://github.com/apache/airflow/blob/fdd9b6f65b608c516b8a062b058972d9a45ec9e3/airflow/providers/databricks/hooks/databricks.py#L160
I think the host name should be pulled from `databricks_conn` directly, as is done on line 164 where basic auth is used: https://github.com/apache/airflow/blob/fdd9b6f65b608c516b8a062b058972d9a45ec9e3/airflow/providers/databricks/hooks/databricks.py#L164
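A hedged one-line sketch of the suggested fix, using a stand-in connection object (the real hook's attribute names may differ):

```python
from types import SimpleNamespace


def resolve_host(conn):
    """Prefer the connection's host field; fall back to the Extra JSON value."""
    return conn.host or conn.extra_dejson.get("host")
```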
Thanks!
Liusong
| https://github.com/apache/airflow/issues/10726 | https://github.com/apache/airflow/pull/10762 | 5b3fb53d9c5c943abb8ddc90214c320b7a11c8c2 | 966a06d96bbfe330f1d2825f7b7eaa16d43b7a00 | 2020-09-04T13:50:48Z | python | 2020-09-18T11:15:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,656 | ["airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | Error in SSHOperator " 'NoneType' object has no attribute 'startswith' " | <!--
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
This questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 1.10.10
**What happened**:
I wrote the following piece of code:
```
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.ssh_operator import SSHOperator

args = {
    'owner': 'airflow',
    'start_date': datetime(year=2020, month=7, day=21,
                           hour=3, minute=0, second=0),
    'provide_context': True,
}

dag = DAG(
    dag_id='test_ssh_operator',
    default_args=args,
    schedule_interval='@daily',
)

ssh_command = """
echo 'hello work'
"""

task = SSHOperator(
    task_id="check_ssh_perator",
    ssh_conn_id='ssh_default',
    command=ssh_command,
    do_xcom_push=True,
    dag=dag,
)

task
```
**What you expected to happen**:
And I got the following error:
`Broken DAG: [/etc/airflow/dags/dag_test_sshoperator.py] 'NoneType' object has no attribute 'startswith'`
<!---
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If you are using kubernetes, please attempt to recreate the issue using minikube or kind.
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
**Anything else we need to know**:
I added a new ssh connection in Admin --> Connections, and in the Extra field I put the following JSON:
```
{"key_file":"/root/.ssh/airflow-connector/id_ed25519",
"timeout": "10",
"compress": "false",
"no_host_key_check": "false",
"allow_host_key_change": "false"}
```
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
| https://github.com/apache/airflow/issues/10656 | https://github.com/apache/airflow/pull/11361 | 11eb649d4acdbd3582fb0a77b5f5af3b75e2262c | 27e637fbe3f17737e898774ff151448f4f0aa129 | 2020-08-31T07:49:46Z | python | 2020-10-09T07:35:39Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,646 | ["chart/files/pod-template-file.kubernetes-helm-yaml", "chart/templates/workers/worker-deployment.yaml", "chart/tests/test_pod_template_file.py"] | Kubernetes config dags_volume_subpath breaks PVC in helm chart | **Apache Airflow version**: 1.10.12, master
**Kubernetes version**: v1.17.9-eks-4c6976 (server) / v1.18.6 (client)
**Environment**:
- **Cloud provider or hardware configuration**: EKS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Current logic of setting `dags_volume_subpath` is broken for the following use case:
> Dag loaded from PVC but gitSync disabled.
I am using the chart from apache/airflow master thusly:
```
helm install airflow chart --namespace airflow-dev \
--set dags.persistence.enabled=true \
--set dags.persistence.existingClaim=airflow-dag-pvc \
--set dags.gitSync.enabled=false
```
For the longest time, even with a vanilla install, the workers kept dying. Tailing the logs clued me in that workers were not able to find the dag. I verified from the scheduler that dags were present (it showed up in the UI etc)
Further debugging (looking at the worker pod config) was the main clue ... here is the volume mount:
```yaml
- mountPath: /opt/airflow/dags
name: airflow-dags
readOnly: true
subPath: repo/tests/dags
```
Why/who would add `repo/tests/dags` as a subpath?? 🤦♂️
Finally found the problem logic here:
https://github.com/apache/airflow/blob/9b2efc6dcc298e3df4d1365fe809ea1dc0697b3b/chart/values.yaml#L556
Note the implied connection between `dags.persistence.enabled` and `dags.gitSync`! This looks like some leftover code from when gitsync and external dag went hand in hand.
Ideally, a user should be able to use a PVC _without_ using git sync
**What you expected to happen**:
I should be able to use a PVC without gitsync logic messing up my mount path
See above.
**How to reproduce it**:
I am kinda surprised that this has not bitten anyone yet. I'd like to think my example is essentially a `hello world` of the helm chart with a dag from a PVC.
**Anything else we need to know**:
This is the patch that worked for me. Seems fairly reasonable -- only muck with dags_volume_subpath if gitSync is enabled.
Even better would be to audit other code and clearly separate out the different competing use cases:
1. use PVC but no gitsync
2. use PVC with gitsync
3. use gitsync without PVC
```
diff --git a/chart/values.yaml b/chart/values.yaml
index 00832b435..8f6506cd4 100644
--- a/chart/values.yaml
+++ b/chart/values.yaml
@@ -550,10 +550,10 @@ config:
delete_worker_pods: 'True'
run_as_user: '{{ .Values.uid }}'
fs_group: '{{ .Values.gid }}'
dags_volume_claim: '{{- if .Values.dags.persistence.enabled }}{{ include "airflow_dags_volume_claim" . }}{{ end }}'
- dags_volume_subpath: '{{- if .Values.dags.persistence.enabled }}{{.Values.dags.gitSync.dest }}/{{ .Values.dags.gitSync.subPath }}{{ end }}'
+ dags_volume_subpath: '{{- if .Values.dags.gitSync.enabled }}{{.Values.dags.gitSync.dest }}/{{ .Values.dags.gitSync.subPath }}{{ end }}'
git_repo: '{{- if and .Values.dags.gitSync.enabled (not .Values.dags.persistence.enabled) }}{{ .Values.dags.gitSync.repo }}{{ end }}'
git_branch: '{{ .Values.dags.gitSync.branch }}'
git_sync_rev: '{{ .Values.dags.gitSync.rev }}'
```
| https://github.com/apache/airflow/issues/10646 | https://github.com/apache/airflow/pull/15657 | b1bd59440baa839eccdb2770145d0713ade4f82a | 367d64befbf2f61532cf70ab69e32f596e1ed06e | 2020-08-30T06:24:38Z | python | 2021-05-04T18:40:21Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,636 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/kubernetes/kube_client.py", "docs/spelling_wordlist.txt"] | Kubernetes executors hangs on pod submission | **Apache Airflow version**: 1.10.10, 1.10.11, 1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.15.11, 1.17.7
**Environment**:
- **Cloud provider or hardware configuration**: AKS
- **Others**: Python 3.6, Python 3.7, Python 3.8
The Kubernetes executor hangs from time to time on worker pod submission. The pod creation request times out after 15 minutes and the scheduler loop continues. I have recreated the problem in several Kubernetes / Python / Airflow version configurations.
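One hedged mitigation sketch (not a confirmed fix): bound the pod-creation call with the kubernetes client's `_request_timeout` so a dead TCP connection fails fast instead of blocking the scheduler. `create_fn` below stands in for `CoreV1Api().create_namespaced_pod`:

```python
def run_pod_with_timeout(create_fn, namespace, pod, connect_timeout=10, read_timeout=60):
    """Create a pod with an explicit client-side (connect, read) timeout."""
    return create_fn(
        namespace=namespace,
        body=pod,
        _request_timeout=(connect_timeout, read_timeout),
    )
```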
py-spy dump:
```
Process 6: /usr/local/bin/python /usr/local/bin/airflow scheduler
Python v3.7.9 (/usr/local/bin/python3.7)
Thread 6 (idle): "MainThread"
read (ssl.py:929)
recv_into (ssl.py:1071)
readinto (socket.py:589)
_read_status (http/client.py:271)
begin (http/client.py:310)
getresponse (http/client.py:1369)
_make_request (urllib3/connectionpool.py:421)
urlopen (urllib3/connectionpool.py:677)
urlopen (urllib3/poolmanager.py:336)
request_encode_body (urllib3/request.py:171)
request (urllib3/request.py:80)
request (kubernetes/client/rest.py:170)
POST (kubernetes/client/rest.py:278)
request (kubernetes/client/api_client.py:388)
__call_api (kubernetes/client/api_client.py:176)
call_api (kubernetes/client/api_client.py:345)
create_namespaced_pod_with_http_info (kubernetes/client/api/core_v1_api.py:6265)
create_namespaced_pod (kubernetes/client/api/core_v1_api.py:6174)
run_pod_async (airflow/contrib/kubernetes/pod_launcher.py:81)
run_next (airflow/contrib/executors/kubernetes_executor.py:486)
sync (airflow/contrib/executors/kubernetes_executor.py:878)
heartbeat (airflow/executors/base_executor.py:134)
_validate_and_run_task_instances (airflow/jobs/scheduler_job.py:1505)
_execute_helper (airflow/jobs/scheduler_job.py:1443)
_execute (airflow/jobs/scheduler_job.py:1382)
run (airflow/jobs/base_job.py:221)
scheduler (airflow/bin/cli.py:1040)
wrapper (airflow/utils/cli.py:75)
<module> (airflow:37)
```
logs:
```
[2020-08-26 18:26:25,721] {base_executor.py:122} DEBUG - 0 running task instances
[2020-08-26 18:26:25,722] {base_executor.py:123} DEBUG - 1 in queue
[2020-08-26 18:26:25,722] {base_executor.py:124} DEBUG - 32 open slots
[2020-08-26 18:26:25,722] {kubernetes_executor.py:840} INFO - Add task ('ProcessingTask', 'exec_spark_notebook', datetime.datetime(2020, 8, 26, 18, 26, 21, 61159, tzinfo=<
Timezone [UTC]>), 1) with command ['airflow', 'run', 'ProcessingTask', 'exec_spark_notebook', '2020-08-26T18:26:21.061159+00:00', '--local', '--pool', 'default_pool', '-sd
', '/usr/local/airflow/dags/qubole_processing.py'] with executor_config {}
[2020-08-26 18:26:25,723] {base_executor.py:133} DEBUG - Calling the <class 'airflow.contrib.executors.kubernetes_executor.KubernetesExecutor'> sync method
[2020-08-26 18:26:25,723] {kubernetes_executor.py:848} DEBUG - self.running: {('ProcessingTask', 'exec_spark_notebook', datetime.datetime(2020, 8, 26, 18, 26, 21, 61159, t
zinfo=<Timezone [UTC]>), 1): ['airflow', 'run', 'ProcessingTask', 'exec_spark_notebook', '2020-08-26T18:26:21.061159+00:00', '--local', '--pool', 'default_pool', '-sd', '/
usr/local/airflow/dags/qubole_processing.py']}
[2020-08-26 18:26:25,725] {kubernetes_executor.py:471} INFO - Kubernetes job is (('ProcessingTask', 'exec_spark_notebook', datetime.datetime(2020, 8, 26, 18, 26, 21, 61159
, tzinfo=<Timezone [UTC]>), 1), ['airflow', 'run', 'ProcessingTask', 'exec_spark_notebook', '2020-08-26T18:26:21.061159+00:00', '--local', '--pool', 'default_pool', '-sd',
'/usr/local/airflow/dags/qubole_processing.py'], KubernetesExecutorConfig(image=None, image_pull_policy=None, request_memory=None, request_cpu=None, limit_memory=None, limi
t_cpu=None, limit_gpu=None, gcp_service_account_key=None, node_selectors=None, affinity=None, annotations={}, volumes=[], volume_mounts=[], tolerations=None, labels={}))
[2020-08-26 18:26:25,725] {kubernetes_executor.py:474} DEBUG - Kubernetes running for command ['airflow', 'run', 'ProcessingTask', 'exec_spark_notebook', '2020-08-26T18:26
:21.061159+00:00', '--local', '--pool', 'default_pool', '-sd', '/usr/local/airflow/dags/qubole_processing.py']
[2020-08-26 18:26:25,726] {kubernetes_executor.py:475} DEBUG - Kubernetes launching image dpadevairflowacr01.azurecr.io/airflow:1.10.10-20200826-v2
[2020-08-26 18:26:25,729] {pod_launcher.py:79} DEBUG - Pod Creation Request:
{{ POD JSON }}
[2020-08-26 18:26:26,003] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5546)
[2020-08-26 18:26:26,612] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor779-Process, stopped)>
[2020-08-26 18:26:28,148] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5553)
[2020-08-26 18:26:28,628] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor780-Process, stopped)>
[2020-08-26 18:26:31,473] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5560)
[2020-08-26 18:26:32,005] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor781-Process, stopped)>
[2020-08-26 18:26:32,441] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5567)
[2020-08-26 18:26:33,017] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor782-Process, stopped)>
[2020-08-26 18:26:37,501] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5578)
[2020-08-26 18:26:38,044] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor784-Process, stopped)>
[2020-08-26 18:26:38,510] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5585)
[2020-08-26 18:26:39,054] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor785-Process, stopped)>
[2020-08-26 18:26:39,481] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5574)
[2020-08-26 18:26:40,057] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor783-Process, stopped)>
[2020-08-26 18:26:44,549] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5599)
[2020-08-26 18:26:45,108] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor787-Process, stopped)>
[2020-08-26 18:26:45,613] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5606)
[2020-08-26 18:26:46,118] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor788-Process, stopped)>
[2020-08-26 18:26:48,742] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5595)
[2020-08-26 18:26:49,127] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor786-Process, stopped)>
[2020-08-26 18:26:50,596] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5616)
[2020-08-26 18:26:51,151] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor789-Process, stopped)>
[2020-08-26 18:26:51,653] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5623)
[2020-08-26 18:26:52,161] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor790-Process, stopped)>
[2020-08-26 18:26:54,664] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5630)
[2020-08-26 18:26:55,179] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor791-Process, stopped)>
[2020-08-26 18:26:56,645] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5637)
[2020-08-26 18:26:57,194] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor792-Process, stopped)>
[2020-08-26 18:26:57,592] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5644)
[2020-08-26 18:26:58,207] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor793-Process, stopped)>
[2020-08-26 18:27:00,809] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5651)
[2020-08-26 18:27:01,599] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor794-Process, stopped)>
[2020-08-26 18:27:03,124] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5658)
[2020-08-26 18:27:03,615] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor795-Process, stopped)>
[2020-08-26 18:27:04,120] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5665)
[2020-08-26 18:27:04,627] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor796-Process, stopped)>
[2020-08-26 18:27:07,167] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5672)
[2020-08-26 18:27:07,642] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor797-Process, stopped)>
[2020-08-26 18:27:09,125] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5679)
[2020-08-26 18:27:09,654] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor798-Process, stopped)>
[2020-08-26 18:27:10,201] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5686)
[2020-08-26 18:27:10,664] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor799-Process, stopped)>
[2020-08-26 18:27:15,149] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5697)
[2020-08-26 18:27:15,706] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor801-Process, stopped)>
[2020-08-26 18:27:16,128] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5704)
[2020-08-26 18:27:16,717] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor802-Process, stopped)>
[2020-08-26 18:27:18,167] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5693)
[2020-08-26 18:27:18,725] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor800-Process, stopped)>
[2020-08-26 18:27:21,200] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5714)
[2020-08-26 18:27:21,751] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor803-Process, stopped)>
[2020-08-26 18:27:22,221] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5721)
[2020-08-26 18:27:22,760] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor804-Process, stopped)>
[2020-08-26 18:27:24,192] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5728)
[2020-08-26 18:27:24,773] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor805-Process, stopped)>
[2020-08-26 18:27:27,249] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5735)
[2020-08-26 18:27:27,787] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor806-Process, stopped)>
[2020-08-26 18:27:28,246] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5742)
[2020-08-26 18:27:28,798] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor807-Process, stopped)>
[2020-08-26 18:27:30,318] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5749)
[2020-08-26 18:27:30,810] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor808-Process, stopped)>
[2020-08-26 18:27:33,747] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5756)
[2020-08-26 18:27:34,260] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor809-Process, stopped)>
[2020-08-26 18:27:34,670] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5763)
[2020-08-26 18:27:35,271] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor810-Process, stopped)>
[2020-08-26 18:27:36,765] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5770)
[2020-08-26 18:27:37,286] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor811-Process, stopped)>
[2020-08-26 18:27:39,802] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5777)
[2020-08-26 18:27:40,304] {scheduler_job.py:268} DEBUG - Waiting for <Process(DagFileProcessor812-Process, stopped)>
[2020-08-26 18:27:45,820] {settings.py:278} DEBUG - Disposing DB connection pool (PID 5784)
[2020-08-26 18:42:04,209] {kubernetes_executor.py:885} WARNING - HTTPError when attempting to run task, re-queueing. Exception: HTTPSConnectionPool(host='10.2.0.1', port=443
): Read timed out. (read timeout=None)
[2020-08-26 18:42:04,211] {scheduler_job.py:1450} DEBUG - Heartbeating the scheduler
[2020-08-26 18:42:04,352] {base_job.py:200} DEBUG - [heartbeat]
[2020-08-26 18:42:04,353] {scheduler_job.py:1459} DEBUG - Ran scheduling loop in 938.71 seconds
[2020-08-26 18:42:04,353] {scheduler_job.py:1462} DEBUG - Sleeping for 1.00 seconds
[2020-08-26 18:42:05,354] {scheduler_job.py:1425} DEBUG - Starting Loop...
[2020-08-26 18:42:05,355] {scheduler_job.py:1436} DEBUG - Harvesting DAG parsing results
[2020-08-26 18:42:05,355] {dag_processing.py:648} DEBUG - Received message of type SimpleDag
[2020-08-26 18:42:05,355] {dag_processing.py:648} DEBUG - Received message of type DagParsingStat
[2020-08-26 18:42:05,356] {dag_processing.py:648} DEBUG - Received message of type DagParsingStat
```
How can I configure Airflow to emit logs from imported packages? I would like to check `urllib3` and `http.client` logs in order to understand the problem. Airflow scheduler logs show only Airflow codebase logs.
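For reference, one way to raise these loggers' verbosity using only the standard `logging` module (a sketch — where this snippet should live inside an Airflow deployment, e.g. a custom logging config, is an assumption on my part, not something stated here):

```python
import http.client
import logging

# Raise the verbosity of the third-party logger of interest.
# "urllib3" is the logger name that library uses internally.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

# http.client predates the logging module and prints its debug
# output to stdout; it is enabled on the connection class instead:
http.client.HTTPConnection.debuglevel = 1
```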
| https://github.com/apache/airflow/issues/10636 | https://github.com/apache/airflow/pull/11406 | 32f2a458198f50b85075d72a25d7de8a55109e44 | da565c9019c72e5c2646741e3b73f6c03cb3b485 | 2020-08-28T18:21:52Z | python | 2020-10-12T15:19:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,620 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | Reattach ECS Task when Airflow restarts | **Description**
In similar fashion to https://github.com/apache/airflow/pull/4083, it would be helpful for Airflow to reattach itself to the ECS Task rather than starting another instance. However, instead of making this the default behavior, it would be better to use a `reattach` flag.
**Use case / motivation**
Allow Airflow the option to reattach to an existing ECS task when a restart happens, which would avoid having "rogue" tasks. | https://github.com/apache/airflow/issues/10620 | https://github.com/apache/airflow/pull/10643 | e4c239fc98d4b13608b0bbb55c503b4563249300 | 0df60b773671ecf8d4e5f582ac2be200cf2a2edd | 2020-08-28T02:19:18Z | python | 2020-10-23T07:10:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,611 | ["airflow/www/views.py"] | Graph View shows other relations than in DAG | **Apache Airflow version**:
master
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**: breeze
**What happened**:
This DAG
```python
from airflow import models
from airflow.operators.dummy_operator import DummyOperator
from airflow.utils.dates import days_ago
with models.DAG("test", start_date=days_ago(1), schedule_interval=None,) as dag:
t1 = DummyOperator(task_id="t1")
t2 = DummyOperator(task_id="t2")
t1 >> t2
```
is rendering like that:
<img width="1374" alt="Screenshot 2020-08-27 at 19 59 41" src="https://user-images.githubusercontent.com/9528307/91478403-11d7fb00-e8a0-11ea-91d0-d7d578bcb5a2.png">
**What you expected to happen**:
I expect to see the same relations as defined in the DAG file.
**How to reproduce it**:
Render the example DAG from above.
**Anything else we need to know**:
I'm surprised by this bug.
| https://github.com/apache/airflow/issues/10611 | https://github.com/apache/airflow/pull/10612 | 775c22091e61e605f9572caabe160baa237cfbbd | 479d6220b7d0c93d5ad6a7d53d875e777287342b | 2020-08-27T18:02:43Z | python | 2020-08-27T19:14:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,605 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/xcom_sidecar.py", "docs/apache-airflow-providers-cncf-kubernetes/connections/kubernetes.rst", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/hooks/test_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | Use private docker repository with K8S operator and XCOM sidecar container | **Use private docker repository with K8S operator and XCOM sidecar container**
An extra parameter to KubernetesPodOperator, `docker_repository`, would allow specifying the repository where the sidecar container image is located.
**My company forces docker proxy usage for K8S**
I need to use my company's docker repository; images that are not proxied by the company docker repository are not allowed
| https://github.com/apache/airflow/issues/10605 | https://github.com/apache/airflow/pull/26766 | 409a4de858385c14d0ea4f32b8c4ad1fcfb9d130 | aefadb8c5b9272613d5806b054a1b46edf29d82e | 2020-08-27T15:35:58Z | python | 2022-11-09T06:16:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,586 | ["airflow/kubernetes/pod_launcher.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/kubernetes/test_pod_launcher.py"] | KubernetesPodOperator truncates logs | **Apache Airflow version**: 1.10.10
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.15.11
KubernetesPodOperator truncates logs when the container produces more than 10 lines of logs before the `read_pod_logs` function executes. Is there any reason to make 10 the default value for the `tail_lines` argument?
```python
def read_pod_logs(self, pod: V1Pod, tail_lines: int = 10):
"""Reads log from the POD"""
try:
return self._client.read_namespaced_pod_log(
name=pod.metadata.name,
namespace=pod.metadata.namespace,
container='base',
follow=True,
tail_lines=tail_lines,
_preload_content=False
)
except BaseHTTPError as e:
raise AirflowException(
'There was an error reading the kubernetes API: {}'.format(e)
)
``` | https://github.com/apache/airflow/issues/10586 | https://github.com/apache/airflow/pull/11325 | 6fe020e105531dd5a7097d8875eac0f317045298 | b7404b079ab57b6493d8ddd319bccdb40ff3ddc5 | 2020-08-26T16:43:38Z | python | 2020-10-09T22:59:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,575 | ["airflow/providers/amazon/aws/hooks/athena.py", "airflow/providers/amazon/aws/operators/athena.py"] | Confusing parameter | https://github.com/apache/airflow/blob/3a349624a20d3432dc75e337d6ffb1109a50e451/airflow/providers/amazon/aws/operators/athena.py#L49
What is the unit of measurement of time used here? | https://github.com/apache/airflow/issues/10575 | https://github.com/apache/airflow/pull/10580 | 46ac09d5c9b9f6e36cce0a1d3812f483ed7201eb | 8349061f9cb01a92c87edd349cc844c4053851e8 | 2020-08-26T08:44:54Z | python | 2020-08-26T17:57:32Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,555 | ["BREEZE.rst", "Dockerfile", "Dockerfile.ci", "IMAGES.rst", "breeze", "breeze-complete", "docs/production-deployment.rst", "scripts/ci/libraries/_build_images.sh", "scripts/ci/libraries/_initialization.sh"] | Allow installation of apt and other packages from different servers | **Description**
By default we are installing apt deps and PyPI deps from different repositories, but there should be an option (via build-arg) to install them from elsewhere.
**Use case / motivation**
Corporate customers often use mirrors of registries to install packages and firewall outgoing connections. We should be able to support such scenarios.
| https://github.com/apache/airflow/issues/10555 | https://github.com/apache/airflow/pull/11176 | 17c810ec36a61ca2e285ccf44de27a598cca15f5 | ebd71508627e68f6c35f1aff2d03b4569de80f4b | 2020-08-25T18:24:07Z | python | 2020-09-29T13:30:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,553 | ["airflow/sensors/sql_sensor.py", "tests/sensors/test_sql_sensor.py"] | Exception logging success function instead of failure | https://github.com/apache/airflow/blob/fdd9b6f65b608c516b8a062b058972d9a45ec9e3/airflow/sensors/sql_sensor.py#L97
| https://github.com/apache/airflow/issues/10553 | https://github.com/apache/airflow/pull/12057 | 79836bb92c26abf631a67c986050fee41a9d99fd | cadae496b385b65f8a48a55c0603479669966703 | 2020-08-25T18:09:58Z | python | 2020-11-04T19:23:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,549 | ["airflow/www/extensions/init_views.py", "airflow/www/static/js/circles.js", "airflow/www/templates/airflow/not_found.html", "airflow/www/views.py", "airflow/www/webpack.config.js"] | option to disable "lots of circles" in error page | **Description**
The "lots of circles" error page is very rough via Remote Desktop connections. In the current global pandemic many people are working remotely via already constrained connections. Needless redraws caused by the highly animated circles can cause frustrating slowdowns and sometimes lost connections.
**Use case / motivation**
It should be trivially simple to disable the animated portion of the error page and instead use a standard error page. Ideally this would be something easily achievable via configuration options and exposed in the Helm chart.
**Related Issues**
N/A
| https://github.com/apache/airflow/issues/10549 | https://github.com/apache/airflow/pull/17501 | 7b4ce7b73746466133a9c93e3a68bee1e0f7dd27 | 2092988c68030b91c79a9631f0482ab01abdba4d | 2020-08-25T13:25:33Z | python | 2021-08-13T00:54:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,523 | ["docs/apache-airflow/kubernetes.rst", "docs/apache-airflow/production-deployment.rst"] | Host Airflow-managed Helm repo | **Description**
@mik-laj Hi 👋 I was just coming to this repo to see if you're interested in help getting the Helm chart repo set up to replace the chart in `stable`.
Stable repo is nearing the end of its deprecation period, and I'm glad to see a version of the Airflow chart is already here. See https://github.com/helm/charts/issues/21103 for the meta issue tracking moving all the `stable` repo charts to new homes.
### To-do
- [ ] Decide on hosting options below (self-host/leverage Bitnami)
- [ ] If self-host, set up CI/CD for chart-testing and chart-releasing
- [ ] if self-host, list in Helm Hub (http://hub.helm.sh/) and Artifact Hub (https://artifacthub.io/)
**Use case / motivation**
### Set up Helm repo hosting for your chart
Set up a Helm repo, either as a separate git repo in your org, or keeping the same setup you have now. We have created Helm chart repo actions for chart testing (CI) and releasing chart packages as your own GitHub-hosted Helm repo (CD).
#### Self-hosted options:
1. If we either move the chart to a separate git repo in the artifacthub gh org, or even move the hub github pages setting to a branch other than the main one, we can use the [@helm/chart-releaser-action](https://github.com/helm/chart-releaser-action) GitHub Action to automate the helm repo.
2. If we keep structure as-is, we can still use the [helm/chart-releaser](https://github.com/helm/chart-releaser) project, just with a custom script.
For either option we can also use the [@helm/chart-testing-action](https://github.com/helm/chart-testing-action) to wrap the chart-testing project @mattfarina mentioned above. Here's a demo repo to see how they work together: https://github.com/helm/charts-repo-actions-demo
Whichever option you decide, I'll make a PR if it helps.
If you do decide to host your own Helm repo, you will also want to list it in Helm Hub (http://hub.helm.sh/) and Artifact Hub (https://artifacthub.io/).
#### Alternatively leverage existing Bitnami Helm repo
There is also a version of the chart maintained by Bitnami, who have been very involved in the `stable` repo for years: https://github.com/bitnami/charts/tree/master/bitnami/airflow. You could instead decide to leverage that chart as the canonical source, and not host your own. It is also fine to have multiple instances of a chart to install the same app.
**Related Issues**
- https://github.com/helm/charts/issues/21103
- https://github.com/apache/airflow/issues/10486
- https://github.com/apache/airflow/issues/10379 | https://github.com/apache/airflow/issues/10523 | https://github.com/apache/airflow/pull/16014 | ce358b21533eeb7a237e6b0833872bf2daab7e30 | 74821c8d999fad129b905a8698df7c897941e069 | 2020-08-24T19:57:06Z | python | 2021-05-23T19:10:51Z |
closed | apache/airflow | https://github.com/apache/airflow | 10519 | ["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_views.py"] | Trigger Dag requires a JSON conf but Dag Run view display a python dict |
**Apache Airflow version**: 1.10.11
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Windows 10 WSL (Ubuntu18.04.4)
- **Kernel** (e.g. `uname -a`): Linux DESKTOP-8IVSCHM 4.4.0-18362-Microsoft #836-Microsoft Mon May 05 16:04:00 PST 2020 x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
In the __Trigger Dag__ view for a specific Dag, the view asks for a JSON-formatted object as input.
In the __Dag Runs__ view (list), it shows a Python-formatted object in the __Conf__ column.
Although JSON and Python formatting are quite similar, they differ with respect to string quotation marks: JSON uses double quotes (") and Python uses single quotes (').
This makes it annoying to copy a previously used config to a new trigger.
**What you expected to happen**:
I would expect a consistent read/write of Dag Run configuration.
In particular, require JSON in the Trigger Dag view, and display JSON in the DAG Runs view.
**How to reproduce it**:
1. Trigger a DAG manually, passing a JSON dict `{"test": "this is a test"}`.
2. Go to the Dag Runs view; it shows as `{'test': 'this is a test'}`
**Anything else we need to know**:
This is not an application-breaking issue, more a quality-of-life/usability issue.
I imagine it would be a minor change and could be tagged as a good first issue.
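A minimal sketch (plain Python, no Airflow involved) showing why the round trip fails:

```python
import json

conf = {"test": "this is a test"}

# The Trigger Dag form expects JSON (double quotes):
trigger_input = json.dumps(conf)   # '{"test": "this is a test"}'

# The Dag Runs list currently renders the Python repr (single quotes):
displayed = repr(conf)             # "{'test': 'this is a test'}"

# The displayed form is not valid JSON, so it cannot be pasted back:
try:
    json.loads(displayed)
except json.JSONDecodeError:
    print("displayed conf is not valid JSON")
```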
| https://github.com/apache/airflow/issues/10519 | https://github.com/apache/airflow/pull/10644 | 596bc1337988f9377571295ddb748ef8703c19c0 | e6a0a5374dabc431542113633148445e4c5159b9 | 2020-08-24T18:22:17Z | python | 2020-08-31T13:31:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 10516 | ["Dockerfile", "Dockerfile.ci", "IMAGES.rst"] | pkg_resources.DistributionNotFound: The 'apache-airflow==2.0.0.dev0' distribution was not found and is required by the application |
**Apache Airflow version**:
```
apache-airflow==2.0.0.dev0
master
```
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
Mac OS 10.15.3
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Install Airflow:
```
(airflowenv) ➜ airflow git:(master) ✗ python3.8 setup.py install
running install
running bdist_egg
running egg_info
writing apache_airflow.egg-info/PKG-INFO
writing dependency_links to apache_airflow.egg-info/dependency_links.txt
writing entry points to apache_airflow.egg-info/entry_points.txt
writing requirements to apache_airflow.egg-info/requires.txt
writing top-level names to apache_airflow.egg-info/top_level.txt
reading manifest file 'apache_airflow.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '__pycache__' found anywhere in distribution
warning: no files found matching 'airflow/providers/cncf/kubernetes/example_dags/example_spark_kubernetes_operator_spark_pi.yaml'
writing manifest file 'apache_airflow.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.9-x86_64/egg
running install_lib
running build_py
copying kubernetes_tests/test_kubernetes_pod_operator.py -> build/lib/kubernetes_tests
copying airflow/settings.py -> build/lib/airflow
copying airflow/models/dagrun.py -> build/lib/airflow/models
copying airflow/models/baseoperator.py -> build/lib/airflow/models
copying airflow/jobs/scheduler_job.py -> build/lib/airflow/jobs
......
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/extensions/init_views.py to init_views.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/widgets.py to widgets.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/forms.py to forms.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/utils.py to utils.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/gunicorn_config.py to gunicorn_config.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/app.py to app.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/api/experimental/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/api/experimental/endpoints.py to endpoints.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/api/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/views.py to views.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/www/decorators.py to decorators.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/dependencies_states.py to dependencies_states.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/dep_context.py to dep_context.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/dependencies_deps.py to dependencies_deps.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/prev_dagrun_dep.py to prev_dagrun_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/task_concurrency_dep.py to task_concurrency_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/dag_unpaused_dep.py to dag_unpaused_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/exec_date_after_start_date_dep.py to exec_date_after_start_date_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/dagrun_exists_dep.py to dagrun_exists_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/runnable_exec_date_dep.py to runnable_exec_date_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/dagrun_id_dep.py to dagrun_id_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/base_ti_dep.py to base_ti_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/pool_slots_available_dep.py to pool_slots_available_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/trigger_rule_dep.py to trigger_rule_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/not_previously_skipped_dep.py to not_previously_skipped_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/task_not_running_dep.py to task_not_running_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/not_in_retry_period_dep.py to not_in_retry_period_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/dag_ti_slots_available_dep.py to dag_ti_slots_available_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/valid_state_dep.py to valid_state_dep.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/ti_deps/deps/ready_to_reschedule.py to ready_to_reschedule.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/macros/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/macros/hive.py to hive.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_latest_only.py to example_latest_only.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_complex.py to example_complex.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_python_operator.py to example_python_operator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/test_utils.py to test_utils.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_external_task_marker_dag.py to example_external_task_marker_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_bash_operator.py to example_bash_operator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_short_circuit_operator.py to example_short_circuit_operator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_branch_operator.py to example_branch_operator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/tutorial.py to tutorial.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_kubernetes_executor_config.py to example_kubernetes_executor_config.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_passing_params_via_test_command.py to example_passing_params_via_test_command.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_latest_only_with_trigger.py to example_latest_only_with_trigger.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/subdags/subdag.py to subdag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/subdags/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_xcom.py to example_xcom.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_xcomargs.py to example_xcomargs.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/libs/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/libs/helper.py to helper.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_skip_dag.py to example_skip_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_trigger_target_dag.py to example_trigger_target_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_kubernetes_executor.py to example_kubernetes_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_branch_python_dop_operator_3.py to example_branch_python_dop_operator_3.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_nested_branch_dag.py to example_nested_branch_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_subdag_operator.py to example_subdag_operator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/example_dags/example_trigger_controller_dag.py to example_trigger_controller_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/celery_executor.py to celery_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/dask_executor.py to dask_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/executor_loader.py to executor_loader.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/kubernetes_executor.py to kubernetes_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/sequential_executor.py to sequential_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/local_executor.py to local_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/base_executor.py to base_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/executors/debug_executor.py to debug_executor.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/lineage/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/lineage/entities.py to entities.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/mypy/plugin/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/mypy/plugin/decorators.py to decorators.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/mypy/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/druid_hook.py to druid_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/base_hook.py to base_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/http_hook.py to http_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/presto_hook.py to presto_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/dbapi_hook.py to dbapi_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/slack_hook.py to slack_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/samba_hook.py to samba_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/hive_hooks.py to hive_hooks.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/filesystem.py to filesystem.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/oracle_hook.py to oracle_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/jdbc_hook.py to jdbc_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/mysql_hook.py to mysql_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/docker_hook.py to docker_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/postgres_hook.py to postgres_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/webhdfs_hook.py to webhdfs_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/pig_hook.py to pig_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/zendesk_hook.py to zendesk_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/S3_hook.py to S3_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/hdfs_hook.py to hdfs_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/mssql_hook.py to mssql_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/hooks/sqlite_hook.py to sqlite_hook.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/stats.py to stats.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/task/task_runner/standard_task_runner.py to standard_task_runner.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/task/task_runner/base_task_runner.py to base_task_runner.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/task/task_runner/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/task/task_runner/cgroup_task_runner.py to cgroup_task_runner.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/task/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/settings.py to settings.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/backend/kerberos_auth.py to kerberos_auth.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/backend/deny_all.py to deny_all.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/backend/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/backend/default.py to default.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/auth/backend/basic_auth.py to basic_auth.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/mark_tasks.py to mark_tasks.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_dag_run_state.py to get_dag_run_state.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/trigger_dag.py to trigger_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_lineage.py to get_lineage.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_dag_runs.py to get_dag_runs.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_task.py to get_task.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_task_instance.py to get_task_instance.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/pool.py to pool.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/delete_dag.py to delete_dag.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/experimental/get_code.py to get_code.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/common/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/client/json_client.py to json_client.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/client/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/client/local_client.py to local_client.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/api/client/api_client.py to api_client.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/exceptions.py to exceptions.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/jobs/local_task_job.py to local_task_job.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/jobs/scheduler_job.py to scheduler_job.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/jobs/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/jobs/backfill_job.py to backfill_job.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/jobs/base_job.py to base_job.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/sentry.py to sentry.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/refresh_config.py to refresh_config.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/worker_configuration.py to worker_configuration.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/pod_generator.py to pod_generator.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/k8s_model.py to k8s_model.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/volume_mount.py to volume_mount.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/pod.py to pod.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/__init__.py to __init__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/kube_client.py to kube_client.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/volume.py to volume.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/secret.py to secret.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/pod_runtime_info_env.py to pod_runtime_info_env.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/kubernetes/pod_launcher.py to pod_launcher.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/__main__.py to __main__.cpython-38.pyc
byte-compiling build/bdist.macosx-10.9-x86_64/egg/airflow/decorators.py to decorators.cpython-38.pyc
creating build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/PKG-INFO -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/SOURCES.txt -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/dependency_links.txt -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/entry_points.txt -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/not-zip-safe -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/requires.txt -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
copying apache_airflow.egg-info/top_level.txt -> build/bdist.macosx-10.9-x86_64/egg/EGG-INFO
creating 'dist/apache_airflow-2.0.0.dev0-py3.8.egg' and adding 'build/bdist.macosx-10.9-x86_64/egg' to it
removing 'build/bdist.macosx-10.9-x86_64/egg' (and everything under it)
Processing apache_airflow-2.0.0.dev0-py3.8.egg
creating /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages/apache_airflow-2.0.0.dev0-py3.8.egg
Extracting apache_airflow-2.0.0.dev0-py3.8.egg to /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Adding apache-airflow 2.0.0.dev0 to easy-install.pth file
Installing airflow script to /Users/jax/xx/pythonenv/airflowenv/bin
Installed /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages/apache_airflow-2.0.0.dev0-py3.8.egg
Processing dependencies for apache-airflow==2.0.0.dev0
Searching for Werkzeug==0.16.1
Best match: Werkzeug 0.16.1
Adding Werkzeug 0.16.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for unicodecsv==0.14.1
Best match: unicodecsv 0.14.1
Adding unicodecsv 0.14.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for tzlocal==1.5.1
Best match: tzlocal 1.5.1
Adding tzlocal 1.5.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for thrift==0.13.0
Best match: thrift 0.13.0
Adding thrift 0.13.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for tenacity==4.12.0
Best match: tenacity 4.12.0
Adding tenacity 4.12.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for tabulate==0.8.7
Best match: tabulate 0.8.7
Adding tabulate 0.8.7 to easy-install.pth file
Installing tabulate script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for SQLAlchemy-JSONField==0.9.0
Best match: SQLAlchemy-JSONField 0.9.0
Adding SQLAlchemy-JSONField 0.9.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for SQLAlchemy==1.3.18
Best match: SQLAlchemy 1.3.18
Adding SQLAlchemy 1.3.18 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for setproctitle==1.1.10
Best match: setproctitle 1.1.10
Adding setproctitle 1.1.10 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for requests==2.24.0
Best match: requests 2.24.0
Adding requests 2.24.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python-slugify==4.0.1
Best match: python-slugify 4.0.1
Adding python-slugify 4.0.1 to easy-install.pth file
Installing slugify script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python-nvd3==0.15.0
Best match: python-nvd3 0.15.0
Adding python-nvd3 0.15.0 to easy-install.pth file
Installing nvd3 script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python-dateutil==2.8.1
Best match: python-dateutil 2.8.1
Adding python-dateutil 2.8.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python-daemon==2.2.4
Best match: python-daemon 2.2.4
Adding python-daemon 2.2.4 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Pygments==2.6.1
Best match: Pygments 2.6.1
Adding Pygments 2.6.1 to easy-install.pth file
Installing pygmentize script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for psutil==5.7.0
Best match: psutil 5.7.0
Adding psutil 5.7.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pendulum==2.1.1
Best match: pendulum 2.1.1
Adding pendulum 2.1.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pandas==1.0.5
Best match: pandas 1.0.5
Adding pandas 1.0.5 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for marshmallow-oneofschema==2.0.1
Best match: marshmallow-oneofschema 2.0.1
Adding marshmallow-oneofschema 2.0.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for MarkupSafe==1.1.1
Best match: MarkupSafe 1.1.1
Adding MarkupSafe 1.1.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Markdown==2.6.11
Best match: Markdown 2.6.11
Adding Markdown 2.6.11 to easy-install.pth file
Installing markdown_py script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for lockfile==0.12.2
Best match: lockfile 0.12.2
Adding lockfile 0.12.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for lazy-object-proxy==1.5.0
Best match: lazy-object-proxy 1.5.0
Adding lazy-object-proxy 1.5.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for jsonschema==3.2.0
Best match: jsonschema 3.2.0
Adding jsonschema 3.2.0 to easy-install.pth file
Installing jsonschema script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for json-merge-patch==0.2
Best match: json-merge-patch 0.2
Adding json-merge-patch 0.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Jinja2==2.10.3
Best match: Jinja2 2.10.3
Adding Jinja2 2.10.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for iso8601==0.1.12
Best match: iso8601 0.1.12
Adding iso8601 0.1.12 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for gunicorn==19.10.0
Best match: gunicorn 19.10.0
Adding gunicorn 19.10.0 to easy-install.pth file
Installing gunicorn script to /Users/jax/xx/pythonenv/airflowenv/bin
Installing gunicorn_paster script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for graphviz==0.14.1
Best match: graphviz 0.14.1
Adding graphviz 0.14.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for funcsigs==1.0.2
Best match: funcsigs 1.0.2
Adding funcsigs 1.0.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-WTF==0.14.3
Best match: Flask-WTF 0.14.3
Adding Flask-WTF 0.14.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for flask-swagger==0.2.13
Best match: flask-swagger 0.2.13
Adding flask-swagger 0.2.13 to easy-install.pth file
Installing flaskswagger script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-Login==0.4.1
Best match: Flask-Login 0.4.1
Adding Flask-Login 0.4.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-Caching==1.3.3
Best match: Flask-Caching 1.3.3
Adding Flask-Caching 1.3.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-AppBuilder==3.0.1
Best match: Flask-AppBuilder 3.0.1
Adding Flask-AppBuilder 3.0.1 to easy-install.pth file
Installing fabmanager script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask==1.1.2
Best match: Flask 1.1.2
Adding Flask 1.1.2 to easy-install.pth file
Installing flask script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for dill==0.3.2
Best match: dill 0.3.2
Adding dill 0.3.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for cryptography==2.9.2
Best match: cryptography 2.9.2
Adding cryptography 2.9.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for croniter==0.3.34
Best match: croniter 0.3.34
Adding croniter 0.3.34 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for connexion==2.7.0
Best match: connexion 2.7.0
Adding connexion 2.7.0 to easy-install.pth file
Installing connexion script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for colorlog==4.0.2
Best match: colorlog 4.0.2
Adding colorlog 4.0.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for cattrs==1.0.0
Best match: cattrs 1.0.0
Adding cattrs 1.0.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for cached-property==1.5.1
Best match: cached-property 1.5.1
Adding cached-property 1.5.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for attrs==19.3.0
Best match: attrs 19.3.0
Adding attrs 19.3.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for argcomplete==1.12.0
Best match: argcomplete 1.12.0
Adding argcomplete 1.12.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for alembic==1.4.2
Best match: alembic 1.4.2
Adding alembic 1.4.2 to easy-install.pth file
Installing alembic script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pytz==2020.1
Best match: pytz 2020.1
Adding pytz 2020.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for six==1.15.0
Best match: six 1.15.0
Adding six 1.15.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for idna==2.10
Best match: idna 2.10
Adding idna 2.10 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for urllib3==1.25.9
Best match: urllib3 1.25.9
Adding urllib3 1.25.9 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for chardet==3.0.4
Best match: chardet 3.0.4
Adding chardet 3.0.4 to easy-install.pth file
Installing chardetect script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for certifi==2020.6.20
Best match: certifi 2020.6.20
Adding certifi 2020.6.20 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for text-unidecode==1.3
Best match: text-unidecode 1.3
Adding text-unidecode 1.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for docutils==0.16
Best match: docutils 0.16
Adding docutils 0.16 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for setuptools==49.6.0
Best match: setuptools 49.6.0
Adding setuptools 49.6.0 to easy-install.pth file
Installing easy_install script to /Users/jax/xx/pythonenv/airflowenv/bin
Installing easy_install-3.8 script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pytzdata==2020.1
Best match: pytzdata 2020.1
Adding pytzdata 2020.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for numpy==1.19.0
Best match: numpy 1.19.0
Adding numpy 1.19.0 to easy-install.pth file
Installing f2py script to /Users/jax/xx/pythonenv/airflowenv/bin
Installing f2py3 script to /Users/jax/xx/pythonenv/airflowenv/bin
Installing f2py3.8 script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for marshmallow==3.7.0
Best match: marshmallow 3.7.0
Adding marshmallow 3.7.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pyrsistent==0.16.0
Best match: pyrsistent 0.16.0
Adding pyrsistent 0.16.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for itsdangerous==1.1.0
Best match: itsdangerous 1.1.0
Adding itsdangerous 1.1.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for WTForms==2.3.1
Best match: WTForms 2.3.1
Adding WTForms 2.3.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for PyYAML==5.3.1
Best match: PyYAML 5.3.1
Adding PyYAML 5.3.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for apispec==3.3.1
Best match: apispec 3.3.1
Adding apispec 3.3.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for click==6.7
Best match: click 6.7
Adding click 6.7 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-OpenID==1.2.5
Best match: Flask-OpenID 1.2.5
Adding Flask-OpenID 1.2.5 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for colorama==0.4.3
Best match: colorama 0.4.3
Adding colorama 0.4.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for PyJWT==1.7.1
Best match: PyJWT 1.7.1
Adding PyJWT 1.7.1 to easy-install.pth file
Installing pyjwt script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for SQLAlchemy-Utils==0.36.8
Best match: SQLAlchemy-Utils 0.36.8
Adding SQLAlchemy-Utils 0.36.8 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for email-validator==1.1.1
Best match: email-validator 1.1.1
Adding email-validator 1.1.1 to easy-install.pth file
Installing email_validator script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-SQLAlchemy==2.4.4
Best match: Flask-SQLAlchemy 2.4.4
Adding Flask-SQLAlchemy 2.4.4 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-JWT-Extended==3.24.1
Best match: Flask-JWT-Extended 3.24.1
Adding Flask-JWT-Extended 3.24.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for prison==0.1.3
Best match: prison 0.1.3
Adding prison 0.1.3 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for marshmallow-sqlalchemy==0.23.1
Best match: marshmallow-sqlalchemy 0.23.1
Adding marshmallow-sqlalchemy 0.23.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Flask-Babel==1.0.0
Best match: Flask-Babel 1.0.0
Adding Flask-Babel 1.0.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for marshmallow-enum==1.5.1
Best match: marshmallow-enum 1.5.1
Adding marshmallow-enum 1.5.1 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for cffi==1.14.0
Best match: cffi 1.14.0
Adding cffi 1.14.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for natsort==7.0.1
Best match: natsort 7.0.1
Adding natsort 7.0.1 to easy-install.pth file
Installing natsort script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for swagger-ui-bundle==0.0.6
Best match: swagger-ui-bundle 0.0.6
Adding swagger-ui-bundle 0.0.6 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for openapi-spec-validator==0.2.8
Best match: openapi-spec-validator 0.2.8
Adding openapi-spec-validator 0.2.8 to easy-install.pth file
Installing openapi-spec-validator script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for clickclick==1.2.2
Best match: clickclick 1.2.2
Adding clickclick 1.2.2 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for inflection==0.5.0
Best match: inflection 0.5.0
Adding inflection 0.5.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python-editor==1.0.4
Best match: python-editor 1.0.4
Adding python-editor 1.0.4 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Mako==1.1.3
Best match: Mako 1.1.3
Adding Mako 1.1.3 to easy-install.pth file
Installing mako-render script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for python3-openid==3.2.0
Best match: python3-openid 3.2.0
Adding python3-openid 3.2.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for dnspython==1.16.0
Best match: dnspython 1.16.0
Adding dnspython 1.16.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for Babel==2.8.0
Best match: Babel 2.8.0
Adding Babel 2.8.0 to easy-install.pth file
Installing pybabel script to /Users/jax/xx/pythonenv/airflowenv/bin
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for pycparser==2.20
Best match: pycparser 2.20
Adding pycparser 2.20 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Searching for defusedxml==0.6.0
Best match: defusedxml 0.6.0
Adding defusedxml 0.6.0 to easy-install.pth file
Using /Users/jax/xx/pythonenv/airflowenv/lib/python3.8/site-packages
Finished processing dependencies for apache-airflow==2.0.0.dev0
(airflowenv) β airflow git:(master) β
(airflowenv) β airflow git:(master) β
(airflowenv) β airflow git:(master) β
```
**What you expected to happen**:
```
(airflowenv) β airflow git:(master) β
(airflowenv) β airflow git:(master) β
(airflowenv) β airflow git:(master) β airflow --help
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 6, in <module>
from pkg_resources import load_entry_point
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3262, in <module>
def _initialize_master_working_set():
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3245, in _call_aside
f(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3274, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 584, in _build_master
ws.require(__requires__)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 901, in require
needed = self.resolve(parse_requirements(requirements))
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pkg_resources/__init__.py", line 787, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'apache-airflow==2.0.0.dev0' distribution was not found and is required by the application
(airflowenv) β airflow git:(master) β
(airflowenv) β airflow git:(master) β
```
**How to reproduce it**:
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/10516 | https://github.com/apache/airflow/pull/10542 | 5e822634de94ca516818cababc592d38dc882d46 | 018ae0ed95a5ff1cdb787fccf2c7e957580ab968 | 2020-08-24T16:53:35Z | python | 2020-08-25T13:45:59Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,474 | ["airflow/providers/google/cloud/hooks/bigtable.py", "airflow/providers/google/cloud/operators/bigtable.py", "tests/providers/google/cloud/hooks/test_bigtable.py", "tests/providers/google/cloud/operators/test_bigtable.py"] | Add support for creating multiple replicated clusters in Bigtable hook and operator | <!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
**Description**
From Cloud Bigtable [documentation](https://cloud.google.com/bigtable/docs/replication-overview#how-it-works)
> Cloud Bigtable supports up to 4 replicated clusters located in Google Cloud zones where Cloud Bigtable is available.
Currently, the Bigtable hook and operator only support creating a single replicated cluster. It would be better to add support for creating multiple replicated clusters.
**Use case / motivation**
| https://github.com/apache/airflow/issues/10474 | https://github.com/apache/airflow/pull/10475 | 3a53039fd10543f01d7d619bf3cbbf69f5896cbe | b0598b5351d2d027286e2333231b6c0c0704dba2 | 2020-08-22T13:39:23Z | python | 2020-08-24T09:44:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,471 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/codeql-cancel.yml"] | Current CI builds slightly different sources than the one in PR build | The current CI system builds a slightly different set of sources in GitHub than the one in PR directly. But possibly it is better. We can fix it with a workaround but I am not sure if we should.
Details:
1) When we currently build the image in the "Build Image" workflow, we use "scripts/ci" from master and the rest of the sources from the commit that is the HEAD of the PR.
2) However, the sources in the PR run by GitHub are slightly different. When there are no conflicts, GitHub actually performs a merge between master and the PR, and the sources in the PR build are those merged sources. When there is a conflict, the original sources are used instead.
This is not a big issue IMHO now, and I'd argue that it's better if we run the original sources, because the merged sources might be a source of confusion if someone tries to reproduce a build locally. To replicate the PR build, you have to rebase it locally on top of the latest master. But this is not at all clear - I just learned that this is happening while implementing the new CI, and it could have explained a number of "We do not know what happened in this CI" cases.
I can work around this (there's no API in GitHub to know the merge commit SHA, but I already use similar workarounds passing information via job names, waiting for the missing API from GitHub). But I am not sure if it is worth it.
For now it might cause problems similar to the ones in #10445 but a fix is coming in #10472 so that such failures will not happen but we will use the HEAD of PR as commit SHA still. Possibly also #10806
Another problem caused by this was https://github.com/apache/airflow/runs/1108600218?check_suite_focus=true, resulting from merging https://github.com/apache/airflow/pull/10898. In that PR new packages were added; the later PR https://github.com/apache/airflow/pull/10906, which was not rebased on top of it, resulted in Pylint being run on "merged" sources with an image that did not contain the "merged" dependencies.
@kaxil @feluelle @turbaszek @mik-laj @dimberman - WDYT.
BTW, we also have a safety net. The build with "merged" sources always happens after we merge the PR anyway - so in case a problem is hidden we will see it failing at the "push" event.
| https://github.com/apache/airflow/issues/10471 | https://github.com/apache/airflow/pull/11268 | 6dce7a6c26bfaf991aa24482afd7fee1e263d6c9 | a4478f5665688afb3e112357f55b90f9838f83ab | 2020-08-22T07:58:21Z | python | 2020-10-04T20:53:18Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,456 | ["airflow/kubernetes/pod_runtime_info_env.py", "kubernetes_tests/test_kubernetes_pod_operator.py"] | PodRuntimeInfoEnv is not working in 1.10.11 | **Apache Airflow version**: 1.10.11
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.17.0
**What happened**:
PodRuntimeInfoEnv object is not working in Airflow 1.10.11. It might be a result of some recent (past 1 year?) refactoring of the kubernetes code.
The exception thrown is this:
```
Invalid value for `field_path`, must not be `None`
```
**What you expected to happen**:
It should be possible to initialize it and produce the respective KubernetesPodOperator with it.
**How to reproduce it**:
Create a PodRuntimeInfoEnv object and try to plug it into a KubernetesPodOperator. The KubernetesPodOperator never gets submitted and a stacktrace pops up.
It happens 100% of the time. I know for sure that it worked perfectly fine on Airflow 1.10.9, so the regression might have been introduced in 1.10.10 or 1.10.11.
After some investigation it seem to bring me to this place - https://github.com/apache/airflow/blob/44d4ae809c1e3784ff95b6a5e95113c3412e56b3/airflow/kubernetes/pod_runtime_info_env.py#L50-L52
As you might see, field_path is the first argument. However, if I look at the official kubernetes-python code, the first argument in the `V1ObjectFieldSelector` constructor is api_version.
https://github.com/kubernetes-client/python/blob/master/kubernetes/client/models/v1_object_field_selector.py#L45
Since it's clearly not provided here, we end up with an exception related to the field_path value:
https://github.com/kubernetes-client/python/blob/master/kubernetes/client/models/v1_object_field_selector.py#L103
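The mismatch is easy to reproduce without a cluster. Below is a minimal stand-in class (mirroring only the k8s client's parameter order, not the real implementation) showing why a positional argument lands in the wrong slot:

```python
# Stand-in for kubernetes.client.models.V1ObjectFieldSelector: the first
# positional parameter is api_version; field_path comes second.
class V1ObjectFieldSelector:
    def __init__(self, api_version=None, field_path=None):
        if field_path is None:
            raise ValueError("Invalid value for `field_path`, must not be `None`")
        self.api_version = api_version
        self.field_path = field_path


# Passing the path positionally binds it to api_version, leaving field_path
# as None - which is exactly the error reported above.
try:
    V1ObjectFieldSelector("metadata.name")
except ValueError as err:
    print(err)

# Passing it as a keyword argument works as intended.
ok = V1ObjectFieldSelector(field_path="metadata.name")
print(ok.field_path)
```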
Possible solutions could be:
1. Pass a hardcoded api_version as well, this way ensuring that in case the Kubernetes API changes, it is remembered which version it complies with
```python
field_ref=k8s.V1ObjectFieldSelector(
api_version='v1',
field_path=self.field_path
)
```
2. Instead of using a positional arg, use a kwarg in V1ObjectFieldSelector
```python
field_ref=k8s.V1ObjectFieldSelector(
field_path=self.field_path
)
```
For people who might already be noticing issue right now the simplest approach would be inheritance that would overwrite or monkeypatch the `PodRuntimeInfoEnv` class (specifically its method `to_k8s_client_obj`) and then use with it one of above mentioned solutions. | https://github.com/apache/airflow/issues/10456 | https://github.com/apache/airflow/pull/10478 | 97749030888c7b4f41de725c07f7cb8a2148c3a6 | 47c6657ce012f6db147fdcce3ca5e77f46a9e491 | 2020-08-21T21:51:24Z | python | 2020-08-22T16:52:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,438 | ["scripts/ci/libraries/_initialization.sh", "scripts/ci/libraries/_verbosity.sh"] | breeze not working after recent commits | When I ran `./breeze`, there's no output. The script just exits with code 1 immediately.
I ran it with `bash -x` and saw it was looking for `HELM_BINARY` when it crashed. I'm not using `Kubernetes`, but I tried to `brew install helm` anyway. And then `./breeze` started doing something. It then crashed again complaining `kind` is not found. So I brew installed `kind` as well.
And then breeze still crashed saying this:
```
/Volumes/Workspace/airflow/scripts/ci/libraries/_md5sum.sh: line 86: FILES_FOR_REBUILD_CHECK[@]: unbound variable
```
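This particular error is a known bash pitfall rather than anything breeze-specific: under `set -u`, bash releases before 4.4 (including the 3.2 that macOS ships as `/bin/bash`) treat expansion of an empty array as an unbound variable. A sketch of the failure mode and a portable guard:

```shell
#!/usr/bin/env bash
# With `set -u`, "${arr[@]}" on an *empty* array errors out on bash < 4.4,
# which is what produces "FILES_FOR_REBUILD_CHECK[@]: unbound variable".
set -u

FILES_FOR_REBUILD_CHECK=()

# Portable guard: check the length (safe everywhere) before expanding.
if [[ ${#FILES_FOR_REBUILD_CHECK[@]} -gt 0 ]]; then
    printf '%s\n' "${FILES_FOR_REBUILD_CHECK[@]}"
else
    echo "array is empty - skipping expansion"
fi

printf 'count=%s\n' "${#FILES_FOR_REBUILD_CHECK[@]}"
```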
I checked out older commits and confirmed that one of these commits is the likely cause. Not sure which one.
```
88c7d2e526af4994066f65f830e2fa8edcbbce2e (HEAD -> master, upstream/master) Dataflow operators don't not always create a virtualenv
(#10373)
30f46175eef3748df0058b432ab7a31ac17bd514 Add architecture diagram for basic Airflow deployment (#10428)
c35a01037ad4bb03d39a0319d37bc36b08ccf766 Switch to released cancel-workflow-runs action (#10423)
dc27a2a310f1d56476444d78884255962b576c90 Fix failing breeze (#10424)
a8e28f1af799a3999507f0f5e877aa1679809594 Fix typo in KubernetesPodOperator (#10419)
2c3ce8e2c04567bfaa1a139c857da25dec9d78c5 Enable optimisation of image building. (#10422)
de7500de849f85e4e07daa48b154289ba4c132c3 CI Images are now pre-build and stored in registry (#10368)
5739ba28a7d8634ba3702d5fa39e5ef663f5bf7c Fix broken breeze script (#10418)
7fa813f25fa9f5bc48e79a141ae9139b7f9fb0b8 Unnecessary use of list comprehension (#10416)
f1716bc958b50b991e9ca2bc3fa84f02471fcf80 Use sys.exit() instead of exit() (#10414)
2db8bf3274557b436d5857c935101a571e701e58 Group logging & monitoring guides in one section (#10394)
e195c6a3d261ad44d7be4c8d1f788e86d55ea2f5 Make KubernetesExecutor recognize kubernetes_labels (#10412)
f76938c1713c3141687c604ee546fd90f991499c Make Kubernetes tests pass locally (#10407)
3d334fdd98b1dd0e49b981c9cca70570a1da124e BugFix: K8s Executor Multinamespace mode is evaluated to true by default (#10410)
882e1870d6c0e70480f5974b4abe96eb147e5d66 Remove run-ons from scheduler docs. (#10397)
8fcb93b29456da021a31f6a2ee5f350658a50553 Fixes optimisation where doc only change should build much faster (#10344)
e1e7f11917d5b190dc3f4a290e76cbff0cb91e31 Move docker-compose ci.yml to ga.yml as it is GITHUB_* only (#10405)
08fe5c46a16b89314013cd75f6c1401ebc73fa89 Constraint CI scripts are now separated out (#10404)
db446f267748c1c44229f03990b3aeb519c112f8 Replaced aliases for common tools with functions. (#10402)
e17985382c2f76462a98e06f8a66c32817e453bb Kubernetes image is extended rather than customized (#10399)
0b3ded7a55c8b64064cfa080f06c7421bc8bb497 Correct typo in best-practices.rst (#10401)
b06a705c72505b5340421a16bee0506cfb761acd Improve headings on docs/executor (#10396)
2bab38cf0cadd4e8a20c05ed738a38c3951487a4 Update celery.rst (#10400)
3bc37013f6efc5cf758821f06257a02cae6e2d52 Add back 'refresh_all' method in airflow/www/views.py (#10328)
77a635eb45099227d8d708bcc22c1e19ad13a451 When precommits are run, output is silenced (#10390)
c54d17e6dc34e92ede394746f00f4a908150fa63 Capitalize 'Python' properly in Concepts docs (#10398)
49ce908c05dcccd2e3375fc3afba6a1fb5e598a1 Moved description of page size limit to security/ (#10392)
541c47c99804bf09b5c775904e20580e48bb242f Add basic auth API auth backend (#10356)
8368f4949f06f93e99fbdd732f2b87ce94322861 Correct verb tense for re-running task doc. (#10371)
```
**Apache Airflow version**: master
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not using Kubernetes
**Environment**:
- **Cloud provider or hardware configuration**: NA
- **OS** (e.g. from /etc/os-release): macOS Mojave
- **Kernel** (e.g. `uname -a`): NA
- **Install tools**: NA
- **Others**: NA
**What happened**:
Run `./breeze` and it crashes with no output. After installing helm and kind, `./breeze` complains `FILES_FOR_REBUILD_CHECK[@]: unbound variable`.
**How to reproduce it**:
In a clean checkout of master, run this:
```
./breeze
``` | https://github.com/apache/airflow/issues/10438 | https://github.com/apache/airflow/pull/10440 | 27d08b76a2d171d716a1599157a8a60a121dbec6 | 52dec7b84f20f04bea8cec6bf14e3e540d67c2a8 | 2020-08-21T05:27:02Z | python | 2020-08-21T07:50:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,434 | ["airflow/www/templates/airflow/dags.html"] | Last Run links on home page UI not correct with RBAC UI | **Apache Airflow version**:
1.10.11
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-13T06:39:58Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **Cloud provider or hardware configuration**: Azure Kubernetes Service
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-web-65cb7d9cb8-qzcbv 4.15.0-1089-azure #99~16.04.1-Ubuntu SMP Fri Jun 5 15:30:32 UTC 2020 x86_64 GNU/Linux
- **Install tools**: Helm chart "stable/airflow"
- **Others**:
**What happened**:
The "Last Run" link on the home page with RBAC UI is not taking you to the graph of that last run (if it has a time that is NOT `00:00:00`), instead it takes you to a graph with no activity shown. It looks like it is because it is not URL-encoding the `execution_date` in the URL.
**What you expected to happen**:
Clicking a "Last Run" link when "Graph" is your default view it should take you to the graph of that last run, instead it a graph with no activity shown.
**How to reproduce it**:
Using the RBAC UI, click on a link in the "Last Run" column (one that has a time that is NOT `00:00:00`); it will take you to a graph view with no activity shown. The URL will end in a date similar to `execution_date=2020-08-19T05:00:00+00:00`, and if you change all the `:` to `%3A` and `+` to `%2B` it should take you to the correct last run.
**Anything else we need to know**:
The URL for the link currently has `execution_date=2020-08-19T05:00:00+00:00`, whereas if you go to the actual graph page and select that same DAG run the URL is instead `execution_date=2020-08-19T05%3A00%3A00%2B00%3A00`, which works.
If the link's url has the execution date URL-encoded then the link should work.
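The required encoding is exactly what Python's standard library produces; a quick sketch:

```python
from urllib.parse import quote

execution_date = "2020-08-19T05:00:00+00:00"

# safe="" forces the reserved characters ':' and '+' to be percent-encoded too.
encoded = quote(execution_date, safe="")
print(encoded)  # 2020-08-19T05%3A00%3A00%2B00%3A00
```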
| https://github.com/apache/airflow/issues/10434 | https://github.com/apache/airflow/pull/10595 | 2e56ee7b2283d9413cab6939ffbe241c154b39e2 | 900f15ab692aea34ecd247a8cd2179e1f1835324 | 2020-08-20T21:29:53Z | python | 2020-08-27T11:07:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,431 | ["docs/apache-airflow/logging-monitoring/metrics.rst"] | Metrics Documentation, dag_bag_size | In the airflow documentation metrics section there is a field called dag_bag_size and the description is just "DAG bag size", this is fairly unhelpful for trying to gain insight into the metrics, so would it be possible for this description to either be improved, or linked to a more accurate description?
In the docstring, the description of a dag bag is "a collection of dags, parsed out of a folder tree and has high level configuration settings", but it's unclear which dags are actually in this collection at runtime. For example, is this every dag displayed on the webserver UI, or is it just the dags that are turned on? (I'm fairly certain from monitoring that it's the latter)
closed | apache/airflow | https://github.com/apache/airflow | 10,429 | ["airflow/www/package.json", "airflow/www/yarn.lock"] | jquery dependency needs to be updated to 3.5.0 or newer | Currently you're requring jquery 3.4.0 or newer, 3.5.0 has a vulnerability against it.
[CVE-2020-11022](https://github.com/advisories/GHSA-gxr4-xjj5-5px2)
Change is needed to these two lines:
https://github.com/apache/airflow/blob/master/airflow/www/package.json#L70
https://github.com/apache/airflow/blob/master/airflow/www/package.json#L48 | https://github.com/apache/airflow/issues/10429 | https://github.com/apache/airflow/pull/16440 | 3f84d3d315da0939fd6dab4b46658877e06d4b1d | f18e4ba612f8ce19bcce7fce612e2409c4afd7ab | 2020-08-20T17:51:07Z | python | 2021-06-15T01:52:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,426 | ["airflow/providers/google/cloud/hooks/gcs.py", "airflow/providers/google/cloud/operators/gcs.py", "tests/providers/google/cloud/hooks/test_gcs.py", "tests/providers/google/cloud/operators/test_gcs.py"] | Multiple prefixes in GoogleCloudStorageListOperator and GoogleCloudStorageDeleteOperator | **Description**
Support passing multiple prefixes to `GoogleCloudStorageListOperator` and `GoogleCloudStorageDeleteOperator` operators.
**Use case / motivation**
I have this folder structure in GCS bucket.
```
+-- year={year}
| +-- month={month}
| +--day={day}
| +-- topic={topic1}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic2}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic3}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic4}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic5}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic6}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic7}
| +--day={day}
| +-- topic={topic1}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic2}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic3}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic4}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic5}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic6}
| +--file 1
| +--file 2
| +--file 3
| +-- topic={topic7}
| +--file 1
| +--file 2
| +--file 3
| ....
```
What I need to achieve is to delete one day of objects. For example, I need to delete the objects in `year=2020/month=08/day=19`. I can do that easily using `gsutil`, where you can delete them via a wildcard such as `year=2020/month=08/day=19/*`, but using the REST APIs you can't, even if you use a prefix. The reason is that there is no single prefix that matches all the objects inside a folder. I achieved that by using multiple prefixes and, for each prefix, listing the objects. Unfortunately, I can't pass more than one prefix to the operators.
**Prefixes used**
- `year=2020/month=08/day=19/topic={topic1}`
- `year=2020/month=08/day=19/topic={topic2}`
- `year=2020/month=08/day=19/topic={topic3}`
- `year=2020/month=08/day=19/topic={topic4}`
- `year=2020/month=08/day=19/topic={topic5}`
- `year=2020/month=08/day=19/topic={topic6}`
- `year=2020/month=08/day=19/topic={topic7}`
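The prefix list above can be generated with a small helper (a sketch; the function is hypothetical, not an existing Airflow API), with each resulting prefix then passed to a separate list/delete call:

```python
def day_prefixes(year, month, day, topics):
    """Expand one logical day into the per-topic prefixes GCS calls need."""
    base = f"year={year}/month={month:02d}/day={day:02d}"
    return [f"{base}/topic={topic}" for topic in topics]


prefixes = day_prefixes(2020, 8, 19, ["topic1", "topic2", "topic3"])
for prefix in prefixes:
    print(prefix)  # e.g. year=2020/month=08/day=19/topic=topic1
```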
| https://github.com/apache/airflow/issues/10426 | https://github.com/apache/airflow/pull/30815 | 2d40f41bff4135ff7147cc1da8932dada43842ea | 432697d90cdcea35607bcaa970c694c88053222c | 2020-08-20T12:22:31Z | python | 2023-04-23T06:31:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,406 | ["UPDATING.md", "airflow/providers/elasticsearch/log/es_task_handler.py"] | log_id field is missing from log lines (ES remote logging) | **Apache Airflow version**:
apache/airflow:1.10.11
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
v1.16.11-gke.5
**Environment**:
GKE
**What happened**:
Webserver doesn't fetch logs for tasks from elasticsearch
**What you expected to happen**:
task logs will be displayed in the webserver UI
It seems like the webserver is trying to query task logs by the `log_id` field:
https://github.com/apache/airflow/blob/1.10.11/airflow/utils/log/es_task_handler.py#L175
this field is missing from all log lines (which are written to stdout) using the KubernetesExecutor. Example log line:
`{"asctime": null, "filename": "standard_task_runner.py", "lineno": 77, "levelname": "INFO", "message": "Running: ['airflow', 'run', 'hello_world', 'hello_task_3', '2020-08-19T14:26:07.226064+00:00', '--job_id', '158', '--pool', 'default_pool', '--raw', '-sd', '/opt/airflow/dags/repo/dags/hello_world.py', '--cfg_path', '/tmp/tmpt7lafkaf']", "dag_id": "hello_world", "task_id": "hello_task_3", "execution_date": "2020_08_19T14_26_07_226064", "try_number": "1"}`
**How to reproduce it**:
this is the relevant configuration we have; the scheduler and webserver run separately and tasks run using the KubernetesExecutor (all in the same cluster/namespace):
```
AIRFLOW__CORE__LOGGING_LEVEL: INFO
AIRFLOW__CORE__REMOTE_LOGGING: "True"
AIRFLOW__ELASTICSEARCH__HOST: http://elasticsearch.logging:9200
AIRFLOW__ELASTICSEARCH__JSON_FORMAT: "True"
AIRFLOW__ELASTICSEARCH__WRITE_STDOUT: "True"
```
we are using fluentd (https://github.com/fluent/fluentd-kubernetes-daemonset) to forward log lines to elasticsearch, all task logs are written to stdout + elasticsearch as expected. | https://github.com/apache/airflow/issues/10406 | https://github.com/apache/airflow/pull/10411 | cc551ba793344800d2d396c13d7fd0c8eed97352 | 70f05ac6775152d856d212f845e9561282232844 | 2020-08-19T14:46:07Z | python | 2020-09-01T13:35:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,389 | ["docs/howto/initialize-database.rst"] | Documentation is missing instructions for preparing database for Airflow (during installation) | **Apache Airflow version**: 1.10.11
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Amazon Linux 2
- **Kernel** (e.g. `uname -a`): N/A
- **Install tools**: N/A
- **Others**: N/A
**What happened**:
Airflow documentation is missing a step -- preparing the database for Airflow initdb.
This includes creating the "airflow" database, and the "airflow" user.
There are multiple different instructions for it outside the documentation, e.g.:
https://medium.com/@srivathsankr7/apache-airflow-a-practical-guide-5164ff19d18b says:
```
mysql -u root -p
mysql> CREATE DATABASE airflow CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> create user 'airflow'@'localhost' identified by 'airflow';
mysql> grant all privileges on * . * to 'airflow'@'localhost';
mysql> flush privileges;
```
http://site.clairvoyantsoft.com/installing-and-configuring-apache-airflow/ says:
```
CREATE DATABASE airflow CHARACTER SET utf8 COLLATE utf8_unicode_ci;
grant all on airflow.* TO βUSERNAME'@'%' IDENTIFIED BY β{password}';
```
https://airflow-tutorial.readthedocs.io/en/latest/first-airflow.html says:
```
MySQL -u root -p
mysql> CREATE DATABASE airflow CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> GRANT ALL PRIVILEGES ON airflow.* To 'airflow'@'localhost';
mysql> FLUSH PRIVILEGES;
```
(This last one seems to be missing the step of creating the `airflow` user.)
**What you expected to happen**:
I would expect https://airflow.apache.org/docs/stable/howto/initialize-database.html to contain complete instructions for preparing the database backend for initialization.
**How to reproduce it**:
Try to install Airflow with a MySQL backend with no prior knowledge by following the Airflow documentation.
**Anything else we need to know**:
Airflow rocks! | https://github.com/apache/airflow/issues/10389 | https://github.com/apache/airflow/pull/10413 | 409ebc10978c40f60cade8d347a6d7f8b7410609 | 7adae240d8e2136991c47d05d2027d4189b5dd2e | 2020-08-18T22:19:40Z | python | 2020-09-09T09:51:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,386 | ["docs/apache-airflow-providers/howto/create-update-providers.rst", "docs/apache-airflow-providers/index.rst"] | Add list of prerequisites that new provider package should fulfill | **Description**
We need to have a folder template (ideally) and a list of prerequisites that each provider should fulfill when it is created.
**Use case / motivation**
To make it easy to add a new provider and to make new contributors aware of what is needed in order to add one. The following should be added:
- [x] directory structure (possibly automated templates or script to create them)
- [ ] usage scenarios for connections in hooks (see discussion in #12128)
- [ ] Information about what is required:
- [ ] hooks
- [ ] operators/sensors/transfers
- [ ] unit tests with mocks
- [ ] integration tests if applicable
- [ ] example dags for examples and their use (examples, howtos, system tests)
- [ ] howto guides
- [ ] relation with backport providers.
| https://github.com/apache/airflow/issues/10386 | https://github.com/apache/airflow/pull/15061 | de22fc7fae05a4521870869f1035f5e4859e877f | 932f8c2e9360de6371031d4d71df00867a2776e6 | 2020-08-18T19:03:04Z | python | 2021-04-03T13:01:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,385 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Add on_kill method to DataprocInstantiateWorkflowTemplateOperator | **Description**
This operator should cancel the running workflow template in `on_kill`. This option should probably be configurable because of the request_id parameter in a job definition: https://googleapis.dev/python/dataproc/latest/gapic/v1/api.html#google.cloud.dataproc_v1.WorkflowTemplateServiceClient.instantiate_inline_workflow_template
https://googleapis.dev/python/dataproc/latest/gapic/v1/api.html#google.cloud.dataproc_v1.WorkflowTemplateServiceClient.instantiate_workflow_template
Note this will have to cancel the Google Long Running Operation for the workflow template (which will in turn cancel all child jobs and clean up the relevant cluster).
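A sketch of the requested behaviour, using a stand-in for the long-running operation handle (the real operator would keep the operation returned by the instantiate call and cancel it; all names here are illustrative, not the actual operator code):

```python
class FakeOperation:
    """Stand-in for the long-running operation handle."""
    def __init__(self):
        self.cancelled = False

    def cancel(self):
        self.cancelled = True


class WorkflowTemplateOperatorSketch:
    def __init__(self, cancel_on_kill=True):
        # Configurable, as suggested above for the request_id case.
        self.cancel_on_kill = cancel_on_kill
        self.operation = None

    def execute(self):
        # Real code would call the hook's instantiate_workflow_template here
        # and keep the returned operation object.
        self.operation = FakeOperation()

    def on_kill(self):
        if self.cancel_on_kill and self.operation is not None:
            self.operation.cancel()


op = WorkflowTemplateOperatorSketch()
op.execute()
op.on_kill()
print(op.operation.cancelled)  # True
```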
**Use case / motivation**
Remove dangling workflow templates
**Related Issues**
#10381
#6371 | https://github.com/apache/airflow/issues/10385 | https://github.com/apache/airflow/pull/34957 | 105743e14a89508892540353b23f47e26da54f5a | 0b49f338b9e6fd3264bc0099e8879855bf6c60c9 | 2020-08-18T18:43:40Z | python | 2023-10-16T10:44:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,374 | ["airflow/providers/google/cloud/hooks/dataflow.py", "airflow/providers/google/cloud/operators/dataflow.py", "tests/providers/google/cloud/hooks/test_dataflow.py", "tests/providers/google/cloud/operators/test_dataflow.py"] | Dataflow commands always launch in a virtualenvironment | **What happened**:
With the new Backport providers, when calling `DataflowCreatePythonJobOperator` with an empty `py_rerquirments` Dataflow commands always launch inside a virtualenv.
**What you expected to happen**:
The Dataflow command should run locally.
| https://github.com/apache/airflow/issues/10374 | https://github.com/apache/airflow/pull/10373 | 30f46175eef3748df0058b432ab7a31ac17bd514 | 88c7d2e526af4994066f65f830e2fa8edcbbce2e | 2020-08-18T13:19:18Z | python | 2020-08-21T00:28:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 10,367 | ["airflow/www/utils.py", "tests/www/test_utils.py"] | Markdown table is not rendered in doc_md attribute for DAG | **Apache Airflow version**: 1.10.11
**Environment**: Python 3.7
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): PRETTY_NAME="Debian GNU/Linux 10 (buster)"
(from python3.7-slim-buster docker image)
- **Kernel** (e.g. `uname -a`): Linux 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
Using a markdown formatted table does not work.
**What you expected to happen**:
To be able to see a formatted table within the DAG view.
**How to reproduce it**:
Add tabular formatted code to dag.doc_md.
Example:
> mytable:
>
> | a | b | c |
> | ------------- | ------------- | ------------- |
> | a | b | c |
[Table not formatted](https://i.imgur.com/l6JXuWe.png)
**Anything else we need to know**: N/A
| https://github.com/apache/airflow/issues/10367 | https://github.com/apache/airflow/pull/13533 | dc80fa4cbc070fc6e84fcc95799d185badebaa71 | 3558538883612a10e9ea3521bf864515b6e560c5 | 2020-08-17T22:14:41Z | python | 2021-01-15T11:13:22Z |
closed | pytorch/pytorch | https://github.com/pytorch/pytorch | 100,340 | ["test/test_torch.py"] | DISABLED test_equal (__main__.TestTorch) | Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_torch.py%3A%3ATestTorch%3A%3Atest_equal)).
The cause seems to point to https://github.com/pytorch/pytorch/pull/100024 (or D45282119 as it was landed internally)
cc @mruberry @houseroad @ezyang | https://github.com/pytorch/pytorch/issues/100340 | https://github.com/pytorch/pytorch/pull/100364 | 66fde107e289574d28a330ac38a71f2e0b24c504 | 429155b3c8b8e64e18b2729298f148a884acbbe5 | 2023-04-30T16:11:59Z | python | 2023-05-01T23:28:12Z |
closed | pytorch/pytorch | https://github.com/pytorch/pytorch | 72,610 | [".circleci/cimodel/data/simple/binary_smoketest.py", ".circleci/config.yml", ".circleci/verbatim-sources/job-specs/binary-build-tests.yml"] | binary_linux_manywheel_3_7m_cu102_devtoolset7_test is broken | As it still using deprecated resource class, see https://hud.pytorch.org/ci/pytorch/pytorch/master?name_filter=binary_linux_manywheel_3_7m_cu102_devtoolset7_test
See example of the failure in https://app.circleci.com/jobs/github/pytorch/pytorch/16987033?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link | https://github.com/pytorch/pytorch/issues/72610 | https://github.com/pytorch/pytorch/pull/72613 | e235e437ac230aa0e6eba40c9dfa379adc6a17ec | 3b1ef1fde8347564f17034a62582bcd91b2e5f59 | 2022-02-09T20:26:10Z | python | 2022-02-09T20:42:34Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,363 | ["changelogs/fragments/82363-multiple-handlers-with-recursive-notification.yml", "lib/ansible/plugins/strategy/__init__.py", "test/integration/targets/handlers/runme.sh", "test/integration/targets/handlers/test_multiple_handlers_with_recursive_notification.yml"] | Only one handler of a `listen` group is run when notified from another handler | ### Summary
Hello!
Since ansible-core 2.15 (I think since the changes introduced by #79558), it seems that, when multiple handlers of the same `listen` group are notified by another handler, only the first one in the group will be run.
From what I understand, when Ansible handles a `notify` while it is already iterating handlers, it will only consider the first handler that matches the notification, but will not iterate through all the matching handlers. I believe the `break` in `lib/ansible/plugins/strategy/__init__.py` (line 669) should only apply when Ansible is not already iterating handlers (i.e., in the `else` branch):
https://github.com/ansible/ansible/blob/6ebefaceb6cd0d4961776a94d63a71fc1fc28bc0/lib/ansible/plugins/strategy/__init__.py#L659-L669
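The effect can be modelled with a toy version of that dispatch loop (illustrative names and data only, not the real strategy code):

```python
# Toy model of handler dispatch: collect handlers matching a notification,
# either stopping at the first hit (the unconditional break) or not.
handlers = [
    {"name": "handler 1", "listen": []},
    {"name": "handler 2a", "listen": ["handler_2"]},
    {"name": "handler 2b", "listen": ["handler_2"]},
]


def matching(notification, stop_at_first):
    found = []
    for handler in handlers:
        if handler["name"] == notification or notification in handler["listen"]:
            found.append(handler["name"])
            if stop_at_first:
                break  # drops every later handler listening on the same topic
    return found


print(matching("handler_2", stop_at_first=True))   # ['handler 2a']  <- observed bug
print(matching("handler_2", stop_at_first=False))  # ['handler 2a', 'handler 2b']
```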
There seems to be a simple fix for that, which I'm planning to submit as a PR.
Thank you! :)
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.0]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = /usr/bin/vim
```
### OS / Environment
Arch Linux, with the following Ansible-related packages:
- ansible 9.0.1-1
- ansible-core 2.16.0-1
### Steps to Reproduce
```yaml
---
- name: test listen-based handlers with recursive notifications
hosts: localhost
gather_facts: false
tasks:
- name: notify handler 1
command: echo
changed_when: true
notify: handler 1
handlers:
- name: handler 1
debug:
msg: handler 1
changed_when: true
notify: handler_2
- name: handler 2a
debug:
msg: handler 2a
listen: handler_2
- name: handler 2b
debug:
msg: handler 2b
listen: handler_2
```
### Expected Results
All handlers should be run, especially both handlers listening on the `handler_2` notification (i.e., `handler 2a` and `handler 2b`):
```
PLAY [test listen-based handlers with recursive notifications] **************************************************************************************
TASK [notify handler 1] *****************************************************************************************************************************
task path: /tmp/ansible/test.yml:7
Notification for handler handler 1 has been saved.
changed: [localhost] => {"changed": true, "cmd": ["echo"], "delta": "0:00:00.004205", "end": "2023-12-06 10:57:16.556224", "msg": "", "rc": 0, "start": "2023-12-06 10:57:16.552019", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NOTIFIED HANDLER handler 1 for localhost
RUNNING HANDLER [handler 1] *************************************************************************************************************************
task path: /tmp/ansible/test.yml:13
NOTIFIED HANDLER handler 2a for localhost
NOTIFIED HANDLER handler 2b for localhost
changed: [localhost] => {
"msg": "handler 1"
}
RUNNING HANDLER [handler 2a] ************************************************************************************************************************
task path: /tmp/ansible/test.yml:19
ok: [localhost] => {
"msg": "handler 2a"
}
RUNNING HANDLER [handler 2b] ************************************************************************************************************************
task path: /tmp/ansible/test.yml:24
ok: [localhost] => {
"msg": "handler 2b"
}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [test listen-based handlers with recursive notifications] ***********************************************************
TASK [notify handler 1] **************************************************************************************************
task path: /tmp/ansible/test.yml:7
Notification for handler handler 1 has been saved.
changed: [localhost] => {"changed": true, "cmd": ["echo"], "delta": "0:00:00.003800", "end": "2023-12-06 10:56:26.513556", "msg": "", "rc": 0, "start": "2023-12-06 10:56:26.509756", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NOTIFIED HANDLER handler 1 for localhost
RUNNING HANDLER [handler 1] **********************************************************************************************
task path: /tmp/ansible/test.yml:13
NOTIFIED HANDLER handler 2a for localhost
changed: [localhost] => {
"msg": "handler 1"
}
RUNNING HANDLER [handler 2a] *********************************************************************************************
task path: /tmp/ansible/test.yml:19
ok: [localhost] => {
"msg": "handler 2a"
}
PLAY RECAP ***************************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82363 | https://github.com/ansible/ansible/pull/82364 | fe81164fe548d79fbcd0024836d5f7474403c95d | 83281531216ee64cd054959f2bfe54c6df498443 | 2023-12-06T10:05:01Z | python | 2023-12-13T09:56:52Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,359 | ["changelogs/fragments/82359_assemble_diff.yml", "lib/ansible/plugins/action/__init__.py", "test/integration/targets/assemble/tasks/main.yml"] | Assemble module doesn't pass `content` arg to `_get_diff_data` | ### Summary
When using the `ansible.builtin.assemble` module with `--diff`, the task fails with the following error:
> Unexpected failure during module execution: ActionBase._get_diff_data() missing 1 required positional argument: 'content'
`ansible-playbook -vvvv` says the relevant function call happens in [ansible/plugins/action/assemble.py, line 143](https://github.com/ansible/ansible/blob/6ebefaceb6cd0d4961776a94d63a71fc1fc28bc0/lib/ansible/plugins/action/assemble.py#L143).
Note: on my local machine that's currently line 144; 143 is on the current devel branch.
The last known working version is 8.6.1
### Issue Type
Bug Report
### Component Name
ansible.builtin.assemble
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.0]
config file = None
configured module search path = ['/Users/albalitz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/9.0.1/libexec/lib/python3.12/site-packages/ansible
ansible collection location = /Users/albalitz/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.12.0 (main, Oct 3 2023, 16:20:33) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/9.0.1/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = /usr/bin/vim
PAGER(env: PAGER) = less
```
### OS / Environment
This happens on my local machine running MacOS Sonoma 14.1.2 (Ansible installed via homebrew) as well as our CI system running in a `python:alpine`-based Docker environment with the same Ansible version as above (Ansible is installed via pip there and updated semi-automatically using renovatebot).
### Steps to Reproduce
This step fails with the error described above:
```yaml
- name: create concatenated file
local_action:
module: assemble
remote_src: false
src: files/some_files/
dest: /tmp/concatenated_file
no_log: true
changed_when: false
check_mode: no
become: no
run_once: true
```
The step works when `--diff` is removed from the `ansible-playbook` command.
### Expected Results
I expected the `assemble` step to run successfully and produce a concatenated file with `--diff` enabled but without printing the diff (due to `no_log: true` - I set that to `false` for debugging purposes to see the error message).
### Actual Results
```console
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: albalitz
<localhost> EXEC /bin/sh -c 'echo ~albalitz && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/albalitz/.ansible/tmp `"&& mkdir "` echo /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079 `" && echo ansible-tmp-1701782995.5552058-23181-40564253413079="` echo /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/9.0.1/libexec/lib/python3.12/site-packages/ansible/modules/stat.py
<localhost> PUT /Users/albalitz/.ansible/tmp/ansible-local-23119muc1g04o/tmp8owc8yoz TO /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079/AnsiballZ_stat.py
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079/ /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079/AnsiballZ_stat.py && sleep 0'
<localhost> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/9.0.1/libexec/bin/python /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079/AnsiballZ_stat.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /Users/albalitz/.ansible/tmp/ansible-tmp-1701782995.5552058-23181-40564253413079/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/ansible/9.0.1/libexec/lib/python3.12/site-packages/ansible/executor/task_executor.py", line 165, in run
res = self._execute()
^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/ansible/9.0.1/libexec/lib/python3.12/site-packages/ansible/executor/task_executor.py", line 641, in _execute
result = self._handler.run(task_vars=vars_copy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/ansible/9.0.1/libexec/lib/python3.12/site-packages/ansible/plugins/action/assemble.py", line 144, in run
diff = self._get_diff_data(dest, path, task_vars)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ActionBase._get_diff_data() missing 1 required positional argument: 'content'
fatal: [shorewall-0 -> localhost]: FAILED! => {}
MSG:
Unexpected failure during module execution: ActionBase._get_diff_data() missing 1 required positional argument: 'content'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82359 | https://github.com/ansible/ansible/pull/82360 | a9919dd7f62c9efe17b8acaebf7c627606ae9f66 | 7f2ad7eea673233223948e0d2a9fc5ee683040ce | 2023-12-05T13:53:38Z | python | 2023-12-12T16:22:23Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,353 | ["changelogs/fragments/82353-ansible-sanity-examples.yml", "test/integration/targets/ansible-test-sanity-yamllint/aliases", "test/integration/targets/ansible-test-sanity-yamllint/ansible_collections/ns/col/plugins/inventory/inventory1.py", "test/integration/targets/ansible-test-sanity-yamllint/ansible_collections/ns/col/plugins/modules/module1.py", "test/integration/targets/ansible-test-sanity-yamllint/expected.txt", "test/integration/targets/ansible-test-sanity-yamllint/runme.sh", "test/lib/ansible_test/_util/controller/sanity/yamllint/yamllinter.py"] | ansible-test sanity should allow multiple documents in EXAMPLES | ### Summary
EXAMPLES are intended to be copy-and-paste ready.
While most of the documentation is expected to be a single YAML document, it's reasonable for EXAMPLES (especially for inventory plugins) to contain multiple documents. If only a single document is permitted, then adding multiple examples for an inventory plugin causes YAML linting to fail with key-duplicates errors.
See also https://github.com/ansible/ansible-lint/issues/3860
### Issue Type
Feature Idea
### Component Name
ansible-test
### Additional Information
```yaml (paste below)
EXAMPLES = r"""
---
# Example using groups to assign the running hosts to a group based on vpc_id
plugin: amazon.aws.aws_ec2
profile: aws_profile
# Populate inventory with instances in these regions
regions:
  - us-east-2
filters:
  # All instances with their state as `running`
  instance-state-name: running
keyed_groups:
  - prefix: tag
    key: tags
compose:
  ansible_host: public_dns_name
groups:
  libvpc: vpc_id == 'vpc-####'
---
# Define prefix and suffix for host variables coming from AWS.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
hostvars_prefix: 'aws_'
hostvars_suffix: '_ec2'
"""
```
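As a rough illustration of what document-aware linting could look like, this naive Python sketch (a hypothetical helper, not ansible-test's actual yamllint integration) splits an EXAMPLES string on top-level `---` markers so each document can be checked on its own, avoiding false key-duplicates errors across documents:

```python
def split_yaml_documents(text):
    # split a multi-document YAML string on top-level "---" markers;
    # keys repeated across documents then no longer look like duplicates
    docs, current = [], []
    for line in text.splitlines():
        if line.strip() == "---" and current:
            docs.append("\n".join(current))
            current = []
        elif line.strip() != "---":
            current.append(line)
    if current:
        docs.append("\n".join(current))
    return docs

EXAMPLES = """---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-2
---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
"""
print(len(split_yaml_documents(EXAMPLES)))  # → 2
```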
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82353 | https://github.com/ansible/ansible/pull/82355 | 83281531216ee64cd054959f2bfe54c6df498443 | 5346009d2cfab0dcbde675b875a06d2d86b962c5 | 2023-12-05T06:12:52Z | python | 2023-12-13T20:18:35Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,264 | ["changelogs/fragments/delegate_to_invalid.yml", "lib/ansible/executor/task_executor.py", "lib/ansible/vars/manager.py", "test/integration/targets/delegate_to/test_delegate_to.yml"] | delegate_to: "{{var}}" when: var != "" causes a "Supplied entity must be Host or Group, got <class 'ansible.inventory.host.Host'> instead" error on 2.16 | ### Summary
I have a task to conditionally install an SSH key on another machine:
```yaml
- authorized_key: ...
  delegate_to: "{{ jenkins_install_key_on }}"
  when: jenkins_install_key_on != ""
```
This works fine when `jenkins_install_key_on` is set to a non-blank value, but fails when `jenkins_install_key_on` is set to an empty string:
```
fatal: [jammy -> {{ jenkins_install_key_on }}]: FAILED! => {"msg": "Supplied entity must be Host or Group, got <class 'ansible.inventory.host.Host'> instead"}
```
It used to work with ansible-core 2.15 and older.
### Issue Type
Bug Report
### Component Name
delegate_to
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.0]
config file = /home/mg/src/deployments/provisioning/ansible.cfg
configured module search path = ['/home/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/mg/.local/pipx/venvs/ansible/lib/python3.11/site-packages/ansible
ansible collection location = /home/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /home/mg/.local/bin/ansible
python version = 3.11.6 (main, Oct 8 2023, 05:06:43) [GCC 13.2.0] (/home/mg/.local/pipx/venvs/ansible/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ACTION_WARNINGS(/home/mg/src/deployments/provisioning/ansible.cfg) = False
CACHE_PLUGIN(/home/mg/src/deployments/provisioning/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/mg/src/deployments/provisioning/ansible.cfg) = .cache/facts/
CACHE_PLUGIN_TIMEOUT(/home/mg/src/deployments/provisioning/ansible.cfg) = 86400
CALLBACKS_ENABLED(/home/mg/src/deployments/provisioning/ansible.cfg) = ['fancy_html']
CONFIG_FILE() = /home/mg/src/deployments/provisioning/ansible.cfg
DEFAULT_FORKS(/home/mg/src/deployments/provisioning/ansible.cfg) = 15
DEFAULT_GATHERING(/home/mg/src/deployments/provisioning/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/mg/src/deployments/provisioning/ansible.cfg) = ['/home/mg/src/deployments/provisioning/inventory']
DEFAULT_LOG_PATH(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/.cache/ansible.log
DEFAULT_REMOTE_USER(/home/mg/src/deployments/provisioning/ansible.cfg) = root
DEFAULT_STDOUT_CALLBACK(/home/mg/src/deployments/provisioning/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/askpas>
EDITOR(env: EDITOR) = vim
INTERPRETER_PYTHON(/home/mg/src/deployments/provisioning/ansible.cfg) = python3
RETRY_FILES_ENABLED(/home/mg/src/deployments/provisioning/ansible.cfg) = False
CACHE:
=====
jsonfile:
________
_timeout(/home/mg/src/deployments/provisioning/ansible.cfg) = 86400
_uri(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/.cache/facts
CONNECTION:
==========
paramiko_ssh:
____________
remote_user(/home/mg/src/deployments/provisioning/ansible.cfg) = root
ssh:
___
remote_user(/home/mg/src/deployments/provisioning/ansible.cfg) = root
```
### OS / Environment
Ubuntu 23.10
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: all
  gather_facts: no
  tasks:
    - debug: msg="hello"
      delegate_to: "{{ var }}"
      when: var != ""
      vars:
        var: ""
```
### Expected Results
I expect the task to be skipped.
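A tiny Python sketch of the ordering I would expect (purely illustrative; the names here are invented and this is not how the task executor is actually written): evaluate the `when` conditional first, and only resolve delegation for tasks that will actually run.

```python
def plan_task(when_result, delegate_to):
    # skip before touching delegation, so an empty templated
    # delegate_to never has to be resolved to a host
    if not when_result:
        return {"skipped": True}
    return {"delegate_to": delegate_to}

print(plan_task(False, ""))          # → {'skipped': True}
print(plan_task(True, "otherhost"))  # → {'delegate_to': 'otherhost'}
```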
### Actual Results
```console
$ ansible-playbook -i localhost, ansible-delegate-to-blank.yml -vvvv
ansible-playbook [core 2.16.0]
config file = None
configured module search path = ['/home/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/mg/.local/pipx/venvs/ansible/lib/python3.11/site-packages/ansible
ansible collection location = /home/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /home/mg/.local/bin/ansible-playbook
python version = 3.11.6 (main, Oct 8 2023, 05:06:43) [GCC 13.2.0] (/home/mg/.local/pipx/venvs/ansible/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/mg/.local/pipx/venvs/ansible/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: ansible-delegate-to-blank.yml **************************************************************************************
Positional arguments: ansible-delegate-to-blank.yml
verbosity: 4
connection: ssh
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in ansible-delegate-to-blank.yml
PLAY [all] *******************************************************************************************************************
TASK [debug] *****************************************************************************************************************
task path: /home/mg/tmp/ansible-delegate-to-blank.yml:4
fatal: [localhost -> {{ var }}]: FAILED! => {
"msg": "Supplied entity must be Host or Group, got <class 'ansible.inventory.host.Host'> instead"
}
PLAY RECAP *******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82264 | https://github.com/ansible/ansible/pull/82319 | 3a42a0036875c8cab6a62ab9ea67a365e1dd4781 | 6ebefaceb6cd0d4961776a94d63a71fc1fc28bc0 | 2023-11-22T08:22:01Z | python | 2023-12-04T15:19:12Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,257 | ["lib/ansible/plugins/filter/password_hash.yml"] | password_hash docs state salt option considered as type int | ### Summary
The filter password_hash salt option said it is a string but is typed as an int.
https://github.com/ansible/ansible/blob/fbdb666411f0d2c833e2a74cbf35593b22abb69f/lib/ansible/plugins/filter/password_hash.yml#L22
### Issue Type
Documentation Report
### Component Name
plugins filter
### Ansible Version
```console
$ ansible --version
devel branch of ansible
```
### Configuration
```console
Viewed in the code
```
### OS / Environment
Github
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Possible pass of the string in the salt option.
### Actual Results
```console
fatal: [127.0.0.1]: FAILED! => {"msg": "the field 'args' has an invalid value ({'msg': \"{{ _random_string_base64.stdout | password_hash('sha512','656000','$6$') }}\"}), and could not be converted to an dict.The error was: invalid literal for int() with base 10: '$6$'\n\nThe error appears to be in '/home/alexis/ansible-test/test.yml': line 24, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ _random_string_base64 }}\"\n - name: \"debug 2\"\n ^ here\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82257 | https://github.com/ansible/ansible/pull/82274 | 265f5e724cdda586a6f898a9cd69431549f0154c | 322eb0f884882fd47a2beca2569b3727b5ead93b | 2023-11-21T09:17:17Z | python | 2023-11-28T15:23:39Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,244 | ["changelogs/fragments/thread_counts.yml", "lib/ansible/module_utils/facts/hardware/linux.py"] | ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems | ### Summary
ansible_processor_threads_per_core shows incorrect information on AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description:
ansible_processor_threads_per_core returns 1 instead of 2 on a host where hyper-threading (HT) is enabled. The output of `lscpu` shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
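For context, threads-per-core can be derived from the `siblings` and `cpu cores` fields in `/proc/cpuinfo`. A simplified Python sketch (the field values below are hypothetical for a 96-core SMT-enabled package; this is not the exact fact-gathering code):

```python
def threads_per_core(siblings, cpu_cores):
    # "siblings" is the number of logical CPUs per physical package,
    # "cpu cores" the number of physical cores per package
    return siblings // cpu_cores

print(threads_per_core(192, 96))  # → 2 (SMT enabled)
print(threads_per_core(96, 96))   # → 1 (SMT disabled)
```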
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82244 | https://github.com/ansible/ansible/pull/82261 | fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82 | e80507af32fad1ccaa62f8e6630f9095fe253004 | 2023-11-20T06:30:45Z | python | 2023-11-28T15:49:52Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,241 | ["changelogs/fragments/82241-handler-include-tasks-from.yml", "lib/ansible/playbook/included_file.py", "test/integration/targets/handlers/82241.yml", "test/integration/targets/handlers/roles/role-82241/handlers/main.yml", "test/integration/targets/handlers/roles/role-82241/tasks/entry_point.yml", "test/integration/targets/handlers/roles/role-82241/tasks/included_tasks.yml", "test/integration/targets/handlers/runme.sh"] | handler include_tasks fails if no `main.yaml` in role tasks in Ansible 2.15 - worked in 2.14 | ### Summary
In Ansible 2.14.3, e.g. package available in Debian Bullseye, `include_tasks` in a handler within a role always found the task file from `tasks` in that role, if it existed. When I install the current version using pip (2.15.6 at time of writing) this stops working if `tasks/main.yaml` does not exist in the role - creating an empty `main.yaml` (e.g. `touch roles/role_name/tasks/main.yaml`) is sufficient for `import_tasks` in a handler to start working again.
I have manually downgraded to 2.14.3 with pip and verified that the same code works with 2.14.3 but not with 2.15.6 on the same system, so this is not a difference between the OS package and the one pip installs, but a bug introduced between those two versions.
In my specific use case, I have a role that is designed to be used via several entry points and has no default `main.yaml`. This has been working fine but started erroring in version 2.15.6. It has taken quite a bit of testing to discover what was making it fail (and why other similar roles for which I used the same pattern but did happen to have a `main.yaml`, were still working).
### Issue Type
Bug Report
### Component Name
ansible.builtin.import_tasks
### Ansible Version
```console
$ # Broken version
$ ansible --version
ansible [core 2.15.6]
config file = /home/laurence/Projects/ansible-home/ansible.cfg
configured module search path = ['/home/laurence/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/laurence/venvs/ansible/lib/python3.9/site-packages/ansible
ansible collection location = /home/laurence/.ansible/collections:/usr/share/ansible/collections
executable location = /home/laurence/venvs/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/home/laurence/venvs/ansible/bin/python)
jinja version = 3.1.2
libyaml = True
$ # Working version (after "pip install --upgrade ansible-core==2.14.3")
$ ansible --version
ansible [core 2.14.3]
config file = /home/laurence/Projects/ansible-home/ansible.cfg
configured module search path = ['/home/laurence/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/laurence/venvs/ansible/lib/python3.9/site-packages/ansible
ansible collection location = /home/laurence/.ansible/collections:/usr/share/ansible/collections
executable location = /home/laurence/venvs/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/home/laurence/venvs/ansible/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian 12.0
Ansible installed in Python 3 VirtualEnv with this `requirements.txt` however should be reproducible with just `ansible-core`:
```
ansible-core
# dnspython is required for community.general.dig lookup plugin
dnspython
# hvac required for Hashicorp Vault
hvac
# netaddr required for ansible.utils.ipaddr filter
netaddr
# To manage Windows systems
pywinrm[credssp]
```
### Steps to Reproduce
`site.yaml`:
```yaml
---
- name: Test
  hosts: all
  gather_facts: false  # Not needed for example
  tasks:
    - ansible.builtin.import_role:
        name: test_role
        # Custom entry point (no main.yaml required in test_role/tasks)
        tasks_from: entry_point.yaml
...
```
`roles/test_role/tasks/entry_point.yaml`:
```yaml
---
# This is not necessary to reproduce the problem, just
# illustrating include_tasks works here without `main.yaml`.
- name: Include tasks works here
  ansible.builtin.include_tasks: included_tasks.yaml

# Somehow trigger the role's handler - how is not important,
# but running the handler is necessary to illustrate the problem.
- name: Trigger handler
  ansible.builtin.debug:
  changed_when: true  # Force handler to always be notified
  notify: Test handler
...
```
`roles/test_role/tasks/included_tasks.yaml`:
```yaml
---
- name: Included task
  ansible.builtin.debug: msg="Included task"
...
```
`roles/test_role/handlers/main.yaml`:
```yaml
---
- name: Test handler
  ansible.builtin.include_tasks: included_tasks.yaml
...
```
### Expected Results
When the handler runs, the `included_tasks.yaml` file is found in the role's `tasks` folder (the expected behaviour can be seen by running with Ansible 2.14.3, or if an empty `main.yaml` has been added to `test_role/tasks`):
```
$ ansible-playbook -i localhost, site.yaml
PLAY [Test] ********************************************************************************************************************************************************************************************************************
TASK [test_role : Include tasks works here] ************************************************************************************************************************************************************************************
included: /tmp/test/roles/test_role/tasks/included_tasks.yaml for localhost
TASK [test_role : Included task] ***********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Included task"
}
TASK [test_role : Trigger handler] *********************************************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [test_role : Test handler] *************************************************************************************************************************************************************************************
included: /tmp/test/roles/test_role/tasks/included_tasks.yaml for localhost
RUNNING HANDLER [test_role : Included task] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Included task"
}
PLAY RECAP *********************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
With Ansible 2.15.6, the handler fails with an error that `included_tasks.yaml` does not exist, even though in this example the same file is included successfully from another task list. It is unexpected that `include_tasks`, even when used from a handler, cannot find a tasks file in the role's `tasks` directory (i.e. where all of a role's tasks live). This used to work fine (and still does if I downgrade `ansible-core` to the previous version):
$ ansible-playbook -i localhost, site.yaml
PLAY [Test] ********************************************************************************************************************************************************************************************************************
TASK [test_role : Include tasks works here] ************************************************************************************************************************************************************************************
included: /tmp/test/roles/test_role/tasks/included_tasks.yaml for localhost
TASK [test_role : Included task] ***********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Included task"
}
TASK [test_role : Trigger handler] *********************************************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [test_role : Test handler] *************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "Could not find or access '/tmp/test/included_tasks.yaml' on the Ansible Controller."}
PLAY RECAP *********************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Note that creating `roles/test_role/tasks/main.yaml` causes the same code to begin working with 2.15.6 as well as 2.14.3. Although `main.yaml` is the default entry point, I am not aware of any other Ansible functionality that breaks if it does not exist and only other entry points are used for a specific role, so I think this is a bug.
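My rough mental model of the resolution (hypothetical helper names; not the actual Ansible search code) is that a role-relative include target should always have the role's `tasks/` directory among its candidate paths, regardless of whether `tasks/main.yaml` exists:

```python
import os

def include_search_paths(role_path, current_task_dirname, target):
    # candidate locations for a role-relative include_tasks target;
    # the regression behaves as if the first entry is dropped when
    # tasks/main.yaml is absent
    return [
        os.path.join(role_path, "tasks", target),
        os.path.join(role_path, current_task_dirname, target),
        os.path.join(role_path, target),
    ]

paths = include_search_paths("roles/test_role", "handlers", "included_tasks.yaml")
print(paths[0])  # → roles/test_role/tasks/included_tasks.yaml (on POSIX)
```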
```
$ touch roles/test_role/tasks/main.yaml
$ ansible-playbook -i localhost, site.yaml
PLAY [Test] ********************************************************************************************************************************************************************************************************************
TASK [test_role : Include tasks works here] ************************************************************************************************************************************************************************************
included: /tmp/test/roles/test_role/tasks/included_tasks.yaml for localhost
TASK [test_role : Included task] ***********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Included task"
}
TASK [test_role : Trigger handler] *********************************************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [test_role : Test handler] *************************************************************************************************************************************************************************************
included: /tmp/test/roles/test_role/tasks/included_tasks.yaml for localhost
RUNNING HANDLER [test_role : Included task] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Included task"
}
PLAY RECAP *********************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82241 | https://github.com/ansible/ansible/pull/82248 | a4b00793be46f703e32ee4c440f303d19d2c652d | d664f13b4a117b324f107b603e9b8e2bb9af50c5 | 2023-11-18T17:48:36Z | python | 2023-11-22T16:42:51Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,226 | ["lib/ansible/utils/display.py", "test/integration/targets/ansible_log/aliases", "test/integration/targets/ansible_log/logit.yml", "test/integration/targets/ansible_log/runme.sh"] | ANSIBLE_LOG_PATH no longer works since #81692 got merged | ### Summary
If you run `ANSIBLE_LOG_PATH=test ansible localhost -m setup`, `test` is now an empty file. Before #81692 got merged, it contained log output like
```
2023-11-16 08:01:52,262 p=313108 u=felix n=ansible | [WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development.
This is a rapidly changing source of code and can become unstable at any point.
2023-11-16 08:01:52,383 p=313108 u=felix n=ansible | [WARNING]: No inventory was parsed, only implicit localhost is available
2023-11-16 08:01:53,511 p=313108 u=felix n=ansible | localhost | SUCCESS => {
"ansible_facts": {
...
```
### Issue Type
Bug Report
### Component Name
logging
### Ansible Version
```console
latest devel branch
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Run `ANSIBLE_LOG_PATH=test ansible localhost -m setup`
### Expected Results
File `test` contains log output.
### Actual Results
```console
File `test` is empty.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82226 | https://github.com/ansible/ansible/pull/82227 | f8cdec632461fbd821050fc584543c1dda6dfc5c | f6d7dd0840c079d0d2c2e3d8852b952462423a78 | 2023-11-16T07:03:17Z | python | 2023-11-16T19:49:40Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,199 | ["lib/ansible/plugins/filter/union.yml"] | ansible.builtin.union documentation not matching implementation | ### Summary
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/union_filter.html#examples shows:
\# return the unique elements of list1 added to list2
\# list1: [1, 2, 5, 1, 3, 4, 10]
\# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
\# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
But running a test playbook with the above gives different result:
```yaml
- name: test
vars:
list1: [1, 2, 5, 1, 3, 4, 10]
list2: [1, 2, 3, 4, 5, 11, 99]
debug:
msg: |
{{ list1 | union(list2) | string }}
```
```
ok: [localhost] =>
msg: |-
[1, 2, 5, 3, 4, 10, 11, 99]
```
I think the documentation, not the code, needs to be adjusted here. Thanks.
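For reference, the behaviour observed above matches an order-preserving, de-duplicating union. A minimal sketch of those semantics (my own illustration, not the actual Jinja/Ansible filter code):

```python
def union(a, b):
    """Order-preserving union: every element of a then b, with duplicates
    dropped and the first occurrence kept. This reproduces the playbook
    output reported above; it is not the real filter implementation."""
    result = []
    for item in list(a) + list(b):
        if item not in result:
            result.append(item)
    return result


print(union([1, 2, 5, 1, 3, 4, 10], [1, 2, 3, 4, 5, 11, 99]))
# -> [1, 2, 5, 3, 4, 10, 11, 99]
```

The result agrees with the actual playbook output, which supports the conclusion that the documented example output is what needs correcting.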
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.6]
config file = /home/testuser/.ansible.cfg
configured module search path = ['/home/testuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/testuser/ansible/ansible-2.15/lib64/python3.9/site-packages/ansible
ansible collection location = /home/testuser/.ansible/collections:/usr/share/ansible/collections
executable location = /home/testuser/ansible/ansible-2.15/bin/ansible
python version = 3.9.18 (main, Sep 7 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/home/testuser/ansible/ansible-2.15/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
ANSIBLE_NOCOWS(/home/testuser/.ansible.cfg) = True
```
### OS / Environment
This is on RHEL 9 with pip installed ansible 2.15.
### Steps to Reproduce
```yaml
- hosts: localhost
connection: local
gather_facts: false
tasks:
- name: test
vars:
list1: [1, 2, 5, 1, 3, 4, 10]
list2: [1, 2, 3, 4, 5, 11, 99]
debug:
msg: |
{{ list1 | union(list2) | string }}
```
### Expected Results
Documentation and output match.
### Actual Results
```console
Documentation:
[1, 2, 5, 1, 3, 4, 10, 11, 99]
Actual output:
[1, 2, 5, 3, 4, 10, 11, 99]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82199 | https://github.com/ansible/ansible/pull/82202 | 4a84a9b3db47028c621d04cda8b2d3a3190173cd | 2277d470b38ff239f87b501c385d2af3948bb841 | 2023-11-13T09:57:30Z | python | 2023-11-13T19:59:07Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,179 | ["test/integration/targets/ansible-test-sanity-validate-modules/ansible_collections/ns/col/plugins/modules/invalid_choice_value.py", "test/integration/targets/ansible-test-sanity-validate-modules/expected.txt", "test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py"] | validate-modules does not catch all argument vs docs mismatches, specifically the choices field | ### Summary
At sanity-check time, if the `choices` option for a parameter is defined with the same number of entries as the choices in the documentation, but one of the parameter's choices is repeated, `ansible-test sanity --test validate-modules ****.py` does not catch the mismatch; the check passes!
Sample:
```
document defined as:

caching:
  description:
    - Type of ***** caching.
  type: str
  choices:
    - ReadOnly
    - ReadWrite

argument defined as:

caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
```shell
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
Can't pass!
### Actual Results
```console
Check pass!
```
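To illustrate the gap this report describes, here is a hypothetical sketch (not the real validate-modules logic): a length-only comparison is satisfied by the repeated choice, while comparing the actual values rejects it:

```python
DOC_CHOICES = ['ReadOnly', 'ReadWrite']   # choices from the documentation
ARG_CHOICES = ['ReadOnly', 'ReadOnly']    # the buggy argument spec above


def naive_choices_check(doc_choices, arg_choices):
    # Length-only comparison: passes even though a choice is repeated.
    return len(doc_choices) == len(arg_choices)


def strict_choices_check(doc_choices, arg_choices):
    # Compare the values themselves, ignoring order: catches the repetition.
    return sorted(doc_choices) == sorted(arg_choices)
```

Any check of the first shape would explain the false pass; a value-level comparison of the second shape would flag it.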
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82179 | https://github.com/ansible/ansible/pull/82266 | 0806da55b13cbec202a6e8581340ce96f8c93ea5 | e6e19e37f729e89060fdf313c24b91f2f1426bd3 | 2023-11-09T10:13:39Z | python | 2023-11-28T15:09:29Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,175 | ["changelogs/fragments/82175-fix-ansible-galaxy-role-import-rc.yml", "lib/ansible/cli/galaxy.py", "test/units/galaxy/test_role_install.py"] | ansible-galaxy role import always exits 0 even if import failed | ### Summary
When a role import fails, the galaxy cli code does not alter the exit code for the command accordingly. It is always zero.
https://github.com/ansible/ansible/blob/devel/lib/ansible/cli/galaxy.py#L1818C1-L1833C17
```
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
```
The code knows the task is "finished" if state is either SUCCESS or FAILED, but the FAILED state does not affect the return value.
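A minimal sketch of the fix this report is asking for (hypothetical helper name; the real change would live in the ansible-galaxy CLI code quoted above): derive the return value from the terminal task state instead of returning a constant 0.

```python
def wait_for_import(poll):
    """Hypothetical reduction of the wait loop quoted above: poll until the
    task reaches a terminal state, then map FAILED to a non-zero return
    code. (The real loop also prints task messages and sleeps between
    polls, which is omitted here.)"""
    while True:
        state = poll()
        if state in ('SUCCESS', 'FAILED'):
            return 0 if state == 'SUCCESS' else 1
```

With this shape, a FAILED import makes the command exit non-zero, which is what callers and CI pipelines need.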
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
(venv) [jtanner@p1 galaxy_ng.role_exception_logging]$ ansible-galaxy --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible-galaxy [core 2.17.0.dev0] (devel fd009a073a) last updated 2023/11/08 12:01:58 (GMT -400)
config file = /home/jtanner/workspace/github/jctanner.redhat/galaxy_ng.role_exception_logging/ansible.cfg
configured module search path = ['/home/jtanner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jtanner/workspace/github/jctanner.redhat/galaxy_ng.role_exception_logging/ansible.core/lib/ansible
ansible collection location = /home/jtanner/.ansible/collections:/usr/share/ansible/collections
executable location = /home/jtanner/workspace/github/jctanner.redhat/galaxy_ng.role_exception_logging/ansible.core/bin/ansible-galaxy
python version = 3.11.6 (main, Oct 3 2023, 00:00:00) [GCC 13.2.1 20230728 (Red Hat 13.2.1-1)] (/home/jtanner/workspace/github/jctanner.redhat/galaxy_ng.role_exception_logging/venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
Fedora 38
### Steps to Reproduce
1. Setup a galaxy server
2. ansible-galaxy role import -vvvv nephelaiio ansible-role-packetbeat
### Expected Results
The ansible-galaxy command should exit non-zero if the task is in a FAILED state.
### Actual Results
```console
exit 0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82175 | https://github.com/ansible/ansible/pull/82193 | 7f2ad7eea673233223948e0d2a9fc5ee683040ce | fe81164fe548d79fbcd0024836d5f7474403c95d | 2023-11-08T17:14:35Z | python | 2023-12-12T18:59:19Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,146 | ["lib/ansible/modules/user.py"] | Confusing wording of default shell determination in ansible.builtin.user | ### Summary
The documentation about how systems determine the default shell is confusing.
The [shell parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html#parameter-shell) documentation says: "See notes for details on how other operating systems determine the default shell by the underlying tool."
The [Notes section](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html#notes) does not contain "details on how other operating systems determine the default shell".
If I read this, I would expect to find something along the lines of "On FreeBSD, the default shell is set by `dscl`".
Instead, it contains notes on which underlying tools are used by the module to create, modify and remove accounts. This is only helpful if the reader is already aware of the fact that these underlying tools _also_ set the default shell.
Furthermore, since the notes at the shell parameter explicitly spell out what the default shell on macOS is, the user has a reasonable expectation to find similarly worded notes for other operating systems in the Notes.
A possible solution: replace "See notes for details on how other operating systems determine the default shell by the underlying tool." with "On other operating systems, the default shell is determined by the underlying tool invoked by this module. See Notes for a per platform list of invoked tools."
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/user.py
### Ansible Version
```console
N/a
```
### Configuration
```console
N/a
```
### OS / Environment
N/a
### Additional Information
Previous bug report and attempted (but IMO inadequate) fix here: https://github.com/ansible/ansible/issues/59796.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82146 | https://github.com/ansible/ansible/pull/82147 | 40baf5eace3848cd99b43a7c6732048c6072da60 | d46b042a9475d177b2ebd69ff3d6f22f702ff323 | 2023-11-06T19:31:08Z | python | 2023-11-07T15:20:46Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,142 | ["changelogs/fragments/copy_keep_suffix_temp.yml", "lib/ansible/plugins/action/copy.py", "test/integration/targets/callback_default/callback_default.out.result_format_yaml_lossy_verbose.stdout", "test/integration/targets/callback_default/callback_default.out.result_format_yaml_verbose.stdout", "test/integration/targets/callback_default/runme.sh"] | Cannot use ansible.builtin.copy with /bin/ansible-config list -c %s | ### Summary
I should be able to manage the `/etc/ansible/ansible.cfg` file on my ansible control host with ansible. So, when I try this:
```
- name: Install ansible hosts and ansible.cfg
copy:
src: "{{ file_directory }}/etc/ansible/{{ item.file }}"
dest: '/etc/ansible/{{ item.file }}'
owner: root
group: root
mode: '0644'
backup: yes
force: yes
validate: "{{ item.validate }}"
loop:
- file: hosts
validate: /bin/ansible-inventory --list --inventory %s
- file: ansible.cfg
validate: /bin/ansible-config list -c %s
```
the validation fails because the file indicated by %s does not have a .cfg extension.
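Until the module behaves differently, one possible (untested) workaround is to wrap the validator so the candidate file gets a `.cfg` suffix before `ansible-config` sees it. `%s` is still substituted with the temp path by the copy module; `/tmp/candidate-ansible.cfg` is just an illustrative name:

```yaml
- file: ansible.cfg
  validate: /bin/bash -c 'cp %s /tmp/candidate-ansible.cfg && /bin/ansible-config list -c /tmp/candidate-ansible.cfg'
```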
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/steve/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/steve/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.5 (main, Aug 28 2023, 00:00:00) [GCC 13.2.1 20230728 (Red Hat 13.2.1-1)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora 38
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Install ansible hosts and ansible.cfg
copy:
src: "{{ file_directory }}/etc/ansible/{{ item.file }}"
dest: '/etc/ansible/{{ item.file }}'
owner: root
group: root
mode: '0644'
backup: yes
force: yes
validate: "{{ item.validate }}"
loop:
- file: hosts
validate: /bin/ansible-inventory --list --inventory %s
- file: ansible.cfg
validate: /bin/ansible-config list -c %s
```
### Expected Results
I expected ansible-config to behave similarly to ansible-inventory, or, failing that, to (a) return a different exit code for the file-extension "error" (not 5) and (b) check the content before the extension.
### Actual Results
```console
[steve@jabberwock ~]$ ansible-playbook -i shared/ansible/server_IaC/files/arthur/etc/ansible/hosts temp.yml
PLAY [arthur] ****************************************************************************************************************************
TASK [Install ansible hosts and ansible.cfg] *********************************************************************************************
ok: [arthur] => (item={'file': 'hosts', 'validate': '/bin/ansible-inventory --list --inventory %s'})
failed: [arthur] (item={'file': 'ansible.cfg', 'validate': '/bin/ansible-config list -c %s'}) => {"ansible_loop_var": "item", "changed": false, "checksum": "d17e17e9639b5df25890b6ecbab867b9e329f40f", "exit_status": 5, "item": {"file": "ansible.cfg", "validate": "/bin/ansible-config list -c %s"}, "msg": "failed to validate", "stderr": "ERROR! Unsupported configuration file extension for /home/ansible/.ansible/tmp/ansible-tmp-1699292001.685196-111356-55146881241278/source: \n", "stderr_lines": ["ERROR! Unsupported configuration file extension for /home/ansible/.ansible/tmp/ansible-tmp-1699292001.685196-111356-55146881241278/source: "], "stdout": "usage: ansible-config [-h] [--version] [-v] {list,dump,view,init} ...\n\nView ansible configuration.\n\npositional arguments:\n {list,dump,view,init}\n list Print all config options\n dump Dump configuration\n view View configuration file\n init Create initial configuration\n\noptions:\n --version show program's version number, config file location,\n configured module search path, module location,\n executable location and exit\n -h, --help show this help message and exit\n -v, --verbose Causes Ansible to print more debug messages. Adding\n multiple -v will increase the verbosity, the builtin\n plugins currently evaluate up to -vvvvvv. A reasonable\n level to start is -vvv, connection debugging might\n require -vvvv.\n", "stdout_lines": ["usage: ansible-config [-h] [--version] [-v] {list,dump,view,init} ...", "", "View ansible configuration.", "", "positional arguments:", " {list,dump,view,init}", " list Print all config options", " dump Dump configuration", " view View configuration file", " init Create initial configuration", "", "options:", " --version show program's version number, config file location,", " configured module search path, module location,", " executable location and exit", " -h, --help show this help message and exit", " -v, --verbose Causes Ansible to print more debug messages. 
Adding", " multiple -v will increase the verbosity, the builtin", " plugins currently evaluate up to -vvvvvv. A reasonable", " level to start is -vvv, connection debugging might", " require -vvvv."]}
PLAY RECAP *******************************************************************************************************************************
arthur : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/82142 | https://github.com/ansible/ansible/pull/82158 | a8b6ef7e7cbabaf87e57ea7df9df75eb7e7d1ab5 | 4a84a9b3db47028c621d04cda8b2d3a3190173cd | 2023-11-06T17:39:49Z | python | 2023-11-13T15:03:58Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,024 | ["changelogs/fragments/ansible-test-remove-rhel-9_2-remote.yml", "test/lib/ansible_test/_data/completion/remote.txt"] | Remove RHEL 9.2 from ansible-test | ### Summary
Remove RHEL 9.2 from ansible-test after a 2-week transition period following the addition of RHEL 9.3 to ansible-test. This is a remote VM removal.
### Issue Type
Feature Idea
### Component Name
`ansible-test` | https://github.com/ansible/ansible/issues/82024 | https://github.com/ansible/ansible/pull/82211 | e0bf76e3db3e007d039a0086276d35c28b90ff04 | afd45aca6ada1dd21fc34a9ccb206ba1e185c883 | 2023-10-18T19:38:37Z | python | 2023-11-27T09:03:42Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,020 | [".azure-pipelines/azure-pipelines.yml", "changelogs/fragments/ansible-test-rhel-9.3.yml", "test/lib/ansible_test/_data/completion/remote.txt"] | Add RHEL 9.3 to ansible-test | ### Summary
This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test` | https://github.com/ansible/ansible/issues/82020 | https://github.com/ansible/ansible/pull/82178 | 2277d470b38ff239f87b501c385d2af3948bb841 | 0bab08ee33a1aad1908f54534b48ece66cff7c50 | 2023-10-18T19:38:29Z | python | 2023-11-14T07:23:44Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,018 | [".azure-pipelines/azure-pipelines.yml", "changelogs/fragments/ansible-test-added-fedora-39.yml", "test/lib/ansible_test/_data/completion/docker.txt", "test/lib/ansible_test/_data/completion/remote.txt"] | Add Fedora 39 to ansible-test | ### Summary
This is a remote VM and container addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test` | https://github.com/ansible/ansible/issues/82018 | https://github.com/ansible/ansible/pull/82218 | 8fd1aa0d2e205ed9836fa2d4ea566faed8b857de | fbdb666411f0d2c833e2a74cbf35593b22abb69f | 2023-10-18T19:38:26Z | python | 2023-11-17T02:30:13Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,977 | ["changelogs/fragments/ansible-test-cgroup-split.yml", "test/lib/ansible_test/_internal/cgroup.py"] | Failed test when the path in cgroups informations have ":" | ### Summary
`ansible-test` fails with `ValueError: too many values to unpack (expected 3)` at the line `cid, subsystem, path = value.split(':')`
at https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/cgroup.py#L47
The data comes from https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/docker_util.py#L303
Is it possible to change the line to something like:
```python
cid, subsystem, path = value.split(':', maxsplit=2)
```
(edited after a better analysis of the problem)
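A quick check of the proposed fix, using a made-up cgroup line of the shape `<id>:<subsystems>:<path>` in which the path itself contains colons (as it apparently can on this kind of CRI-O node; the value below is hypothetical):

```python
# Hypothetical /proc/<pid>/cgroup entry; only the shape matters here.
value = "9:cpu,cpuacct:/kubepods.slice/pod-abc:123:456"

# value.split(':') would yield 5 fields and break the 3-way unpack;
# capping the split keeps everything after the second ':' in `path`.
cid, subsystem, path = value.split(':', maxsplit=2)
print(cid, subsystem, path)
# -> 9 cpu,cpuacct /kubepods.slice/pod-abc:123:456
```

With `maxsplit=2` the unpack succeeds regardless of how many colons the path contains.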
### Issue Type
Bug Report
### Component Name
cgroup.py
### Ansible Version
```console
ansible-8.5.0
ansible-core-2.15.5
```
### Configuration
```console
problem and solution already in Summary
```
### OS / Environment
cluster K8S/CRI-O with Screwdriver
### Steps to Reproduce
problem and solution already in Summary
### Expected Results
problem and solution already in Summary
### Actual Results
```console
problem and solution already in Summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81977 | https://github.com/ansible/ansible/pull/82040 | 09d943445c49c119e90787a5d28703c0d70a9271 | e933d9d8a6155478ce99518d111220e680201ca2 | 2023-10-15T14:08:23Z | python | 2023-10-19T22:30:32Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,965 | ["changelogs/fragments/ansible-galaxy-role-install-symlink.yml", "lib/ansible/galaxy/role.py", "test/integration/targets/ansible-galaxy-role/files/create-role-archive.py", "test/integration/targets/ansible-galaxy-role/tasks/dir-traversal.yml", "test/integration/targets/ansible-galaxy-role/tasks/main.yml", "test/integration/targets/ansible-galaxy-role/tasks/valid-role-symlinks.yml"] | TypeError: join() missing 1 required positional argument: 'a' in ansible-galaxy | ### Summary
```plaintext
# ansible-galaxy install willshersystems.sshd --force -vvv
ansible-galaxy [core 2.15.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.9.16 (main, Sep 8 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Starting galaxy role install process
Processing role willshersystems.sshd
Opened /root/.ansible/galaxy_token
- downloading role 'sshd', owned by willshersystems
- downloading role from https://github.com/willshersystems/ansible-sshd/archive/v0.21.0.tar.gz
- extracting willshersystems.sshd to /root/.ansible/roles/willshersystems.sshd
[WARNING]: Illegal filename '..': '..' is not allowed
ERROR! Unexpected Exception, this is probably a bug: join() missing 1 required positional argument: 'a'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 719, in run
return context.CLIARGS['func']()
File "/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 119, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1370, in execute_install
self._execute_install_role(role_requirements)
File "/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1469, in _execute_install_role
installed = role.install()
File "/usr/local/lib/python3.9/site-packages/ansible/galaxy/role.py", line 426, in install
setattr(member, attr, os.path.join(*n_final_parts))
TypeError: join() missing 1 required positional argument: 'a'
```
For some reason, the list `n_final_parts` doesn't contain any entries, which makes this call crash:
```python
os.path.join(*n_final_parts)
```
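The crash is easy to reproduce in isolation: once every path component has been rejected (here the `..` entry was dropped by the filename check, per the warning in the log), the argument list is empty and the splat call fails exactly as in the traceback:

```python
import os

n_final_parts = []  # what the sanitising loop apparently left behind
try:
    os.path.join(*n_final_parts)
    message = None
except TypeError as exc:
    message = str(exc)  # "join() missing 1 required positional argument: 'a'"
```

So any fix presumably needs to either skip such archive members entirely or guard the `os.path.join(*...)` call against an empty parts list.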
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Feb 23 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
```
### OS / Environment
Amazon Linux 2023
### Steps to Reproduce
not applicable
### Expected Results
I expect the ansible role to be installed
### Actual Results
```console
The installation process crashes, as mentioned above.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81965 | https://github.com/ansible/ansible/pull/82165 | b405958f7998efc2e1d03ecf2d22bcd9276b2533 | 3a42a0036875c8cab6a62ab9ea67a365e1dd4781 | 2023-10-13T08:08:47Z | python | 2023-11-30T23:05:48Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,901 | ["changelogs/fragments/81901-galaxy-requirements-format.yml", "lib/ansible/cli/galaxy.py", "test/units/cli/test_galaxy.py"] | ansible-galaxy failed with AttributeError | ### Summary
While specifying a `requirements.yml` to install roles, like:
```
ansible-galaxy role install -r requirements.yml -vvvv
```
With `requirements.yml` (I understand this file syntax is wrong)
```yaml
---
community.vmware
```
results in
```
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
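A hypothetical guard (helper name is mine, not the real code) showing how `_parse_requirements_file` could fail with a clear message instead of the AttributeError: YAML parses the malformed file above to a plain string, so checking the parsed type before calling `.keys()` is enough.

```python
def check_requirements_shape(file_requirements):
    """Reject anything that is not a mapping before touching .keys().
    Hypothetical helper; a real fix would live in ansible-galaxy itself."""
    if not isinstance(file_requirements, dict):
        raise ValueError(
            "Expected a dictionary with 'roles' and/or 'collections' keys, "
            "got %s" % type(file_requirements).__name__
        )
    # Mirrors the extra-keys check from the traceback above.
    return set(file_requirements) - {'roles', 'collections'}


# yaml.safe_load() turns the requirements.yml above into the bare string
# 'community.vmware', which this guard would now report cleanly.
```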
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.0.dev0] (i81713 310625996d) last updated 2023/10/04 11:27:33 (GMT -400)
config file = /Volumes/data/src/playbooks/ansible.cfg
configured module search path = ['/Users/akasurde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Volumes/data/src/ansible/lib/ansible
ansible collection location = /Users/akasurde/.ansible/collections:/usr/share/ansible/collections
executable location = /Volumes/data/src/ansible/bin/ansible
python version = 3.11.3 (main, May 10 2023, 12:50:08) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/Users/akasurde/.pyenv/versions/3.11.3/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Red Hat Enterprise Linux release 8.2 (Ootpa)
### Steps to Reproduce
Try installing a role/collection with the above `requirements.yml` file.
### Expected Results
Installation successful.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81901 | https://github.com/ansible/ansible/pull/81917 | 976067c15fea8c416fc41d264a221535c6f38872 | 8a5ccc9d63ab528b579c14c4519c70c6838c7d6c | 2023-10-04T19:38:12Z | python | 2023-10-05T19:03:01Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,897 | ["changelogs/fragments/j2_load_fix.yml", "lib/ansible/plugins/loader.py", "test/integration/targets/plugin_loader/file_collision/play.yml", "test/integration/targets/plugin_loader/file_collision/roles/r1/filter_plugins/custom.py", "test/integration/targets/plugin_loader/file_collision/roles/r1/filter_plugins/filter1.yml", "test/integration/targets/plugin_loader/file_collision/roles/r1/filter_plugins/filter3.yml", "test/integration/targets/plugin_loader/file_collision/roles/r2/filter_plugins/custom.py", "test/integration/targets/plugin_loader/file_collision/roles/r2/filter_plugins/filter2.yml", "test/integration/targets/plugin_loader/runme.sh"] | Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name | ### Summary
Due to the recent issues with the default AWX execution environment with Ansible Galaxy collections (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, to which we also upgraded.
This unfortunately ended up breaking various playbooks on our side, which made use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various constellations, I noticed that the issue has been introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role with a custom filter plugin that has the same name is being executed BEFORE the role which uses a custom filter is executed
This will then result in an error message when running the second role which states that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module its unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`
Last but not least, I also reproduced this issue when running the latest `develop` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
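For reference, the colliding files are ordinary filter plugin modules. An illustrative `roles/*/filter_plugins/custom.py` of the shape involved (the filter body is my guess from the error message above, not copied from the reproducer):

```python
class FilterModule(object):
    """Minimal Ansible filter plugin. Two roles each shipping a file with
    this exact module name (custom.py) is what triggers the collision;
    the filter contents themselves are irrelevant to the bug."""

    def filters(self):
        return {'hello': self.hello}

    @staticmethod
    def hello(value, suffix):
        return 'Hello, %s%s' % (value, suffix)
```

The second role's `{{ "Ansible" | hello("!") }}` fails only because the loader cache keyed the plugin file by its bare filename, so the first role's `custom.py` shadows the second's.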
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but the issue was also reproduced with the current `devel` branch of Ansible Core.
Other details such as the OS do not really matter; it's an issue within the plugin loader of Ansible Core and can be easily reproduced anywhere, including blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
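For completeness, this is roughly what such a filter plugin module looks like — a minimal sketch mirroring the reproducer layout; the `hello` filter name and the `second-role` path follow the example above, and the rest is just the standard `FilterModule` convention:

```python
# roles/second-role/filter_plugins/custom.py
# Minimal filter plugin sketch mirroring the reproducer layout; the
# "hello" filter name follows the example described above.

def hello(value, suffix=""):
    """Return a greeting built from the filtered value."""
    return "Hello %s%s" % (value, suffix)


class FilterModule(object):
    """Standard Ansible filter plugin entry point."""

    def filters(self):
        # Maps Jinja2 filter names to callables; two roles shipping a
        # module with the same filename ("custom.py") is what triggers
        # the loader cache collision.
        return {"hello": hello}
```

`first-role` carries the same structure with its own `custom.py` defining `goodbye`; only the identical module filename matters for triggering the bug.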
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81897 | https://github.com/ansible/ansible/pull/82002 | e933d9d8a6155478ce99518d111220e680201ca2 | b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc | 2023-10-04T16:35:38Z | python | 2023-10-20T23:00:41Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,782 | ["changelogs/fragments/urls-tls13-post-handshake-auth.yml", "lib/ansible/module_utils/urls.py"] | get_url with client_key/client_cert fails with 403 forbidden on centos stream 8 | ### Summary
We have a web server that requires a client cert for access. We use get_url to retrieve a file with client_key/client_cert. This appears to be working everywhere except on my CentOS Stream 8 machine.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
CentOS Stream 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible -m ansible.builtin.get_url -a 'url=https://microsoft.cora.nwra.com/keys/microsoft.asc dest=/etc/pki/rpm-gpg/microsoft.asc client_key=/etc/pki/tls/private/rufous.cora.nwra.com.key client_cert=/etc/pki/tls/certs/rufous.cora.nwra.com.crt mode="0644"'
```
### Expected Results
File is successfully downloaded. It works fine with curl:
```
# curl --cert /etc/pki/tls/certs/rufous.cora.nwra.com.crt --key /etc/pki/tls/private/rufous.cora.nwra.com.key https://microsoft.cora.nwra.com/keys/microsoft.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (GNU/Linux)
mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
...
```
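For comparison, the same request can be approximated outside Ansible with plain Python. This is a hedged sketch, not the `get_url` implementation: the URL and cert/key paths are the ones from the report, and enabling `post_handshake_auth` is an assumption — some servers only request the client certificate after the TLS 1.3 handshake, which curl handles but a default `SSLContext` does not:

```python
import ssl
import urllib.request

def fetch_with_client_cert(url, cert_file, key_file, timeout=10):
    """Fetch a URL using TLS client-certificate authentication."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    # Assumption: allow the server to request the client certificate
    # after the TLS 1.3 handshake (curl enables this behaviour).
    ctx.post_handshake_auth = True
    with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
        return resp.read()

# e.g.:
# fetch_with_client_cert(
#     "https://microsoft.cora.nwra.com/keys/microsoft.asc",
#     "/etc/pki/tls/certs/rufous.cora.nwra.com.crt",
#     "/etc/pki/tls/private/rufous.cora.nwra.com.key",
# )
```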
### Actual Results
```console
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" && echo ansible-tmp-1695744556.844725-24991-30365752363720="` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" ) && sleep 0'
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/arg_spec.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/locale.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/errors.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/compat/typing.py
Using module file /usr/lib/python3.11/site-packages/ansible/modules/get_url.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-249871ktvk910/tmp73jre98z TO /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ > /dev/null 2>&1 && sleep 0'
localhost | FAILED! => {
"changed": false,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"elapsed": 0,
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"checksum": "",
"ciphers": null,
"client_cert": "/etc/pki/tls/certs/rufous.cora.nwra.com.crt",
"client_key": "/etc/pki/tls/private/rufous.cora.nwra.com.key",
"decompress": true,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": "0644",
"owner": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"timeout": 10,
"tmp_dest": null,
"unredirected_headers": [],
"unsafe_writes": false,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc",
"url_password": null,
"url_username": null,
"use_gssapi": false,
"use_netrc": true,
"use_proxy": true,
"validate_certs": true
}
},
"mode": "0644",
"msg": "Request failed",
"owner": "root",
"response": "HTTP Error 403: Forbidden",
"secontext": "system_u:object_r:cert_t:s0",
"size": 983,
"state": "file",
"status_code": 403,
"uid": 0,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81782 | https://github.com/ansible/ansible/pull/82063 | f5a0c0dfc8b1aa885536cc59d848698d28042ca3 | b34f4a559ff3b4521313f5832f93806d1db853c8 | 2023-09-26T16:28:42Z | python | 2023-10-27T02:00:34Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,722 | ["changelogs/fragments/81722-handler-subdir-include_tasks.yml", "lib/ansible/playbook/included_file.py", "test/integration/targets/handlers/roles/include_role_include_tasks_handler/handlers/include_handlers.yml", "test/integration/targets/handlers/roles/include_role_include_tasks_handler/handlers/main.yml", "test/integration/targets/handlers/roles/include_role_include_tasks_handler/tasks/main.yml", "test/integration/targets/handlers/runme.sh", "test/integration/targets/handlers/test_include_tasks_in_include_role.yml"] | include_tasks within handler called within include_role doesn't work | ### Summary
If a role has an `include_tasks` handler and is dynamically included via `include_role`, Ansible cannot find the included file. However, it finds the included file perfectly well when the role with the handler is included in a play standalone.
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
MacOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The bar role is executed twice; its handlers are executed twice.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81722 | https://github.com/ansible/ansible/pull/81733 | 86fd7026a88988c224ae175a281e7e6e2f3c5bc3 | 1e7f7875c617a12e5b16bcf290d489a6446febdb | 2023-09-19T08:28:30Z | python | 2023-09-21T19:12:04Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,716 | ["changelogs/fragments/81716-ansible-doc.yml", "lib/ansible/cli/doc.py", "test/sanity/ignore.txt"] | Remove deprecated functionality from ansible-doc for 2.17 | ### Summary
ansible-doc contains deprecated calls to be removed for 2.17
### Issue Type
Feature Idea
### Component Name
`lib/ansible/cli/doc.py` | https://github.com/ansible/ansible/issues/81716 | https://github.com/ansible/ansible/pull/81729 | 3ec7a6e0db53b254fde26abc190fcb2f4af1ce88 | 4b7705b07a64408515d0e164b62d4a8f814918db | 2023-09-18T21:01:45Z | python | 2023-09-19T23:48:33Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,714 | ["changelogs/fragments/81714-remove-deprecated-jinja2_native_warning.yml", "lib/ansible/config/base.yml"] | Remove deprecated JINJA2_NATIVE_WARNING.env.0 | ### Summary
The config option `JINJA2_NATIVE_WARNING.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.17.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.17
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A | https://github.com/ansible/ansible/issues/81714 | https://github.com/ansible/ansible/pull/81720 | ab6a544e8626eb6767e9578d63b41313f287c796 | e756e359e0c1946fe5a6e9059a3108d20e32440d | 2023-09-18T20:49:14Z | python | 2023-10-02T19:57:17Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,710 | ["changelogs/fragments/log_id.yml", "lib/ansible/config/base.yml", "lib/ansible/module_utils/basic.py", "lib/ansible/module_utils/common/parameters.py", "lib/ansible/module_utils/csharp/Ansible.Basic.cs", "lib/ansible/plugins/action/__init__.py", "test/integration/targets/module_utils_Ansible.Basic/library/ansible_basic_tests.ps1"] | Configurable sampling/transfer of control-side task context metadata to targets | ### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
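A rough sketch of how a module-side log call might fold the rendered value in. Everything here is hypothetical: `_ansible_additional_task_context`, the log format, and the helper are assumptions taken from the proposal above, not existing Ansible API.

```python
# Hypothetical sketch of the proposed flow -- none of these names exist in
# Ansible today; they mirror the proposal above.
module_args = {
    "name": "httpd",
    "state": "present",
    "_ansible_additional_task_context": {"awx_job_id": 1234},  # rendered control-side
}

def log_invocation(args):
    # Module logging APIs would append the serialized context verbatim when present.
    ctx = args.get("_ansible_additional_task_context")
    suffix = f" ansible_additional_task_context={ctx!r}" if ctx else ""
    return f"Invoked with name={args['name']} state={args['state']}{suffix}"

print(log_invocation(module_args))
```

Tasks run without the config set would simply omit the suffix, keeping log output unchanged for users who never enable the feature.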
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81710 | https://github.com/ansible/ansible/pull/81711 | 4208bdbbcd994251579409ad533b40c9b0543550 | 1dd0d6fad70d7d4f423dac41822da65ff9ec95ef | 2023-09-18T16:35:01Z | python | 2023-11-30T18:12:55Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,699 | ["changelogs/fragments/81699-zip-permission.yml", "lib/ansible/modules/unarchive.py"] | unarchive skipping valid archives due to pcs check | ### Summary
The unarchive module is skipping valid archives due to the pcs check found [here](https://github.com/ansible/ansible/blob/6f65397871d089681fec5380b9ac17b62fb4e8e1/lib/ansible/modules/unarchive.py#L502C1-L504C25):
```
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
```
The zipinfo output of the zip in use by this playbook:
```
zipinfo -T -s /tmp/t/1.zip
Archive: /tmp/t/1.zip
Archive size: 14848 bytes; Members: 4
-rw-a--- 2.0 fat 2538 t- defN 20230913.162426 deployment/scripts/setup.sh
-rw-a--- 2.0 fat 743 t- defN 20230522.135636 1.ansible.vault
-rw-a--- 2.0 fat 873 t- defN 20230911.104146 2.ansible.vault
-rw-a--- 2.0 fat 12816 b- stor 20230913.162542 scripts.zip
Members: 4; Bytes uncompressed: 16970, compressed: 14362, 15.4%
Directories: 0, Files: 4, Links: 0
```
Notice the first field (pcs[0]) is 8 characters (not 7 or 10 as expected). This zip extracts just fine using unzip.
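The skip can be reproduced outside the module with one of the member lines above (a minimal sketch; the whitespace split below is an assumption mirroring how `unarchive.py` appears to tokenize each zipinfo line):

```python
# One of the valid member lines from the zipinfo output above.
line = "-rw-a---  2.0 fat     2538 t- defN 20230913.162426 deployment/scripts/setup.sh"
pcs = line.split(None, 7)

# The header/footer check quoted from unarchive.py above:
skipped = len(pcs[0]) != 7 and len(pcs[0]) != 10

print(len(pcs[0]), skipped)  # 8 True -> a real archive member is treated as header/footer
```

Because the permission field is 8 characters wide, a genuine member line matches neither expected width (7 or 10) and the whole file is silently skipped.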
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /root/.ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /bin/ansible
python version = 3.9.16 (main, May 31 2023, 12:21:58) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/root/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/.ansible.cfg) = /tmp/facts_cache
CACHE_PLUGIN_TIMEOUT(/root/.ansible.cfg) = 7200
DEFAULT_FORKS(/root/.ansible.cfg) = 50
DEFAULT_GATHERING(/root/.ansible.cfg) = smart
DEFAULT_REMOTE_USER(/root/.ansible.cfg) = ansible
HOST_KEY_CHECKING(/root/.ansible.cfg) = False
CACHE:
=====
jsonfile:
________
_timeout(/root/.ansible.cfg) = 7200
_uri(/root/.ansible.cfg) = /tmp/facts_cache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/root/.ansible.cfg) = False
remote_user(/root/.ansible.cfg) = ansible
ssh:
___
host_key_checking(/root/.ansible.cfg) = False
pipelining(/root/.ansible.cfg) = True
remote_user(/root/.ansible.cfg) = ansible
ssh_args(/root/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey
```
### OS / Environment
RHEL 8
### Steps to Reproduce
```yaml
- name: unarchive
ansible.builtin.unarchive:
    src: /tmp/t/1.zip
    dest: /tmp/t
    remote_src: yes
```
### Expected Results
changed: [HOSTNAME]
### Actual Results
```console
ok: [HOSTNAME]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81699 | https://github.com/ansible/ansible/pull/81705 | ce9d268ab88eee1e69dfdd6bf853d021d2b7d13d | 7dde4901d42e4c043adbd980c941b97cd3237bb6 | 2023-09-14T19:20:51Z | python | 2023-11-22T00:48:31Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,666 | ["changelogs/fragments/81666-handlers-run_once.yml", "lib/ansible/playbook/handler.py", "lib/ansible/plugins/strategy/__init__.py", "lib/ansible/plugins/strategy/free.py", "lib/ansible/plugins/strategy/linear.py", "test/integration/targets/handlers/runme.sh", "test/integration/targets/handlers/test_run_once.yml"] | Handler that should run once is played twice | ### Summary
When notifying a handler that should run only once (`run_once: yes`), the handler task is executed multiple times.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81666 | https://github.com/ansible/ansible/pull/81667 | 000cf1dd468a1b8db2f7db723377bd8efa909b95 | 2d5861c185fb24441e3d3919749866a6fc5c12d7 | 2023-09-08T07:56:43Z | python | 2023-10-03T18:43:46Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,659 | ["changelogs/fragments/81659_varswithsources.yml", "lib/ansible/utils/vars.py", "lib/ansible/vars/manager.py", "test/units/utils/test_vars.py"] | ANSIBLE_DEBUG causes add_host to fail | ### Summary
Saw this happening with ansible 2.15.3
When using ANSIBLE_DEBUG=1 with an add_host task, the task fails with:
ansible/utils/vars.py", line 91, in combine_vars
    result = a | b
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
Same bug with ANSIBLE_DEBUG enabled: https://github.com/ansible/ansible/issues/79763
Where fix is needed: https://github.com/ansible/ansible/blob/3ec0850df9429f4b1abc78d9ba505df12d7dd1db/lib/ansible/utils/vars.py#L91
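The failure is reproducible with any non-dict mapping on the left of `|`. The class below is a simplified stand-in for `VarsWithSources` (which is a `MutableMapping`, not a `dict` subclass), not Ansible's actual implementation:

```python
from collections.abc import MutableMapping

class VarsWithSources(MutableMapping):
    """Simplified stand-in for ansible.vars.manager.VarsWithSources."""
    def __init__(self, data):
        self.data = dict(data)
    def __getitem__(self, key):
        return self.data[key]
    def __setitem__(self, key, value):
        self.data[key] = value
    def __delitem__(self, key):
        del self.data[key]
    def __iter__(self):
        return iter(self.data)
    def __len__(self):
        return len(self.data)

a = VarsWithSources({"x": 1})
b = {"y": 2}

try:
    a | b  # what combine_vars() does via the PEP 584 dict-union operator
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for |: 'VarsWithSources' and 'dict'

# A copy-and-update merge works for any mapping:
merged = dict(a)
merged.update(b)
print(merged)  # {'x': 1, 'y': 2}
```

The `|` operator was added to `dict` but deliberately not to the `Mapping` ABCs, which is why the dict subclass assumption in `combine_vars` breaks under ANSIBLE_DEBUG.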
### Issue Type
Bug Report
### Component Name
add_host
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-config
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
Loading collection ansible.builtin from
```
### OS / Environment
Debian GNU/Linux 10 (buster)
Linux 3.10.0-1127.el7.x86_64 x86_64
### Steps to Reproduce
```
- name: create inventory
hosts: localhost
gather_facts: no
tasks:
- add_host:
name: "{{ item }}"
groups: resource_types
with_items:
- node
- pod
- namespace
- ResourceQuota
```
### Expected Results
Successful result of the playbook
### Actual Results
```console
ansible/utils/vars.py", line 91, in combine_vars
    result = a | b
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81659 | https://github.com/ansible/ansible/pull/81700 | f7234968d241d7171aadb1e873a67510753f3163 | 0ea40e09d1b35bcb69ff4d9cecf3d0defa4b36e8 | 2023-09-07T15:36:13Z | python | 2023-09-19T15:03:58Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 82,057 | ["lib/ansible/config/base.yml"] | Improve documentation of BECOME_ALLOW_SAME_USER | https://docs.ansible.com/ansible/latest/reference_appendices/config.html#become-allow-same-user
The documentation of BECOME_ALLOW_SAME_USER is quite ambiguous.
> This setting controls if become is skipped when remote user and become user are the same.
Does setting it to true, or to false, skip it? It can be read as "allowing to become the same user", or "allowing to run `become` on the same user".
> If executable, it will be run and the resulting stdout will be used as the password.
If *what* is executable? Can you set BECOME_ALLOW_SAME_USER to a file path pointing to an executable file? The next part seems to say no: `Type: boolean` | https://github.com/ansible/ansible/issues/82057 | https://github.com/ansible/ansible/pull/82059 | b34f4a559ff3b4521313f5832f93806d1db853c8 | 2908a2c32a81fca78277a22f15fa8e3abe75e092 | 2023-09-07T11:26:10Z | python | 2023-10-27T07:21:30Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,656 | ["changelogs/fragments/81656-cf_readfp-deprecated.yml", "lib/ansible/module_utils/facts/system/local.py", "lib/ansible/plugins/lookup/ini.py"] | ''ansible.builtin.ini'' uses removed function | ### Summary
lookup function fails - lookup('ansible.builtin.ini', ...)
The ansible.builtin.ini lookup uses the 'readfp' method of ConfigParser; this method was [removed ](https://issues.apache.org/jira/browse/SVN-4899#:~:text=Description,3.12%20(due%20October%202023).) in Python 3.12.
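A minimal sketch of the same lookup parameters against plain `configparser` (the INI body is made up): `readfp` is gone in 3.12, while `read_file`, its documented replacement since Python 3.2, keeps working:

```python
import configparser
import io

ini_body = "[Consul]\nconsul_token = abc123\n"
cp = configparser.ConfigParser(allow_no_value=True)

# cp.readfp(...) raises AttributeError on Python 3.12; use read_file() instead:
cp.read_file(io.StringIO(ini_body))

print(cp.get("Consul", "consul_token"))  # abc123
```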
### Issue Type
Bug Report
### Component Name
ansible.builtin.ini
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/liss/webnews/pyramid-rt4/ansible/ansible.cfg
configured module search path = ['/home/liss/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/liss/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/liss/webnews/pyramid-rt4/ansible/collections
executable location = /home/liss/.local/bin/ansible
python version = 3.12.0rc1 (main, Aug 6 2023, 17:56:34) [GCC 9.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
"Ubuntu 20.04.6 LTS" on WSL
### Steps to Reproduce
- name:
set_fact:
planning_service_monitor_consul_token: "{{ lookup('ansible.builtin.ini', 'consul_token', file='/tmp/pyr_tmp', section='Consul', allow_no_value=True) }}"
### Expected Results
Success
### Actual Results
```console
fatal: [localhost]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''ansible.builtin.ini''. Error was a <class ''AttributeError''>, original message: ''ConfigParser'' object has no attribute ''readfp''. ''ConfigParser'' object has no attribute ''readfp'''
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81656 | https://github.com/ansible/ansible/pull/81657 | a65c331e8e035bfaa5361895dafae020799f81f7 | a861b1adba5d4a12f61ed268f67a224bdaa5f835 | 2023-09-07T07:17:45Z | python | 2023-09-07T19:24:50Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,618 | ["changelogs/fragments/fix-build-files-manifest-walk.yml", "lib/ansible/galaxy/collection/__init__.py", "test/units/galaxy/test_collection.py"] | Ansible galaxy collection build build_collection fails creating the files to be added to the release tarball | ### Summary
When using:
```python
from ansible.galaxy.collection import build_collection
```
And calling it like:
```python
input_path = "/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # Where galaxy.yml is
output_path = "releases"  # The output folder relative to input_path, i.e. <input_path>/<output_path>/
build_collection(input_path, output_path, True)
```
It fails with:
```python-traceback
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 165, in run
res = self._execute()
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 660, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/action/collection_build.py", line 89, in run
galaxy.collection_build()
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/plugin_utils/automationhub.py", line 64, in collection_build
build_collection(self.options.input_path, self.options.output_path, True)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 514, in build_collection
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 1395, in _build_collection_tar
tar_file.add(
File "/usr/lib/python3.10/tarfile.py", line 2157, in add
tarinfo = self.gettarinfo(name, arcname)
File "/usr/lib/python3.10/tarfile.py", line 2030, in gettarinfo
statres = os.lstat(name)
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'",
"stdout": ""
}
```
### Issue Type
Bug Report
### Component Name
[`lib/ansible/galaxy/collection/__init__.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/__init__.py)
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/ccamacho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /home/ccamacho/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
Nothing custom or special
### OS / Environment
CentOS Stream9
### Steps to Reproduce
Using python directly:
```python
from ansible.galaxy.collection import build_collection

input_path = "/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # where galaxy.yml is
output_path = "releases"  # the output folder relative to input_path, i.e. <input_path>/<output_path>/
build_collection(input_path, output_path, True)
```
Testing the collection:
```shell
git clone https://github.com/ccamacho/automationhub
cd automationhub/ccamacho/automationhub/
ansible-galaxy collection build -v --force --output-path releases/
ansible-galaxy collection install releases/ccamacho-automationhub-1.0.0.tar.gz --force
ansible-playbook ./playbooks/collection_build.yml -vvvvv
```
### Expected Results
The collection is built correctly
### Actual Results
```python-traceback
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
```
As you can see, it should be "playbooks" instead of "laybooks": the first character of the relative path has been stripped.
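The dropped leading character is characteristic of an off-by-one while stripping the collection root prefix from each file path during the files-manifest walk. Purely as an illustration of the failure mode (hypothetical, not the actual ansible code): stripping `len(prefix) + 1` bytes assumes the prefix has no trailing separator, so with one, the first character of the relative path is eaten.

```python
# hypothetical illustration of the off-by-one: the prefix already ends with a
# separator, yet one extra byte is skipped, turning 'playbooks/...' into 'laybooks/...'
b_top = b"/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # trailing slash
b_path = b_top + b"playbooks/collection_publish.yml"
rel = b_path[len(b_top) + 1:]
print(rel)  # b'laybooks/collection_publish.yml'
```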
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81618 | https://github.com/ansible/ansible/pull/81619 | e4b9f9c6ae77388ff0d0a51d4943939636a03161 | 9244b2bff86961c896c2f2325b7c7f30b461819c | 2023-09-02T15:50:54Z | python | 2023-09-20T18:18:37Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,613 | ["changelogs/fragments/81613-remove-unusued-private-lock.yml", "lib/ansible/utils/encrypt.py"] | Possible multiprocessing semaphore leak on now-unused `_LOCK` | ### Summary
I'm in the middle of writing some Python commands around a bunch of playbooks I have. One of the commands interacts with YAML files that potentially include Ansible Vault encrypted strings and generates passwords, so I'm shortcutting and importing the various classes and functions from the `ansible` package for use in my own code.
One command imports `ansible.utils.encrypt:random_password` then eventually calls `os.execlp`. When the exec call happens I then get a warning from Python:
```
.../3.11.4/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
I managed to track the culprit down to this object: https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/encrypt.py#L46
which after a quick search no longer appears to be used anywhere.
I've updated my code with the following to get rid of the issue. Ideally this lock would either be removed or relocated out of the global scope.
```
import ansible.utils.encrypt
del ansible.utils.encrypt._LOCK
```
### Issue Type
Bug Report
### Component Name
ansible utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = .../ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = .../lib/python3.11/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = .../bin/ansible
python version = 3.11.4 (main, Aug 31 2023, 14:29:13) [Clang 14.0.3 (clang-1403.0.22.14.1)] (.../bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = ./ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(./ansible.cfg) = ['./ansible/filters']
DEFAULT_ROLES_PATH(/opt/dev/superna/seed/ansible.cfg) = ['./ansible/roles']
EDITOR(env: EDITOR) = mvim -f
```
### OS / Environment
macOS 13.4.1
### Steps to Reproduce
```python
import os
import ansible.utils.encrypt
os.execlp('true', 'true')
```
### Expected Results
Expected no warnings.
### Actual Results
```console
.../lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81613 | https://github.com/ansible/ansible/pull/81614 | 786a8abee6b9e5af0ee557f4c794ea46a33e8922 | 24aac5036934f65e2cb0b0e1a30f306c6b1f24e6 | 2023-08-31T19:36:33Z | python | 2023-09-05T15:02:56Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,608 | ["lib/ansible/plugins/action/__init__.py"] | Incorrect link for βRisks of becoming an unprivileged userβ | ### Summary
In the file `lib/ansible/plugins/action/__init__.py` thereβs a link to document _Understanding privilege escalation: become_. The link has an anchor that refers to the section _Risks of becoming an unprivileged user_. However, there is an error in the code that makes the URL incorrect.
The correct URL is: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user
The URL that is displayed to the user is: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user#risks-of-becoming-an-unprivileged-user
The error is that the anchor part of the URL gets repeated.
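A stand-alone sketch of the duplication (`get_versioned_doclink` is a simplified stand-in for ansible's helper of the same name): the anchor is baked into `become_link`, and the calling code appends it a second time when formatting the message.

```python
def get_versioned_doclink(path):
    # simplified stand-in for ansible's helper of the same name
    return "https://docs.ansible.com/ansible-core/2.15/" + path

become_link = get_versioned_doclink(
    "playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user"
)
# the calling code appends the anchor again when building the warning text:
msg = "For information on securing this, see %s#risks-of-becoming-an-unprivileged-user" % become_link
print(msg.count("#risks-of-becoming-an-unprivileged-user"))  # 2
```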
### Suggested solution
`become_link` is currently defined like this:
`become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user')`
One option could be to instead define it like this:
`become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html')`
I believe this would work since the anchor part (currently) gets added when `become_link` is referenced.
Another solution could be to omit the anchor part when referencing `become_link`. Example:
```
display.warning(
'Using world-readable permissions for temporary files Ansible '
'needs to create when becoming an unprivileged user. This may '
'be insecure. For information on securing this, see %s'
% become_link)
```
### Issue Type
Bug Report
### Component Name
lib
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/Users/carl.winback/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.3.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/carl.winback/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.3.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = emacsclient
```
### OS / Environment
macOS Ventura 13.5.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- name: foo
ansible.builtin.shell: echo "hello"
become: yes
become_user: johndoe
```
### Expected Results
I expected the documentation link to look like this: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user
### Actual Results
```console
fatal: [foo.example.com]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chmod: invalid mode: βA+user:johndoe:rx:allowβ\nTry 'chmod --help' for more information.\n}). For information on working around this, see https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user#risks-of-becoming-an-unprivileged-user"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81608 | https://github.com/ansible/ansible/pull/81623 | 24aac5036934f65e2cb0b0e1a30f306c6b1f24e6 | 48d8e067bf6c947a96750b8a61c7d6ef8cad594b | 2023-08-31T07:16:49Z | python | 2023-09-05T15:45:57Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,574 | ["changelogs/fragments/reboot.yml", "lib/ansible/plugins/action/reboot.py"] | reboot module times out without useful error when SSH connection fails with "permission denied" | ### Summary
The reboot module waits until it can SSH to the server again. It keeps trying until it either succeeds or the specified timeout expires.
When the SSH connection fails because the user does not have access to the server it just keeps waiting until the timeout expires and eventually reports a generic "timeout expired" message, which is unhelpful. It would be helpful if the module:
1. Reported the actual error it got on the last attempt before it gave up.
2. Ideally, distinguished permission errors like "Permission denied (publickey)." or "Your account has expired; please contact your system administrator." from the server not being up yet and gave up without waiting out the timeout for permission errors. (Or maybe those errors should have a separate timeout, lower by default?)
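A hedged sketch of what suggestion 2 could look like (the helper name and marker strings are illustrative, nothing like this exists in the reboot plugin today): classify the SSH stderr so that authentication failures stop the retry loop early, while "host not up yet" errors keep retrying.

```python
def classify_ssh_failure(stderr: str) -> str:
    """Hypothetical helper: decide whether a failed connection attempt is worth
    retrying (host still booting) or fatal (authentication will never succeed,
    so waiting out the full timeout is useless)."""
    fatal_markers = (
        "Permission denied",
        "Your account has expired",
    )
    if any(marker in stderr for marker in fatal_markers):
        return "fatal"
    # 'Connection refused', timeouts, etc. just mean the host is not up yet
    return "retry"

print(classify_ssh_failure("ubuntu@MYHOST: Permission denied (publickey)."))  # fatal
print(classify_ssh_failure("ssh: connect to host MYHOST port 22: Connection refused"))  # retry
```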
### Issue Type
Bug Report
### Component Name
ansible.builtin.reboot
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.9]
config file = /media/sf_work/lops/ansible/ansible.cfg
configured module search path = ['/home/em/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/em/.direnv/python-3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /home/em/.ansible/collections:/usr/share/ansible/collections
executable location = /home/em/.direnv/python-3.9.5/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/em/.direnv/python-3.9.5/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /media/sf_work/lops/ansible/ansible.cfg
DEFAULT_BECOME(/media/sf_work/lops/ansible/ansible.cfg) = True
DEFAULT_FILTER_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/filter_plugins']
DEFAULT_HASH_BEHAVIOUR(/media/sf_work/lops/ansible/ansible.cfg) = merge
DEFAULT_LOG_PATH(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/media/sf_work/lops/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/media/sf_work/lops/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/tools/vault-keyring.sh
DIFF_ALWAYS(/media/sf_work/lops/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/media/sf_work/lops/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/media/sf_work/lops/ansible/ansible.cfg) = ignore
CONNECTION:
==========
ssh:
___
pipelining(/media/sf_work/lops/ansible/ansible.cfg) = True
```
### OS / Environment
Ubuntu 22.04 target, Linux Mint 20 control machine
### Steps to Reproduce
Run a playbook that somehow disables the user it connects as (in my case I added `AllowGroups` to /etc/ssh/sshd_config and the user it connected as was not a member of that group) and then runs the `reboot` module.
### Expected Results
The target machine reboots and the `reboot` module then fails with an error like "Connection failed due to SSH error: Permission denied (publickey)." - preferably without waiting 10 minutes (the default timeout).
### Actual Results
The target machine reboots as expected, but the Ansible play appears to hang in the reboot module.
With ANSIBLE_DEBUG=1 I can see the real problem:
```console
148477 1692889630.14999: reboot: last boot time check fail 'Failed to connect to the host via ssh: ssh: connect to host MYHOST port 22: Connection refused', retrying in 12.3 seconds...
148477 1692889643.07223: reboot: last boot time check fail 'Failed to connect to the host via ssh: ubuntu@MYHOST: Permission denied (publickey).', retrying in 12.84 seconds...
... then more of the same until the timeout expires
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81574 | https://github.com/ansible/ansible/pull/81578 | 81c83c623cb78ca32d1a6ab7ff8a3e67bd62cc54 | 2793dfa594765d402f61d80128e916e0300a38fc | 2023-08-24T15:55:11Z | python | 2023-09-15T17:50:26Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,553 | ["changelogs/fragments/81555-add-warning-for-illegal-filenames-in-roles.yaml", "lib/ansible/galaxy/role.py", "test/integration/targets/ansible-galaxy-role/tasks/main.yml"] | ansible-galaxy install of roles with Java inner classes fails due to $ in the file name | ### Summary
When I try to install a role that contains files with `$` in their names (such as Java class names), it fails because `$` is disallowed in file names, since file names could get evaluated via `os.path.expandvars`.
See https://github.com/ansible/galaxy/issues/271 for the original context
The error message is "Not a directory" (errno 20).
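To see why `$` is treated as dangerous, note what `os.path.expandvars` does to such a member name when a matching environment variable happens to exist (the variable name here is chosen to match the class file in the traceback below):

```python
import os

os.environ["CMC_VWAP"] = "surprise"
name = "files/udf/$CMC_VWAP.class"
print(os.path.expandvars(name))  # files/udf/surprise.class
```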
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = /Users/atsalolikhin/.ansible.cfg
configured module search path = ['/Users/atsalolikhin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/atsalolikhin/py-3.9.13/lib/python3.9/site-packages/ansible
ansible collection location = /Users/atsalolikhin/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/atsalolikhin/py-3.9.13/bin/ansible
python version = 3.9.13 (main, Jul 20 2022, 17:45:35) [Clang 13.0.0 (clang-1300.0.29.3)] (/Users/atsalolikhin/py-3.9.13/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
Add this to your `requirements.yaml`:
```yaml
- src: https://gitlab.com/atsaloli/ansible-galaxy-issue-271.git
version: main
scm: git
```
and then try to install roles, with e.g., `ansible-galaxy install -r requirements.yml`
### Expected Results
I expected the role to be installed.
### Actual Results
```console
The role install failed.
$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- extracting ansible-galaxy-issue-271 to /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271
[WARNING]: - ansible-galaxy-issue-271 was NOT installed successfully: Could not update files in /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271:
[Errno 20] Not a directory: '/home/atsaloli/.ansible/roles/ansible-galaxy-issue-271/files/udf/CMC_VWAP.class'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
$
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81553 | https://github.com/ansible/ansible/pull/81555 | 4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4 | bdaa091b33f0ebb273c6ad99b3835530ba2b5a30 | 2023-08-21T18:54:45Z | python | 2023-08-28T18:54:08Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,533 | ["changelogs/fragments/any_errors_fatal-fixes.yml", "lib/ansible/plugins/strategy/linear.py", "test/integration/targets/any_errors_fatal/31543.yml", "test/integration/targets/any_errors_fatal/36308.yml", "test/integration/targets/any_errors_fatal/73246.yml", "test/integration/targets/any_errors_fatal/80981.yml", "test/integration/targets/any_errors_fatal/runme.sh", "test/integration/targets/handlers/force_handlers_blocks_81533-1.yml", "test/integration/targets/handlers/force_handlers_blocks_81533-2.yml", "test/integration/targets/handlers/runme.sh"] | Block with run_once leads to no more hosts error on failure earlier of first host | ### Summary
If `run_once` is used at the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on a single task functions as expected
* if a host other than the first fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functioned as expected.
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Demo run_once block no more hosts error
become: false
gather_facts: false
hosts: all
tasks:
- name: Task pre
debug:
msg: "debug all nodes"
- name: Fail first host
fail:
when: inventory_hostname == 'host1'
- name: Block run_once
run_once: true
block:
- name: Block debug 1
debug:
msg: "debug run_once task 1"
- name: Block debug 2
debug:
msg: "debug run_once task 2"
- name: Task post
debug:
msg: "debug remaining hosts"
```
### Expected Results
All the tasks of the block should run using the `run_once` functionality on the first available host that has not failed (host2 in this case.
After the block is done without failures the play should continue.
```console
# Ansible 2.12.5
ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81533 | https://github.com/ansible/ansible/pull/78680 | c827dc0dabff8850a73de9ca65148a74899767f2 | fe94a99aa291d129aa6432e5d50e7117d9c6aae3 | 2023-08-17T15:13:47Z | python | 2023-10-25T07:42:13Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,532 | ["changelogs/fragments/81532-fix-nested-flush_handlers.yml", "lib/ansible/executor/play_iterator.py", "test/integration/targets/handlers/nested_flush_handlers_failure_force.yml", "test/integration/targets/handlers/runme.sh"] | Handler triggered in block does not run rescue/always tasks | ### Summary
When handlers are run from within a block using `meta: flush_handlers`, tasks in the block's `rescue:` and `always:` sections are not executed.
If the block has a `rescue:` section then the failure of a handler triggered in the block will cause the host to be rescued, but the rescue task is not actually executed.
Whether the handlers are defined somewhere else in the play (outside of the block) or defined from within a role that was included in the block does not influence the behavior.
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functioned as expected.
The new behavior is possibly related to the changes introduced to address the following issues:
- #65067
- #52561
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo handler in block rescue/always error
  become: false
  gather_facts: false
  hosts: all
  tasks:
    - block:
        - name: Block debug
          debug:
            msg: "debug 1 - notify handler in block"
          changed_when: True
          notify: Handler
        - meta: flush_handlers
      rescue:
        - name: Rescue debug
          debug:
            msg: "debug 2 - rescue failed hosts"
      always:
        - name: Always debug
          debug:
            msg: "debug 3 - run on all hosts"
  handlers:
    - name: Handler
      fail:
      when: inventory_hostname == 'host1'
```
### Expected Results
The handler triggered in the block should fail on host1.
The rescue task should run on host1 and the host should be marked as rescued instead of failed.
The always task should run on all hosts (also on the failed host1).
```console
# Ansible 2.12.5
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo handler in block rescue/always error] ******************************************************
TASK [Block debug] ************************************************************************************
changed: [host1] => {
"msg": "debug 1 - notify handler in block"
}
changed: [host2] => {
"msg": "debug 1 - notify handler in block"
}
TASK [meta] *******************************************************************************************
RUNNING HANDLER [Handler] *****************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Rescue debug] ***********************************************************************************
ok: [host1] => {
"msg": "debug 2 - rescue failed hosts"
}
TASK [Always debug] ***********************************************************************************
ok: [host1] => {
"msg": "debug 3 - run on all hosts"
}
ok: [host2] => {
"msg": "debug 3 - run on all hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo handler in block rescue/always error] ******************************************************
TASK [Block debug] ************************************************************************************
changed: [host1] => {
"msg": "debug 1 - notify handler in block"
}
changed: [host2] => {
"msg": "debug 1 - notify handler in block"
}
TASK [meta] *******************************************************************************************
TASK [meta] *******************************************************************************************
RUNNING HANDLER [Handler] *****************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Always debug] ***********************************************************************************
ok: [host2] => {
"msg": "debug 3 - run on all hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81532 | https://github.com/ansible/ansible/pull/81572 | 9c09ed73928272f898d18a2eada21f7357b418e4 | a8b6ef7e7cbabaf87e57ea7df9df75eb7e7d1ab5 | 2023-08-17T14:50:08Z | python | 2023-11-13T08:57:43Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,486 | ["changelogs/fragments/role-deduplication-condition.yml", "lib/ansible/plugins/strategy/__init__.py", "test/integration/targets/roles/role_complete.yml", "test/integration/targets/roles/roles/failed_when/tasks/main.yml", "test/integration/targets/roles/roles/recover/tasks/main.yml", "test/integration/targets/roles/roles/set_var/tasks/main.yml", "test/integration/targets/roles/roles/test_connectivity/tasks/main.yml", "test/integration/targets/roles/runme.sh"] | Role dependencies: change in behaviour when a role runs conditionally | ### Summary
Hi,
Since Ansible 8, when multiple roles include the same dependency and the first role doesn't run because of a `when` condition, the dependency is not included at all.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/Documents/Bimdata/dev/deployment/library', '/home/courgette/Documents/Bimdata/dev/deployment/kubespray/library']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Archlinux
### Steps to Reproduce
- Create a file `roles/test_a/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_a"
```
- Create a file `roles/test_b/meta/main.yml` with:
```
---
dependencies:
- role: test_a
```
- Create a file `roles/test_b/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_b
```
- Duplicate test_b into test_c: `cp -r roles/test_b roles/test_c`
- Modify the debug message of the test_c role: `sed -i 's/test_b/test_c/g' roles/test_c/tasks/main.yml`
- Create the file `test.yml` with:
```
---
- name: Test
  hosts: localhost
  gather_facts: false
  become: false
  vars:
    skip_b: true
    skip_c: false
  roles:
    - role: test_b
      when: not skip_b
    - role: test_c
      when: not skip_c
```
- Run : `ansible-playbook test.yml`
### Expected Results
I expected test_a to be included by test_c, but I was shocked that it was not.
It works fine with Ansible 7. For reference, here is the complete version of Ansible 7 that I use, and the corresponding output.
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_a"
}
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81486 | https://github.com/ansible/ansible/pull/81565 | 3eb96f2c68826718cda83c42cb519c78c0a7a8a8 | 8034651cd2626e0d634b2b52eeafc81852d8110d | 2023-08-10T17:14:55Z | python | 2023-09-08T16:11:48Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,474 | ["changelogs/fragments/restore_role_param_precedence.yml", "lib/ansible/vars/manager.py", "test/integration/targets/var_precedence/test_var_precedence.yml"] | Role parameter: change in variable precedence | ### Summary
Hi;
Since Ansible 8.0.0 (I think, I didn't check every ansible-core version), it seems that role parameters do not take precedence over already defined facts.
According to the [documentation](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence), role (and include_role) params (line 20) should still take precedence over set_facts / registered vars (line 19). I also checked the [Ansible 8.x changelogs](https://github.com/ansible-community/ansible-build-data/blob/main/8/CHANGELOG-v8.rst) but I didn't see anything about that, except maybe [this bug fix](https://github.com/ansible-community/ansible-build-data/blob/a8cf3895cd98246316ab6172ec684935e0013b45/8/CHANGELOG-v8.rst#L3397); I'm not sure what `Also adjusted the precedence to act the same as inline params` means or what the expected impacts are. But if this is an intended new behavior, I feel it should be detailed in the major changes section, not under bug fixes.
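The documented ordering can be pictured as a layered lookup in which higher-precedence sources shadow lower ones. A minimal conceptual sketch with plain Python dicts — purely illustrative, not Ansible's actual variable manager:

```python
from collections import ChainMap

# Layers listed highest-precedence first, mirroring the documented order:
# role (and include_role) params (line 20) above set_fact / registered
# vars (line 19).
set_facts = {"test_set_one": "set by test_set"}          # precedence line 19
role_params = {"test_set_one": "Set as role parameter"}  # precedence line 20

# ChainMap returns the value from the first mapping that contains the key,
# so the role param shadows the fact -- the behaviour the docs describe.
effective = ChainMap(role_params, set_facts)
print(effective["test_set_one"])  # -> Set as role parameter
```

Under the reported regression, the result instead behaves as if the role-params layer were consulted *after* the fact layer.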
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Tested on Archlinux and also in python:3.11 docker container.
### Steps to Reproduce
Create `roles/test_set/tasks/main.yml` with:
```
---
- name: "Set test fact."
ansible.builtin.set_fact:
test_set_one: "set by test_set"
```
Create `roles/test_debug/tasks/main.yml` with:
```
---
- name: "Debug the variable."
ansible.builtin.debug:
var: test_set_one
```
Create `test.yml` with:
```
---
- name: Test
  hosts: localhost
  gather_facts: false
  become: false
  roles:
    - test_set
    - test_debug
    - role: test_debug
      test_set_one: "Set as role parameter"
```
Run: `ansible-playbook test.yml`.
### Expected Results
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "Set as role parameter"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This is a the result with ansible 7.x, here the corresponding version used to obtain it:
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81474 | https://github.com/ansible/ansible/pull/82106 | 5ac62473b09405786ca08e00af4da6d5b3a8103d | 20a54eb236a4f77402daa0d7cdaede358587c821 | 2023-08-09T09:06:12Z | python | 2023-11-06T14:18:35Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,457 | ["changelogs/fragments/inventory_ini.yml", "lib/ansible/plugins/inventory/ini.py", "test/integration/targets/inventory_ini/inventory.ini", "test/integration/targets/inventory_ini/runme.sh"] | INI format inventory throws "invalid decimal literal" | ### Summary
The following minimal inventory
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
```
makes Ansible complain:
```text
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
```
But I fail to understand why.
As soon as I adjust the entries as follows (adding a non-numerical character right before the `.`), it stops complaining, but of course that does not help me, as the DNS names are then wrong.
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01foo.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01bar.internal.pcfe.net ansible_user=ansible
```
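The warning is plausibly a side effect of the INI parser trying to coerce values into Python literals: when a value such as `gitlab-runner-01.internal.pcfe.net` is handed to something like `ast.literal_eval`, Python's tokenizer trips over `01.internal` (a number immediately followed by letters — an "invalid decimal literal"). A hedged sketch of that coercion pattern, not the plugin's actual code:

```python
import ast

def coerce(value):
    """Try to interpret an inventory value as a Python literal, falling
    back to the raw string. Compiling a host name like
    'gitlab-runner-01.internal.pcfe.net' is the kind of step that can
    surface tokenizer complaints even though the fallback succeeds."""
    try:
        return ast.literal_eval(value)
    except Exception:
        return value

print(coerce("42"))                                  # coerced to int 42
print(coerce("gitlab-runner-01.internal.pcfe.net"))  # falls back to the string
```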
### Issue Type
Bug Report
### Component Name
ini plugin
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
pcfe@t3600 ~ $ cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (KDE Plasma)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (KDE Plasma)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="KDE Plasma"
VARIANT_ID=kde
pcfe@t3600 ~ $ rpm -qf $(which ansible)
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
pcfe@t3600 ~ $ pwd
/home/pcfe
pcfe@t3600 ~ $ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
pcfe@t3600 ~ $ rpm -qf /etc/ansible/ansible.cfg
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
gitlab-runner-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
zimaboard-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
pcfe@t3600 ~ $
```
### Expected Results
I would like to understand what makes Ansible complain about this minimal INI type inventory.
If there is nothing I did wrong in the inventory, then I would like Ansible to please not complain about invalid decimal literals.
Additionally, if there is something invalid in my INI format inventory, then it would be nice if Ansible were more specific towards the user (affected line number, pointer to which part of some spec I violate).
Not every user might be comfortable zeroing in on such inventory lines with for example `git bisect` (which is how I found the two offending lines in my full inventory).
### Actual Results
```console
pcfe@t3600 ~ $ ls -l /home/pcfe/.ansible/plugins /home/pcfe/.ansible/collections
ls: cannot access '/home/pcfe/.ansible/plugins': No such file or directory
ls: cannot access '/home/pcfe/.ansible/collections': No such file or directory
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all -vvvv
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
script declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
Parsed /home/pcfe/tmp/inventory.ini inventory source with ini plugin
[β¦]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81457 | https://github.com/ansible/ansible/pull/81707 | 2793dfa594765d402f61d80128e916e0300a38fc | a1a6550daf305ec9815a7b12db42c68b63426878 | 2023-08-07T15:22:39Z | python | 2023-09-18T14:50:50Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,455 | ["changelogs/fragments/ansible-vault.yml", "lib/ansible/parsing/vault/__init__.py", "test/integration/targets/ansible-vault/runme.sh"] | Ansible-vault encrypt corrupts file if directory is not writeable | ### Summary
If you have write permissions for the file you want to encrypt, but not for the directory it's in, `ansible-vault encrypt` corrupts the file.
My colleague originally encountered this problem on `ansible [core 2.13.1]` with `python version = 3.8.10`, which I believe is the version from the Ubuntu 20.04 package repo, but I managed to reproduce it on a fresh install of ansible core 2.15.2.
I'd upload the original and corrupted file but my corporate firewall stupidly blocks gists and that's too annoying to work around right now. If you do think the files would help, just let me know and I'll make it work.
### Issue Type
Bug Report
### Component Name
ansible-vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['<homedir>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <homedir>/test_ansible/lib/python3.10/site-packages/ansible
ansible collection location = <homedir>/.ansible/collections:/usr/share/ansible/collections
executable location = <homedir>/test_ansible/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (<homedir>/test_ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04 LTS
### Steps to Reproduce
Consider the following directory and file.
```bash
> ls testdir/
drwxr-sr-x 2 root some_group 4,0K 2023 Aug 07 (Mo) 11:34 .
-rw-rw---- 1 root some_group 3,7K 2023 Aug 07 (Mo) 11:34 .bashrc
```
I'm not the owner of the file or directory but I'm part of that group. Because of a config error on my part, the directory is missing group write permissions. (I'm not actually trying to encrypt my .bashrc, that's just a convenient file for reproducing the problem.) I _am_ able to write to the existing file but I'm not able to create new files in that directory without using sudo. I think ansible-vault tries to move the new, encrypted file into the directory. That fails of course, and somehow corrupts the file in the process.
```bash
> ansible-vault encrypt testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
to see the full traceback, use -vvv
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
.bashrc is now in a corrupted state, as indicated by `less` recognizing it as binary.
If the group doesn't have write permissions on the file, ansible-vault fails with the same error message and the file is, correctly, not touched. The corruption happens because ansible-vault is able to write to the file.
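One way to never leave the original in a corrupted state is the classic write-to-sibling-temp-then-rename pattern: the original file is only replaced after the new content is completely on disk, and a directory that cannot be written to makes the operation fail *before* anything touches the original. A minimal sketch of that safe approach (illustrative only, not Ansible's actual implementation):

```python
import os
import tempfile

def replace_atomically(path, data):
    """Write bytes to a sibling temp file, then atomically swap it in.
    If the directory is not writable, mkstemp fails up front and the
    original file is left untouched instead of corrupted."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)  # fails early on unwritable dir
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)
        raise
```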
### Expected Results
Ansible-vault should either correctly encrypt and write the file or should fail and leave the file untouched. Under no circumstance should it corrupt my file.
### Actual Results
```console
> ansible-vault encrypt -vvv testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
the full traceback was:
Traceback (most recent call last):
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 248, in run
context.CLIARGS['func']()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 261, in execute_encrypt
self.editor.encrypt_file(f, self.encrypt_secret,
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 906, in encrypt_file
self.write_data(b_ciphertext, output_file or filename)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 1083, in write_data
self._shred_file(thefile)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 837, in _shred_file
os.remove(tmp_path)
PermissionError: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81455 | https://github.com/ansible/ansible/pull/81660 | a861b1adba5d4a12f61ed268f67a224bdaa5f835 | 6177888cf6a6b9fba24e3875bc73138e5be2a224 | 2023-08-07T10:30:13Z | python | 2023-09-07T19:30:05Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,446 | ["lib/ansible/plugins/filter/core.py", "lib/ansible/plugins/filter/path_join.yml"] | path_join filter documentation missing important detail | ### Summary
There is an undocumented gotcha in path_join. It is probably just a side effect of os.path.join's behaviour.
For a sparse backup task I had two tasks:
```
- ansible.builtin.file:
state: directory
path: "{{ (backup_dir, item) | path_join }}"
loop: "{{template_dirs}}"
- ansible.builtin.copy:
remote_src: true
src: "{{item}}"
dest: "{{ (backup_dir, item) | path_join }}"
  loop: "{{ templates | map(attribute='dest') }}"
```
`template_dirs` is a list of relative paths, `templates.dest` is the absolute path of the destination config. For the first task, path_join resulted in `/tmp/somedir/directory/in/list`; for the second, it produced `/directory/in/list/filename`. `(backup_dir, item[1:]) | path_join` produced the expected result of `/tmp/somedir/directory/in/list/filename`.
The documentation for the path_join filter should be clear that a list entry with an absolute path overrides earlier elements in the list, and there should be an example of it doing that.
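Since `path_join` wraps `os.path.join`, the override is easy to demonstrate in plain Python: a later absolute component discards everything before it. The sketch below uses `posixpath`, which is `os.path` on POSIX systems.

```python
import posixpath  # os.path on POSIX; the same join rule path_join inherits

# a later absolute component resets the join entirely
print(posixpath.join("/tmp/somedir", "dir/in/list"))        # /tmp/somedir/dir/in/list
print(posixpath.join("/tmp/somedir", "/dir/in/list/file"))  # /dir/in/list/file

# stripping the leading slash restores the expected concatenation
print(posixpath.join("/tmp/somedir", "/dir/in/list/file".lstrip("/")))
# /tmp/somedir/dir/in/list/file
```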
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/filter/core.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['/home/rorsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rorsten/miniconda3/lib/python3.11/site-packages/ansible
ansible collection location = /home/rorsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rorsten/miniconda3/bin/ansible
python version = 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (/home/rorsten/miniconda3/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 16.04, 20.04; MacOS 12.6.7
### Additional Information
It is not unreasonable to expect that the output of list | path_join would be the concatenation of all of the path elements in the list, but this seems not to be the actual behaviour. The clarification could prevent quite a bit of frustration.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81446 | https://github.com/ansible/ansible/pull/81544 | 863e2571db5d70b03478fd18efef15c4bde88c10 | 4a96b3d5b41fe9f0d12d899234b22e676d82e804 | 2023-08-04T18:41:17Z | python | 2023-08-22T15:12:21Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,404 | ["changelogs/fragments/dpkg_selections.yml", "lib/ansible/modules/dpkg_selections.py", "test/integration/targets/dpkg_selections/tasks/dpkg_selections.yaml"] | Module ansible.builtin.dpkg_selections not idempotent. | ### Summary
The module `ansible.builtin.dpkg_selections` keeps reporting `changed` on every run with the same arguments.
### Issue Type
Bug Report
### Component Name
dpkg_selections
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.1]
config file = None
configured module search path = ['/Users/OBFUSCATED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/OBFUSCATED/venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/OBFUSCATED/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/OBFUSCATED/venv/bin/ansible
python version = 3.9.17 (main, Jun 6 2023, 14:33:55) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/Users/OBFUSCATED/venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Controller: Mac OS X
Target: A debian container.
### Steps to Reproduce
The simplest form I could create:
```shell
ansible -m ansible.builtin.dpkg_selections -a "name=kernel selection=hold" -i 2ca39af7901a, -c docker all
```
### Expected Results
I was expecting `changed: true` once, but it is reported as changed on every run.
### Actual Results
```console
2ca39af7901a | CHANGED => {
"after": "hold",
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"before": "not present",
"changed": true
}
```
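The output above suggests the module applies the selection without first reading the current state (`before` is always "not present"). An idempotent flow would compare the current selection first; a sketch of the expected logic, not the module's code:

```python
def apply_selection(current: str, wanted: str) -> dict:
    # only report a change when the selection actually differs
    changed = current != wanted
    return {"before": current, "after": wanted, "changed": changed}

print(apply_selection("install", "hold"))  # first run: changed is True
print(apply_selection("hold", "hold"))     # second run: changed is False
```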
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81404 | https://github.com/ansible/ansible/pull/81406 | 95fdd555b38f4fa885f46454675b293e8021cd85 | f10d11bcdc54c9b7edc0111eb38c59a88e396d0a | 2023-08-02T14:47:46Z | python | 2023-08-03T21:01:20Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,376 | ["changelogs/fragments/dnf-update-only-latest.yml", "lib/ansible/modules/dnf.py", "test/integration/targets/dnf/tasks/dnf.yml"] | dnf module failure for a package from URI with state=latest update_only=true | ### Summary
Unable to use `latest` together with the `update_only` option when installing a package from a URL with the DNF module.
Tested on https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm and https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
The states `latest` or `present` without `update_only` work as expected.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
AlmaLinux release 9.2 (Turquoise Kodkod)
### Steps to Reproduce
Cleran setup Alma linux (using Vagrant)
<!--- Paste example playbooks or commands between quotes below -->
```shell
# install certificates
sudo dnf -y install ca-certificates
# install ansible itself
sudo dnf -y install ansible-core
# import the EPEL GPG key
ansible localhost --become -m ansible.builtin.rpm_key -a "key='https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9' state=present"
# install the package from a URL
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=present"
# try to update it to the latest version
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
# fails with Error: AttributeError: 'Package' object has no attribute 'rpartition'
```
Based on the Ansible-lint rule [package-latest](https://ansible.readthedocs.io/projects/lint/rules/package-latest/#correct-code), I'd expect the correct flow to be: install a package with *state=present*, and afterwards (if requested) update to the latest version using *state=latest* together with *update_only=true*.
The intention is to be sure that the package is updated if the playbook is executed several months after the initial installation.
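That two-step flow can be sketched as tasks (package name illustrative); it is the second step of this pattern that currently crashes when the name is a URL:

```yaml
- name: Ensure the package is installed
  ansible.builtin.dnf:
    name: epel-release
    state: present

- name: Update it to the latest version if already installed
  ansible.builtin.dnf:
    name: epel-release
    state: latest
    update_only: true
```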
### Expected Results
I expect package installation with both options *state=latest* and *update_only=true* will work or report a misuse instead of failing with an exception.
### Actual Results
```console
$ ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
$ ansible -vvvv localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: zelenya
<127.0.0.1> EXEC /bin/sh -c 'echo ~zelenya && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zelenya/.ansible/tmp `"&& mkdir "` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" && echo ansible-tmp-1690820028.8473563-4456-128814682130530="` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" ) && sleep 0'
Using module file /usr/lib/python3.11/site-packages/ansible/modules/dnf.py
<127.0.0.1> PUT /home/zelenya/.ansible/tmp/ansible-local-4452uveazbbj/tmpl3btemgd TO /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-zrmfzgclfrznlmkupsgkhvmruypmywyc ; /usr/bin/python3.11 /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 16, in <module>
File "/usr/lib64/python3.9/runpy.py", line 225, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1460, in <module>
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1449, in main
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1423, in run
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1134, in ensure
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 982, in _install_remote_rpms
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 949, in _update_only
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 795, in _is_installed
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 481, in _split_package_arch
AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81376 | https://github.com/ansible/ansible/pull/81568 | da63f32d59fe882bc77532e734af7348b65cb6cb | 4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4 | 2023-07-31T16:14:35Z | python | 2023-08-28T08:48:45Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,349 | ["lib/ansible/modules/command.py", "lib/ansible/modules/script.py", "lib/ansible/plugins/action/script.py", "test/integration/targets/script/tasks/main.yml"] | ansible.builtin.script creates field behavior and documentation | ### Summary
* `creates` field in ansible documentation should specify `type: str`
* `creates` field should be type checked. For example, passing a `bool` triggers vague stack trace
### Issue Type
Bug Report
### Component Name
ansible.builtin.script
### Ansible Version
```console
ansible [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Example Playbook
hosts: localhost
gather_facts: false
tasks:
- name: Run Script
ansible.builtin.script:
cmd: pwd
chdir: "/usr/bin"
creates: true
register: script_output
- name: Display script output
debug:
var: script_output.stdout
```
### Expected Results
Expected the module to report that `str` value should be provided instead of `bool`.
### Actual Results
```console
ansible-playbook [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible-playbook
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
script declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
auto declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
yaml declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
Parsed /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yaml ************************************************************
Positional arguments: test.yaml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini',)
forks: 5
1 plays in test.yaml
PLAY [Example Playbook] ********************************************************
TASK [Run Script] **************************************************************
task path: /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/test.yaml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: john
<127.0.0.1> EXEC /bin/sh -c 'echo ~john && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" && echo ansible-tmp-1690392695.3185742-1664374-141461953585361="` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 633, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/script.py", line 52, in run
if self._remote_file_exists(creates):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 204, in _remote_file_exists
cmd = self._connection._shell.exists(path)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/shell/__init__.py", line 138, in exists
cmd = ['test', '-e', shlex.quote(path)]
File "/home/john/miniconda3/envs/3.10/lib/python3.10/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: expected string or bytes-like object",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81349 | https://github.com/ansible/ansible/pull/81469 | 37cb44ec37355524ca6a9ec6296e19e3ee74ac98 | da63f32d59fe882bc77532e734af7348b65cb6cb | 2023-07-26T17:32:53Z | python | 2023-08-25T17:27:26Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,332 | ["changelogs/fragments/81332-fix-pkg-mgr-in-kylin.yml", "lib/ansible/module_utils/facts/system/pkg_mgr.py", "test/units/module_utils/facts/system/test_pkg_mgr.py"] | `ansible_pkg_mgr` is unknown in Kylin Linux | ### Summary
Using the setup module on Kylin Linux returns the value "unknown" for the `ansible_pkg_mgr` fact.
### Issue Type
Bug Report
### Component Name
pkg_mgr.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['/Users/jiang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/8.2.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/jiang/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.4 (main, Jun 20 2023, 16:52:35) [Clang 13.0.0 (clang-1300.0.29.30)] (/usr/local/Cellar/ansible/8.2.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
PAGER(env: PAGER) = less
```
### OS / Environment
[Kylin Linux](https://www.kylinos.cn/scheme/server.html)
### Steps to Reproduce

### Expected Results
In Kylin Linux, the expected value of `ansible_pkg_mgr` should be `dnf`.
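Fact gathering maps the detected distribution to a package manager, and an unmapped distribution falls through to "unknown". The shape of the bug, sketched with an illustrative mapping (not the actual `pkg_mgr.py` code):

```python
def detect_pkg_mgr(distribution: str) -> str:
    # a distribution missing from the mapping falls through to the default
    # instead of inheriting its family's package manager
    known = {
        "Fedora": "dnf",
        "CentOS": "yum",
        "Kylin Linux Advanced Server": "dnf",  # the entry this issue asks for
    }
    return known.get(distribution, "unknown")

print(detect_pkg_mgr("Kylin Linux Advanced Server"))  # dnf
```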
### Actual Results
```console
In Kylin Linux, the actual value of `ansible_pkg_mgr` is `unknown`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81332 | https://github.com/ansible/ansible/pull/81314 | 6a8c51bb9c7aacef2a781106deb556982577f50f | a5ccc0124f4677eb55a90a1e2e53b6984b2b140d | 2023-07-23T13:46:10Z | python | 2023-07-25T21:45:30Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,188 | ["changelogs/fragments/81188_better_error.yml", "lib/ansible/module_utils/basic.py"] | yum module fails with Error: Module unable to decode valid JSON on stdin. | ### Summary
I have a role that hasn't been changed recently and still works correctly on most hosts (CentOS, Oracle Linux and SUSE).
This role, if it detects a Red Hat-family host, will call ansible.builtin.yum.
This has always worked, until suddenly one Oracle Linux 7.9 host fails this task with the error: "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
```
TASK [zabbix-agent : Install package zabbix_agent2] ****************************
fatal: [xxx]: FAILED! => {"changed": false, "msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"}
```
In the system logging of that host I see no problems, and the parameters appear to be parsed just fine, as far as I understand it:
```
Jul 7 17:39:20 xxx ansible-ansible.legacy.yum: Invoked with name=['zabbix-agent2'] state=latest update_cache=True enablerepo=['\\*zabbix\\*'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
```
I don't see this problem on other Oracle Linux or CentOS hosts. I have no clue how to debug this, or what the cause could be.
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Dec 8 2022, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
NAME="Oracle Linux Server"
VERSION="7.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Oracle Linux Server 7.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package {{ zbx_agent_generation }}
become: true
ansible.builtin.yum:
name: "{{ __zbx_agent_packages[zbx_agent_generation] }}"
state: latest
update_cache: true
enablerepo: "\\*zabbix\\*"
retries: 3
notify: restart zabbix-agent
```
### Expected Results
The requested package to be installed
### Actual Results
```console
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' -tt xxx '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-cceddylaroytysqkptljeuuefrajsuvd ; /usr/bin/python3.6 /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/AnsiballZ_yum.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<xxx> (1, b'\r\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}\r\n', b'Shared connection to xxx closed.\r\n')
<xxx> Failed to connect to the host via ssh: Shared connection to xxx closed.
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' xxx '/bin/sh -c '"'"'rm -f -r /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/ > /dev/null 2>&1 && sleep 0'"'"''
<xxx> (0, b'', b'')
fatal: [xxx]: FAILED! => {
"changed": false,
"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81188 | https://github.com/ansible/ansible/pull/81554 | d67d8bd823d588e5f617aba25ed43e96ee32466f | c0eefa955a7292ba61fe6656eba51ebbf97e553e | 2023-07-07T16:09:00Z | python | 2023-10-04T14:49:03Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,163 | ["changelogs/fragments/a-g-col-prevent-reinstalling-satisfied-req.yml", "lib/ansible/cli/galaxy.py", "lib/ansible/galaxy/collection/__init__.py", "test/integration/targets/ansible-galaxy-collection-scm/tasks/multi_collection_repo_all.yml", "test/units/galaxy/test_collection_install.py"] | ansible-galaxy install - don't install if requirements are met | ### Summary
I'm trying to optimise my builds by avoiding unnecessary downloads.
ansible-galaxy install always downloads the latest version even if the existing collection already meets the requirements. This behaviour appears to be consistent with the documentation (https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#installing-an-older-version-of-a-collection), but why has it been implemented to always download?
e.g. I have
```
# /usr/local/lib/python3.8/site-packages/ansible_collections
Collection Version
----------------------------- -------
amazon.aws 3.5.0
...
```
and requirements.yml
```
---
collections:
- name: amazon.aws
version: '>=1.5.0'
```
if I run ansible-galaxy install -r requirements.yml I get amazon.aws 6.1.0 installed, even though 3.5.0 already meets my 1.5.0 requirement. I would expect nothing to happen because my requirements were already met.
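The check the reporter expects, skipping the download when the installed version already satisfies the specifier, is simple version arithmetic. A rough sketch (naive dotted-version comparison; real requirement specifiers are richer than this):

```python
def satisfies(installed: str, minimum: str) -> bool:
    # compare dotted versions as integer tuples, e.g. "3.5.0" -> (3, 5, 0)
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

# amazon.aws 3.5.0 already meets '>=1.5.0', so no download would be needed
print(satisfies("3.5.0", "1.5.0"))  # True
print(satisfies("1.4.9", "1.5.0"))  # False
```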
### Issue Type
Feature Idea
### Component Name
ansible galaxy
### Additional Information
People who like the existing behaviour could perhaps use the recently added --upgrade switch to continue to get the upgrade to the latest version.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81163 | https://github.com/ansible/ansible/pull/81243 | c5d18c39d81e2b3b10856b2fb76747230e4fac4a | efbc00b6e40789e8a152f9265e3b31b047deed84 | 2023-06-30T18:43:52Z | python | 2023-07-20T22:30:59Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,105 | ["lib/ansible/plugins/filter/core.py", "lib/ansible/plugins/filter/mandatory.yml"] | ansible.builtin.mandatory msg argument not documented | ### Summary
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/mandatory_filter.html makes no mention of the `msg` argument that was introduced in 0e7e3c0ae89a70c386168827b0c9defb956bee7e
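For reference, the behaviour that should be documented: when `msg` is supplied, the filter raises that message instead of the generic error. A minimal stand-alone re-implementation sketch (not Ansible's actual code; `Undefined` here is a stand-in for Jinja2's undefined type):

```python
class Undefined:
    """Stand-in for jinja2's Undefined type, marking a missing variable."""

def mandatory(value, msg=None):
    # Mirrors the filter's documented behaviour plus the undocumented
    # `msg` argument: a custom message replaces the generic error.
    if isinstance(value, Undefined):
        raise ValueError(msg if msg is not None else "Mandatory variable not defined.")
    return value

print(mandatory("present"))  # defined values pass through unchanged
try:
    mandatory(Undefined(), msg="my_var must be set")
except ValueError as exc:
    print(exc)  # my_var must be set
```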
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/filter/mandatory.yml
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
n/a
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81105 | https://github.com/ansible/ansible/pull/81110 | b93a628aed2feb1a0ff68858d356895a79578149 | 8edba0bb72aa81464dfd55ac4fed6ca9f9f81972 | 2023-06-22T06:57:42Z | python | 2023-07-12T23:02:56Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,103 | ["changelogs/fragments/81104-inventory-script-plugin-raise-execution-error.yml", "lib/ansible/plugins/inventory/script.py", "test/units/plugins/inventory/test_script.py"] | Inventory scripts parser not treat exception when getting hostsvar | ### Summary
When the (in my case Python) inventory script is invoked with `--host [inventory_host]` and raises an error, the exception is not caught and handled the way it is for `--list`.
I have already made a fix and will create a PR for it.
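The pattern of the fix, as a stand-alone sketch (illustrative only; the real change lives in the script inventory plugin): check the exit status for `--host` the same way `--list` does, and surface the script's stderr in the raised error:

```python
import subprocess
import sys

def run_inventory_script(cmd):
    # Run the script and, on a non-zero exit status, raise with the
    # script's stderr instead of trying to parse (possibly empty) stdout.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            "Inventory script (%s) had an execution error: %s"
            % (cmd[0], proc.stderr.strip())
        )
    return proc.stdout

# A well-behaved script: prints JSON hostvars and exits 0
print(run_inventory_script([sys.executable, "-c", "print('{}')"]).strip())

# A failing script: the stderr text ends up in the error message
try:
    run_inventory_script([sys.executable, "-c", "import sys; sys.exit('boom')"])
except RuntimeError as exc:
    print(exc)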
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/inventory/script.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.10]
config file = None
configured module search path = ['/home/rundeck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rundeck/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/rundeck/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rundeck/.local/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Empty
```
### OS / Environment
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Steps to Reproduce
- hosts: all
gather_facts: false
tasks:
- win_ping:
### Expected Results
When the Inventory script ends with status code different than 0, the stderr must be printed as an Warning on the playbook execution
```
ansible-playbook [core 2.13.10]
config file = None
configured module search path = ['/home/rundeck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rundeck/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/rundeck/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rundeck/.local/bin/ansible-playbook
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /home/rundeck/server/data/ansible/inventory.py as it did not pass its verify_file() method
auto declined parsing /home/rundeck/server/data/ansible/inventory.py as it did not pass its verify_file() method
toml declined parsing /home/rundeck/server/data/ansible/inventory.py as it did not pass its verify_file() method
```
**[WARNING]: * Failed to parse /home/rundeck/server/data/ansible/inventory.py with script plugin: Inventory script (/home/rundeck/server/data/ansible/inventory.py) had an execution error: Error
while retrieving password from Password Safe**
```
File "/home/rundeck/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/rundeck/.local/lib/python3.8/site-packages/ansible/plu
```
### Actual Results
```console
ansible-playbook [core 2.13.10]
config file = None
configured module search path = ['/home/rundeck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rundeck/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/rundeck/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rundeck/.local/bin/ansible-playbook
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /home/rundeck/server/data/ansible/intel as it did not pass its verify_file() method
auto declined parsing /home/rundeck/server/data/ansible/intel as it did not pass its verify_file() method
toml declined parsing /home/rundeck/server/data/ansible/intel as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/rundeck/server/data/ansible/intel with script plugin: Invalid data from file, expected dictionary and got: None
File "/home/rundeck/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/rundeck/.local/lib/python3.8/site-packages/ansible/plugins/inventory/script.py", line 149, in parse
raise AnsibleParserError(to_native(e))
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81103 | https://github.com/ansible/ansible/pull/81104 | ed8a404f4a736bba2dc6b32f0c79f8b7e5dabf61 | 2f820381ea9126c6a38ee70c2afbeac8ee7894fb | 2023-06-22T06:17:34Z | python | 2023-06-26T21:29:59Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,053 | ["changelogs/fragments/81053-templated-tags-inheritance.yml", "lib/ansible/playbook/taggable.py", "test/integration/targets/tags/runme.sh", "test/integration/targets/tags/test_template_parent_tags.yml", "test/units/playbook/test_taggable.py"] | Jinja in tags resolve issue | ### Summary
For example take this task:
```- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Here assume the following types for the jinja templates:
item.tags: list
item.name: string
cfg.value.data: object with attributes: name and tags
When ran like this (because of the commented tags) it will work as intended with a little less functionality.
When ran as follows:
```- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It will error out saying that tags waits a list of class string or class int and got class list.
In normal tasks this issue does not appear, only on include_tasks and on task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. The tasks include other tasks tagged with something. If we use the "when" directive to check tags then we have to add the include task's tag into the attribute list cause if we do not, then the include will not get evaluated which means the tasks won't get included and therefore the subtags will not be part of the playbook and it won't run as expected.
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
ubuntu 20.04
But this issue is really platform independent.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included packages normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81053 | https://github.com/ansible/ansible/pull/81624 | 304e63d76e725e8e277fe208d26fb45ca2ff903d | 9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8 | 2023-06-13T17:28:44Z | python | 2023-10-03T19:07:26Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 81,013 | ["changelogs/fragments/81013-handlers-listen-last-defined-only.yml", "lib/ansible/plugins/strategy/__init__.py", "test/integration/targets/handlers/roles/test_listen_role_dedup_global/handlers/main.yml", "test/integration/targets/handlers/roles/test_listen_role_dedup_role1/meta/main.yml", "test/integration/targets/handlers/roles/test_listen_role_dedup_role1/tasks/main.yml", "test/integration/targets/handlers/roles/test_listen_role_dedup_role2/meta/main.yml", "test/integration/targets/handlers/roles/test_listen_role_dedup_role2/tasks/main.yml", "test/integration/targets/handlers/runme.sh", "test/integration/targets/handlers/test_listen_role_dedup.yml"] | Handlers from dependencies are not deduplicated in ansible-core 2.15.0 | ### Summary
Handlers included from other roles via role dependencies are not deduplicated properly in ansible-core 2.15.0. This behaviour is specific to 2.15.0 and doesn't seem to be documented anywhere.
### Issue Type
Bug Report
### Component Name
ansible-core
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
AlmaLinux 8
### Steps to Reproduce
1. Create a "global" role with a handler:
```
# roles/global/handlers/main.yml
- name: a global handler
debug:
msg: "a global handler has been triggered"
listen: "global handler"
```
2. Create two roles with tasks that notify this handler, define the "global" role as a dependency in both roles:
```
# roles/role1/meta/main.yml
dependencies:
- global
```
```
# roles/role1/tasks/main.yml
- name: role1/task1
debug:
changed_when: true
notify: "global handler"
```
```
# roles/role2/meta/main.yml
dependencies:
- global
```
```
# roles/role2/tasks/main.yml
- name: role2/task1
debug:
changed_when: true
notify: "global handler"
```
3. Create a playbook:
```
# playbook.yml
- hosts: localhost
roles:
- role1
- role2
```
4. Resulting file tree:
```
.
βββ playbook.yml
βββ roles
βββ global
βΒ Β βββ handlers
βΒ Β βββ main.yml
βββ role1
βΒ Β βββ meta
βΒ Β βΒ Β βββ main.yml
βΒ Β βββ tasks
βΒ Β βββ main.yml
βββ role2
βββ meta
βΒ Β βββ main.yml
βββ tasks
βββ main.yml
```
5. Run the playbook with Ansible 2.15, verify that the handler has been invoked twice.
### Expected Results
Ansible 2.14 deduplicates the handler:
```
$ ansible-playbook --version
ansible-playbook [core 2.14.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible-playbook
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Ansible 2.15 runs this handler twice (see actual results).
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/81013 | https://github.com/ansible/ansible/pull/81358 | bd3ffbe10903125993d7d68fa8cfd687124a241f | 0cba3b7504c1aabe0fc3773e3ff3ac024edeb308 | 2023-06-09T11:47:43Z | python | 2023-08-15T13:03:56Z |
closed | ansible/ansible | https://github.com/ansible/ansible | 80,992 | ["changelogs/fragments/display_proxy.yml", "lib/ansible/executor/task_queue_manager.py", "lib/ansible/plugins/strategy/__init__.py", "lib/ansible/utils/display.py", "test/units/utils/test_display.py"] | Preserve context for display method called from forks | ### Summary
When we updated `Display` to proxy over the queue for dispatch by the main process in https://github.com/ansible/ansible/pull/77056, we ended up losing some context, since we only proxy the `Display.display` method, and not all methods individually.
We need to add some functionality to `Display` to preserve this context, whether that means shipping additional information across the queue, or proxying all methods instead of just `Display.display`.
This has impacts on ansible-runner, and input should be taken to evaluate the best option.
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
### Additional Information
TBD
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | https://github.com/ansible/ansible/issues/80992 | https://github.com/ansible/ansible/pull/81060 | 38067860e271ce2f68d6d5d743d70286e5209623 | a7d2a4e03209cff1e97e59fd54bb2b05fdbdbec6 | 2023-06-07T14:29:14Z | python | 2023-06-22T17:57:59Z |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.