ansible-galaxy-local-deps
A bunch of Python scripts supporting Ansible role testing in CI.
ansible-galaxy-outdated
ansible-galaxy-outdated shows currently installed Ansible collections that have an updated version available in Ansible Galaxy. This is a workaround until ansible-galaxy itself has such a function. See GitHub issue https://github.com/ansible/ansible/issues/75632.
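The check that ansible-galaxy-outdated performs can be illustrated with a small, self-contained Python sketch: compare locally installed collection versions against the latest versions Galaxy reports. This is purely an illustration, not the tool's actual code; the collection names, versions, and the naive version parsing are made up for the example.

```python
# Illustrative sketch of an "outdated collections" check.
# NOT ansible-galaxy-outdated's implementation: version parsing here is
# a naive dotted-integer split, and the data is hard-coded instead of
# being queried from `ansible-galaxy collection list` / the Galaxy API.

def parse_version(version):
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def find_outdated(installed, latest):
    """Return {name: (installed, latest)} for collections behind upstream.

    installed / latest: dicts mapping collection name -> version string.
    """
    outdated = {}
    for name, local in installed.items():
        remote = latest.get(name)
        if remote and parse_version(remote) > parse_version(local):
            outdated[name] = (local, remote)
    return outdated

# Hypothetical example data:
installed = {"community.general": "8.0.0", "ansible.posix": "1.5.4"}
latest = {"community.general": "8.4.0", "ansible.posix": "1.5.4"}
print(find_outdated(installed, latest))
# -> {'community.general': ('8.0.0', '8.4.0')}
```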
ansible-gen
No description available on PyPI.
ansible-gendoc
Ansible-Gendoc

Inspired by Felix Archambault's ansidoc project. An example generated with ansible-gendoc.

Features
- Generate the documentation for a role located in a directory
- Can use a personal template README.j2 present in the templates folder

Quickstart
If you have an existing README.md file in your role, back it up first!

Run from Docker
Clone this project and build the image:

git clone
export DOCKER_BUILDKIT=1
docker build . -t ansible-gendoc:0.1.0 -t ansible-gendoc:latest
docker run --user $(id -u):$(id -g) -it ansible-gendoc:latest help

Install python package
Install the latest version of ansible-gendoc with pip or pipx:

pip install ansible-gendoc

Usage
ansible-gendoc --help
Usage: ansible-gendoc [OPTIONS] COMMAND [ARGS]...

Options:
  --version -v          Show the application's version and exit.
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or customize the installation.
  --help                Show this message and exit.

Commands:
  init    Copy templates README.j2 from packages in templates/role folder.
  render  Build the Documentation

Build your first documentation of a role
To build the documentation for a role, you can run these commands:
- with the package installed with pip: ansible-gendoc render
- with the Docker image: docker run --user $(id -u):$(id -g) -v <path_role>:/role -it ansible-gendoc:latest render role

Use your personal template
To use a personal template, you need to init the template in the templates folder of your role. If ansible-gendoc finds an existing file templates/README.j2, it will use it to render the README.md file.

ansible-gendoc init
ls templates
README.j2

The template uses jinja as its templating language. Modify it, for example to produce HTML or reStructuredText or another format. You can remove some variables too.

Documentation of vars template
The documentation of vars is coming soon.
ansible-generator
Ansible Generator

Description
Ansible Generator is a Python program designed to simplify creating a new Ansible playbook by creating the necessary directory structure for the user based on Ansible's best practices, as outlined in content organization best practices.

Installation
PIP (recommended)
pip install -U ansible-generator

Source
git clone https://github.com/kkirsche/ansible-generator.git
cd ansible-generator
curl -sSL https://install.python-poetry.org | python3 -
poetry build

Usage
Help Text
usage: ansible-generate [-h] [-a] [-i INVENTORIES [INVENTORIES ...]]
                        [-r ROLES [ROLES ...]] [-v]
                        [-p PROJECTS [PROJECTS ...]] [--version]

Generate an ansible playbook directory structure

optional arguments:
  -h, --help            show this help message and exit
  -a, --alternate-layout
  -i INVENTORIES [INVENTORIES ...], --inventories INVENTORIES [INVENTORIES ...]
  -r ROLES [ROLES ...], --roles ROLES [ROLES ...]
  -v, --verbose
  -p PROJECTS [PROJECTS ...], --projects PROJECTS [PROJECTS ...]
  --version             show program's version number and exit

Defaults
alternate-layout --- False
verbose --- False
inventories --- ['production', 'staging']
roles --- []
projects --- []

Example
Current directory: ansible-generate
New project: ansible-generate -p playbook_name
Alternate layout: ansible-generate -a
Custom inventories: ansible-generate -i production staging lab
Roles (this portion of the tool relies on Ansible's ansible-galaxy command line application): ansible-generate -r role1 role2

Output
~/Downloads ❯❯❯ ansible-generate -i production staging lab -r common ubuntu centos -a -p network_security_baseline
creating directory /Users/example_user/Downloads/network_security_baseline/roles
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/production/group_vars
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/production/host_vars
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/staging/group_vars
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/staging/host_vars
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/lab/group_vars
creating directory /Users/example_user/Downloads/network_security_baseline/inventories/lab/host_vars
creating file /Users/example_user/Downloads/network_security_baseline/inventories/production/hosts
creating file /Users/example_user/Downloads/network_security_baseline/inventories/staging/hosts
creating file /Users/example_user/Downloads/network_security_baseline/inventories/lab/hosts
creating file /Users/example_user/Downloads/network_security_baseline/site.yml
ansible galaxy output for role common: - common was created successfully
ansible galaxy output for role ubuntu: - ubuntu was created successfully
ansible galaxy output for role centos: - centos was created successfully
ansible-golovan-alert
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
ansibleguy-nftables
NFTables

This is a copy of the python3-nftables module (apt-package version 1.0.2-1ubuntu3) that is used to interact with libnftables. The repository is used to serve it using PyPI. Only minor changes were made to fix linting errors!

Documentation: libnftables, NFTables
ansibleguy-webui
Basic WebUI for using Ansible

DISCLAIMER: This is an unofficial community project! Do not confuse it with the vanilla Ansible product! The goal is to allow users to quickly install & run a WebUI for using Ansible locally. Keep it simple. This project is still in early development! DO NOT USE IN PRODUCTION!

Setup
Local - PIP
Requires Python >= 3.10

# install
python3 -m pip install ansibleguy-webui
# run
python3 -m ansibleguy-webui

Docker
docker image pull ansible0guy/webui:latest
docker run -d --name ansible-webui --publish 127.0.0.1:8000:8000 ansible0guy/webui:latest
# or with persistent data (volumes: /data = storage for logs & DB, /play = ansible playbook base-directory)
docker run -d --name ansible-webui --publish 127.0.0.1:8000:8000 --volume $(pwd)/ansible/data:/data --volume $(pwd)/ansible/play:/play ansible0guy/webui:latest

Demo
Check out the demo at: demo.webui.ansibleguy.net
Login: User demo, Password Ansible1337

Usage
Documentation

Contribute
Feel free to contribute to this project using pull-requests, issues and discussions! Testers are also very welcome! Please give feedback. See also: Contributing

Roadmap
- Ansible Config: Static Playbook-Directory; Git Repository support
- Users: Management interface (Django built-in); Groups & Job Permissions; LDAP integration; SAML SSO integration
- Jobs: Execute Ansible using ansible-runner; Scheduled execution (cron format); Manual/immediate execution; Support for ad-hoc commands; Job Logging (write job metadata to database, write full job-logs to filesystem); Secret handling (Connect, Become, Vault); User-specific job credentials; Alerting on failure
- WebUI: Job Dashboard (status, execute, time of last & next execution, last run user, links to warnings/errors); Job Output (follow the job's output in realtime); Job Errors (UI that allows for easy error analysis, access to logs, and links to possible solutions); Show Ansible Running-Config; Show Ansible Collections; Check collections for available updates (Galaxy + GitHub releases); Mobile support; Multi-language support
- API: Manage and execute jobs
- Database: Support for MySQL
- Testing: Unit tests; Integration tests (basic WebUI checks, API endpoints, permission system)
ansible-hostmanager
CLI script to work with the Ansible hosts file.

To install:
sudo pip3 install autopip
app install ansible-hostmanager

To show hosts:
$ ah list
/etc/ansible/hosts exists and will be used. To change, run: ah set-hosts <PATH>
Inventory has 4 host(s)
app-server1    1.2.3.4    [app, all]
app-server2    1.2.3.6    [app, all]
web-server     1.2.3.5    [web, all]
db-server      1.2.3.7    [db, all]

$ ah list app
app-server1    1.2.3.4    [app, all]
app-server2    1.2.3.6    [app, all]

To ssh to a host:
$ ah ssh db        # Runs `ssh 1.2.3.7`
$ ah ssh app
Found multiple matches and will use first one: app-server1, app-server2
# Runs `ssh 1.2.3.4`
$ ah ssh server1 ls /
bin boot dev ...
$ ah ssh -i ~/.ssh/alternative_id user@app1
# Runs `ssh -i ~/.ssh/alternative_id [email protected]`
# As long as hostname is the first, or last, argument, it will get translated.
# To avoid having to remember `ah ssh` vs `ssh`, just create an `ssh` alias
# as any non-Ansible host/args would just be passed to `ssh` without change.
$ alias ssh=`ah ssh`
$ ssh user@not_ansible_host

Links & Contact Info
PyPI Package: https://pypi.python.org/pypi/ansible-hostmanager
GitHub Source: https://github.com/maxzheng/ansible-hostmanager
Report Issues/Bugs: https://github.com/maxzheng/ansible-hostmanager/issues
Follow: https://twitter.com/MaxZhengX
Connect: https://www.linkedin.com/in/maxzheng
Contact: maxzheng.os @t gmail.com
ansible_importer
Ansible modules and plugins are typically pure Python, but often lack the structure necessary to import as Python code. This package is meant to allow programmers to import module code for testing purposes.

Usage

.. code-block:: python

    import ansible_importer
    ansible_importer.install('/abs/path/to/ansible/code')

    # Assuming above path has a playbooks/plugins/actions/my_plugin.py module
    from playbooks.plugins.actions.my_plugin import ActionModule

    # Assuming playbooks/library/glance
    from playbooks.library.glance import ManageGlance

    # Assuming playbooks/inventory/dynamic_inventory.py
    from playbooks.inventory import dynamic_inventory
ansible-inventory
Script to manage your Ansible inventory; it can also be used by Ansible as a dynamic inventory source. Author: Diego Blanco <[email protected]> Project: https://github.com/diego-treitos/ansible-inventory
ansible_inventory_creator
No description available on PyPI.
ansible-inventory-grapher
No description available on PyPI.
ansible-inventory-manage
This is a library to manipulate Ansible inventories. It's independent of Ansible by design, so that it doesn't need to track Ansible code and is version independent. It only relies on what Ansible expects as inventories (the JSON structure), which hasn't changed for a long time now.

A standard inventory JSON looks like this:

```
{
    "_meta": {
        "hostvars": {
            "<hostname>": {"var1": "value1"}
        }
    },
    "all": {
        "children": [],
        "vars": {"var2": "value2"}
    },
    "groupname1": {
        "children": [],
        "hosts": []
    },
    "groupname2": {
        "children": [],
        "vars": {"var3": "value3"}
    }
}
```

This lib allows the loading of multiple JSONs into memory, and manipulation of the inventory objects and vars precedence. Usage will be explained in documentation soon.

You can submit patches with git-review and gerrithub (``pip install git-review``):

# git clone https://review.gerrithub.io/evrardjp/ansible-inventory-manage
# git review -s
# git checkout -b my_super_patch
# git review -f
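As a rough illustration of what "loading multiple JSONs into memory and manipulating the inventory objects" can look like, here is a self-contained Python sketch that merges two inventory structures of the shape shown above. The function name and merge policy (later hostvars/group vars win, group member lists are de-duplicated) are assumptions for the example, not this library's actual API.

```python
# Hypothetical helper (not ansible-inventory-manage's API): merge
# several Ansible dynamic-inventory JSON structures. Later inventories
# win on hostvar/group-var key collisions; hosts/children lists are
# concatenated and de-duplicated while preserving order.

def merge_inventories(*inventories):
    merged = {"_meta": {"hostvars": {}}}
    for inv in inventories:
        for group, data in inv.items():
            if group == "_meta":
                merged["_meta"]["hostvars"].update(data.get("hostvars", {}))
                continue
            target = merged.setdefault(
                group, {"children": [], "hosts": [], "vars": {}}
            )
            for key in ("children", "hosts"):
                for item in data.get(key, []):
                    if item not in target[key]:
                        target[key].append(item)
            target["vars"].update(data.get("vars", {}))
    return merged

a = {"_meta": {"hostvars": {"web1": {"var1": "x"}}},
     "web": {"hosts": ["web1"], "vars": {"port": 80}}}
b = {"web": {"hosts": ["web1", "web2"], "vars": {"port": 8080}}}
print(merge_inventories(a, b)["web"])
# -> {'children': [], 'hosts': ['web1', 'web2'], 'vars': {'port': 8080}}
```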
ansible-inventory-to-ssh-config
No description available on PyPI.
ansible-iplb
This is an Ansible module for handling OVH IPLB.
ansible-juggler
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
ansible-juggler2
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
ansible-jupyter-widgets
No description available on PyPI.
ansible-kernel
An Ansible kernel for Jupyter notebooks
ansible-keyring
Python CLI: Ansible Keyring - A System Keyring Integration CLI
A Python CLI created by Megabyte Labs

Table of Contents: Overview, Installation (PyPi, Install Doctor, Homebrew, Chocolatey, Binary Releases), Requirements, Contributing, License

Overview
This repository is home to ansible-keyring, a CLI that extends the ansible, ansible-playbook, and ansible-vault commands to retrieve vault passwords from the system keyring. It is based on the work of ansible-tools with a couple of usability improvements. The features it adds are:
- Does not have to be run only in directories where ansible.cfg is present
- New, shorter, more intuitive command aliases

{{ load:docs/partials/guide.md }}

Installation
To accommodate everyone, this CLI can be installed using a variety of methods.

PyPi
If you already have Python 3 and pip3 installed, you can install the CLI by running:
pip3 install {{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}

Install Doctor
On macOS or Linux, you can run:
bash -sS https://install.doctor/py/{{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}
And on Windows, you can run:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://install.doctor/py/{{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}?os=win'))

Homebrew
If you have Homebrew installed, you can install the package by running:
brew install installdoc/py/{{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}
Or if you prefer to keep Python off your system and install a binary, you can run:
brew install installdoc/py/{{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}-binary

Chocolatey
If you are on Windows, you can install a binary version (without the Python dependency) using Chocolatey:
choco install {{(if customPyPiPackageName customPyPiPackageName (append repository.prefix.github slug))}}

Binary Releases
There are also binaries (in various formats) available for download on both GitHub and GitLab.

Requirements
To run this project, all you need is Python 3 and pip3. See the Installation section for instructions that include alternate installation methods that do not require Python to be installed. If you are interested in contributing or would like to make some modifications, please see the CONTRIBUTING guide. There are a handful of build tools we incorporate into the development process. All of them are installed automatically via our Taskfile system. You can get started customizing this project by running:

bash .config/scripts/start.sh
task start
task --list

Contributing
Contributions, issues, and feature requests are welcome! Feel free to check the issues page. If you would like to contribute, please take a look at the contributing guide.

Sponsorship
Dear Awesome Person, I create open source projects out of love. Although I have a job, shelter, and as much fast food as I can handle, it would still be pretty cool to be appreciated by the community for something I have spent a lot of time and money on. Please consider sponsoring me! Who knows? Maybe I will be able to quit my job and publish open source full time. Sincerely, Brian Zalewski

License
Copyright © 2020-2021 Megabyte LLC. This project is MIT licensed.
ansible-kkvesper
No description available on PyPI.
ansible-kobe-plugin
No description available on PyPI.
ansible-later
Another best practice scanner for Ansible roles and playbooks.

ansible-later is a best practice scanner and linting tool. In most cases, if you write Ansible roles in a team, it helps to have a coding or best practice guideline in place. This will make Ansible roles more readable for all maintainers and can reduce the troubleshooting time. While ansible-later aims to be a fast and easy to use linting tool for your Ansible resources, it might not be as feature-complete as required in some situations. If you need a more in-depth analysis you can take a look at ansible-lint. ansible-later does not ensure that your role will work as expected. For deployment tests you can use other tools like molecule.

You can find the full documentation at https://ansible-later.geekdocs.de.

Community
GitHub Action by @patrickjahns

Contributors
Special thanks to all contributors. If you would like to contribute, please see the instructions. ansible-later is a fork of Will Thames' ansible-review. Thanks for your work on ansible-review and ansible-lint.

License
This project is licensed under the MIT License - see the LICENSE file for details.
ansible-library
UNKNOWN
ansible-lint
Ansible-lint

ansible-lint checks playbooks for practices and behavior that could potentially be improved. As a community-backed project, ansible-lint supports only the last two major versions of Ansible. Visit the Ansible Lint docs site.

Using ansible-lint as a GitHub Action
This action allows you to run ansible-lint on your codebase without having to install it yourself.

# .github/workflows/ansible-lint.yml
name: ansible-lint
on:
  pull_request:
    branches: ["main", "stable", "release/v*"]
jobs:
  build:
    name: Ansible Lint # Naming the build is important to use it as a status check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ansible-lint
        uses: ansible/ansible-lint@main # or version tag instead of 'main'

For more details, see ansible-lint-action.

Contributing
Please read the Contribution guidelines if you wish to contribute.

Licensing
The ansible-lint project is distributed as GPLv3 due to use of GPLv3 runtime dependencies, like ansible and yamllint. For historical reasons, its own code-base remains licensed under a more liberal MIT license, and any contributions made are accepted as being made under the original MIT license.

Authors
ansible-lint was created by Will Thames and is now maintained as part of the Ansible by Red Hat project.
ansiblelint-custom-rules-zjleblanc
Useful rules to extend ansible-lint's built-in functionality, based on a style guide developed by the Red Hat energy pod.
ansible-lint-custom-strict-naming
Ansible is a powerful tool for configuration management, but it is difficult to maintain YAML playbook quality. Variable maintenance is one of the difficult tasks, because variables can be overwritten unexpectedly if you don't pay attention to things like precedence and the position where variables are defined. This package provides strict rules for variable naming, using ansible-lint. A strict naming rule is useful to avoid name collisions and to find where a variable is defined.

Rules

var_name_prefix
- <role_name>_role__ prefix: variables defined in roles/<role_name>/tasks/
- <task_name>_tasks__ prefix: variables defined in <not_roles>/**/tasks/
In ansible-lint, var-naming[no-role-prefix] requires <role_name>_ as prefix. But it is not enough to avoid name collisions or to find where a variable is defined. So, I add _role__ or _tasks__ to the prefix.

var__, const__
- var__ prefix: variables dynamically defined by ansible.builtin.set_fact or register
- const__ prefix: variables statically defined in places such as the inventory's vars, group_vars, host_vars, etc.

Vars in tasks/<name>.yml or roles/<name>/tasks/main.yml
- <name>_role__var__ prefix: these variables are dynamically defined in roles/<name>/tasks/main.yml
- <name>_role__const__ prefix: these variables are defined in roles/<name>/vars/main.yml and shouldn't be changed dynamically
- some_role__arg__ prefix: these variables are defined by ansible.builtin.include_role's vars key and shouldn't be changed dynamically
- some_role__args: these variables are defined by ansible.builtin.include_role's vars key and shouldn't be changed dynamically. This is useful when you want to send vars as a dict:

- name: Sample
  ansible.builtin.include_role:
    name: some_role
  vars:
    some_role__args:
      key1: value1
      key2: value2

examples

tasks:
  - name: Some task
    ansible.builtin.include_role:
      name: <role_name>
    vars:
      some_role__const__one: value1
      some_role__const__two: value2

Others

Double underscores?
A single underscore (_) is used to separate words. Double underscores (__) are used to separate chunks for readability.

examples
- var__send_message__user_id
- var__send_message__content
- some_role__const__app_config__name
- some_role__const__app_config__token
- some_role__const__app_config__version

Docs / Articles
- "Restricting variable naming conventions in Ansible using ansible-lint custom rules" (article in Japanese)
ansible-lint-junit
Ansible-lint-junit

The ansible-lint to JUnit converter.

Installation
via pip: pip install ansible-lint-junit

Updating
via pip: pip install ansible-lint-junit --upgrade

Usage
You can run ansible-lint on your playbook(s) and redirect the output to a pipe:
ansible-lint playbook.yml -p --nocolor | ansible-lint-junit -o ansible-lint.xml

You can use a temporary file to store the output of ansible-lint. After that, run ansible-lint-junit and pass the generated file to it:
ansible-lint -p --nocolor your_fancy_playbook.yml > ansible-lint.txt
ansible-lint-junit ansible-lint.txt -o ansible-lint.xml

Output
If there are any lint errors, a full JUnit XML will be created. If there are no errors, an empty JUnit XML will be created; this is for e.g. Bamboo JUnit parser plugin compatibility (it will break the build if the XML is missing or incorrect, and there is really no way of generating XML with "PASSED" tests in the case of a linter).

License
The ansible-lint-junit project is distributed under the MIT license.
ansible-lint-nunit
No description available on PyPI.
ansible-lint-to-junit-xml
Convert ansible-lint outputs to a JUnit valid XML test results file.

Quickstart
Install ansible-lint-to-junit-xml in your preferred Python env:
pip install ansible-lint-to-junit-xml

Run ansible-lint on the desired files and pipe to ansible-lint-to-junit-xml:
ansible-lint -q -p <file or directory> | ansible-lint-to-junit-xml > results/ansible-lint-results.xml

Alternatively you can run ansible-lint separately from ansible-lint-to-junit-xml and use a file to pass the output:
ansible-lint -q -p <file or directory> > ansible-lint-results.txt
ansible-lint-to-junit-xml ansible-lint-results.txt > results/ansible-lint-results.xml

Note: ansible-lint must run with -p for the output to be machine parsable.

Features
- Pipe output directly from an ansible-lint call
- Output XML file is compliant with the Jenkins JUnit 5 schema
- Built using Nekroze/cookiecutter-pypackage

This project appeared as an alternative to wasilak's ansible-lint-junit.

Example
Running ansible-lint on a file results in:
playbooks/test_playbook.yml:41: [E303] curl used in place of get_url or uri module
playbooks/tasks/example_task.yml:28: [E601] Don't compare to literal True/False

Running ansible-lint and piping the output to ansible-lint-to-junit-xml looks like this:
ansible-lint -q -p playbooks/test_playbook.yml | ansible-lint-to-junit-xml

Would result in:
<?xml version="1.0" ?>
<testsuites>
  <testsuite errors="2" name="ansible-lint" tests="2">
    <testcase name="[E303] curl used in place of get_url or uri module">
      <failure message="playbooks/test_playbook.yml:41: [E303] curl used in place of get_url or uri module" type="ansible-lint">
        ansible-lint error: [E303] curl used in place of get_url or uri module
        ansible-lint error description: [E303] curl used in place of get_url or uri module
        filename: playbooks/test_playbook.yml
        linenr: 41
      </failure>
    </testcase>
    <testcase name="[E601] Don't compare to literal True/False">
      <failure message="playbooks/tasks/example_task.yml:28: [E601] Don't compare to literal True/False" type="ansible-lint">
        ansible-lint error: [E601] Don't compare to literal True/False
        ansible-lint error description: [E601] Don't compare to literal True/False
        filename: playbooks/tasks/example_task.yml
        linenr: 28
      </failure>
    </testcase>
  </testsuite>
</testsuites>

Documentation
The full documentation is at http://ansible-lint-to-junit-xml.rtfd.org.

History
0.1.0 (2019-07-30)
First release on PyPI.
ansible-maas-dynamic-inventory
No description available on PyPI.
ansible-marathon
UNKNOWN
ansible-mdgen
Ansible-mdgen is a package used to auto-generate documentation for an Ansible role.

To install:
pip install ansible-mdgen

To run:
Call ansible-mdgen, passing in the path to the role:
ansible-mdgen <path_to_role>

Documentation
See here for full documentation.

Credits
The idea for this project is based on (and includes some code from) ansible-autodoc by Andres Bott, so credit to him for his work.
ansible-merge-vars
ansible_merge_vars: An action plugin for AnsibleAn Ansible plugin to merge all variables in context with a certain suffix (lists or dicts only) and create a new variable that contains the result of this merge. This is an Ansible action plugin, which is basically an Ansible module that runs on the machine running Ansible rather than on the host that Ansible is provisioning.InstallationUsageMerging dictsMerging listsVerbosityExample PlaybooksContributingCompatibilityThis plugin is tested with the latest release of each minor version of Ansible >=2.1. Earlier releases of some minor versions may not be compatible. This plugin is not compatible with combinations of older versions of Ansible and newer versions of Python. The following combinations are tested:PythonAnsible2.7>= 2.1>= 3.5, < 3.8>= 2.5>= 3.8>= 2.8InstallationPick a name that you want to use to call this plugin in Ansible playbooks. This documentation assumes you're using the namemerge_vars.pip install ansible_merge_varsCreate anaction_pluginsdirectory in the directory in which you run Ansible.By default, Ansible will look for action plugins in anaction_pluginsfolder adjacent to the running playbook. For more information on this, or to change the location where ansible looks for action plugins, seethe Ansible docs.Create a file calledmerge_vars.py(or whatever name you picked) in theaction_pluginsdirectory, with one line:from ansible_merge_vars import ActionModuleFor Ansible less than 2.4:Create thelibrarydirectory if it's not created yet:mkdir -p libraryCreate an emptymerge_vars(or whatever name you picked) file in yourlibrarydirectory:touch library/merge_varsAnsible action plugins are usually paired with modules (which run on the hosts being provisioned), and Ansible will automatically run an action plugin when you call of a module of the same name in a task. 
Prior to Ansible 2.4, if you want to call an action plugin by its name (merge_vars) in our tasks, you need an empty file calledmerge_varsin the place where ansible checks for custom modules; by default, this is alibrarydirectory adjacent to the running playbook.UsageThe variables that you want to merge must be suffixed with__to_merge. They can be defined anywhere in the inventory, or by any other means; as long as they're in the context for the running play, they'll be merged.Merging dictsLet's say we've got a groupsomeenvironmentingroup_varswith a fileusers.yml, with these contents:users__someenvironment_users__to_merge:user1:bobuser2:henryand a groupsomedatacenteringroups_varswith a fileusers.yml, with these contents:users__somedatacenter_users__to_merge:user3:sallyuser4:janeand we're running a play against hosts that are in both of those groups. Then this task:name:Merge user varsmerge_vars:suffix_to_merge:users__to_mergemerged_var_name:merged_usersexpected_type:'dict'will set amerged_usersvar (fact) available to all subsequent tasks that looks like this (if it were to be declared in raw yaml):merged_users:user1:bobuser2:henryuser3:sallyuser4:janeNote that the variables get merged in alphabetical order of their names, with values from later dicts replacing values from earlier dicts. 
So this setup:users__someenvironment_users__to_merge:user1:bobuser2:jekyllusers__somedatacenter_users__to_merge:user2:hydeuser3:sallyname:Merge user varsmerge_vars:suffix_to_merge:users__to_mergemerged_var_name:merged_usersexpected_type:'dict'would set amerged_usersvar that looks like this (if it were to be declared in raw yaml):merged_users:user1:bobuser2:jekylluser3:sallyWith great power comes great responsibility...Merging listsLet's say we've got asomeenvironmentgroup with anopen_ports.ymlfile that looks like this:open_ports__someenvironment_open_ports__to_merge:-1-2-3and asomedatacentergroup with anopen_ports.ymlfile that looks like this:open_ports__somedatacenter_open_ports__to_merge:-3-4-5Then this task:name:Merge open portsmerge_vars:suffix_to_merge:open_ports__to_mergemerged_var_name:merged_portsexpected_type:'list'will set amerged_portsfact that looks like this (because the variables are merged in alphabetical order):merged_ports:-3-4-5-1-2Notice that3only appears once in the merged result. By default, thismerge_varsplugin will de-dupe the resulting merged value. 
If you don't want to de-dupe the merged value, you have to declare thededupargument:name:Merge open portsmerge_vars:suffix_to_merge:open_ports__to_mergemerged_var_name:merged_portsdedup:falseexpected_type:'list'which will set this fact:merged_ports:-3-4-5-1-2-3A note aboutdedup:It has no effect when the merged vars are dictionaries.Recursive mergingWhen dealing with complex data structures, you may want to do a deep (recursive) merge.Suppose you have variables that define lists of users to add and select who should have admin privileges:users__someenvironment_users__to_merge:users:-bob-henryadmins:-bobandusers__somedatacenter_users__to_merge:users:-sally-janeadmins:-sallyYou can request a recursive merge with:name:Merge user varsmerge_vars:suffix_to_merge:users__to_mergemerged_var_name:merged_usersexpected_type:'dict'recursive_dict_merge:Trueand get:merged_users:users:-sally-jane-bob-henryadmins:-sally-bobWhen merging dictionaries and the same key exists in both, the recursive merge checks the type of the value:if the entry value is a list, it merges the values as lists (merge_list)if the entry value is a dict, it merges the values (recursively) as dicts (merge_dict)any other values: just replace (use last)Module optionsparameterrequireddefaultchoicescommentssuffix_to_mergeyesSuffix of variables to merge. 
Must end with__to_merge.merged_var_nameyesName of the target variable.expected_typeyesdict, listExpected type of the merged variable (one of dict or list)dedupnoyesyes / noWhether to remove duplicates from lists (arrays) after merging.recursive_dict_mergenonoyes / noWhether to do deep (recursive) merging of dictionaries, or just merge only at top level and replace valuesVerbosityRunning ansible-playbook with-vwill cause this plugin to output the order in which the keys are being merged:PLAY [Example of merging lists] ************************************************ TASK [Merge port vars] ********************************************************* Merging vars in this order: [ u'group1_ports__to_merge', u'group2_ports__to_merge', u'group3_ports__to_merge'] ok: [localhost] => {"ansible_facts": {"merged_ports": [22, 1111, 443, 2222, 80]}, "changed": false} TASK [debug] ******************************************************************* ok: [localhost] => { "merged_ports": [ 22, 1111, 443, 2222, 80 ] } PLAY RECAP ********************************************************************* localhost : ok=6 changed=0 unreachable=0 failed=0Example PlaybooksThere are some example playbooks in theexamplesdirectory that show how the various features work in the context of an actual Ansible playbook. These example playbooks are run as part of the test suite for this plugin; if you would like to run them yourself, please see theContributingsection for instructions on how to run the test suite.ContributingPlease note that this project is released with aContributor Code of Conduct. 
By participating in this project you agree to abide by its terms.There is only one prerequisite to working on this project locally:You have the Python versions in the.python-versioninstalled and on your path (probably withpyenvA development workflow may look like this:Clone this repositoryRunmake dev-depsThis will create a virtualenvvenvin the root of this project and install all of the dependencies needed to build a release and run tests.Runmake test-allThis will usetoxto run the tests against different combinations of python versions and ansible releases.It will also usea scriptto queryPyPIfor the latest versions of Ansible, and add them to thetox.inifile if they're not there.Updating thetox.inifile and running all the tests against all of the combinations of Ansible releases and Python versions takes a lot of time. To run aginst just one combination, you can list all of the combinations available and tell tox to only run the tests for one combination:$ venv/bin/tox -l py27-ansible-2.1 py27-ansible-2.2 py27-ansible-2.3 py27-ansible-2.4 py27-ansible-2.5 py27-ansible-2.6 ... py35-ansible-2.5 py35-ansible-2.6 py36-ansible-2.7 py36-ansible-2.8 ... $ venv/bin/tox -e py36-ansible-2.5 ... ```If you have any ideas about things to add or improve, or find any bugs to fix, we're all ears! Just a few guidelines:Please write or update tests (either example-based tests, property-based tests, or both) for any code that you add, change, or remove.Please add an example playbook or update an existing example playbook in theexamplesfolder. These example playbooks serve as the integration tests for this plugin.Please make sure thatmake test-allexits zero. This runs a code linter, all of the tests, and all of the examples against all supported versions of Python and Ansible.If the linting seems too annoying, it probably is! Feel free to do what you need to do in the.pylintrcat the root of this repository to maintain sanity. 
Add it to your PR, and we'll most likely take it.

Happy merging!
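The merge semantics documented in the parameter table above (lists are concatenated and optionally deduplicated; dictionaries are merged either at the top level only or recursively) can be sketched in plain Python. This is an illustrative approximation of the behavior, not the plugin's actual implementation; the helper names `merge_values` and `merge_dicts` are made up here.

```python
# Sketch of the documented merge semantics. Illustrative only -- not
# the plugin's real code.

def merge_dicts(base, extra, recursive):
    """Merge two dicts; recurse into nested dicts when recursive=True,
    otherwise replace values at the top level."""
    merged = dict(base)
    for key, value in extra.items():
        if (recursive and key in merged
                and isinstance(merged[key], dict) and isinstance(value, dict)):
            merged[key] = merge_dicts(merged[key], value, recursive=True)
        else:
            merged[key] = value  # top-level replace
    return merged

def merge_values(values, expected_type, dedup=True, recursive_dict_merge=False):
    """Merge a sequence of list or dict values in order."""
    if expected_type == "list":
        merged = [item for value in values for item in value]
        if dedup:
            seen = set()
            merged = [x for x in merged if not (x in seen or seen.add(x))]
        return merged
    result = {}
    for value in values:
        result = merge_dicts(result, value, recursive_dict_merge)
    return result

# e.g. merging per-group port lists, as in the verbosity example:
ports = merge_values([[22, 1111], [443, 1111], [80]], "list")
```

With dedup enabled (the documented default), the duplicate 1111 from the second list is dropped while the first-seen order is kept.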
ansiblemetrics
The static source code measurement tool for AnsibleAnsibleMetricsis a Python-based static source code measurement tool to characterize Infrastructure-as-Code. It helps quantify the characteristics of infrastructure code to support DevOps engineers when maintaining and evolving it. It currently supports 46 source code metrics, though other metrics can be derived by combining the implemented ones.How to cite AnsibleMetricsIf you use AnsibleMetrics in a scientific publication, we would appreciate citations to the following paper:@article{DALLAPALMA2020100633, title = "AnsibleMetrics: A Python library for measuring Infrastructure-as-Code blueprints in Ansible", journal = "SoftwareX", volume = "12", pages = "100633", year = "2020", issn = "2352-7110", doi = "https://doi.org/10.1016/j.softx.2020.100633", url = "http://www.sciencedirect.com/science/article/pii/S2352711020303460", author = "Stefano {Dalla Palma} and Dario {Di Nucci} and Damian A. Tamburri", keywords = "Infrastructure as Code, Software metrics, Software quality", abstract = "Infrastructure-as-Code (IaC) has recently received increasing attention in the research community, mainly due to the paradigm shift it brings in software design, development, and operations management. However, while IaC represents an ever-increasing and widely adopted practice, concerns arise about the need for instruments that help DevOps engineers efficiently maintain, speedily evolve, and continuously improve Infrastructure-as-Code. In this paper, we present AnsibleMetrics, a Python-based static source code measurement tool to characterize Infrastructure-as-Code. Although we focus on Ansible, the most used language for IaC, our tool could be easily extended to support additional formats. AnsibleMetrics represents a step forward towards software quality support for DevOps engineers developing and maintaining infrastructure code." }How to installInstallation is made simple by thePyPI repository. 
Download the tool and install it with:pip install ansiblemetricsor, alternatively from the source code project directory:pip install -r requirements.txt pip install .How to useCommand-lineRunansible-metrics --helpfor instructions about the usage:usage: ansible-metrics [-h] [--omit-zero-metrics] [-d DEST] [-o] [-v] src Extract metrics from Ansible scripts. positional arguments: src source file (playbook or tasks file) or directory optional arguments: -h, --help show this help message and exit --omit-zero-metrics omit metrics with value equal 0 -d DEST, --dest DEST destination path to save results -o, --output shows output -v, --version show program's version number and exitAssume that the following example is namedplaybook1.yml:----hosts:webserversvars:http_port:80remote_user:roottasks:-name:ensure apache is at the latest versionyum:name:httpdstate:latest-hosts:databasesremote_user:roottasks:-name:ensure postgresql is at the latest versionyum:name:postgresqlstate:latest-name:ensure that postgresql is startedservice:name:postgresqlstate:startedand is located within the folderplaybooksas follows:playbooks|- playbook1.yml|- playbook2.yml|- playbook3.ymlAlso, assume the user's working directory is theplaybooksfolder. Then, it is possible to extract source code characteristics from that blueprint by running the following command:ansible-metrics --omit-zero-metrics playbook1.yml --dest report.jsonFor this example, the report.json will result in{ "filepath": "playbook1.yml", "avg_play_size": 10, "avg_task_size": 4, "lines_blank": 4, "lines_code": 20, "num_keys": 20, "num_parameters": 6, "num_plays": 2, "num_tasks": 3, "num_tokens": 50, "num_unique_names": 3, "num_vars": 1, "text_entropy": 4.37 }PythonAnsibleMetricscurrently supports up to 46 source code metrics, implemented in Python. 
To extract the value for a given metric, follow this pattern:

```python
from ansiblemetrics.<general|playbook>.metric import Metric

script = 'a valid yaml script'
value = Metric(script).count()
```

where metric and Metric have to be replaced with the name of the desired metric module to compute the value of a specific metric.

The difference between the general and the playbook modules lies in the fact that the playbook module contains metrics specific to playbooks (for example, the number of plays and tasks), while the general module contains metrics that can be generalized to other languages (for example, the lines of code).

For example, to count the number of lines of code:

```python
from ansiblemetrics.general.lines_code import LinesCode

script = """
---
- hosts: all

  tasks:
  - name: This is a task!
    debug:
      msg: "Hello World"
"""

print('Lines of executable code:', LinesCode(script).count())
```

To extract the value for the 46 metrics at once, import the ansiblemetrics.metrics_extractor package and call the method extract_all() (in this case the return value will be a json object):

```python
from ansiblemetrics.metrics_extractor import extract_all

script = """
---
- hosts: all

  tasks:
  - name: This is a task!
    debug:
      msg: "Hello World"
"""

metrics = extract_all(script)
print('Lines of executable code:', metrics['lines_code'])
```
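Of the metrics in the example report, `text_entropy` may be the least self-explanatory. A plausible reading is the Shannon entropy of the script's token distribution; the sketch below illustrates that definition and is an assumption on my part, not AnsibleMetrics' actual implementation (its tokenization may differ).

```python
import math
from collections import Counter

def text_entropy(script):
    # Shannon entropy (in bits) over whitespace-separated tokens.
    # Illustrative only: AnsibleMetrics' exact tokenization may differ.
    tokens = script.split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A script whose n tokens are all distinct has maximal entropy log2(n):
value = text_entropy("a b c d")
```

Intuitively, a repetitive blueprint scores low (every token predictable) while a varied one scores high, which is why the metric is reported alongside size measures like num_tokens.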
ansible-mikrotik-utils
UNKNOWN
ansible-mkdocs
ansible-document

Automatically document ansible roles.

Concept

Generate documentation automatically by looking up a role's content.

Usage

```
$ ansible-mkdocs path/to/role
# ex:
$ ansible-mkdocs examples/install_gitlab

name | value | location
------|------|------
gitlab_package_script_url | https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | vars/main.yml
gitlab_interface | {{ ansible_default_ipv4['interface'] }} | defaults/main.yml
gitlab_addr | {{ hostvars[inventory_hostname]['ansible_' + gitlab_interface]['ipv4']['address'] }} | defaults/main.yml
gitlab_install | yes | defaults/main.yml
```

How does it work?

- Generate a list with modules and their values
  - Example: copy will be used; register the mode, required, ...
- Look up every directory (files, tasks, vars, ...) and fetch information
- For every directory, generate the associated template
- Aggregate every generated template
- Add metadata
  - Has tests
  - Has molecule
  - Meta from meta/
- Output markdown
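The "look up every directory and fetch information" step can be sketched in plain Python: walk a role's defaults/ and vars/ folders and record each top-level variable with its location, as in the name/value/location table above. This is an illustrative sketch only (ansible-mkdocs itself presumably uses a real YAML parser; here a naive "key: value" line scan avoids the dependency), and `collect_vars` is a made-up name.

```python
import os

def collect_vars(role_path, subdirs=("defaults", "vars")):
    # Naive sketch: scan <role>/<subdir>/main.yml for top-level
    # "key: value" lines and record (name, value, location) rows.
    rows = []
    for subdir in subdirs:
        path = os.path.join(role_path, subdir, "main.yml")
        if not os.path.isfile(path):
            continue
        with open(path) as handle:
            for line in handle:
                if ":" in line and not line.startswith((" ", "#", "-")):
                    key, _, value = line.partition(":")
                    rows.append((key.strip(), value.strip(),
                                 os.path.join(subdir, "main.yml")))
    return rows
```

Each row corresponds to one line of the generated markdown table.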
ansible-modules-consul-acl
ansible-modules-consul-aclAnsible modules for theConsul ACL system:consul_acl_policyconsul_acl_tokenInstallationInstall using pip:pip install ansible-modules-consul-aclThe modules have no external dependencies except Ansible.UsageThe documentation for each module is mostly complete - useansible-docto view it.Example-name:Create ACL policyconsul_acl_policy:name:example# Rules specified as an HCL stringrules:|service "example" {policy = "write"}state:presenturl:https://localhost:8500token:a22c5e4f-0f48-4907-82db-843c6baf75be# Requires acl:writeregister:consul_acl_policy-name:Create ACL tokenconsul_acl_token:description:Example token# Policies specified as a list of PolicyLink objects: https://www.consul.io/api/acl/tokens.html#policiespolicies:-id:"{{consul_acl_policy.id}}"local:truestate:presenturl:https://localhost:8500token:a22c5e4f-0f48-4907-82db-843c6baf75be# Requires acl:writeregister:consul_acl_tokenEnvironment variablesSome of the environment variables for theConsul CLIwill be used if they are defined:CONSUL_HTTP_ADDRfor theurlparameter. Prefix withhttps://instead of settingCONSUL_HTTP_SSL=trueCONSUL_HTTP_TOKENfor thetokenparameterCONSUL_CLIENT_CERTfor theclient_certparameterCONSUL_CLIENT_KEYfor theclient_keyparameterTesting locallyTo run the functional tests, set the following environment variables from the project root directory:exportANSIBLE_LIBRARY="$PWD/ansible/modules/consul_acl"exportANSIBLE_MODULE_UTILS="$PWD/ansible/module_utils"Then run the test playbooks in a Python environment withoutansible-modules-consul-aclinstalled.
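The environment-variable fallback described above can be sketched generically: an explicitly supplied module parameter wins, otherwise the corresponding Consul CLI variable is consulted. The mapping below follows the list in the README; the `resolve` helper is illustrative only, not the modules' actual code.

```python
import os

# Module parameter -> Consul CLI environment variable, per the list above.
ENV_FALLBACKS = {
    "url": "CONSUL_HTTP_ADDR",
    "token": "CONSUL_HTTP_TOKEN",
    "client_cert": "CONSUL_CLIENT_CERT",
    "client_key": "CONSUL_CLIENT_KEY",
}

def resolve(params, environ=os.environ):
    # Sketch of the fallback: explicit parameter first, env var second.
    resolved = {}
    for name, env_var in ENV_FALLBACKS.items():
        resolved[name] = params.get(name) or environ.get(env_var)
    return resolved
```

So a playbook that sets only `token:` would still pick up the server address from `CONSUL_HTTP_ADDR` (note the README's caveat: prefix it with `https://` rather than setting `CONSUL_HTTP_SSL=true`).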
ansible-modules-dcos
Ansible modules for DC/OS.UsageCreate a user:- hosts: localhost tasks: - dcos_user: uid: "bobslydell" description: 'bobslydell' password: 'fooBar123ASDF' state: present dcos_credentials: "{{ dcos_facts.ansible_facts.dcos_credentials }}"Create a group:- dcos_group: gid="bobs" description='the bobs'Create a ACL:- dcos_acl: rid: "dcos:adminrouter:service:marathon-bobs" description: "Bob acl"Add user to ACL:- dcos_acl_user: rid: "dcos:adminrouter:service:marathon-bobs" uid: "bobslydell" permission: "read"Add group to ACL:- dcos_acl_group: rid: "dcos:adminrouter:service:marathon-bobs" gid: "bobs" permission: "read"Print the DC/OS token:- debug: msg="{{lookup('dcos_token')}}"Print the DC/OS token header:- debug: msg="{{lookup('dcos_token_header')}}"Get marathon leader:- dcos_marathon_leader: register: marathonLicenseMIT
ansible-modules-hashivault
Ansible modules for Hashicorp Vault.Install this Ansible module:viapip:pip install ansible-modules-hashivaultviaansible-galaxy(requireshvac>=0.7.2):ansible-galaxy install 'git+https://github.com/TerryHowe/ansible-modules-hashivault.git'Note: Thehashicorplookup plugin does not work with this last install method (ansible/ansible#28770). You can fallback to the build-in lookup plugin:hashi_vaultIn most cases the Hashicorp Vault modules should be run on localhost.Environmental VariablesThe following variables need to be exported to the environment where you run ansible in order to authenticate to your HashiCorp Vault instance:VAULT_ADDR: url for vaultVAULT_SKIP_VERIFY=true: if set, do not verify presented TLS certificate before communicating with Vault server. Setting this variable is not recommended except during testingVAULT_AUTHTYPE: authentication type to use:token,userpass,github,ldap,approleVAULT_LOGIN_MOUNT_POINT: mount point for login defaults to auth typeVAULT_TOKEN: token for vaultVAULT_ROLE_ID: (required byapprole)VAULT_SECRET_ID: (required byapprole)VAULT_USER: username to login to vaultVAULT_PASSWORD: password to login to vaultVAULT_CLIENT_KEY: path to an unencrypted PEM-encoded private key matching the client certificateVAULT_CLIENT_CERT: path to a PEM-encoded client certificate for TLS authentication to the Vault serverVAULT_CACERT: path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificateVAULT_CAPATH: path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificateVAULT_AWS_HEADER: X-Vault-AWS-IAM-Server-ID Header value to prevent replay attacksVAULT_NAMESPACE: specify the Vault Namespace, if you have oneDocumentationThere are a few simple examples in this document, but the full documentation can be found at:https://terryhowe.github.io/ansible-modules-hashivault/modules/list_of_hashivault_modules.htmlReading and WritingThe following example writes the giant secret with two values and then reads the 
fie value. Thehashivault_secretmodule is kv2 by default:--- - hosts: localhost tasks: - hashivault_secret: secret: giant data: foo: foe fie: fum - hashivault_read: secret: giant key: fie version: 2 register: vault_readThe lookup plugin:- set_fact: looky: "{{lookup('hashivault', 'giant', 'foo', version=2)}}"The hashivault_write, hashivault_read and the lookup plugin assume the /secret mount point. If you are accessing another mount point, usemount_point:--- - hosts: localhost tasks: - hashivault_secret_engine: name: stories backend: generic - hashivault_write: mount_point: /stories secret: stuart data: last: 'little' - hashivault_read: mount_point: /stories secret: stuart key: last - set_fact: book: "{{lookup('hashivault', 'stuart', 'last', mount_point='/stories')}}"Version 2 of KV secret engine is also supported, just addversion: 2:--- - hashivault_read: mount_point: /stories version: 2 secret: stuart key: last - set_fact: book: "{{lookup('hashivault', 'stuart', 'last', mount_point='/stories', version=2)}}"Initialization, Seal, and UnsealThe real strength of this module is all the administrative functions you can do. See the documentation mentioned above for more, but here is a small sample.You may init the vault:--- - hosts: localhost tasks: - hashivault_init: register: vault_initYou may also seal and unseal the vault:--- - hosts: localhost vars: vault_keys: "{{ lookup('env','VAULT_KEYS') }}" tasks: - hashivault_status: register: vault_status - block: - hashivault_seal: register: vault_seal when: "{{vault_status.status.sealed}} == False" - hashivault_unseal: keys: '{{vault_keys}}'Action PluginIf you are not using the VAULT_ADDR and VAULT_TOKEN environment variables, you may be able to simplify your playbooks with an action plugin. 
This can be somewhat similar to this example action plugin.

Developer Note

One of the complicated problems with development and testing of this module is that `ansible/module_utils/hashivault.py` is not a directory in itself, which in my opinion is a problem with Ansible. Because of this limitation, `pip install -e .` does not work like it would for other projects. Two potential ways to work around this issue are to either use the `link.sh` script in the top-level directory, or to run the following for every change:

```
rm -rf dist; python setup.py sdist
pip install ./dist/ansible-modules-hashivault-*.tar.gz
```

License

MIT.
ansible-modules-idcf-dns
Ansible modules for IDCF-cloud API operationsInstall$ pip install ansible-modules-idcf-dnsIncludesDNSManage DNS zone and recordSeehttp://idcf.jp/cloud/
ansible-modules-morpheus
Ansible Modules MorpheusInstall this module:viapippipinstallansible-modules-morpheusviaansible-galaxyansible-galaxyinstall'git+https://github.com/gomorpheus/ansible-modules-morpheus.git'Environment VariablesIf you choose to use env vars the following variables can be exported to the environment you are controlling with ansible in order to authenticate to your Morpheus Appliance:MORPH_ADDR : url for Morpheus ApplianceMORPH_AUTHTYPE: authorization type for Morpheus (token or userpass)MORPH_USER: Morpheus appliance username for userpass authtypeMORPH_PASSWORD: Morpheus appliance user password for userpass authtypeMORPH_TOKEN: Morpheus api token for token authtypeMORPH_SSL_VERIFY: Boolean for verifying sslAddition variables for specific modules:MORPH_SECRET: Morpheus secret key for Cypher value reads in morph_cypher moduleArgumentsAlternatively you can pass arguments to the module by using discrete variables in your task module. Args that are supported are:baseurl: url for Morpheus Applianceauthtype: authorization type for Morpheus (token or userpass)api_token: Morpheus api token for token authtypeusername: Morpheus appliance username for userpass authtypepassword: Morpheus appliance user password for userpass authtypessl_verify: Boolean for verifying SSLFor specific modulessecret_key: Morpheus secret key for Cypher value reads in morph_cypher moduleModule Examplesmorph_cypher-hosts:footasks:-name:gettokenmorph_cypher:baseurl:"https://sandbox.morpheusdata.com"secret_key:"password/spark"authtype:tokenssl_verify:Falseregister:results-debug:var=results.secretor explicitly passing the api_token as a var:-hosts:footasks:-name:gettokenmorph_cypher:baseurl:"https://sandbox.morpheusdata.com"secret_key:"secret/nooneknows"authtype:tokenapi_token:"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"register:resultsLookup Plugin Examplesmorph_cypher-debug:msg:"{{ lookup('morph_cypher', 'baseurl=https://sandbox.morpheusdata.com authtype=token api_token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 
secret_key=password/spark')}}"-debug:msg:"{{ lookup('morph_cypher', 'baseurl=https://sandbox.morpheusdata.com authtype=userpass username=slim_shady password=password secret_key=secret/hello') }}"-debug:msg:"{{ lookup('morph_cypher', 'baseurl=https://sandbox.morpheusdata.com authtype=token api_token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ssl_verify=False secret_key=key/256/myKey') }}"LicenseMIT
ansible-modules-pm2
ansible-modules-pm2

Ansible Module to Manage Processes via PM2

- Manage the state of processes via the pm2 process manager
- Start/Stop/Restart/Reload/Delete applications

Tested on:

- Host Python: 3.8
- Target host Python: 2.7, 3.5, 3.6, 3.7, 3.8
- Ansible: 2.8.10, 2.9.6 (Should work with older versions)

Installation

Install via pip:

```
pip install ansible-modules-pm2
```

The PM2 package has to be installed on target hosts. For example, add the following to your playbook to install pm2 globally:

```yaml
- npm:
    name: pm2
    global: yes
```

Usage

Basic usage is similar to the service or supervisorctl module: specify the name and its state. To start an app, give either script or config.

Examples

```yaml
---
- name: Start myapp with process config file, if not running
  pm2:
    name: myapp
    config: /path/to/myapp/myapp.json
    state: started

- name: Start myapp.js, if not running
  pm2:
    name: myapp
    script: /path/to/myapp/myapp.js
    state: started

- name: Stop process named myapp, if running
  pm2:
    name: myapp
    state: stopped

- name: Restart myapp, in all cases
  pm2:
    name: myapp
    state: restarted

- name: Reload myapp, in all cases
  pm2:
    name: myapp
    state: reloaded

- name: Delete myapp, if exists
  pm2:
    name: myapp
    state: absent

- name: Specify pm2 executable path
  pm2:
    name: myapp
    state: started
    config: /path/to/myapp/myapp.json
    executable: /path/to/myapp/node_modules/.bin/pm2

- name: Also specify working directory where running pm2 command
  pm2:
    name: myapp
    state: started
    config: /path/to/myapp/myapp.json
    executable: /path/to/myapp/node_modules/.bin/pm2
    chdir: /path/to/working/directory
```

Arguments

| Parameters | Choices | Comments |
|------------|---------|----------|
| name (required) | | Name of the application. Required for all cases to check the current status of the app. |
| state | started (default) / stopped / restarted / reloaded / absent / deleted | started/stopped/absent/deleted are idempotent actions that will not run commands unless necessary. restarted will always restart the process. reloaded will always reload. Note that restarted will fail when the process does not exist (the action does not start it automatically). |
| config | | Process configuration file, in JSON or YAML format. Either config or script is required when state=started. |
| script | | Executable file to start. Either config or script is required when state=started. |
| executable | | Path to pm2 executable. |
| chdir | | Change into this directory before running the pm2 start command. When state=started and this option is omitted, the directory where config or script exists is used. |

License

This software is licensed under GPLv3. See LICENSE for details.
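The idempotent states (started/stopped/absent/deleted run commands only when necessary) imply inspecting the current process list first. pm2 can print that list as JSON via `pm2 jlist`; the sketch below shows how a `state=started` decision could be made from such output. The field names ("name", "pm2_env", "status") are assumptions about pm2's JSON shape, and this is an illustration, not the module's actual code.

```python
import json

def needs_start(jlist_output, name):
    # Decide whether `state=started` must act, given `pm2 jlist` JSON.
    # Field names are assumptions about pm2's output; sketch only.
    for proc in json.loads(jlist_output):
        if proc.get("name") == name:
            return proc.get("pm2_env", {}).get("status") != "online"
    return True  # unknown process: must be started

sample = '[{"name": "myapp", "pm2_env": {"status": "online"}}]'
```

An already-online process yields "no action needed", which is what makes repeated `state: started` runs report `changed=0`.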
ansible-mongodb-store
ansible_mongodb_store

Some code to write and read data in MongoDB directly from Ansible playbooks.
ansible-navigator
ansible-navigatorA text-based user interface (TUI) for Ansible.A demo of the interface can be foundon YouTube.ContributingAny kind of contribution to this project is very welcome and appreciated, whether it is a documentation improvement,bug report,pull requestreview, or a patch.See theContributing guidelinesfor details.Quick startInstallingGetting started with ansible-navigator is as simple as:pip3 install 'ansible-navigator[ansible-core]' ansible-navigator --help(Users wishing to install within a virtual environment might find the relevantPython documentationuseful.)By default, ansible-navigator uses a container runtime (podmanordocker, whichever it finds first) and runs Ansible within an execution environment (a pre-built container image which includesansible-corealong with a set of Ansible collections.)This default behavior can be disabled by starting ansible-navigator with--execution-environment false. In this case, Ansible and any collections needed must be installed manually on the system.AdditionalLinux,macOSandWindows with WSL2installation instructions are available in theInstallation guide.WelcomeWhen runningansible-navigatorwith no arguments, you will be presented with thewelcome page. 
From this page, you can run playbooks, browse collections, explore inventories, read Ansible documentation, and more.A full list of key bindings can be viewed by typing:help.Output modesThere are two modes in which ansible-navigator can be run:Theinteractivemode, which provides a curses-based user interface and allows you to "zoom in" on data in real time, filter it, and navigate between various Ansible components; andThestdoutmode, which doesnotuse curses, and simply returns the output to the terminal's standard output stream, as Ansible's commands would.Theinteractivemode is the default and this default can be overwritten by passing--mode stdout(-m stdout) or settingmodeinconfiguration.Example commandsAll of ansible-navigator's features can be accessed from thewelcome pagedescribed above, but as a shortcut, commands can also be provided directly as command-line arguments.Some examples:Review and explore available collections:ansible-navigator collectionsReview and explore current Ansible configuration:ansible-navigator configReview and explore Ansible documentation:ansible-navigator doc ansible.netcommon.cli_commandReview execution environment images available locally:ansible-navigator imagesReview and explore an inventory:ansible-navigator inventory -i inventory.yamlRun and explore a playbook:ansible-navigator run site.yaml -i inventory.yamlOr using thestdoutmode described above:Show the current Ansible configuration:ansible-navigator config dump -m stdoutShow documentation:ansible-navigator doc sudo -t become -m stdout... and so on. A full list of subcommands and their relation to Ansible commands can be found in thesubcommand documentation.Configuring ansible-navigatorThere are several ways to configure ansible-navigator and users and projects are free to choose the most convenient method for them. 
The full hierarchy of how various configuration sources are applied can be found in the FAQ mentioned below.Of note, projects making use of ansible-navigator can include a project-wide configuration file with the project. If one is not found, ansible-navigator will look for a user-specific configuration file in the user's home directory. Details about this can be found in thesettings documentation.Frequently Asked Questions (FAQ)We maintain alist of common questionswhich provides a good resource to check if something is tripping you up. We also encourage additions to this document for the greater community!Licenseansible-navigator is released under the Apache License version 2. See theLICENSEfile for more details.
ansible-netbox-inventory
ToCIntroCompatibilityGroupingHosts variablesOptionsUsageIntroThis is a Netbox dynamic inventory script for Ansible.Netboxis an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Itโ€™s nice, modern, and has good APIs โ€ฆ so itโ€™s a pretty nice option to serve as a โ€œSource of Truthโ€.You can group servers as you want and based on what you have in Netbox, you can select fields as groups or as vars for hosts. And you can use default fields or custom fields.CompatibilityThe script tested withnetbox = v1.6andnetbox = v2.0.4, but most probably it will work with all netbox v1.0 and above.GroupingServers could be grouped by any section in Netbox. e.g. you can group hosts by โ€œsite, โ€œrackโ€, โ€œroleโ€, โ€œplatformโ€, or any other section in Netbox.Please remember: For grouping, API names should be used not UI names.So if you have a โ€œsiteโ€ called โ€œUS-Eastโ€, in Ansible you will get a hosts group is called โ€œUS-Eastโ€ has all hosts in that site.If that section is adefaultsection you need to put it undergroup_by.defaultif itโ€™s a custom section (custom fields), then put it undergroup_by.custom.Here is an example how servers will be grouped based on theirplatform.group_by: default: - platformSo if you have โ€œUbuntuโ€ and โ€œCentOSโ€ as platforms in Netbox, you will have 2 groups of servers that using that systems.Hosts variablesNetbox sections could be used as variables for hosts! e.g. you could use the IP of the host in Netbox asansible_ssh_host, or use a custom field as well.There are 3 sections here, first type isIP, second one isGeneral, and finallyCustom.Variables are defined asKey: Value. 
The key is what will be in Ansible and value comes from Netbox.hosts_vars: ip: ansible_ssh_host: primary_ipHereprimary_ipwill be used as value foransible_ssh_host.Options$ netbox.py -h usage: netbox.py [-h] [-c CONFIG_FILE] [--list] [--host HOST] optional arguments: -h, --help show this help message and exit -c CONFIG_FILE, --config-file CONFIG_FILE Path for script's configuration. Also "NETBOX_CONFIG_FILE" could be used as env var to set conf file path. (default: netbox.yml) --list Print all hosts with vars as Ansible dynamic inventory syntax. (default: False) --host HOST Print specific host vars as Ansible dynamic inventory syntax. (default: None)You can also set config file path through environment variableNETBOX_CONFIG_FILE.Usage$ ansible all -i netbox.py -m ping
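The `--list` output follows Ansible's dynamic-inventory JSON convention: one key per group (each with a `hosts` list) plus a `_meta.hostvars` section for per-host variables. The sketch below builds that structure for the grouping and host-vars examples above; the host name and values are made up, and this is an illustration of the output format, not this script's code.

```python
import json

def build_inventory(hosts):
    # hosts: mapping of hostname -> {"groups": [...], "vars": {...}}.
    # Produces the JSON shape Ansible expects from `--list`:
    # {"<group>": {"hosts": [...]}, "_meta": {"hostvars": {...}}}
    inventory = {"_meta": {"hostvars": {}}}
    for name, info in hosts.items():
        for group in info["groups"]:
            inventory.setdefault(group, {"hosts": []})["hosts"].append(name)
        inventory["_meta"]["hostvars"][name] = info["vars"]
    return inventory

# e.g. one host grouped by its Netbox platform, with its primary IP
# exposed as ansible_ssh_host (hypothetical values):
doc = build_inventory({
    "web01": {"groups": ["Ubuntu"], "vars": {"ansible_ssh_host": "10.0.0.5"}},
})
print(json.dumps(doc, indent=2))
```

With `group_by: default: [platform]` and `hosts_vars: ip: {ansible_ssh_host: primary_ip}`, this is the kind of document Ansible receives.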
ansible-nwd
Ansible Never Write the DocIntroductionAnsible Never Write the Docprovides you an automatic way to create Ansible roles documentation.It parses all Ansible roles folders and gathers information to write them into aReadme.mdfor you.Ansible-nwdis also compatible withmolecule, it will parse your different scenarios and write them into your documentation file.ResourcesDocumentation :https://docs.ansible-nwd.comPip download :https://pypi.org/project/ansible-nwd/Docker image :https://hub.docker.com/repository/docker/laurentvasseur/ansible-nwdExampleIn this repo, you can find an example of Ansible role and allAnsible-nwdpattern available at the following path :examples/python3
ansible-output-parser
The Ansible Parser is intended to parse the output that Ansible returns.

Installation

Simply install using:

```
pip install ansible-output-parser
```

Usage

```python
from ansible_parser.play import Play

play = ""  # populate with play output
ansible = Play(play_output=play)
failures = ansible.failures()
```

Alternatively the following can be executed to read from a log file:

```python
from ansible_parser.logs import Logs

log_file = ""  # path to log file
log_plays = Logs(log_file=log_file)
```

Reading from a log file will result in multiple plays as it may process plays with the same name.
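Ansible's PLAY RECAP lines have a regular shape (`host : ok=N changed=N unreachable=N failed=N ...`), so extracting per-host failures can be sketched with a regular expression. This illustrates the kind of parsing involved and is not this library's implementation; `recap_failures` is a made-up helper.

```python
import re

# Matches the fixed-order counters at the start of a recap line.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failures(output):
    # Return hosts whose recap line reports failed or unreachable tasks.
    failed = []
    for line in output.splitlines():
        match = RECAP_RE.match(line.strip())
        if match and (int(match["failed"]) or int(match["unreachable"])):
            failed.append(match["host"])
    return failed

sample = """PLAY RECAP *********
web1 : ok=5 changed=1 unreachable=0 failed=1
web2 : ok=6 changed=0 unreachable=0 failed=0
"""
```

A structured parser like this library can of course go further (per-task results, multiple plays per log), but the recap line is the minimal failure signal.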
ansible-parallel
ansible-parallelTL;DR:pipinstallansible-parallel ansible-parallel*.ymlExecutes multiple ansible playbooks in parallel.For my usage, running sequentially (using asite.ymlcontaining multipleimport_playbook) takes 30mn, running in parallel takes 10mn.Usageansible-parallelruns likeansible-playbookbut accepts multiple playbooks. All remaining options are passed toansible-playbookso feel free to runansible-parallel --check *.ymlfor example.ExampleIt's easy to start:$ansible-parallel*.ymlWhen it runs, it display a live update of what's going on, one line per playbook:web.yml: TASK [common : Configure Debian repositories] ***************************** gitlab.yml: TASK [common : Configure IP failover] ************************************* staging.yml: TASK [common : Configure Debian repositories] ***************************** dev.yml: Done.And when it's done, it prints a full report like:# Playbook playbook-webs.yml, ran in 123s web1.meltygroup.com : ok=51 changed=0 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 web2.meltygroup.com : ok=51 changed=0 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 web3.meltygroup.com : ok=51 changed=0 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 # Playbook playbook-staging.yml, ran in 138s staging1.meltygroup.com : ok=64 changed=6 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 # Playbook playbook-gitlab.yml, ran in 179s gitlab-runner1.meltygroup.com : ok=47 changed=0 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 gitlab-runner2.meltygroup.com : ok=47 changed=0 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 gitlab-runner3.meltygroup.com : ok=47 changed=0 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 gitlab.meltygroup.com : ok=51 changed=0 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 # Playbook playbook-devs.yml, ran in 213s dev1.meltygroup.com : ok=121 changed=0 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 dev2.meltygroup.com : ok=121 changed=0 unreachable=0 failed=0 
skipped=22 rescued=0 ignored=0Known alternativesansible-pullansible-parallel is only good if you want to keep the push behavior of Ansible, but if you're here you may have a lot of playbooks, and switching toansible-pullwith a proper reporting system likeARAxargsA quick and dirty way of doing it in 3 lines of bash:ls -1 *.yml | xargs -n1 -P16 sh -c 'ansible-playbook "$$0" > "$$0.log"' ||: grep -B1 "^\(changed\|fatal\|failed\):" *.log echo *.yml.log | xargs -n1 sed -n -e '/^PLAY RECAP/,$$p'
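The fan-out itself is simple: launch one ansible-playbook process per file and collect the results, which is the same idea as the xargs one-liner above. A minimal Python sketch (not ansible-parallel's code; a harmless placeholder command stands in for `ansible-playbook` so the example is self-contained):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_all(commands, max_workers=16):
    # Run shell commands in parallel; return (command, returncode,
    # stdout) tuples in input order.
    def run(cmd):
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return cmd, proc.returncode, proc.stdout
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, commands))

# Real usage would be something like
#   run_all(["ansible-playbook " + p for p in playbooks])
# (playbooks being your list of *.yml files); placeholder here:
results = run_all(["echo one", "echo two"])
```

Everything ansible-parallel adds on top — live per-playbook status lines and the final recap report — is presentation around this core loop.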
ansible-playbook-debugger
ansible-playbook-debugger is a tool to debug a playbook. The debugger is invoked when a task in the playbook fails, and lets you inspect the actually used module args, variables, facts, and so on. You can also fix the module's args in the debugger and re-run the failed task (and if it succeeds, the remaining part of the playbook runs). See https://github.com/ks888/ansible-playbook-debugger for more details.
ansible-playbook-grapher
Ansible Playbook Grapheransible-playbook-grapheris a command line tool to create a graph representing your Ansible playbook plays, tasks and roles. The aim of this project is to have an overview of your playbook.Inspired byAnsible Inventory Grapher.FeaturesThe following features are available when opening the SVGs in a browser (recommended) or a viewer that supports JavaScript:Highlighting of all the related nodes of a given node when clicking or hovering. Example: Click on a role to select all its tasks when--include-role-tasksis set.A double click on a node opens the corresponding file or folder depending whether if it's a playbook, a play, a task or a role. By default, the browser will open folders and download files since it may not be able to render the YAML file.Optionally, you can setthe open protocol to use VSCodewith--open-protocol-handler vscode: it will open the folders when double-clicking on roles (notinclude_role) and the files for the others nodes. The cursor will be at the task exact position in the file.Lastly, you can provide your own protocol formats with--open-protocol-handler custom --open-protocol-custom-formats '{}'. See the help andan example..Filer tasks based on tagsExport the dot file used to generate the graph with Graphviz.PrerequisitesPython 3.10 at least. Might work with some previous versions but the code is NOT tested against them. Seesupport matrix.A virtual environment from which to run the grapher. 
This is highly recommended because the grapher depends on some versions of ansible-core which are not necessarily installed in your environment and may cause issues if you use some older versions of Ansible (since the ansible package has been split).

Graphviz: the tool used to generate the graph in SVG.

```shell
sudo apt-get install graphviz # or yum install or brew install
```

I try to respect the Red Hat Ansible Engine Life Cycle for the supported Ansible versions.

## Installation

```shell
pip install ansible-playbook-grapher
```

## Renderers

At the time of writing, two renderers are supported:

- `graphviz` (default): generates the graph in SVG. It has more features and is more tested: open protocol, highlighting of linked nodes, etc.
- `mermaid-flowchart`: generates the graph in Mermaid format. You can directly embed the graph in your markdown and GitHub (and other integrations) will render it. Early support.

If you are interested in support for more renderers, feel free to create an issue or raise a PR based on the existing renderers.

## Usage

```shell
ansible-playbook-grapher tests/fixtures/example.yml
ansible-playbook-grapher --include-role-tasks tests/fixtures/with_roles.yml
ansible-playbook-grapher tests/fixtures/with_block.yml
ansible-playbook-grapher --include-role-tasks --renderer mermaid-flowchart tests/fixtures/multi-plays.yml
```

The last command produces a Mermaid flowchart; an excerpt of the generated output is shown below:

```mermaid
---
title: Ansible Playbook Grapher
---
%%{ init: { "flowchart": { "curve": "bumpX" } } }%%
flowchart LR
	%% Start of the playbook 'tests/fixtures/multi-plays.yml'
	playbook_34b89e53("tests/fixtures/multi-plays.yml")

	%% Start of the play 'Play: all (0)'
	play_8c4134b8["Play: all (0)"]
	style play_8c4134b8 fill:#656f5d,color:#ffffff
	playbook_34b89e53 --> |"1"| play_8c4134b8
	linkStyle 0 stroke:#656f5d,color:#656f5d
	pre_task_dd2c1b7d["[pre_task] Pretask"]
	style pre_task_dd2c1b7d stroke:#656f5d,fill:#ffffff
	play_8c4134b8 --> |"1"| pre_task_dd2c1b7d
	linkStyle 1 stroke:#656f5d,color:#656f5d
	pre_task_bc33639f["[pre_task] Pretask 2"]
	style pre_task_bc33639f stroke:#656f5d,fill:#ffffff
	play_8c4134b8 --> |"2"| pre_task_bc33639f
	linkStyle 2 stroke:#656f5d,color:#656f5d
	%% Start of the role 'fake_role'
	play_8c4134b8 --> |"3"| role_f4e6fb4d
	linkStyle 3 stroke:#656f5d,color:#656f5d
	role_f4e6fb4d("[role] fake_role")
	style role_f4e6fb4d fill:#656f5d,color:#ffffff,stroke:#656f5d
	task_94f7fc58[" fake_role : Debug 1"]
	style task_94f7fc58 stroke:#656f5d,fill:#ffffff
	role_f4e6fb4d --> |"1 [when: ansible_distribution == 'Debian']"| task_94f7fc58
	linkStyle 4 stroke:#656f5d,color:#656f5d
	task_bd56c6b5[" fake_role : Debug 2"]
	style task_bd56c6b5 stroke:#656f5d,fill:#ffffff
	role_f4e6fb4d --> |"2 [when: ansible_distribution == 'Debian']"| task_bd56c6b5
	linkStyle 5 stroke:#656f5d,color:#656f5d
	task_4f51a1cc[" fake_role : Debug 3 with double quote &#34;here&#34; in the name"]
	style task_4f51a1cc stroke:#656f5d,fill:#ffffff
	role_f4e6fb4d --> |"3 [when: ansible_distribution == 'Debian']"| task_4f51a1cc
	linkStyle 6 stroke:#656f5d,color:#656f5d
	%% End of the role 'fake_role'
	%% ... (remaining roles and tasks of this play, and the plays
	%% 'Play: database (0)' and 'Play: webserver (0)', are rendered
	%% in the same way and are omitted from this excerpt)
	%% End of the play 'Play: all (0)'
	%% End of the playbook 'tests/fixtures/multi-plays.yml'
```

Note on blocks: since blocks are logical groups of tasks, the conditional `when` is not displayed on the edges pointing to them but on the tasks inside the block. This mimics Ansible's behavior regarding blocks.

## CLI options

The available options:

```
usage: ansible-playbook-grapher [-h] [-v] [-i INVENTORY] [--include-role-tasks]
                                [-s] [--view] [-o OUTPUT_FILENAME]
                                [--open-protocol-handler {default,vscode,custom}]
                                [--open-protocol-custom-formats OPEN_PROTOCOL_CUSTOM_FORMATS]
                                [--group-roles-by-name]
                                [--renderer {graphviz,mermaid-flowchart}]
                                [--renderer-mermaid-directive RENDERER_MERMAID_DIRECTIVE]
                                [--renderer-mermaid-orientation {TD,RL,BT,RL,LR}]
                                [--version] [-t TAGS] [--skip-tags SKIP_TAGS]
                                [--vault-id VAULT_IDS]
                                [--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
                                [-e EXTRA_VARS]
                                playbooks [playbooks ...]

Make graphs from your Ansible Playbooks.

positional arguments:
  playbooks             Playbook(s) to graph

options:
  --ask-vault-password, --ask-vault-pass
                        ask for vault password
  --group-roles-by-name
                        When rendering the graph, only a single role will be
                        displayed for all roles having the same names.
                        Default: False
  --include-role-tasks  Include the tasks of the role in the graph.
  --open-protocol-custom-formats OPEN_PROTOCOL_CUSTOM_FORMATS
                        The custom formats to use as URLs for the nodes in the
                        graph. Required if --open-protocol-handler is set to
                        custom. You should provide a JSON formatted string
                        like: {"file": "", "folder": ""}. Example: If you want
                        to open folders (roles) inside the browser and files
                        (tasks) in vscode, set it to:
                        '{"file": "vscode://file/{path}:{line}:{column}", "folder": "{path}"}'.
                        path: the absolute path to the file containing the
                        plays/tasks/roles. line/column: the position of the
                        plays/tasks/roles in the file. You can optionally add
                        the attribute "remove_from_path" to remove some parts
                        of the path if you want relative paths.
  --open-protocol-handler {default,vscode,custom}
                        The protocol to use to open the nodes when
                        double-clicking on them in your SVG viewer. Your SVG
                        viewer must support double-click and Javascript. The
                        supported values are 'default', 'vscode' and 'custom'.
                        For 'default', the URL will be the path to the file or
                        folders. When using a browser, it will open or
                        download them. For 'vscode', the folders and files
                        will be opened with VSCode. For 'custom', you need to
                        set a custom format with --open-protocol-custom-formats.
  --renderer {graphviz,mermaid-flowchart}
                        The renderer to use to generate the graph.
                        Default: graphviz
  --renderer-mermaid-directive RENDERER_MERMAID_DIRECTIVE
                        The directive for the mermaid renderer. Can be used to
                        customize the output: fonts, theme, curve etc. More
                        info at https://mermaid.js.org/config/directives.html.
                        Default: '%%{ init: { "flowchart": { "curve": "bumpX" } } }%%'
  --renderer-mermaid-orientation {TD,RL,BT,RL,LR}
                        The orientation of the flow chart. Default: 'LR'
  --skip-tags SKIP_TAGS
                        only run plays and tasks whose tags do not match these
                        values
  --vault-id VAULT_IDS  the vault identity to use
  --vault-password-file VAULT_PASSWORD_FILES, --vault-pass-file VAULT_PASSWORD_FILES
                        vault password file
  --version             show program's version number and exit
  --view                Automatically open the resulting SVG file with your
                        system's default viewer application for the file type
  -e EXTRA_VARS, --extra-vars EXTRA_VARS
                        set additional variables as key=value or YAML/JSON, if
                        filename prepend with @
  -h, --help            show this help message and exit
  -i INVENTORY, --inventory INVENTORY
                        specify inventory host path or comma separated host
                        list.
  -o OUTPUT_FILENAME, --output-file-name OUTPUT_FILENAME
                        Output filename without the '.svg' extension.
                        Default: <playbook>.svg
  -s, --save-dot-file   Save the graphviz dot file used to generate the graph.
  -t TAGS, --tags TAGS  only run plays and tasks tagged with these values
  -v, --verbose         Causes Ansible to print more debug messages. Adding
                        multiple -v will increase the verbosity, the builtin
                        plugins currently evaluate up to -vvvvvv. A reasonable
                        level to start is -vvv, connection debugging might
                        require -vvvv.
```

## Configuration: ansible.cfg

The content of `ansible.cfg` is loaded automatically when running the grapher, following Ansible's behavior. The corresponding environment variables are also loaded.

The values in the config file (and their corresponding environment variables) may affect the behavior of the grapher, for example `TAGS_RUN` and `TAGS_SKIP` or the vault configuration. More information here.

## Limitations

- Since Ansible Playbook Grapher is a static analyzer that parses your playbook, it's limited to what can be determined statically: no task is run against your inventory. The parser tries to interpolate the variables, but some of them are only available when running your playbook (ansible_os_family, ansible_system, etc.). The tasks inside any `import_*` or `include_*` with some variables in their arguments may not appear in the graph.
- The rendered SVG graph may sometimes display tasks in the wrong order. I cannot control this behavior of Graphviz yet. Always check the edge labels to know the task order.
- The edge labels may overlap each other. They are positioned as close as possible to the target nodes. If the same role is used in multiple plays or playbooks, the labels can overlap.

## Contribution

Contributions are welcome. Feel free to contribute by creating an issue or submitting a PR :smiley:

### Dev environment

To set up a new development environment:

- Install Graphviz (see above)
- `(cd tests && pip install -r requirements_tests.txt)`

Run the tests and open the generated files in your system's default viewer application:

```shell
export TEST_VIEW_GENERATED_FILE=1
make test # run all tests
```

The graphs are generated in the folder `tests/generated-svgs`. They are also generated as artifacts in GitHub Actions. Feel free to look at them when submitting PRs.

### Lint and format

The project uses black to format the code. Run `black .` to format.

## License

GNU General Public License v3.0 or later (same as Ansible). See LICENSE for the full text.
ansible-playbook-runner
ansible-playbook-runner

ansible-playbook-runner is a simple wrapper for Ansible.

## Installation

Use the package manager pip to install ansible-playbook-runner.

```shell
pip install ansible-playbook-runner
```

## Usage

```python
from ansible_playbook_runner import Runner

Runner(['inventory_path'], 'playbook_path').run()
```

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.
ansible-please
ansible_please

Helper package to make running Ansible a bit smoother. Run Ansible tasks and playbooks from python with ease!

## Installation

To install the python package from PyPI:

```shell
python -m pip install ansible-please
```

From source:

```shell
python setup.py install
```

To install ansible plugins, like `docker_container`:

```shell
ansible-galaxy collection install community.docker
```

## Overview

Main Components:

- `Inventory`: Handles ansible inventory creation from input. Basic input:

```yaml
hosts:
  master_host:
    - 'localhost'
host_info:
  '127.0.0.1':
    'python_path': /usr/bin/python3
```

- `AnsibleTask`: Handles individual ansible task creation. Docker task creation:

```python
from ansible_please.task_templates.docker_container import DockerContainer

docker_container_task = DockerContainer(
    task_description="start-test-redis",
    name="test-redis",
    image="redis:latest",
)
```

converts to yaml:

```yaml
- name: '[up] start-test-redis'
  docker_container:
    name: test-redis
    image: redis:latest
    user: nobody
    keep_volumes: false
    detach: true
    tty: false
    interactive: false
    network_mode: host
    container_default_behavior: compatibility
  tags:
    - up
```

`up` to start the container, `down` to tear it down.

- `Playbook`: Handles playbook creation:

```python
from ansible_please.playbook import Playbook

p = Playbook(
    name="Set up master_host",
    hosts="master_host",
    tasks=[docker_container_task.up(), docker_container_task.down()],
)
```

converts to yaml:

```yaml
- name: Set up master_host
  hosts: master_host
  gather_facts: true
  tasks:
    - name: '[up] start-test-redis'
      docker_container:
        name: test-redis
        image: redis:latest
        user: nobody
        keep_volumes: false
        detach: true
        tty: false
        interactive: false
        network_mode: host
        container_default_behavior: compatibility
      tags:
        - up
    - name: '[down] start-test-redis'
      docker_container:
        name: test-redis
        state: absent
        user: nobody
        keep_volumes: false
        detach: true
        tty: false
        interactive: false
        network_mode: host
        container_default_behavior: compatibility
      tags:
        - down
```

- `AnsibleRunner`: main handler for running playbooks:

```python
r = AnsibleRunner(playbook=p, input_path="test_input.yml")  # or pass in Inventory class
r.up()
```

See the full examples.

- Free software: MIT license
- Documentation: https://ansible-please.readthedocs.io.

## Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

## History

0.1.0 (2020-12-22)

- First release on PyPI.
ansible-prngmgr-inventory
ansible-prngmgr-inventory

An ansible dynamic inventory module to fetch peering session data from prngmgr
ansible-provision
No description available on PyPI.
ansiblepy
# Ansibly

Welcome to Ansibly
ansible-py3
No description available on PyPI.
ansible-pygments
Pygments lexer and style for Ansible snippets

This project provides a Pygments lexer that is able to handle Ansible output. It may be used anywhere Pygments is integrated. The lexer is registered globally under the name `ansible-output`.

It also provides a Pygments style for tools needing to highlight code snippets.

The code is licensed under the terms of the BSD 2-Clause license.

## Using the lexer in Sphinx

Make sure this library is installed in the same env as your Sphinx automation via `pip install ansible-pygments sphinx`. Then, you should be able to use the lexer by its name `ansible-output` in the code blocks of your RST documents. For example:

```rst
.. code-block:: ansible-output

    [WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
    ok: [localhost] => {"msg": ""}
```

## Using the style in Sphinx

It is possible to just set `ansible` in `conf.py` and it will "just work", provided that this project is installed alongside Sphinx as shown above.

```python
pygments_style = 'ansible'
```
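Putting the two Sphinx integrations together, a minimal `conf.py` fragment might look like the sketch below. Only `pygments_style = "ansible"` and the `ansible-output` lexer name come from this README; the `project` value is an illustrative placeholder:

```python
# Sketch of a Sphinx conf.py using ansible-pygments.
# Assumes `pip install ansible-pygments sphinx` was run in the same env.
project = "my-ansible-docs"  # illustrative project name, not from this README

# Use the highlighting style shipped by ansible-pygments:
pygments_style = "ansible"

# Code blocks in the RST sources can then use the globally registered
# lexer, e.g.:  .. code-block:: ansible-output
```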
ansible-pylibssh
pylibssh: Python bindings to client functionality of libssh specific to Ansible use case

## Nightlies @ Dumb PyPI @ GitHub Pages

We publish nightlies on tags and pushes to devel. They are hosted on a GitHub Pages based index generated by dumb-pypi. The web view is @ https://ansible.github.io/pylibssh/.

```shell
$ pip install \
    --extra-index-url=https://ansible.github.io/pylibssh/simple/ \
    --pre \
    ansible-pylibssh
```

## Requirements

You need Python 3.6+.

pylibssh requires libssh to be installed, in particular:

- libssh version 0.9.0 and later.

To install libssh, refer to its Downloads page.

## Building the module

In the local env (assumes there's a libssh shared library on the system, the build toolchain is present and env vars are set properly):

```shell
$ git clone https://github.com/ansible/pylibssh.git
$ cd pylibssh
$ pip install tox
$ tox -e build-dists
```

manylinux-compatible wheels:

```shell
$ git clone https://github.com/ansible/pylibssh.git
$ cd pylibssh
$ pip install tox
$ tox -e build-dists-manylinux1-x86_64  # with Docker

# or with Podman
$ DOCKER_EXECUTABLE=podman tox -e build-dists-manylinux1-x86_64

# to enable shell script debug mode use
$ tox -e build-dists-manylinux1-x86_64 -- -e DEBUG=1
```

## License

This library is distributed under the terms of LGPL 2 or higher, see file LICENSE.rst in this repository.
ansible-qt-launcher
QT Launcher for ansible - Make your ansible tasks easy

A helper pip package for ansible tasks. This pip package helps you automate your CI/CD pipeline.

Assumptions:

- Python is installed on your system.
- Ansible is installed on your system.

Install ansible-qt-launcher on your system using:

```shell
pip install ansible-qt-launcher
```

Follow this link for more details: https://dzone.com/articles/executable-package-pip-install
ansibler
No description available on PyPI.
ansible-repo
ANSIBLE-REPO

usage:

```
ansible-repo is a command which allows ansible to download a git repository and run it as a playbook

Usage:
    ansible-repo <repo> <tag> [-e <env_vars>] [-vvv]
    ansible-repo <repo> <tag> -i <inventory> [-e <env_vars>] [-vvv]

Options:
    -v  verbose
    -i  inventory
    -e  Ansible environment variables
```
ansible-review
No description available on PyPI.
ansible-risk-insight
No description available on PyPI.
ansible-role
The missing "ansible-role" command: downloads, installs, and cleans temporary ansible roles without the need for manually editing ansible playbooks
ansible-role-algosec
AlgoSec Ansible Role
====================

|docs| |travis| |coverage|

.. |docs| image:: https://readthedocs.org/projects/algosec-ansible-role/badge/
   :target: http://algosec-ansible-role.readthedocs.io/en/latest/
   :alt: Documentation Status

.. |coverage| image:: https://img.shields.io/codecov/c/github/algosec/algosec-ansible-role.svg
   :target: https://codecov.io/gh/algosec/algosec-ansible-role

.. |travis| image:: https://travis-ci.com/algosec/algosec-ansible-role.svg?branch=master
   :target: https://travis-ci.com/algosec/algosec-ansible-role

Ansible role to DevOps-ify network security management, leveraging AlgoSec's business-driven security policy management solution.

Documentation available online at: http://algosec-ansible-role.readthedocs.io/en/latest/

Requirements
------------

* This module is supported and fully tested under ``python2.7`` and ``python3.6``.
* All modules of this role require the following environment::

    pip install algosec --upgrade
    pip install ansible marshmallow urllib3

Installation
------------

The Ansible role can be installed directly from Ansible Galaxy by running::

    ansible-galaxy install algosec.algosec

If the ``ansible-galaxy`` command-line tool is not available (usually shipped with Ansible), or you prefer to download the role package directly, navigate to the Ansible Galaxy `role page <https://galaxy.ansible.com/algosec/algosec/>`_ and hit "Download". Alternately, you can directly navigate to our `GitHub repository <https://github.com/algosec/algosec-ansible-role>`_.

Usage
-----

Once installed, you can start using the modules included in this role in your ansible playbooks.

To quickly get up and running a simple example, you can follow these steps:

1. Download and unzip locally the examples folder by clicking `here <https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/algosec/algosec-ansible-role/tree/master/examples>`_.
2. Update authentication credentials in ``vars/algosec-secrets.yml``.
3. Update your AlgoSec server IP in ``inventory.ini``.
4. Update the arguments of the relevant modules in one of the playbooks (files with the ``yml`` extension).
5. Run ``ansible-playbook -i inventory.ini <playbook-filename>.yml``.
6. You've made it!

Documentation
-------------

.. image:: https://readthedocs.org/projects/algosec-ansible-role/badge/
   :target: https://algosec-ansible-role.readthedocs.io/en/latest/
   :alt: Documentation Status

Documentation available online at: https://algosec-ansible-role.readthedocs.io/en/latest/

How to build docs locally?
^^^^^^^^^^^^^^^^^^^^^^^^^^

Using Docker, running from one folder outside of the project::

    $ docker run -it -v $PWD/ansible-role-algosec/:/documents/ ivanbojer/spinx-with-rtd
    $ cd docs
    $ make html

Using Sphinx::

    $ cd docs
    $ make html

Then see the ``docs/_build`` folder created for the html files.

License
-------

MIT (see full license `here <http://algosec-ansible-role.readthedocs.io/en/latest/license.html>`_)

Author Information
------------------

AlgoSec Official Website
https://www.algosec.com/

Development
-----------

To kick off local development, just use ``pipenv``::

    pipenv install

And to use the newly installed virtual environment just run::

    pipenv shell
ansible_role_apply
Apply a single Ansible role to host(s) easily

Example usage

```shell
$ ansible-role-apply --help
Usage: ansible-role-apply [OPTIONS] ROLE HOSTS

Options:
  -s, --sudo / --no-sudo
  --show-playbook / --no-show-playbook
  --help                          Show this message and exit.
```

```shell
$ ansible-role-apply docker vagrant --sudo
...
PLAY [vagrant] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [vagrant]
...
PLAY RECAP ********************************************************************
vagrant                    : ok=16   changed=1    unreachable=0    failed=0
```

```shell
$ ansible-role-apply docker vagrant --sudo --show-playbook
-------------------------------------------------------------------------------
#!/usr/bin/env ansible-playbook
---
- hosts:
  - vagrant
  roles:
  - docker
  sudo: True
-------------------------------------------------------------------------------
| ---> plugin activated: /Users/marca/dev/surveymonkey/smstack/smstack/ansible/plugins/no_hosts_matched_fail.pyc
| ---> plugin activated: /Users/marca/dev/surveymonkey/smstack/smstack/ansible/plugins/sm_app_role.pyc
| ---> plugin activated: /Users/marca/dev/surveymonkey/smstack/smstack/ansible/plugins/teamcity_messages.pyc

PLAY [vagrant] ****************************************************************
...
PLAY RECAP ********************************************************************
vagrant                    : ok=16   changed=1    unreachable=0    failed=0
```
ansible-role-atos-hsm
A role to manage ATOS Hardware Security Module (HSM) client software.

## Role Variables

| Name | Default Value | Description |
|------|---------------|-------------|
| atos_client_working_dir | /tmp/atos_client_install | Working directory in the target host. |
| atos_client_iso_name | None | Filename for the ATOS Client Software ISO. |
| atos_client_iso_location | None | Full URL where a copy of the ATOS Client ISO can be downloaded. |
| atos_client_cert_location | None | Full URL where the client certificate can be downloaded. |
| atos_client_key_location | None | Full URL where the client key can be downloaded. |
| atos_hsms | None | List of one or more HSM devices. |

## Requirements

- ansible >= 2.4

## Usage

You'll need to set up a temporary HTTP server somewhere that is accessible to the node where this role will be applied. The HTTP server should serve the following:

- ATOS Client Software ISO file.
- HSM Server Certificate file(s).
- HSM Client Certificate file.
- HSM Client Key file associated with the Client Certificate.

Due to the sensitive nature of the Certificate and Key files, you should use TLS encryption and username and passwords to access the HTTP server.

Use the hostname and user/password for your HTTP server for the full URL values that need to be set for this role. See vars.yaml.example.
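For illustration, the variables above can be provided as a vars file when applying the role. Everything below is a sketch: the server hostname, file names and URLs are placeholders, and the exact structure of each `atos_hsms` entry should be taken from the role's vars.yaml.example:

```yaml
# Hypothetical vars file for the atos-hsm role; values are placeholders.
atos_client_working_dir: /tmp/atos_client_install
atos_client_iso_name: atos-client.iso  # placeholder filename
atos_client_iso_location: https://fileserver.example.com/atos-client.iso
atos_client_cert_location: https://fileserver.example.com/client.crt
atos_client_key_location: https://fileserver.example.com/client.key
atos_hsms: []  # one or more HSM devices; see vars.yaml.example for the entry format
```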
ansible-role-chrony
A role to manage chrony

## Role Variables

Variables used for chrony:

| Name | Default Value | Description |
|------|---------------|-------------|
| chrony_debug | False | Enable debug option in chrony |
| chrony_role_action | all | Ansible action when including the role. Should be one of: [all\|install\|config\|upgrade\|online] |
| chrony_package_name | chrony | chrony system package name |
| chrony_service_name | chronyd | chrony system service name |
| chrony_manage_service | True | Flag used to specify whether the ansible role should manage the service |
| chrony_manage_package | True | Flag used to specify whether the ansible role should manage the package |
| chrony_service_state | started | Default service state to configure (started\|stopped) |
| chrony_config_file_location | /etc/chrony.conf | Chrony configuration file location. |
| chrony_driftfile_path | /var/lib/chrony/drift | Chrony drift file location |
| chrony_logdir_path | /var/log/chrony | Chrony log directory location |
| chrony_ntp_servers | [] | List of NTP servers. This can be a list of hashes for advanced configuration. If using the hash format, a server_name and server_settings key should be populated with the appropriate data. If this is a list of hostnames, the chrony_global_server_settings will be appended to the configuration. |
| chrony_global_server_settings | \<none\> | Default settings to apply to the servers configuration |
| chrony_ntp_pools | [] | List of NTP pools. This can be a list of hashes for advanced configuration. If using the hash format, a pool_name and pool_settings key should be populated with the appropriate data. If this is a list of hostnames, the chrony_global_pool_settings will be appended to the configuration. |
| chrony_global_pool_settings | \<none\> | Default settings to apply to the pools configuration |
| chrony_ntp_peers | [] | List of NTP peers. This can be a list of hashes for advanced configuration. If using the hash format, a peer_name and peer_settings key should be populated with the appropriate data. If this is a list of hostnames, the chrony_global_peer_settings will be appended to the configuration. |
| chrony_global_peer_settings | \<none\> | Default settings to apply to the peers configuration |
| chrony_bind_addresses | ['127.0.0.1', '::1'] | List of addresses to bind to to listen for command packets |
| chrony_acl_rules | [] | List of specific allow/deny commands for the configuration file |
| chrony_rtc_settings | ['rtcsync'] | List of specific real-time clock settings |
| chrony_makestep | 1.0 3 | The chrony makestep configuration |
| chrony_extra_options | [] | A list of extra option strings that is added to the end of the configuration file. This list is joined with new lines. |

## Requirements

- ansible >= 2.4
- python >= 2.6

## Dependencies

None

## Example Playbooks

```yaml
- hosts: localhost
  become: true
  roles:
    - chrony
```

## License

Apache 2.0
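As an illustration, the example play can be extended with a few of the documented variables. The sketch below uses the plain-hostname form of `chrony_ntp_servers`; the pool host names and the `iburst` setting are placeholders, not defaults of this role:

```yaml
# Sketch: applying the chrony role with a custom server list.
- hosts: localhost
  become: true
  vars:
    # Plain hostname list: chrony_global_server_settings is appended
    # to each server line (values here are illustrative).
    chrony_ntp_servers:
      - 0.pool.ntp.org
      - 1.pool.ntp.org
    chrony_global_server_settings: iburst
    chrony_service_state: started
  roles:
    - chrony
```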
ansible-role-collect-logs
Ansible role for aggregating logs from different nodes.

The only supported way to call this role is using its main entry point. Do not use `tasks_from` as this counts as using private interfaces.

## Requirements

This role gathers logs and debug information from a target system and collates them in a designated directory, `artcl_collect_dir`, on the localhost.

Additionally, the role will convert templated bash scripts, created and used by TripleO-Quickstart during deployment, into rST files. These rST files are combined with static rST files and fed into Sphinx to create user friendly post-build-documentation specific to an original deployment.

Finally, the role optionally handles uploading these logs to a rsync server or to an OpenStack Swift object storage. Logs from Swift can be exposed with os-loganalyze.

## Role Variables

### File Collection

- `artcl_collect_list`: A list of files and directories to gather from the target. Directories are collected recursively and need to end with a '/' to get collected. Should be specified as a YAML list, e.g.:

```yaml
artcl_collect_list:
  - /etc/nova/
  - /home/stack/*.log
  - /var/log/
```

- `artcl_collect_list_append`: A list of files and directories to be appended to the default list. This is useful for users who want to keep the original list and just add more relevant paths.
- `artcl_exclude_list`: A list of files and directories to exclude from collecting. This list is passed to rsync as an exclude filter and it takes precedence over the collection list. For details see the 'FILTER RULES' topic in the rsync man page.
- `artcl_exclude_list_append`: A list of files and directories to be appended to the default exclude list. This is useful for users who want to keep the original list and just add more relevant paths.
- `artcl_collect_dir`: A local directory where the logs should be gathered, without a trailing slash.
- `collect_log_types`: A list of which types of logs will be collected, such as openstack logs, network logs, system logs, etc.
  Acceptable values are system, monitoring, network, openstack and container.
- `artcl_gzip`: Archive files, disabled by default.
- `artcl_rsync_collect_list`: if true, a rsync filter file is generated for `rsync` to collect files; if false, `find` is used to generate the list of files to collect for `rsync`. `find` brings some benefits, like searching for files in a certain depth (`artcl_find_maxdepth`) or up to a certain size (`artcl_find_max_size`).
- `artcl_find_maxdepth`: Number of levels of directories below the starting points, default is 4. Note: this variable is applied only when `artcl_rsync_collect_list` is set to false.
- `artcl_find_max_size`: Max size of a file in MBs to be included in the find search, default value is 256. Note: this variable is applied only when `artcl_rsync_collect_list` is set to false.
- `artcl_commands_extras`: A nested dictionary of additional commands to be run during collection. The first level contains the group type, as defined by the `collect_log_types` list, which determines which groups are collected and which ones are skipped. Defined keys will override implicit ones from the default `artcl_commands`, which is not expected to be changed by the user. Second-level keys are used to uniquely identify a command and determine the default output filename, unless it is mentioned via the `capture_file` property. `cmd` contains the shell command that would be run.

```yaml
artcl_commands_extras:
  system:
    disk-space:
      cmd: df
      # will save output to /var/log/extras/disk-space.log
    mounts:
      cmd: mount -a
      capture_file: /mounts.txt  # <-- custom capture file location
  openstack:
    key2:
      cmd: touch /foo.txt
      capture_disable: true  # <-- disable implicit std redirection
      when: "1>2"  # <-- optional condition
```

### Documentation generation related

- `artcl_gen_docs`: false/true. If true, the role will use build artifacts and Sphinx and produce user friendly documentation (default: false).
- `artcl_docs_source_dir`: a local directory that serves as the Sphinx source directory.
- `artcl_docs_build_dir`: A local directory that serves as the Sphinx build output directory.
- `artcl_create_docs_payload`: Dictionary of lists that direct what and how to construct documentation.
  - `included_deployment_scripts`: List of templated bash scripts to be converted to rST files.
  - `included_static_docs`: List of static rST files that will be included in the output documentation.
  - `table_of_contents`: List that defines the order in which rST files will be laid out in the output documentation.
- `artcl_verify_sphinx_build`: false/true. If true, verify that items defined in `artcl_create_docs_payload.table_of_contents` exist in the Sphinx generated index.html (default: false).

```yaml
artcl_create_docs_payload:
  included_deployment_scripts:
    - undercloud-install
    - undercloud-post-install
  included_static_docs:
    - env-setup-virt
  table_of_contents:
    - env-setup-virt
    - undercloud-install
    - undercloud-post-install
```

### Publishing related

- `artcl_publish`: true/false. If true, the role will attempt to rsync logs to the target specified by `artcl_rsync_url`. Uses the `BUILD_URL` and `BUILD_TAG` vars from the environment (set during a Jenkins job run) and requires the next two variables to be set.
- `artcl_txt_rename`: false/true. Rename text based files to end in .txt.gz to make upstream log servers display them in the browser instead of offering them for download.
- `artcl_publish_timeout`: the maximum seconds the role can spend uploading the logs; the default is 1800 (30 minutes).
- `artcl_use_rsync`: false/true. Use rsync to upload the logs.
- `artcl_rsync_use_daemon`: false/true. Use the rsync daemon instead of ssh to connect.
- `artcl_rsync_url`: rsync target for uploading the logs. The localhost needs to have passwordless authentication to the target or the `PROVISIONER_KEY` var specified in the environment.
- `artcl_use_swift`: false/true. Use swift object storage to publish the logs.
- `artcl_swift_auth_url`: the OpenStack auth URL for Swift.
- `artcl_swift_username`: OpenStack username for Swift.
- `artcl_swift_password`: password for the Swift user.
- `artcl_swift_tenant_name`: OpenStack tenant (project) name for Swift.
- `artcl_swift_container`: the name of the Swift container to use; default is `logs`.
- `artcl_swift_delete_after`: The number of seconds after which Swift will remove the uploaded objects; the default is 2678400 seconds = 31 days.
- `artcl_artifact_url`: An HTTP URL at which the uploaded logs will be accessible after upload.
- `artcl_report_server_key`: A path to a key for access to the report server.

### Ara related

- `ara_enabled`: true/false. If true, the role will generate ara reports.
- `ara_overcloud_db_path`: Path to the ara overcloud path (TripleO only).
- `ara_generate_html`: true/false. Generate ara html.
- `ara_graphite_prefix`: Ara prefix to be used in graphite.
- `ara_only_successful_tasks`: true/false. Send only successful tasks to graphite.
- `ara_tasks_map`: Dictionary with ara tasks to be mapped on graphite.

## Logs parsing

The "Sova" module parses logs for known patterns and returns the messages that were found. Patterns are tagged by issue types, like "infra", "code", etc.
Patterns are located in the file sova-patterns.yml in the vars/ directory.

- `config` – patterns loaded from file
- `files` – files and patterns sections match
- `result` – path to file to write the result of parsing
- `result_file_dir` – directory to write a file with patterns in its name

Example of usage of the "sova" module:

```yaml
---
- name: Run sova task
  sova:
    config: "{{ pattern_config }}"
    files:
      console: "{{ ansible_user_dir }}/workspace/logs/quickstart_install.log"
      errors: "/var/log/errors.txt"
      "ironic-conductor": "/var/log/containers/ironic/ironic-conductor.log"
      syslog: "/var/log/journal.txt"
      logstash: "/var/log/extra/logstash.txt"
    result: "{{ ansible_user_dir }}/workspace/logs/failures_file"
    result_file_dir: "{{ ansible_user_dir }}/workspace/logs"
```

Example Role Playbook

```yaml
---
- name: Gather logs
  hosts: all:!localhost
  roles:
    - collect_logs
```

**Note:** The tasks that collect data from the nodes are executed with ignore_errors.

Templated Bash to rST Conversion Notes

Templated bash scripts used during deployment are converted to rST files during the create-docs portion of the role's call. Shell scripts are fed into an awk script and output as reStructuredText. The awk script has several simple rules:

- Only lines between `###---start_docs` and `###---stop_docs` will be parsed.
- Lines containing `# nodoc` will be excluded.
- Lines containing `## ::` indicate that subsequent lines should be formatted as code blocks.
- Other lines beginning with `## <anything else>` will have the prepended `##` removed. This is how and where general rST formatting is added.
- All other lines, including shell comments, will be indented by four spaces.

Enabling sosreport Collection

sosreport is a unified tool for collecting system logs and other debug information. To enable creation of sosreport(s) with this role, create a custom config (you can use centosci-logs.yml as a template) and ensure that `artcl_collect_sosreport: true` is set.

Sanitizing Log Strings

Logs can contain sensitive data such as private links and access passwords.
The 'collect' task provides an option to replace private strings with sanitized strings to protect private data.

The 'sanitize_log_strings' task makes use of the Ansible 'replace' module and is enabled by defining a `sanitize_lines` variable as shown in the example below:

```yaml
---
sanitize_lines:
  - dir_path: '/tmp/{{ inventory_hostname }}/etc/repos/'
    file_pattern: '*'
    orig_string: '^(.*)download(.*)$'
    sanitized_string: 'SANITIZED_STR_download'
  - dir_path: '/tmp/{{ inventory_hostname }}/home/zuul/'
    file_pattern: '*'
    orig_string: '^(.*)my_private_host\.com(.*)$'
    sanitized_string: 'SANITIZED_STR_host'
```

The task searches for files containing the sensitive strings (orig_string) within a file path, and then replaces the sensitive strings in those files with the sanitized_string.

Usage with InfraRed

Run the following steps to execute the role with infrared.

1. Install infrared and add the ansible-role-collect-logs plugin by providing the url to this repo:

       (infrared)$ ir plugin add https://opendev.org/openstack/ansible-role-collect-logs.git --src-path infrared_plugin

2. Verify that the plugin is imported:

       (infrared)$ ir plugin list

3. Run the plugin:

       (infrared)$ ir ansible-role-collect-logs

License

Apache 2.0

Author Information

RDO-CI Team
ansible-role-container-registry
A role to deploy a container registry and provide methods to log in to it. For now, the role only supports Docker Registry v2. The login currently doesn't work with hub.docker.com.

Role Variables

Variables used for the container registry:

| Name | Default Value | Description |
|------|---------------|-------------|
| `container_registry_debug` | `false` | Enable debug option in Docker |
| `container_registry_deploy_docker` | `true` | Whether or not to deploy Docker |
| `container_registry_deploy_docker_distribution` | `true` | Whether or not to deploy Docker Distribution |
| `container_registry_deployment_user` | `centos` | User which needs to manage containers |
| `container_registry_docker_options` | `--log-driver=journald --signature-verification=false --iptables=false --live-restore` | Options given to Docker configuration |
| `container_registry_docker_disable_iptables` | `false` | Adds `--iptables=false` to /etc/sysconfig/docker-network config |
| `container_registry_insecure_registries` | `[]` | Array of insecure registries |
| `container_registry_network_options` | [undefined] | Docker networking options |
| `container_registry_host` | `localhost` | Docker registry host |
| `container_registry_port` | `8787` | Docker registry port |
| `container_registry_mirror` | [undefined] | Docker registry mirror |
| `container_registry_storage_options` | `-s overlay2` | Docker storage options |
| `container_registry_selinux` | `false` | Whether or not SELinux is enabled for containers |
| `container_registry_additional_sockets` | [undefined] | Additional sockets for containers |
| `container_registry_skip_reconfiguration` | `false` | Do not perform container registry reconfiguration if it's already configured |
| `container_registry_logins` | `[]` | A dictionary containing registries and a username and a password associated with the registry. |
Example: `{'docker.io': {'myusername': 'mypassword'}, 'registry.example.com:8787': {'otheruser': 'otherpass'}}`

Requirements

- ansible >= 2.4
- python >= 2.6

Dependencies

None

Example Playbook

The following playbook will deploy a Docker registry:

```yaml
- hosts: localhost
  become: true
  roles:
    - container-registry
```

License

Apache 2.0

Running local testing

Local testing of this role can be done in a number of ways.

Mimic Zuul

Sometimes it's necessary to set up a test that will mimic what the OpenStack gate will do (Zuul). To run tests that mimic the gate, `python-virtualenv`, `git`, `gcc`, and `ansible` are required.

    $ sudo yum install python-virtualenv git gcc

Once the packages are installed, create a python virtual environment.

    $ python -m virtualenv --system-site-packages ~/test-python
    $ ~/test-python/bin/pip install pip setuptools --upgrade

Now install the latest Ansible:

    $ ~/test-python/bin/pip install ansible

With Ansible installed, activate the virtual environment and run the run-local.yml test playbook.

    $ source ~/test-python/bin/activate
    (test-python) $ ansible-playbook -i 'localhost,' \
                    -e "tripleo_src=$(realpath --relative-to="${HOME}" "$(pwd)")" \
                    -e "ansible_user=${USER}" \
                    -e "ansible_user_dir=${HOME}" \
                    -e "ansible_connection=local" \
                    zuul.d/playbooks/run-local.yml

Running Molecule directly

It is also possible to test this role using molecule directly. When running tests directly it is assumed all of the dependencies are set up and ready to run on the local workstation. When they are, run:

    $ molecule test --all
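As a complement to the variables documented above, here is a hypothetical playbook overriding a few of them; the host group and the insecure registry address are made-up examples, while the host and port values shown are the role's documented defaults:

```yaml
# Sketch only: "registry" group and 192.0.2.10 are hypothetical.
- hosts: registry
  become: true
  roles:
    - role: container-registry
      vars:
        container_registry_host: localhost     # documented default
        container_registry_port: 8787          # documented default
        container_registry_insecure_registries:
          - "192.0.2.10:8787"                  # hypothetical insecure registry
```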
ansible_role_installer
UNKNOWN
ansible-role-lunasa-hsm
A role to manage Thales Luna Network Hardware Security Module (HSM) clients.

Role Variables

This ansible role automates the configuration of a new client for the Thales Luna Network HSM.

| Name | Default Value | Description |
|------|---------------|-------------|
| `lunasa_client_working_dir` | /tmp/lunasa_client_install | Working directory in the target host. |
| `lunasa_client_tarball_name` | None | Filename for the Lunasa client software tarball. |
| `lunasa_client_tarball_location` | None | Full URL where a copy of the client software tarball can be downloaded. |
| `lunasa_client_installer_path` | None | Path to the install.sh script inside the tarball. |
| `lunasa_client_pin` | None | The HSM Partition Password (PKCS#11 PIN) to be used by the client. |
| `lunasa_client_ip` | None | (Optional) When set, this role will use the given IP to register the client instead of the client's FQDN. |
| `lunasa_client_rotate_cert` | False | When set to True, the role will generate a new client certificate to replace the previous one. |
| `lunasa_hsms` | None | List of dictionaries, each of which describes a single HSM; see vars.sample.yaml for details. When more than one HSM is listed here, the client will be configured in HA mode. |

Requirements

ansible >= 2.4
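Putting the documented variables together, a minimal invocation might look like the sketch below. The host group, tarball name/URL, and vault variable are hypothetical, and the `lunasa_hsms` entry is deliberately left as a placeholder since its exact schema is defined in vars.sample.yaml, not here:

```yaml
# Sketch only: values marked "hypothetical" are not from the role docs.
- hosts: hsm_clients                # hypothetical group
  become: true
  roles:
    - role: ansible-role-lunasa-hsm
      vars:
        lunasa_client_tarball_name: "lunasa_client.tar.gz"                        # hypothetical
        lunasa_client_tarball_location: "https://files.example.com/lunasa_client.tar.gz"  # hypothetical
        lunasa_client_pin: "{{ vault_hsm_pin }}"   # keep the PIN in a vault
        lunasa_hsms: []   # populate per vars.sample.yaml; one entry per HSM, several for HA
```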
ansible-role-manager
- Project page and docs: http://mirskytech.github.io/ansible-role-manager
- Development: https://github.com/mirskytech/ansible-role-manager
- Feature & issue tracking: https://github.com/mirskytech/ansible-role-manager/issues
- Package Index: https://pypi.python.org/pypi/ansible-role-manager

Description

Provides the following utilities:

- `init` – Creates the Ansible recommended folder structure and initial core files for playbooks, roles and modules.
- `install` – Installs Ansible roles from Ansible Galaxy or located in any version control repository (git, mercurial and svn).
- `uninstall` – Removes dependencies from the playbook's library.
- `freeze` – Creates a list of installed dependencies for a playbook.

See `arm help` for all available commands.

Installation of Ansible Role Manager (ARM)

Standard installation:

    >> pip install ansible-role-manager

or installation for development:

    >> pip install -e git+https://github.com/mirskytech/ansible-role-manager.git#egg=role-manager

or manual installation:

    >> git clone https://github.com/mirskytech/ansible-role-manager.git
    >> python setup.py install

Get Started

Create a well-structured playbook (directory structure, initial files):

    >> arm init -p MyNewPlaybook

Install a role from Ansible Galaxy:

    >> arm install github_owner.github_repo

Install a role from an arbitrary git repository:

    >> arm install git+ssh://github.com/github_owner/github_repo.git

or install it while changing the locally installed name to myrolename:

    >> arm install git+ssh://github.com/github_owner/github_repo.git#alias=myrolename

Dependencies

- mercurial
- git
- python
- ansible
- requests
- gitpython (0.3.2.RC1)
- colorama
- hgapi

References

- Ansible: http://docs.ansible.com/
- Ansible Galaxy: https://galaxy.ansible.com/explore
- Release Notes & Roadmap: http://mirskytech.github.io/ansible-role-manager/releasenotes.html
ansible-roler
ansible-roler

ansible-roler is a simple command line tool to set up the recommended folder structure for a new ansible role.

Table of Content

- Setup
- Usage
- Configuration
  - Defaults
  - Base configuration
  - Templating
- License
- Maintainers and Contributors

Setup

    # From internal pip repo as user
    pip install ansible-roler --user
    # .. or as root
    sudo pip install ansible-roler

Usage

    $ ansible-roler --help
    usage: ansible-roler [-h] [-c CONFIG_FILE] [-n NAME] [-p PATH] [-v]

    Manage ansible role environments

    optional arguments:
      -h, --help            show this help message and exit
      -c CONFIG_FILE        Location of configuration file: [/Users/rkau2905/Library/Application Support/ansible-role/config.ini]
      -n NAME, --name NAME  Name of the new role
      -p PATH, --path PATH  Path where the new role will be created
      -v, --verbose         Show more verbose output

Configuration

Defaults

ansible-roler will create the minimal recommended folder structure:

    common/
      tasks/
        main.yml
      handlers/
        main.yml
      templates/
      files/
      vars/
      defaults/
        main.yml
      meta/
        main.yml

The main.yml files will be created only if you enable the templating feature. Otherwise the folders will be left empty.

Base configuration

In addition to the command line options there are parameters you can adjust in a config file. The default location for your config file is `~/ansible-roler/config.ini` but you can place it anywhere and specify the path with `ansible-roler -c /path/to/config.ini`.

    [base]
    # base path ansible-roler will use to create new roles
    base_path = ~/ansibleroles

    [logging]
    # error | warning | info | debug
    # you can also control this with commandline attribute -vvv
    log_level = warning

Templating

In special cases it can be helpful to add templated files to each new role. The templating function can be used to place a customized meta/main.yml or a default config file for your CI in each new role.
The templating is disabled by default and must be enabled in the config file before you can use it.

    [templating]
    # enable template functionality
    enable_templating = false

    # path to your own subdir template file
    # if not in config file default one will be used
    # if added empty 'subdir_template=' subdir templating is disabled
    subdir_template = /home/jdoe/ansible/custom/main.yaml.j2

    # if you like you can exclude some subdirs from templating
    # these folders will be left empty
    exclude_subdirs = templates,vars,files

    # path to your own ci template file
    # if not in config file default one will be used
    # if added empty 'ci_template=' ci templating is disabled
    root_template = /home/jdoe/ansible/custom/.drone.yaml.j2

    [template-vars]
    # define some variables you want to use in your template
    meta_author = John Doe
    meta_license = MIT

ansible-roler comes with a simple default template file but, as you can see in the config, you can customize and use your own. The default file looks as follows:

```jinja
---
{% if subdir == 'tasks' %}
# Contains the main list of tasks to be executed by the role.
# Don't add tasks directly to the main.yml use includes instead
- include_tasks: setup.yml
{% endif %}
{% if subdir == 'handlers' %}
# Contains handlers, which may be used by this role or
# even anywhere outside this role.
{% endif %}
{% if subdir == 'defaults' %}
# Default variables for the role.
{% endif %}
{% if subdir == 'meta' %}
galaxy_info:
  author: {{ vars.meta.author | default('UNKNOWN') }}
  description: Deploy some application
  license: {{ vars.meta.license | default('MIT') }}
  min_ansible_version: 2.4
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - myapp
dependencies: []
{% endif %}
```

Currently, you can set two template files:

- `subdir_template`: this template will be deployed to the folders tasks, handlers, defaults and meta. This can be used to provide a pre-configured main.yml in each of these folders.
- `root_template`: this template will be deployed to the root of your role.
This can be used to provide a default config for your CI system.

Templating in ansible-roler works only with two static jinja2 files, but you can control the content of the destination file with variables. The following variables will be automatically passed to the template processor:

subdir_template:

- `rolename`: this variable holds the rolename you have passed to ansible-roler. If you have prefixed your role, e.g. prefix.myrole, only the second part will be used.
- `subdir`: this variable holds the subdir which is being processed at this time. This is a good option to add different content to your destination file depending on the current directory. You can see the usage in the built-in example above.

root_template:

- `rolename`

There is also an option to set custom variables. These variables will be accessible through `vars`. This variable is an empty dictionary as long as you don't set any variables. Therefore you have to define variables under the 'template-vars' section in the config file.

    [template-vars]
    # define some variables you want to use in your template
    meta_author = John Doe
    meta_license = MIT

Variable names will be split at the first underscore. The first part is used as the name of a sub-dict under `vars`, the other part is used as the name of your variable. The result of this small example looks as follows:

    {'meta': {'author': 'John Doe', 'license': 'MIT'}}

If you want to access your variables in a template, use for example `{{ vars.meta.author }}`.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Maintainers and Contributors

Robert Kaussow
ansible-role-redhat-subscription
Red Hat Subscription

Manage Red Hat subscriptions and repositories. This role supports registering to Satellite 5, Satellite 6, or the Red Hat Customer Portal.

Requirements

You will need to have an active Red Hat subscription in order for registration to succeed.

Provide `rhsm_username` and `rhsm_password` or `rhsm_activation_key`. These options are mutually exclusive and providing both will result in a failure. The recommended option is to provide an activation key rather than username and password.

Role Variables

- `rhsm_method` (default: portal) – Method to use for activation: portal or satellite. If satellite, the role will determine the Satellite Server version and take the appropriate registration actions.
- `rhsm_username` – Red Hat Portal username.
- `rhsm_password` – Red Hat Portal password.
- `rhsm_activation_key` – Red Hat Portal Activation Key.
- `rhsm_release` – RHEL release version (e.g. 8.1).
- `rhsm_org_id` – Red Hat Portal Organization Identifier.
- `rhsm_pool_ids` – Red Hat Subscription pool IDs to consume.
- `rhsm_state` (default: present) – Whether to enable or disable a Red Hat subscription.
- `rhsm_autosubscribe` – Whether or not to autosubscribe to available repositories.
- `rhsm_consumer_hostname` – Name of the system to use when registering. Defaults to using the system hostname if undefined.
- `rhsm_force_register` (default: False) – Whether or not to force registration.
- `rhsm_repos` (default: []) – The list of repositories to enable or disable.
- `rhsm_repos_state` – The state of all repos in `rhsm_repos`. The module default is enabled.
- `rhsm_repos_purge` – Whether or not to disable repos not specified in `rhsm_repos`. The module default is False.
- `rhsm_rhsm_port` (default: 443) – Port to use when connecting to the subscription server.
Must be 8443 if a capsule is used, otherwise 443 for Satellite or RHN.

- `rhsm_server_hostname` (default: subscription.rhn.redhat.com) – FQDN of subscription server.
- `rhsm_server_prefix` (default: /subscription or /rhsm) – RHSM server prefix: /subscription when registering via portal, /rhsm when registering via satellite.
- `rhsm_insecure` (default: False) – Disable certificate validation.
- `rhsm_simplified_content_access` (default: False) – Enable Simplified Content Access.
- `rhsm_ssl_verify_depth` (default: 3) – Depth to which certificates should be validated when checking.
- `rhsm_rhsm_proxy_proto` – Protocol used to reach the proxy server (http or https).
- `rhsm_rhsm_proxy_hostname` – FQDN of outbound proxy server.
- `rhsm_rhsm_proxy_port` – Port to use for proxy server.
- `rhsm_rhsm_proxy_user` – Username to use for proxy server.
- `rhsm_rhsm_proxy_password` – Password to use for proxy server. Save this in an Ansible Vault or other secret store.
- `rhsm_baseurl` (default: https://cdn.redhat.com) – Base URL for content.
- `rhsm_satellite_url` (default: see defaults/main.yml) – URL of the Satellite server that will be probed to determine the Satellite version.
Uses the scheme and hostname of `rhsm_baseurl` by default.

- `rhsm_ca_cert_dir` (default: /etc/rhsm/ca/) – Server CA certificate directory.
- `rhsm_product_cert_dir` (default: /etc/pki/product) – Product certificate directory.
- `rhsm_entitlement_cert_dir` (default: /etc/pki/entitlement) – Entitlement certificate directory.
- `rhsm_consumer_cert_dir` (default: /etc/pki/consumer) – Consumer certificate directory.
- `rhsm_manage_repos` (default: True) – Manage generation of yum repositories for subscribed content.
- `rhsm_full_refresh_on_yum` (default: False) – Refresh repo files with server overrides on every yum command.
- `rhsm_report_package_profile` (default: True) – Whether to report the package profile to the subscription management service.
- `rhsm_plugin_dir` (default: /usr/share/rhsm-plugins) – Directory to search for subscription manager plugins.
- `rhsm_plugin_conf_dir` (default: /etc/rhsm/pluginconf.d) – Directory to search for plugin configuration files.
- `rhsm_cert_check_interval` (default: 240) – Interval in minutes to run the certificate check.
- `rhsm_auto_attach_interval` (default: 1440) – Interval in minutes to run auto-attach.
- `rhsm_logging` (default: see defaults/main.yml) – Logging settings for various RHSM components.

Dependencies

None.

About repositories

If you are using an activation key with Satellite, the repositories that are associated to the subscription are configured in your local instance of Satellite. You can't specify the rhsm_repos parameter if you are using rhsm_activation_key with Satellite.
Otherwise, when using the Portal registration method, you can use either rhsm_username and rhsm_password or an activation key, and you can use rhsm_repos to select which repos get deployed.

Example Playbook with Red Hat portal:

```yaml
- hosts: all
  vars:
    rhsm_username: [email protected]
    rhsm_password: "{{ vault_rhsm_password }}"
    rhsm_repos:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
      - rhel-7-server-rh-common-rpms
      - rhel-ha-for-rhel-7-server-rpms
  roles:
    - openstack.redhat-subscription
```

Example Playbook with Satellite 6:

```yaml
- hosts: all
  vars:
    rhsm_activation_key: "secrete_key"
    rhsm_org_id: "Default_Organization"
    rhsm_server_hostname: "mysatserver.com"
    rhsm_baseurl: "https://mysatserver.com/pulp/repos"
    rhsm_method: satellite
    rhsm_insecure: true
  roles:
    - openstack.redhat-subscription
```

Example Playbook to unregister:

```yaml
- hosts: all
  tasks:
    - name: Unregister the node
      include_role:
        name: openstack.redhat-subscription
        tasks_from: unregister
```

License

Apache 2.0
ansible-roles
Installation

    pip install ansible-roles

Usage

    ansible-roles roles.txt roles/

roles.txt:

    package_name=git+ssh://[email protected]/geerlingguy/ansible-role-solr.git
ansible-roles-ctl
Ansible roles management

The goal of this tool is to manage installation and upgrade of Ansible roles and collections using git tags and branches.

This tool uses the requirements.yml configuration file used by Ansible Galaxy. The syntax is unchanged and the ansible-galaxy tool can be used by people not involved in playbooks development. Version 1 of the requirements file with only a list of roles is still supported, but if you need collections then you need to convert it to version 2 (containing a list of 'roles' and a list of 'collections').

Support for collections is limited to installing a full repo with all included collections. This means all included collections (if many) will be enabled and you cannot select to enable just some of them. The name of the collection must contain the namespace to be installed in the proper path (as defined in Ansible documentation). For example:

```yaml
roles:
  …
collections:
  - name: kubernetes.core
    type: git
    source: https://github.com/ansible-collections/kubernetes.core.git
    version: stable-2.2
```

will install it at `<collections_path>/ansible_collections/kubernetes/core/` with `collections_path` defined in your ansible.cfg or using the Ansible default.

This tool assumes a role/collection version follows semantic versioning and is labelled with a git tag of the form vX.Y or vX.Y.Z (examples: v1.0 or v2.3.1). In fact the v is optional but recommended for clarity.

Roles managed manually, bare, or in a bad shape are ignored (with proper warning).

Installation

You can install it using PyPI:

    pip install ansible-roles-ctl

Or you can run it in-place since it has very few dependencies (you only need the ansible-roles-ctl script; version.py is only used in the build process).

You need the following dependencies:

- Ansible (probably starting from 2.7)
- GitPython (package 'python3-git' on Debian systems, or using pip)
- argcomplete >= 1.9.2 (package 'python3-argcomplete' on Debian systems, or using pip)

Completion

Command completion is done using argcomplete. It needs to be enabled first to work.
The easiest way is to use the activate-global-python-argcomplete3 script as root. Other methods are described on the argcomplete website.

Usage

Syntax is as follows:

    ansible-roles-ctl [global options] [subcommand] [subcommand options]

You can use the -h/--help option to list all available options at command and subcommand level, as well as all available subcommands.

Documentation about the various subcommands follows.

Status

    ansible-roles-ctl status

This subcommand displays a status report about all required roles/collections.

You can limit output to a selection of roles/collections:

    ansible-roles-ctl status mailman3 httpd

The -c/--changelog option displays changelog entries if available.

Install

    ansible-roles-ctl install

This subcommand installs all roles/collections listed in the requirements.

You can limit this action to a selection of roles/collections:

    ansible-roles-ctl install bind9 postfix

Update

    ansible-roles-ctl update

This subcommand updates all roles/collections listed in the requirements to the latest version.

You can limit this action to a selection of roles/collections:

    ansible-roles-ctl update bind9 postfix
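The requirements.yml consumed by these subcommands can also list roles, using the standard ansible-galaxy syntax. A sketch of a version-2 requirements file with a semver git tag of the form the tool expects (the role name and repository URL are hypothetical):

```yaml
# Sketch only: name and src are hypothetical; version is a semver tag
# of the vX.Y.Z form that ansible-roles-ctl tracks for upgrades.
roles:
  - name: bind9
    src: https://git.example.com/ansible-roles/bind9.git
    scm: git
    version: v1.2.0
```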
ansible-roles-graph
# ansible-roles-graph

Generate a graph of Ansible role dependencies.

## Install

    pip install ansible-roles-graph

Assuming you already installed [graphviz](http://www.graphviz.org/) with [python bindings](http://www.graphviz.org/content/python-graphviz).

## Usage

Quite simply:

    ansible-roles-graph

Will look for roles in the `./roles/` directory, and generate an `./ansible-roles.png` file.

The command also accepts multiple role directories and various options:

    ansible-roles-graph -o schema.png -f png roles/ ../other/roles

See `ansible-roles-graph --help` for more info.

## Output

![PNG example](./example.png)

## License

[GNU GPLv3 or later](https://www.gnu.org/licenses/gpl.html)
ansible-role-thales-hsm
This is a role to manage the client software for Entrust nShield Connect Hardware Security Modules (HSMs).

This repo uses the "Thales" name for historical reasons: at the time when this repository was created, nShield HSMs were owned by Thales. Since then, the nShield line of HSMs has gone through some ownership changes, including nCipher for some time, and currently Entrust.

If you are looking for the ansible role to manage client software for Thales Luna Network HSMs, you can find it here: https://opendev.org/openstack/ansible-role-lunasa-hsm

Role Variables

| Name | Default Value | Description |
|------|---------------|-------------|
| `thales_install_client` | false | Whether the role should install the client software on the target host. |
| `thales_configure_rfs` | false | Whether the role should execute the RFS configuration tasks. |
| `thales_client_working_dir` | /tmp/thales_client_install | Working directory in the target host. |
| `thales_client_gid` | 42481 | Group ID for the thales group. |
| `thales_client_uid` | 42481 | User ID for the thales user. |
| `thales_client_tarball_name` | None | Filename for the Thales client software tarball. |
| `thales_client_tarball_location` | None | Full URL where a copy of the client software tarball can be downloaded. |
| `thales_client_path` | linux/libc6_11/amd64/nfast | Path to the client software directory inside the tarball. |
| `thales_km_data_tarball_name` | None | Filename for the KM Data tarball. |
| `thales_km_data_location` | None | Full URL where a copy of the KM Data tarball can be downloaded. |
| `thales_rfs_ip_address` | None | IPv4 address for the Thales RFS host. |
| `thales_client_ips` | None | Whitespace-separated list of IP addresses to be added to the RFS config. |
| `thales_bootstrap_client_ip` | None | Bootstrap client IP address. This IP will be allowed to update the RFS server. |
| `nshield_hsms` | None | List of one or more HSM devices. |

Requirements

ansible >= 2.4
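A minimal sketch of applying this role with some of the documented variables. The host group, tarball name/URL, and RFS address are hypothetical, and `nshield_hsms` is left as a placeholder since its entry schema is not documented here:

```yaml
# Sketch only: values marked "hypothetical" are not from the role docs.
- hosts: hsm_clients                # hypothetical group
  become: true
  roles:
    - role: ansible-role-thales-hsm
      vars:
        thales_install_client: true
        thales_client_working_dir: /tmp/thales_client_install   # documented default
        thales_client_tarball_name: "nshield_client.tar.gz"     # hypothetical
        thales_client_tarball_location: "https://files.example.com/nshield_client.tar.gz"  # hypothetical
        thales_rfs_ip_address: "192.0.2.20"                     # hypothetical RFS host
        nshield_hsms: []   # populate with one entry per HSM device
```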
ansible-role-tripleo-modify-image
TripleO Modify ImageA role to allow modification to container images built for the TripleO project.Role Variables.. list-table:: Variables used for modify image :widths: auto :header-rows: 1NameDefault ValueDescriptionsource_image[undefined]Mandatory fully qualified reference to the source image to be modified. The supplied Dockerfile will be copied and modified to make the FROM directive match this variable.modify_dir_path[undefined]Mandatory path to the directory containing the Dockerfile to modify the imagemodified_append_tagdate +-modified-%Y%m%d%H%M%SString to be appended after the tag to indicate this is a modified version of the source image.target_image[undefined]If set, the modified image will be tagged withtarget_image + modified_append_tag. Iftarget_imageis not set, the modified image will be tagged withsource_image + modified_append_tag. If the purpose of the image is not changing, it may be enough to rely on thesource_image + modified_append_tagtag to identify that this is a modified version of the source image... list-table:: Variables used for yum update :widths: auto :header-rows: 1NameDefault ValueDescriptionsource_image[undefined]See modify image variablesmodified_append_tagdate +-modified-%Y%m%d%H%M%SSee modify image variablestarget_image''See modify image variablesrpms_path''If set, packages present in rpms_path will be updated but dependencies must also be included if required as yum is called with localupdate.update_repo''If set, packages from this repo will be updated. Other repos will only be used for dependencies of these updates.yum_repos_dir_pathNoneOptional path of directory to be used as/etc/yum.repos.dduring the updateyum_cacheNoneOptional path to the host directory for yum cache during the update. Requires an overlay-enabled FS that also supports SE context relabling.force_purge_yum_cacheFalseOptional argument that tells buildah to forcefully re-populate the yum cache with new contents... 
list-table:: Variables used for yum install :widths: auto :header-rows: 1NameDefault ValueDescriptionsource_image[undefined]See modify image variablesmodified_append_tagdate +-modified-%Y%m%d%H%M%SSee modify image variablestarget_image''See modify image variablesyum_packages[]Provide a list of packages to install via yumyum_repos_dir_pathNoneOptional path of directory to be used as/etc/yum.repos.dduring the update.. list-table:: Variables used for dev install :widths: auto :header-rows: 1NameDefault ValueDescriptionsource_image[undefined]See modify image variablesmodified_append_tagdate +-modified-%Y%m%d%H%M%SSee modify image variablestarget_image''See modify image variablesrefspecs[]An array of project/refspec pairs that will be installed into the generated container. Currently only supports python source projects.python_dir[]Directory which contains a Python project ready to be installed with pip.Requirementsansible >= 2.4python >= 2.6DependenciesNoneWarningsOn-disk repositories ....................Please ensure the SELinux label for the on-disk repositories are correct. Depending on your container-selinux (and podman) version, you may face issues.Some examples of a correct type:system_u:object_r:rpm_var_cache_tsystem_u:object_r:container_file_tFirst one matches the one of /var/cache/dnf, and is accessible from within a container, while the second one may allow a container to actually write in there.Directories located in the user's home ......................................You may want to avoid pointing to directories in your $HOME when running this role, especially when it's running from within TripleO client (for instance with theopenstack tripleo container image preparecommand). 
Doing so may break due to the SELinux labels and permissions associated with your home directory. Please use another location, such as /opt, or even /tmp - and double-check the SELinux labels therein.

Example Playbooks
-----------------

Modify Image
~~~~~~~~~~~~

The following playbook will produce a modified image with the tag
`:latest-modified-<timestamp>` based on the Dockerfile in the custom
directory `/path/to/example_modify_dir`.

.. code-block::

    - hosts: localhost
      tasks:
      - name: include ansible-role-tripleo-modify-image
        import_role:
          name: ansible-role-tripleo-modify-image
          tasks_from: modify_image.yml
        vars:
          source_image: docker.io/tripleomaster/centos-binary-nova-api:latest
          modify_dir_path: /path/to/example_modify_dir

The directory `example_modify_dir` contains the `Dockerfile` which will
perform the modification, for example:

.. code-block::

    # This will be replaced in the file Dockerfile.modified
    FROM centos-binary-nova-api

    # switch to root to install packages
    USER root

    # install packages
    RUN curl "https://bootstrap.pypa.io/get-pip.py" -o "/tmp/get-pip.py"
    RUN python /tmp/get-pip.py

    # switch the container back to the default user
    USER nova

Yum update
~~~~~~~~~~

The following playbook will produce a modified image with the tag
`:latest-updated` which will do a yum update using the host's
/etc/yum.repos.d. Only file repositories will be used (with
baseurl=file://...). In this playbook the `tasks_from` is set as a
variable instead of an `import_role` parameter.

.. code-block::

    - hosts: localhost
      tasks:
      - name: include ansible-role-tripleo-modify-image
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: yum_update.yml
          source_image: docker.io/tripleomaster/centos-binary-nova-api:latest
          yum_repos_dir_path: /etc/yum.repos.d
          modified_append_tag: updated
          yum_cache: /tmp/containers-updater/yum_cache
          rpms_path: /opt/rpms

.. code-block::

    - hosts: localhost
      tasks:
      - name: include ansible-role-tripleo-modify-image
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: yum_update.yml
          source_image: docker.io/tripleomaster/centos-binary-nova-api:latest
          modified_append_tag: updated
          rpms_path: /opt/rpms/

Note, if you have a locally installed gating repo, you can add
``update_repo: gating-repo``. This may be the case for the consequent
in-place deployments, like those performed with the CI reproducer script.

Yum install
~~~~~~~~~~~

The following playbook will produce a modified image with the tag
`:latest-updated` which will do a yum install of the requested packages
using the host's /etc/yum.repos.d. In this playbook the `tasks_from` is
set as a variable instead of an `import_role` parameter.

.. code-block::

    - hosts: localhost
      tasks:
      - name: include ansible-role-tripleo-modify-image
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: yum_install.yml
          source_image: docker.io/tripleomaster/centos-binary-nova-api:latest
          yum_repos_dir_path: /etc/yum.repos.d
          yum_packages: ['foobar-nova-plugin', 'fizzbuzz-nova-plugin']

RPM install
~~~~~~~~~~~

The following playbook will produce a modified image with RPMs from the
specified `rpms_path` on the local filesystem installed as a new layer for
the container. The new container tag is appended with the '-hotfix'
suffix. Useful for creating ad hoc hotfix containers with local RPMs and
no network connectivity.

.. code-block::

    - hosts: localhost
      tasks:
      - name: include ansible-role-tripleo-modify-image
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: rpm_install.yml
          source_image: docker.io/tripleomaster/centos-binary-nova-api:latest
          rpms_path: /opt/rpms
          modified_append_tag: -hotfix

Dev install
~~~~~~~~~~~

The following playbook will produce a modified image with Python source
code installed via pip. To minimize dependencies within the container we
generate the sdist locally and then copy it into the resulting container
image as an sdist tarball to run pip install locally.

It can be used to pull a review from OpenDev Gerrit:

.. code-block::

    - hosts: localhost
      connection: local
      tasks:
      - name: dev install heat-api
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: dev_install.yml
          source_image: docker.io/tripleomaster/centos-binary-heat-api:current-tripleo
          refspecs:
            - project: heat
              refspec: refs/changes/12/1234/3
          modified_append_tag: -devel

or it can be used to build an image from a local Python directory:

.. code-block::

    - hosts: localhost
      connection: local
      tasks:
      - name: dev install heat-api
        import_role:
          name: ansible-role-tripleo-modify-image
        vars:
          tasks_from: dev_install.yml
          source_image: docker.io/tripleomaster/centos-binary-heat-api:current-tripleo
          modified_append_tag: -devel
          python_dir:
            - /home/joe/git/openstack/heat

Note: here, we can use a directory located in the user's home because
it's probably launched by the user.

License
-------

Apache 2.0
ansible-roster
# Ansible Roster Inventory Plugin

Roster is an Ansible inventory plugin with focus on groups applied to hosts instead of hosts included in groups. It supports ranges (eg: "[0:9]"), regex hostnames (eg: "(dev|prd)-srv"), file inclusions, and variable merging.

This inventory plugin has been written with debops in mind.

## Installation from Ansible Galaxy

You can install the latest version from the Ansible Galaxy repository.

    ansible-galaxy collection install -U julien_lecomte.roster
    python3 -m pip install boltons cerberus exrex

If you are using `requirement.yml` files for downloading collections and roles, add these lines:

    collections:
      - julien_lecomte.roster

## Installation from PyPI

You can install the latest version from the PyPI package repository.

    python3 -m pip install -U ansible-roster

## Quickstart

Please refer to the full documentation for all the details.

The roster is a file in yaml format with a 'yml' or 'yaml' file extension.

In order for Ansible to use the plugin and parse your roster file, several conditions must be met:

* Your yaml file must contain a line indicating that the file is in the roster format.
* You must activate plugins and enable the roster inventory plugin in your `ansible.cfg`, or in your `.debops.cfg` if using debops. If using debops, refresh the configuration with `debops project refresh`.

### Sample `ansible.cfg`

    [defaults]
    # The following line prevents having to pass -i to ansible-inventory.
    # Filename can be anything as long as it has a 'yml' or 'yaml' extension.
    inventory = roster.yml

    [inventory]
    # You must enable the roster plugin if 'auto' does not work for you.
    # Use 'roster' if installed via the Python package,
    # Use 'julien_lecomte.roster.roster' if installed via Ansible Galaxy
    enable_plugins = julien_lecomte.roster.roster

### Sample `.debops.cfg`

    [ansible inventory]
    enabled = roster
    # Use 'roster' if installed via the Python package,
    # Use 'julien_lecomte.roster.roster' if installed via Ansible Galaxy
    enable_plugins = julien_lecomte.roster.roster

### Sample `roster.yml`

    ---
    # This line is mandatory, and enables the plugin, differentiating between
    # any yaml file and a roster yaml file.
    plugin: roster

    vars:
      foobar01: "a global var"

    groups:
      debian:
        vars:
          distrib: "debian"

      buster:
        parents:
          - debian
        vars:
          release: "buster"

      desktops:
        vars:
          components: "main contrib non-free"

    hosts:
      desktop01.internal.example.com:
        groups:
          - desktops
          - buster

A larger example Roster inventory for DebOps can be found here: https://gitlab.com/jlecomte/ansible/ansible-roster-example.

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Support

Show your support by joining the Discord server (link below), or by adding a star to the project.

## Locations

* Documentation: https://jlecomte.gitlab.io/ansible/ansible-roster/
* GitLab: https://gitlab.com/jlecomte/ansible/ansible-roster
* PyPi: https://pypi.org/project/ansible-roster
* Galaxy: https://galaxy.ansible.com/julien_lecomte/roster
* Discord: https://discord.gg/Dhttff9MrT
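As a rough illustration of the range syntax mentioned above (eg: "[0:9]"), the sketch below expands a single bracketed range in a hostname pattern. This is not the plugin's actual implementation - roster relies on the exrex library and also supports full regex hostnames - and the inclusive-range behaviour and `expand_range` helper are assumptions for illustration only.

```python
import re

def expand_range(pattern):
    """Expand one "[start:stop]" range in a hostname pattern (inclusive).

    Hypothetical helper; the real plugin supports regexes via exrex.
    """
    m = re.search(r"\[(\d+):(\d+)\]", pattern)
    if not m:
        return [pattern]
    start, stop = int(m.group(1)), int(m.group(2))
    # Substitute each number in the range back into the pattern.
    return [pattern[:m.start()] + str(i) + pattern[m.end():]
            for i in range(start, stop + 1)]

print(expand_range("web[0:2].example.com"))
# ['web0.example.com', 'web1.example.com', 'web2.example.com']
```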
ansible-rulebook
# ansible-rulebook

* Free software: Apache Software License 2.0

Event driven automation for Ansible.

The real world is full of events that change the state of our software and systems. Our automation needs to be able to react to those events. Introducing ansible-rulebook: a command line tool that allows you to recognize events that you care about and react accordingly by running a playbook or other actions.

## Features

* Connect to event streams and handle events in near real time.
* Conditionally launch playbooks or Tower's job templates based on rules that match events in event streams.
* Store facts about the world from data in events
* Limit the hosts where playbooks run based on event data
* Run smaller jobs that run more quickly by limiting the hosts where playbooks run based on event data

## Installation

Please follow the Installation guide to install ansible-rulebook.

## Documentation

Please refer to the Getting Started guide to get started with ansible-rulebook.

## Contributing

We ask all of our community members and contributors to adhere to the Ansible code of conduct. If you have questions or need assistance, please reach out to our community team at [email protected].

Refer to the Contributing guide to get started developing, reporting bugs or providing feedback.

## Credits

ansible-rulebook is sponsored by Red Hat, Inc.

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
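The core idea - rules pairing a condition with an action, fired as matching events arrive - can be sketched in plain Python. Note this is only a conceptual illustration: real rulebooks are YAML documents interpreted by the `ansible-rulebook` CLI, and the `run_rules` helper below is hypothetical.

```python
# Conceptual sketch of event-driven rule matching (not the real engine).
def run_rules(events, rules):
    """Fire each rule's action for every event its condition matches."""
    fired = []
    for event in events:
        for rule in rules:
            if rule["condition"](event):
                fired.append((rule["name"], rule["action"](event)))
    return fired

rules = [
    {
        "name": "restart on outage",
        "condition": lambda e: e.get("status") == "down",
        "action": lambda e: "run playbook restart.yml on " + e["host"],
    },
]

events = [{"host": "web1", "status": "down"}, {"host": "web2", "status": "up"}]
print(run_rules(events, rules))
# [('restart on outage', 'run playbook restart.yml on web1')]
```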
ansible-run
No description available on PyPI.
ansible-runner
# Ansible Runner

Ansible Runner is a tool and Python library that helps when interfacing with Ansible directly or as part of another system. Ansible Runner works as a standalone tool, a container image interface, or a Python module that can be imported. The goal is to provide a stable and consistent interface abstraction to Ansible.

See the latest documentation for usage details.

## Get Involved

* GitHub issues to track bug reports and feature ideas
* GitHub Milestones to track what's for the next release
* Want to contribute? Please check out our contributing guide
* Join us in the #ansible-runner channel on Libera.chat IRC
* Join the discussion in #awx-project
* For the full list of Ansible email lists and IRC channels, see the Ansible Mailing lists.
ansiblerunnerapi
AnsibleRunnerๆ‹ท่ฒ่‡ชjumpserver้ …็›ฎinstallpip install ansible==2.8.8RunnerAnsible Command Runnersimple runnerrunner = CommandRunner() runner.execute('ls')inventory hostshost_data = [ { "hostname": "demo-web1", "ip": "192.168.33.101", "port": 2222, "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa", }, { "hostname": "demo-web2", "ip": "192.168.33.102", "port": 2222, "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa", }, { "hostname": "demo-web3", "ip": "192.168.33.103", "port": 2222, "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa", }, { "hostname": "demo-web4", "ip": "192.168.33.104", "port": 2222, "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa", }, ] runner = CommandRunner(inventory=host_data) runner.execute('ls')Ansible Playbook Runnersimple runnerrunner = PlayBookRunner(hostname='maliao-web1',path='test.yml') runner.run()optionrunner1 = PlayBookRunner(hostname='maliao-web1', path='test.yml',options={'memory_mb': 1024, 'size_gb': 30,'num_cpus': 2}) runner1.run()inventory hostshosts = [ { "hostname": 'maliao-web1', "ip": '192.168.1.1', "port": '22', "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa" }, { "hostname": 'maliao-web1', "ip": '192.168.1.1', "port": '22', "username": "root", "private_key": "/Users/maliao/.ssh/id_rsa" }, ] runner2 = PlayBookRunner(hostname='maliao-web1', path='test.yml', inventory=hosts) runner2.run()Callbackgather_resultไปปๅ‹™้–‹ๅง‹ v2_playbook_on_play_startไปปๅ‹™ๆˆๅŠŸ v2_runner_on_ok็„กๆณ•้€ฃๆŽฅ v2_runner_on_unreachableไปปๅ‹™ๅคฑๆ•— v2_runner_on_failed
ansible-runner-beats
# Ansible Runner beats Event Emitter

This project is a plugin for Ansible Runner that allows emitting Ansible status and events to Logstash through the Beats protocol. This can allow Runner to notify other systems as Ansible jobs are run and to deliver key events to that system if it's interested.

It is useful to send data to Logstash through the Beats protocol.

This plugin is inspired by ansible-runner-http, licensed under Apache License 2.0.

## :zap: Installation

    python3 -m pip install ansible-runner-beats

## :gear: Variables

| Runner config              | Environment variable       | Default value |
| -------------------------- | -------------------------- | ------------- |
| runner_beats_host          | RUNNER_BEATS_HOST          | None          |
| runner_beats_port          | RUNNER_BEATS_PORT          | None          |
| runner_beats_ssl_cert      | RUNNER_BEATS_SSL_CERT      | ""            |
| runner_beats_ssl_key      | RUNNER_BEATS_SSL_KEY        | ""            |
| runner_beats_ssl_ca        | RUNNER_BEATS_SSL_CA        | ""            |
| runner_beats_custom_fields | RUNNER_BEATS_CUSTOM_FIELDS | {}            |

The plugin also registers an environment variable named RUNNER_BEATS_TIMEDOUT if the beats endpoint is not available.

## :copyright: License

Mozilla Public License Version 2.0
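Each setting above has a runner-config name and an environment-variable fallback. A minimal sketch of how such pairs could be resolved is shown below; the precedence order (config first, then environment, then default) and the JSON parsing of custom fields are assumptions for illustration, not the plugin's documented behaviour, and `resolve_settings` is a hypothetical helper.

```python
import json
import os

# (setting name) -> (environment variable, default) per the table above.
DEFAULTS = {
    "runner_beats_host": ("RUNNER_BEATS_HOST", None),
    "runner_beats_port": ("RUNNER_BEATS_PORT", None),
    "runner_beats_ssl_cert": ("RUNNER_BEATS_SSL_CERT", ""),
    "runner_beats_ssl_key": ("RUNNER_BEATS_SSL_KEY", ""),
    "runner_beats_ssl_ca": ("RUNNER_BEATS_SSL_CA", ""),
    "runner_beats_custom_fields": ("RUNNER_BEATS_CUSTOM_FIELDS", {}),
}

def resolve_settings(runner_config):
    """Resolve settings: runner config wins, then environment, then default."""
    settings = {}
    for key, (env_name, default) in DEFAULTS.items():
        value = runner_config.get(key, os.environ.get(env_name, default))
        if key == "runner_beats_custom_fields" and isinstance(value, str):
            value = json.loads(value)  # assumed: env var holds JSON
        settings[key] = value
    return settings

os.environ["RUNNER_BEATS_HOST"] = "logstash.example.com"
print(resolve_settings({"runner_beats_port": 5044}))
```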
ansible-runner-http
# Ansible Runner HTTP Event Emitter

This project is a plugin for Ansible Runner that allows emitting Ansible status and events to HTTP services in the form of POST events. This can allow Runner to notify other systems as Ansible jobs are run and to deliver key events to that system if it's interested.

For more details and the latest documentation see: https://ansible-runner.readthedocs.io/en/latest
ansible-runner-kafka
Ansible Runner Kafka Event Emitter
==================================

This project is a plugin for [Ansible Runner](https://github.com/ansible/ansible-runner) that allows emitting Ansible status and events to Kafka topics.

For more details and the latest documentation see: https://ansible-runner.readthedocs.io/en/latest

This plugin is very *very* basic. Especially error-handling is missing, since the produce-call is asymmetric, and I don't know how to return errors back to ansible runner.

Also the repeated initialization of the producer and call to flush is not very efficient.

Available settings
------------------

These [runner-settings](https://ansible-runner.readthedocs.io/en/latest/intro.html#env-settings-settings-for-runner-itself) are available:

- `bootstrap_servers`: input for confluent-kafka producer's *bootstrap_servers*. **default**: None. **Sample**: localhost:9092. If not set, this plugin will be skipped
- `event_topic`: topic to produce events to. **default**: ansible.runner.event. **Sample**: event
- `status_topic`: topic to produce status messages to. **default**: ansible.runner.status. **Sample**: status
ansible-runner-nats
# Ansible Runner NATS Event Emitter

This project is a plugin for Ansible Runner that allows emitting Ansible status and events to NATS in the form of published messages. This can allow Runner to notify other systems as Ansible jobs are run and to deliver key events to that system if it's interested.

For more details and the latest documentation see: https://ansible-runner.readthedocs.io/en/latest

## Configuring the emitter

### Default behaviour

By default the emitter is disabled.

### Enabling the emitter

In order to enable the emitter, a subject ID must be configured either as the `nats_subject_id` variable or as an environment variable:

* `RUNNER_NATS_SUBJECT_ID`: Subject ID

When a subject ID is configured, messages are published to the following subjects:

* `pub.ansible.runner.{subject_id}.{runner_ident}.event`: message contains an event
* `pub.ansible.runner.{subject_id}.{runner_ident}.status`: message contains a status update

`runner_ident` is an auto-generated UUID assigned to each runner instance.

Special case: if `RUNNER_NATS_SUBJECT_ID` is set to `hostname`, then the hostname read using `socket.gethostname()` is used as subject id.

### Configuring headers

Headers can be configured to be sent with each message. They can be provided as a comma separated list of keyvalues (using `=`).

Example:

    RUNNER_NATS_HEADERS="producer=ansible-runner,foo=bar"

### Configuring NATS client

NATS client options can be provided as `nats_options` settings in the runner config settings.

#### Configuring client authentication

The following environment variables can be set to authenticate the client:

* `RUNNER_NATS_USERNAME`: user name
* `RUNNER_NATS_PASSWORD`: user password
* `RUNNER_NATS_TOKEN`: connection token
* `RUNNER_NATS_USER_CREDENTIALS`: user credentials
* `RUNNER_NATS_NKEYS_SEED`: user nkey seed

#### Configuring servers

* `RUNNER_NATS_SERVERS`: comma separated list of NATS server URLs.

#### Advanced configuration

* `RUNNER_NATS_CLIENT_NAME`: client name
* `RUNNER_NATS_CLIENT_VERBOSE`: enable verbose mode when value is `true`, `yes`, `y`, `1` or `on`.
* `RUNNER_NATS_ALLOW_RECONNECT`: allow reconnect when value is `true`, `yes`, `y`, `1` or `on`.
* `RUNNER_NATS_CONNECT_TIMEOUT`: connection timeout in seconds
* `RUNNER_NATS_RECONNECT_TIME_WAIT`: time to wait before reconnecting
* `RUNNER_NATS_MAX_RECONNECT_ATTEMPTS`: maximum number of reconnect attempts
* `RUNNER_NATS_PING_INTERVAL`: interval between system pings
* `RUNNER_MAX_OUTSTANDING_PINGS`: maximum number of outstanding pings before considering the connection stale
* `RUNNER_NATS_FLUSHER_QUEUE_SIZE`: maximum size of flusher queue
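The header format described above ("comma separated list of keyvalues using `=`") can be parsed with a few lines of Python. This is only an illustration of the documented format - `parse_headers` is a hypothetical helper, not part of the plugin's API.

```python
# Parse a RUNNER_NATS_HEADERS-style string into a dict (illustration only).
def parse_headers(raw):
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue  # tolerate trailing commas
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

print(parse_headers("producer=ansible-runner,foo=bar"))
# {'producer': 'ansible-runner', 'foo': 'bar'}
```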
ansible-scribe
# Ansible Scribe

Ansible Scribe sets out to automate as much as it can with regards to getting a role well documented and ready for sharing with the Ansible community via Ansible Galaxy. It tries to push you towards a role that others can use more easily and pushes a few ideas of "best practices" on you:

* You should be using roles and separate playbooks. This is the best way to make your code modular and reusable by others.
* You should have sane default variables set up to the point that anyone can use your role without having to change variables and it will still successfully run through. This means Ansible Scribe checks for missing and empty variables.
* You should have a license file defined.
* You should be using CI testing, so it checks for a CI file.
* You should use names for all of your tasks as it helps others (including those not as experienced in Ansible) to understand your code.

That being said, Ansible Scribe is not intended to just run and be done. It creates as much of the necessary documentation as it possibly can, but it will not do everything. You will at a minimum need to complete the variables table. You might also have to:

* Convert the list of task names into a coherent description if you haven't provided one (it will write the task names in a list in the README so you can make one).
* Fill out any other empty or incomplete portions of the README.md and meta/main.yml files. These will exist if you do not give all of the necessary information up front. Ansible Scribe will not create values for you; in that case it will just create the skeleton of those portions.
* Assign settings to the empty default variables in defaults/main.yml if there are any.

## Ansible Scribe Does Not:

* Lint your code
* Format your code
* Make warnings for:
  * Deprecated code
  * Using incorrect modules

Other tools exist for those things and Ansible Scribe follows the Unix Philosophy of doing one thing and doing it well. My aim is to take roles you have and make it as easy as possible to get them ready for pushing to Ansible Galaxy as quickly as possible.

What to set up comes from: https://galaxy.ansible.com/docs/contributing/index.html

## Inputs

Config file (~/.config/ansible-scribe/global.conf) example:

    [Paths]
    roles = /etc/ansible/roles/
    playbooks = /etc/ansible/playbooks/
    output = /tmp/ansible-scribe/

    [Metadata]
    # License type (currently supported = apache, bsd2, bsd3, cc-by, gpl2, gpl3, isc, mit)
    repo_license = mit
    author = Sam Oehlert
    bio = Security Engineer.
    email = [email protected]
    company = My Company

    [CI]
    # What type of CI file you want to use (currently supported = gitlab, travis)
    type = gitlab

Role specific config file (~/.config/ansible-scribe/netdata.conf) example:

    [versions]
    ansible_min = 2.0
    container_min =
    role = 1.0

    [urls]
    repo = https://github.com/soehlert/ansible-role-netdata
    branch = master
    issue_tracker =

    [config]
    description = Sets up the Netdata package for distributed real time performance and health monitoring
    requirements = N/A
    galaxy_tags = netdata deploy
    playbook = common.yml

    [platforms]
    ubuntu = 16.04, 18.04

Pass it a role and it:

* Reads all the variables and creates a table for them in the readme:

  | Variable    | Purpose                               | Default |
  | ----------- | ------------------------------------- | ------- |
  | apache_port | defines port for apache to listen on  | 80      |
  | test        | a variable for testing                | none    |

* Makes sure all the variables are in the defaults/main.yml file
* Takes task names and sets them in a list in the readme.md file to give you a skeleton to build off of
* Reads the playbook in order to:
  * Add a copy of the example playbook to the readme
  * Look for any roles that have namespace.rolename setup (adds to dependencies)
* Warns if you are missing CI files or have empty CI files

Make file:

* Make roles creates files in the default file location outside of the role path
* Make overwrite creates files in the role path
* Make install creates an empty config file
* Makefile dynamic targets for each role (https://stackoverflow.com/questions/22754778/dynamic-makefile-target)

## Outputs

* README.md
* defaults/main.yml (if it doesn't exist)
* License file
* meta/main.yml
* CI file
* Warnings for:
  * CI files:
    * None found - created empty file at $ci_file_location
    * Found empty file
  * Empty defaults variables
  * No task names
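The variables-table output described above is a simple transformation from a role's default variables to markdown. The sketch below shows that transformation only; it is not ansible-scribe's implementation, and the `variables_table` helper and its arguments are hypothetical.

```python
# Render a defaults mapping as the markdown variables table the docs
# describe (illustrative sketch, not ansible-scribe's actual code).
def variables_table(defaults, purposes):
    rows = ["| Variable | Purpose | Default |",
            "| --- | --- | --- |"]
    for name, default in sorted(defaults.items()):
        purpose = purposes.get(name, "")
        shown = default if default != "" else "none"  # empty default -> 'none'
        rows.append("| {} | {} | {} |".format(name, purpose, shown))
    return "\n".join(rows)

table = variables_table(
    {"apache_port": 80, "test": ""},
    {"apache_port": "defines port for apache to listen on",
     "test": "a variable for testing"},
)
print(table)
```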
ansible-sdk
# Ansible SDK for Python

The Ansible SDK provides a lightweight Python library for dispatching and live-monitoring Ansible tasks, roles, and playbooks from your product or project.

Dispatching of jobs can be local to the machine you are running your Python application from, or over Ansible Mesh using the receptor integrations.

Demo app showing how you can use the SDK in a real use case: https://github.com/ansible/ansible_sdk_demo

## Documentation

We are building extensive documentation and an API reference here: https://ansible-sdk.readthedocs.io/en/latest/install.html

Please feel free to contribute and help the documentation effort.

You can build the documentation from this repository as follows:

    $ tox -e docs
    $ firefox docs/build/html/

If you want to run Sphinx commands directly, open the tox.ini file and use the commands in the [testenv:docs] section. Remember that you need to pip install docs/doc-requirements.txt before running Sphinx.

## Releases and maintenance

TBD

## Ansible version compatibility

TBD

## Installation

You can follow the installation guide specified in the docs.

## Required Python libraries and SDKs

The Ansible SDK depends on Python 3.8+, Ansible Core, Ansible Runner and other third party libraries:

* ansible-core
* asyncio
* ansible-runner
* receptorctl

## Testing and Development

* Red Hat Enterprise Linux: install the Ansible SDK and dependencies directly on/into a RHEL virtual machine.
* macOS: install Podman using brew, pull the RHEL8 image, ssh to that and follow the RHEL instructions above.

## Communication

TBD

## License

See LICENSE for the full text.
ansible-selvpc-modules
No description available on PyPI.
ansible-semaphore-client
# semaphore_client

No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator)

This Python package is automatically generated by the OpenAPI Generator project:

* API version: 2.8.34
* Package version: 1.0.0
* Build package: org.openapitools.codegen.languages.PythonClientCodegen

## Requirements

Python >= 3.6

## Installation & Usage

### pip install

If the python package is hosted on a repository, you can install directly using:

    pip install git+https://github.com/VitexSoftware/libpython-semaphore-client.git

(you may need to run pip with root permission: sudo pip install git+https://github.com/VitexSoftware/libpython-semaphore-client.git)

Then import the package:

    import semaphore_client

### Setuptools

Install via Setuptools.

    python setup.py install --user

(or sudo python setup.py install to install the package for all users)

Then import the package:

    import semaphore_client

## Getting Started

Please follow the installation procedure and then run the following:

    import time
    import semaphore_client
    from pprint import pprint
    from semaphore import authentication_api
    from semaphore_client.model.api_token import APIToken
    from semaphore_client.model.login import Login

    # Defining the host is optional and defaults to https://demo.ansible-semaphore.com/api
    # See configuration.py for a list of all supported configuration parameters.
    configuration = semaphore_client.Configuration(
        host="https://demo.ansible-semaphore.com/api"
    )

    # The client must configure the authentication and authorization parameters
    # in accordance with the API server security policy.
    # Examples for each auth method are provided below, use the example that
    # satisfies your auth use case.

    # Configure API key authorization: bearer
    configuration.api_key['bearer'] = 'YOUR_API_KEY'
    # Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
    # configuration.api_key_prefix['bearer'] = 'Bearer'

    # Configure API key authorization: cookie
    configuration.api_key['cookie'] = 'YOUR_API_KEY'
    # Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
    # configuration.api_key_prefix['cookie'] = 'Bearer'

    # Enter a context with an instance of the API client
    with semaphore_client.ApiClient(configuration) as api_client:
        # Create an instance of the API class
        api_instance = authentication_api.AuthenticationApi(api_client)
        login_body = Login(
            auth="auth_example",
            password="password_example",
        )  # Login |

        try:
            # Performs Login
            api_instance.auth_login_post(login_body)
        except semaphore_client.ApiException as e:
            print("Exception when calling AuthenticationApi->auth_login_post: %s\n" % e)

## Documentation for API Endpoints

All URIs are relative to https://demo.ansible-semaphore.com/api

| Class | Method | HTTP request | Description |
| ----- | ------ | ------------ | ----------- |
| AuthenticationApi | auth_login_post | POST /auth/login | Performs Login |
| AuthenticationApi | auth_logout_post | POST /auth/logout | Destroys current session |
| AuthenticationApi | user_tokens_api_token_id_delete | DELETE /user/tokens/{api_token_id} | Expires API token |
| AuthenticationApi | user_tokens_get | GET /user/tokens | Fetch API tokens for user |
| AuthenticationApi | user_tokens_post | POST /user/tokens | Create an API token |
| DefaultApi | events_get | GET /events | Get Events related to Semaphore and projects you are part of |
| DefaultApi | events_last_get | GET /events/last | Get last 200 Events related to Semaphore and projects you are part of |
| DefaultApi | info_get | GET /info | Fetches information about semaphore |
| DefaultApi | ping_get | GET /ping | PING test |
| DefaultApi | ws_get | GET /ws | Websocket handler |
| ProjectApi | project_project_id_delete | DELETE /project/{project_id}/ | Delete project |
| ProjectApi | project_project_id_environment_environment_id_delete | DELETE /project/{project_id}/environment/{environment_id} | Removes environment |
| ProjectApi | project_project_id_environment_environment_id_put | PUT /project/{project_id}/environment/{environment_id} | Update environment |
| ProjectApi | project_project_id_environment_get | GET /project/{project_id}/environment | Get environment |
| ProjectApi | project_project_id_environment_post | POST /project/{project_id}/environment | Add environment |
| ProjectApi | project_project_id_events_get | GET /project/{project_id}/events | Get Events related to this project |
| ProjectApi | project_project_id_get | GET /project/{project_id}/ | Fetch project |
| ProjectApi | project_project_id_inventory_get | GET /project/{project_id}/inventory | Get inventory |
| ProjectApi | project_project_id_inventory_inventory_id_delete | DELETE /project/{project_id}/inventory/{inventory_id} | Removes inventory |
| ProjectApi | project_project_id_inventory_inventory_id_put | PUT /project/{project_id}/inventory/{inventory_id} | Updates inventory |
| ProjectApi | project_project_id_inventory_post | POST /project/{project_id}/inventory | create inventory |
| ProjectApi | project_project_id_keys_get | GET /project/{project_id}/keys | Get access keys linked to project |
| ProjectApi | project_project_id_keys_key_id_delete | DELETE /project/{project_id}/keys/{key_id} | Removes access key |
| ProjectApi | project_project_id_keys_key_id_put | PUT /project/{project_id}/keys/{key_id} | Updates access key |
| ProjectApi | project_project_id_keys_post | POST /project/{project_id}/keys | Add access key |
| ProjectApi | project_project_id_put | PUT /project/{project_id}/ | Update project |
| ProjectApi | project_project_id_repositories_get | GET /project/{project_id}/repositories | Get repositories |
| ProjectApi | project_project_id_repositories_post | POST /project/{project_id}/repositories | Add repository |
| ProjectApi | project_project_id_repositories_repository_id_delete | DELETE /project/{project_id}/repositories/{repository_id} | Removes repository |
| ProjectApi | project_project_id_tasks_get | GET /project/{project_id}/tasks | Get Tasks related to current project |
| ProjectApi | project_project_id_tasks_last_get | GET /project/{project_id}/tasks/last | Get last 200 Tasks related to current project |
| ProjectApi | project_project_id_tasks_post | POST /project/{project_id}/tasks | Starts a job |
| ProjectApi | project_project_id_tasks_task_id_delete | DELETE /project/{project_id}/tasks/{task_id} | Deletes task (including output) |
| ProjectApi | project_project_id_tasks_task_id_get | GET /project/{project_id}/tasks/{task_id} | Get a single task |
| ProjectApi | project_project_id_tasks_task_id_output_get | GET /project/{project_id}/tasks/{task_id}/output | Get task output |
| ProjectApi | project_project_id_templates_get | GET /project/{project_id}/templates | Get template |
| ProjectApi | project_project_id_templates_post | POST /project/{project_id}/templates | create template |
| ProjectApi | project_project_id_templates_template_id_delete | DELETE /project/{project_id}/templates/{template_id} | Removes template |
| ProjectApi | project_project_id_templates_template_id_get | GET /project/{project_id}/templates/{template_id} | Get template |
| ProjectApi | project_project_id_templates_template_id_put | PUT /project/{project_id}/templates/{template_id} | Updates template |
| ProjectApi | project_project_id_users_get | GET /project/{project_id}/users | Get users linked to project |
| ProjectApi | project_project_id_users_post | POST /project/{project_id}/users | Link user to project |
| ProjectApi | project_project_id_users_user_id_admin_delete | DELETE /project/{project_id}/users/{user_id}/admin | Revoke admin privileges |
| ProjectApi | project_project_id_users_user_id_admin_post | POST /project/{project_id}/users/{user_id}/admin | Makes user admin |
| ProjectApi | project_project_id_users_user_id_delete | DELETE /project/{project_id}/users/{user_id} | Removes user from project |
| ProjectApi | project_project_id_views_get | GET /project/{project_id}/views | Get view |
| ProjectApi | project_project_id_views_post | POST /project/{project_id}/views | create view |
| ProjectApi | project_project_id_views_view_id_delete | DELETE /project/{project_id}/views/{view_id} | Removes view |
| ProjectApi | project_project_id_views_view_id_get | GET /project/{project_id}/views/{view_id} | Get view |
| ProjectApi | project_project_id_views_view_id_put | PUT /project/{project_id}/views/{view_id} | Updates view |
| ProjectsApi | projects_get | GET /projects | Get projects |
| ProjectsApi | projects_post | POST /projects | Create a new project |
| ScheduleApi | project_project_id_schedules_post | POST /project/{project_id}/schedules | create schedule |
| ScheduleApi | project_project_id_schedules_schedule_id_delete | DELETE /project/{project_id}/schedules/{schedule_id} | Deletes schedule |
| ScheduleApi | project_project_id_schedules_schedule_id_get | GET /project/{project_id}/schedules/{schedule_id} | Get schedule |
| ScheduleApi | project_project_id_schedules_schedule_id_put | PUT /project/{project_id}/schedules/{schedule_id} | Updates schedule |
| UserApi | user_get | GET /user/ | Fetch logged in user |
| UserApi | user_tokens_api_token_id_delete | DELETE /user/tokens/{api_token_id} | Expires API token |
| UserApi | user_tokens_get | GET /user/tokens | Fetch API tokens for user |
| UserApi | user_tokens_post | POST /user/tokens | Create an API token |
| UserApi | users_get | GET /users | Fetches all users |
| UserApi | users_post | POST /users | Creates a user |
| UserApi | users_user_id_delete | DELETE /users/{user_id}/ | Deletes user |
| UserApi | users_user_id_get | GET /users/{user_id}/ | Fetches a user profile |
| UserApi | users_user_id_password_post | POST /users/{user_id}/password | Updates user password |
| UserApi | users_user_id_put | PUT /users/{user_id}/ | Updates user details |

## Documentation For Models

* APIToken
* AccessKey
* AccessKeyRequest
* Environment
* EnvironmentRequest
* Event
* InfoType
* InfoTypeUpdate
* Inventory
* InventoryRequest
* Login
* Project
* ProjectProjectIdDeleteRequest
* ProjectProjectIdTasksGetRequest
* ProjectProjectIdUsersGetRequest
* ProjectRequest
* Repository
* RepositoryRequest
* Schedule
* ScheduleRequest
* Task
* TaskOutput
* Template
* TemplateRequest
* User
* UserPutRequest
* UserRequest
* UsersUserIdPasswordPostRequest
* View
* ViewRequest

## Documentation For Authorization

### bearer

* Type: API key
* API key parameter name: Authorization
* Location: HTTP header

### cookie

* Type: API key
* API key parameter name: Cookie
* Location: HTTP header

## Author

## Notes for Large OpenAPI documents

If the OpenAPI document is large, imports in semaphore_client.apis and semaphore_client.models may fail with a RecursionError indicating the maximum recursion limit has been exceeded. In that case, there are a couple of solutions:

Solution 1: Use specific imports for apis and models like:

    from semaphore_client.api.default_api import DefaultApi
    from semaphore_client.model.pet import Pet

Solution 2: Before importing the package, adjust the maximum recursion limit as shown below:

    import sys
    sys.setrecursionlimit(1500)
    import semaphore_client
    from semaphore_client.apis import *
    from semaphore_client.models import *
ansible-shell
No description available on PyPI.
ansible-sign
# ansible-sign

This is a library and auxiliary CLI tool for dealing with Ansible content verification.

It does the following:

* checksum manifest generation and validation (sha256sum)
* GPG detached signature generation and validation (using python-gnupg) for content

Note: The API (library) part of this package is not officially supported and might change as time goes on. CLI commands should be considered stable within major versions (the X of version X.Y.Z).

Documentation can be found here, including a rundown/tutorial explaining how to use the CLI for basic project signing and verification.
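The sha256sum-style checksum manifest mentioned above pairs a file's SHA-256 digest with its path, one file per line. A minimal stdlib sketch of that format is shown below; `checksum_manifest` is a hypothetical helper for illustration - ansible-sign wraps this kind of logic (plus GPG signing) behind its CLI.

```python
import hashlib
import tempfile
from pathlib import Path

def checksum_manifest(paths):
    """Return sha256sum-style lines: '<hex digest>  <path>' per file."""
    lines = []
    for path in sorted(paths):
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        lines.append("{}  {}".format(digest, path))
    return "\n".join(lines)

# Example: manifest for one playbook file in a temp directory.
workdir = Path(tempfile.mkdtemp())
(workdir / "site.yml").write_text("- hosts: all\n")
print(checksum_manifest([str(workdir / "site.yml")]))
```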
ansible-simple
# ansible_simple

A simple SDK to use the Ansible API.

## usage

    pip install ansible-simple

### ansible module

    from ansible_simple.api import AnsibleApi

    a = AnsibleApi(remote_user="root",
                   hosts=["192.168.13.109", "192.168.13.56"],
                   remote_password={"conn_pass": "password"})
    # a.run(module='shell', args='hostname')
    # print(a.get_result())

### ansible playbook

    - name: mydbserver
      hosts: mydbserver
      gather_facts: no
      tasks:
        - name: uptime
          raw: uptime
          register: uptime
        - debug:
            msg: "{{ uptime.stdout }}"
        - name: online pm2 ls
          raw: ls
          register: ls
        - debug:
            msg: "{{ ls.stdout }}"

    a.playbook(dynamic_inv={"mydbserver": ["192.168.13.109", "192.168.13.56"]},
               playbooks=['test.yml'])
    # print(a.get_result())

## reference

* https://packaging.python.org/en/latest/tutorials/packaging-projects/
* https://docs.ansible.com
ansible-solace
# Ansible Modules for Solace PubSub+ Event Brokers SEMP(v2) REST API

DEPRECATED. See the Solace PubSub+ Ansible Collection instead.
ansiblespawner
# AnsibleSpawner

Spawn JupyterHub single user notebook servers using Ansible.

This spawner runs Ansible playbooks to start, manage and stop JupyterHub singleuser servers. This means any Ansible module can be used to orchestrate your singleuser servers, including Docker and many public/private clouds, and other infrastructure platforms supported by the community. You can do things like create multiple storage volumes for each user, or provision additional services on other containers/VMs.

## Prerequisites

Python 3.6 or above and JupyterHub 1.0.0 or above are required.

## Installation

## Configuration

Example jupyterhub_config.py spawner configuration.

    ansible_path = "/path/to/"

    c.JupyterHub.spawner_class = "ansible"

    c.AnsibleSpawner.inventory = ansible_path + "inventory.yml.j2"
    c.AnsibleSpawner.create_playbook = ansible_path + "create.yml"
    c.AnsibleSpawner.update_playbook = ansible_path + "update.yml"
    c.AnsibleSpawner.poll_playbook = ansible_path + "poll.yml"
    c.AnsibleSpawner.destroy_playbook = ansible_path + "destroy.yml"
    c.AnsibleSpawner.playbook_vars = {
        "container_image": "docker.io/jupyter/base-notebook",
        "ansible_python_interpreter": "python3",
    }
    c.AnsibleSpawner.start_timeout = 600

    c.JupyterHub.hub_connect_ip = "10.0.0.1"

See the example playbooks under ./examples

## Development

Pytest is used to run automated tests that require Docker and Podman. These platforms were chosen because they are self-contained and can be installed in Travis, whereas testing with public cloud platforms requires secure access credentials.

If you only have one of these you can limit tests by specifying a marker. For example, to disable the Docker tests:

    pytest -vs -m "not docker"

To view test coverage run pytest with --cov=ansiblespawner --cov-report=html, then open htmlcov/index.html.

setuptools-scm is used to manage versions. Just create a git tag.