question | answer | tag | question_id | score
---|---|---|---|---|
I know there are thousands of questions like this one here on SO, but I've seen them all and I'm still not able to deal with my problem.
I'm doing everything using Ansible so it's quite automated, but anyway, here are my files:
pg_hba.conf
local all all trust
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
host all all 10.11.12.0/24 md5
database.yml
production:
database: my_db
adapter: postgresql
host: localhost
username: deploy
encoding: unicode
min_messages: WARNING
template: template0
And I have a deploy user (and a postgres user without a password set) created on my system. And now, while I'm totally able to sign in to Postgres from bash using psql -d my_db (on the server), I'm not able to connect to the db with my Rails app. Running rake db:migrate gives me
PG::ConnectionBad: fe_sendauth: no password supplied
I'm quite terrible at being a devop and I've been fighting with that issue since the day before yesterday's morning and it's still here, so if there is anyone who can help me with that, I would be more than grateful.
| psql is using a local socket connection; Rails is using localhost over TCP/IP. Local connections are trusted, while localhost requires a password (using md5). You could set up a pgpass file for your Rails user: http://www.postgresql.org/docs/current/static/libpq-pgpass.html
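A ~/.pgpass entry for the deploy user might look like the line below (host, port, database and password are placeholders to adapt; the file must belong to the user running Rails and have 0600 permissions):
localhost:5432:my_db:deploy:your_password_here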
| Ansible | 27,419,961 | 22 |
When I use lineinfile in Ansible it is not writing the ' and " characters
lineinfile: 'dest=/home/xyz state=present line="CACHES="default""'
it is giving CACHES=default
but the desired output is CACHES="default"
How to achieve this?
| it appears you can escape the quotes:
- lineinfile: dest=/tmp/xyz state=present line="CACHES=\"default\""
That gives this output:
$ cat /tmp/xyz
CACHES="default"
You don't need to escape single quotes that are inside double quotes:
- lineinfile: dest=/tmp/xyz state=present line="CACHES=\"default\" foo='x'"
cat /tmp/xyz
CACHES="default" foo='x'
source: YAML specification, stackoverflow answer
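As a side note, switching to YAML block syntax (rather than inline key=value) avoids most of the quoting gymnastics; a rough, untested equivalent using standard lineinfile options would be:
- lineinfile:
    dest: /tmp/xyz
    state: present
    line: 'CACHES="default"'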
| Ansible | 24,126,943 | 22 |
We have one Ansible role that needs to run three tasks in the handlers/main.yml task file, but it only runs the first task. How do I force it to run the other two tasks? I do have the ignore flag on in case the first task fails.
The tasks/main.yml file looks like:
- name: openfire | Copy plugins into openfire/plugins
copy: src={{ srcdir }}/xmpp/{{ item }} dest=${bindir}/openfire/plugins/{{ item }}
with_items:
- x.jar
- y.jar
sudo: yes
sudo_user: ${tomcat_user}
notify: restart openfire
- name: openfire | Copy jars into openfire/lib
copy: src={{ srcdir }}/xmpp/{{ item }} dest=${bindir}/openfire/lib/{{ item }}
with_items:
- a.jar
- b.jar
sudo: yes
sudo_user: ${tomcat_user}
notify: restart openfire
The handlers/main.yml file looks like:
- name: restart openfire
service: name=openfire state=stopped
ignore_errors: true
sudo: yes
- name: restart openfire
file: path=/var/run/openfire.pid state=absent
sudo: yes
- name: restart openfire
service: name=openfire state=restarted enabled=yes
sudo: yes
Only the first handler task (shut down openfire) runs.
| Handler names need to be unique, which is why only one of your three identically named handlers runs. Instead, it's possible for a handler to call another notify, and multiple notify calls are also allowed:
---
- name: restart something
command: shutdown.sh
notify:
- wait for stop
- start something
- wait for start
- name: wait for stop
wait_for: port={{port}} state=stopped
- name: start something
command: startup.sh
- name: wait for start
wait_for: port={{port}} state=started
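Applied to the openfire handlers from the question, the same pattern might look like the untested sketch below (it keeps the question's key=value and sudo style and gives each handler a unique name; tasks/main.yml would then notify only the first handler, stop openfire):
- name: stop openfire
  service: name=openfire state=stopped
  ignore_errors: true
  sudo: yes
  notify:
    - remove openfire pid
    - start openfire
- name: remove openfire pid
  file: path=/var/run/openfire.pid state=absent
  sudo: yes
- name: start openfire
  service: name=openfire state=restarted enabled=yes
  sudo: yes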
| Ansible | 21,389,364 | 22 |
I'm converting a vagrant provisioner from shell to ansible and I was wondering if there's any option to show the actual time it takes to complete each task?
Ideally I want to benchmark the difference between installing multiple packages in yum using a shell: method and the built-in yum: with_items method. At the moment I'm sitting here with a stopwatch, but I need accurate times for this.
| I've solved for timing Ansible task durations by adding a callback plugin. Callback plugins were designed to allow you to run your own arbitrary code based on events that happen in the context of an Ansible run.
The plugin I use is easily deployed by creating a callback_plugins directory and dropping a python script into it.
Here is a sample of the resulting output at the end of your playbook run:
PLAY RECAP ********************************************************************
npm_install_foo | Install node dependencies via npm ------------------- 194.92s
gulp_build | Run Gulp to build ----------------------------------------- 89.99s
nodejs | Update npm ---------------------------------------------------- 26.96s
common | Update apt cache and upgrade base os packages ----------------- 17.78s
forever | Install forever (restarts Node.js if it fails) --------------- 16.84s
nodejs | Node.js | Install Node.js and npm ----------------------------- 15.11s
bower | Install bower --------------------------------------------------- 9.37s
Copy locally fetched repo to each instance ------------------------------ 8.03s
express | Express | Install Express ------------------------------------- 8.00s
Additionally, I prepend the shell command time to the ansible-playbook run. This nicely aggregates all of the individual task durations.
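For example, a run wrapped in time might be invoked like this (inventory and playbook names are placeholders):
$ time ansible-playbook -i hosts site.yml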
EDIT #1:
As of Ansible v2.0.0 this particular plugin ships with Ansible itself! Just add callbacks_enabled = profile_tasks to your ~/.ansible.cfg file in the [defaults] section.
EDIT #2:
Since ansible-core 2.11, callback_whitelist has been deprecated in favor of callbacks_enabled.
[defaults]
callbacks_enabled = profile_tasks
...
| Ansible | 19,857,343 | 22 |
The documentation refers us to the github example, but this is a bit sparse and mysterious.
It says this:
# created with:
# crypt.crypt('This is my Password', '$1$SomeSalt')
password: $1$SomeSalt$UqddPX3r4kH3UL5jq5/ZI.
but crypt.crypt doesn't emit what the example shows. It also uses MD5.
I tried this:
# python
import crypt
crypt.crypt('This is my Password', '$6$somereallyniceandbigrandomsalt$')
>> '$69LxCegsnIwI'
but the password field of user should get something like this:
password: $6$somereallyniceandbigrandomsalt$UqddPX3r4kH3UL5jq5/ZI.
which includes three $ delimiters separating the 6 (which signifies that its a SHA-512 hash), the salt, and the crypted password.
Note that the python crypt docs don't mention anything about the $N format.
Questions:
Is the salt, as supplied to crypt.crypt, supposed to end with a trailing $ or is it in $N$SALT format?
Python docs refer to DES, but how is SHA-512 or MD5 being called and where is the documentation for this?
Am I really supposed to take the output of crypt.crypt and cut off the first $6 and make $N$SALT$CRYPTED? Is this what ansible needs?
| The python example shown in the documentation depends on what version of crypt is running on the OS you are using.
I generated the crypt on OS X and the server I was targeting is Ubuntu.
Due to differences in which implementation of crypt is offered by the OS, the result is different and incompatible.
Use this instead:
http://pythonhosted.org/passlib/
Passlib is a password hashing library for Python 2 & 3, which provides
cross-platform implementations of over 30 password hashing algorithms,
as well as a framework for managing existing password hashes. It’s
designed to be useful for a wide range of tasks, from verifying a hash
found in /etc/shadow, to providing full-strength password hashing for
multi-user applications.
>>> # import the hash algorithm
>>> from passlib.hash import sha512_crypt
>>> # generate new salt, and hash a password
>>> hash = sha512_crypt.encrypt("password")
>>> hash
'$6$rounds=656000$BthPsosdEpqOM7Qd$l/ln9nyEfxM67ea8Bvb79JoW50pGjf6iM87taIvfSmpjasE4/wBG1.60pFS6W992T7Q1q2wikMbxYUvMHD1tT1'
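The resulting string can then be fed straight into the user module's password parameter; a minimal sketch (the username is hypothetical, and note that newer passlib releases call the method hash() instead of encrypt()):
- user:
    name: deploy
    password: '$6$rounds=656000$BthPsosdEpqOM7Qd$l/ln9nyEfxM67ea8Bvb79JoW50pGjf6iM87taIvfSmpjasE4/wBG1.60pFS6W992T7Q1q2wikMbxYUvMHD1tT1'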
| Ansible | 15,231,661 | 22 |
I have noticed that Terraform will only run "file", "remote-exec" or "local-exec" on resources once. Once a resource is provisioned, if the commands in a "remote-exec" are changed, or a file from the "file" provisioner is changed, then Terraform will not make any changes to the instance. So how do I get Terraform to run the "file", "remote-exec" or "local-exec" provisioners every time I run terraform apply?
For more details:
Often I have had a resource provisioned partially due to an error from "remote-exec" causes terraform to stop (mostly due to me entering in the wrong commands while I'm writing my script). Running terraform again after this will cause the resource previously created to be destroyed and force terraform to create a new resource from scratch. This is also the only way I can run "remote-exec" twice on a resource... by creating it over from scratch.
This is really a drawback of Terraform as opposed to Ansible, which can do the same exact job as Terraform except that it is totally idempotent. When using Ansible with tasks such as "ec2", "shell" and "copy" I can achieve the same things as Terraform, only each of those tasks will be idempotent. Ansible will automatically recognise when it does and doesn't need to make changes, and because of this it can pick up where a failed ansible-playbook run left off without destroying everything and starting from scratch. Terraform lacks this feature.
For reference here is a simple terraform resource block for an ec2 instance that uses both "remote-exec" and "file" provisioners:
resource "aws_instance" "test" {
count = "${var.amt}"
ami = "ami-2d39803a"
instance_type = "t2.micro"
key_name = "ansible_aws"
tags {
name = "test${count.index}"
}
#creates ssh connection to consul servers
connection {
user = "ubuntu"
private_key="${file("/home/ubuntu/.ssh/id_rsa")}"
agent = true
timeout = "3m"
}
provisioner "remote-exec" {
inline = [<<EOF
sudo apt-get update
sudo apt-get install curl unzip
echo hi
EOF
]
}
#copying a file over
provisioner "file" {
source = "scripts/test.txt"
destination = "/path/to/file/test.txt"
}
}
| Came across this thread in my searches and eventually found a solution:
resource "null_resource" "ansible" {
triggers {
key = "${uuid()}"
}
provisioner "local-exec" {
command = "ansible-playbook -i /usr/local/bin/terraform-inventory -u ubuntu playbook.yml --private-key=/home/user/.ssh/aws_user.pem -u ubuntu"
}
}
You can use uuid(), which is unique to every terraform run, to trigger a null resource or provisioner.
| Ansible | 39,069,311 | 21 |
I am running the following ansible playbook
- hosts: localhost
connection: local
vars_files:
- vars/config_values.yaml
gather_facts: no
tasks:
- name: Set correct project in gcloud config
shell: "gcloud config set project {{ google_project_name }}"
Which yields the following warnings:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Given that I am explicitly stating that it will be run against hosts: localhost, why is it complaining about no inventory being parsed and that the "provided host list is empty"?
How to remove these warnings? (without just suppressing them if possible)
| These are just warnings telling you that:
you did not provide any inventory file either with the -i option, in your ansible.cfg or by default in /etc/ansible/hosts (and hence none was parsed and it is empty)
your inventory is empty and hence only the implicit localhost is available (reminding you that it does not match the all group)
additional notes
hosts: localhost in your above playbook example is the target for your play. It could be another host or a group (or a more complicated pattern). The targeted hosts must exist in the inventory to be managed. localhost always exists at least as the implicit local machine.
You can read Ansible introduction to inventories for more information
Although those two warnings seem a bit redundant, they give different information (i.e. "no inventory at all" vs "inventory is simply empty"). As such they are governed by different config options
The second warning can be silenced since ansible v2.6 with the LOCALHOST_WARNING option
The first warning can be silenced since ansible-core v2.14 with the INVENTORY_UNPARSED_WARNING option
For the record, the first feature has been added to Ansible by a pull request proposed by @larsk, a Stack Overflow user
So if you intend to run playbooks primarily targeted to localhost without giving any inventory to parse, you can silence both warnings as in the following one-liner (see links above for all options to set those vars)
ANSIBLE_LOCALHOST_WARNING=False \
ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
ansible-playbook your_localhost_playbook.yml
If you want to make this permanent, you can add the following two lines to your .bashrc (or equivalent file for the shell you use):
# Silence absent and/or empty Ansible inventory warnings
export ANSIBLE_LOCALHOST_WARNING=False
export ANSIBLE_INVENTORY_UNPARSED_WARNING=False
Note that, as explained in the above links, these features can be turned off for individual projects using an ansible.cfg file at project root:
[defaults]
localhost_warning=False
[inventory]
inventory_unparsed_warning=False
| Ansible | 59,938,088 | 21 |
I'm seeing the error "The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required" on multiple playbooks using Ansible 2.8 on Ubuntu 18.04. In the interests of simplicity I've reproduced it using this basic playbook for a single node Drupal server. https://github.com/geerlingguy/ansible-for-devops/tree/master/drupal; this playbook works fine on earlier versions of Ubuntu, but not on 18.04, which I understand includes python3 by default.
I’ve used vagrant to create the base machine, which shows the following:
$ which python
/usr/bin/python
$ which python2
/usr/bin/python2
$ which python3
/usr/bin/python3
$ python --version
Python 2.7.15rc1
$ python2 --version
Python 2.7.15rc1
$ python3 --version
Python 3.6.7
Which seems to be telling me that both python 2 and python 3 are installed, but that 2.7 is the default as that is what responds to $ python --version.
I have tried all the suggestions described in this article: https://www.rollnorocks.com/2018/12/ansible-python-and-mysql-untangling-the-mess/
Including specifying the
ansible_python_interpreter=/usr/bin/python3
But nothing affects the message. The edited -vvv output from the playbook run is below. Has anyone got any more ideas about either the problem or the solution?
TASK [Remove the MySQL test database.] ****************************************************************************************************************************
task path: /vagrant/provisioning/playbook.yml:96
<10.1.1.11> ESTABLISH SSH CONNECTION FOR USER: mt-ansible-user
<10.1.1.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o
.
.
.
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/database/mysql/mysql_db.py
<10.1.1.11> PUT /home/mt-tools-user/.ansible/tmp/ansible-local-21287bh5dK5/tmp7pOKOH TO /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-16683638151793
1/AnsiballZ_mysql_db.py
<10.1.1.11> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/mt-tools-user/.ssh/mt_ansible_rsa"' -o KbdInteractiveAuthent
ication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="mt-ansible-user"' -o ConnectTimeout=
10 -o ControlPath=/home/mt-tools-user/.ansible/cp/af4de51057 '[10.1.1.11]'
<10.1.1.11> (0, 'sftp> put /home/mt-tools-user/.ansible/tmp/ansible-local-21287bh5dK5/tmp7pOKOH /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-166836
381517931/AnsiballZ_mysql_db.py\n', '')
<10.1.1.11> ESTABLISH SSH CONNECTION FOR USER: mt-ansible-user
<10.1.1.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/mt-tools-user/.ssh/mt_ansible_rsa"' -o KbdInteractiveAuthenticatio
n=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="mt-ansible-user"' -o ConnectTimeout=10 -o
ControlPath=/home/mt-tools-user/.ansible/cp/af4de51057 10.1.1.11 '/bin/sh -c '"'"'chmod u+x /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-1668363815
17931/ /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-166836381517931/AnsiballZ_mysql_db.py && sleep 0'"'"''
<10.1.1.11> (0, '', '')
<10.1.1.11> ESTABLISH SSH CONNECTION FOR USER: mt-ansible-user
<10.1.1.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/mt-tools-user/.ssh/mt_ansible_rsa"' -o KbdInteractiveAuthenticatio
n=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="mt-ansible-user"' -o ConnectTimeout=10 -o
ControlPath=/home/mt-tools-user/.ansible/cp/af4de51057 -tt 10.1.1.11 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qwjewg
qckuyapsxnkbqoegainrkyiinc ; /usr/bin/python3 /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-166836381517931/AnsiballZ_mysql_db.py'"'"'"'"'"'"'"'"' &
& sleep 0'"'"''
<10.1.1.11> (1, 'BECOME-SUCCESS-qwjewgqckuyapsxnkbqoegainrkyiinc\r\n\r\n{"msg": "The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is req
uired.", "failed": true, "invocation": {"module_args": {"db": "test", "state": "absent", "name": "test", "login_host": "localhost", "login_port": 3306, "encoding":
"", "collation": "", "connect_timeout": 30, "config_file": "/root/.my.cnf", "single_transaction": false, "quick": true, "ignore_tables": [], "login_user": null, "
login_password": null, "login_unix_socket": null, "target": null, "client_cert": null, "client_key": null, "ca_cert": null}}}\r\n', 'Shared connection to 10.1.1.11
closed.\r\n')
<10.1.1.11> Failed to connect to the host via ssh: Shared connection to 10.1.1.11 closed.
<10.1.1.11> ESTABLISH SSH CONNECTION FOR USER: mt-ansible-user
<10.1.1.11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/mt-tools-user/.ssh/mt_ansible_rsa"' -o KbdInteractiveAuthenticatio
n=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="mt-ansible-user"' -o ConnectTimeout=10 -o
ControlPath=/home/mt-tools-user/.ansible/cp/af4de51057 10.1.1.11 '/bin/sh -c '"'"'rm -f -r /home/mt-ansible-user/.ansible/tmp/ansible-tmp-1558868136.49-16683638151
7931/ > /dev/null 2>&1 && sleep 0'"'"''
<10.1.1.11> (0, '', '')
| Your remote host is telling you:
The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required.
Did you follow the recommendations and install the required pymysql Python package on your remote host?
For a quick test, on your remote host:
if using python 2.7: sudo pip install pymysql
if using python 3.x: sudo pip3 install pymysql
Once tested, to make sure this dependency is always present, add a task in your playbook prior to launching any mysql task:
- name: Make sure pymysql is present
become: true # needed if the other tasks are not played as root
pip:
name: pymysql
state: present
You should not have to specify the executable option in this case (see doc) as it will default to your ansible_python_interpreter
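On Debian/Ubuntu hosts you could alternatively install the distribution package instead of using pip (the package name python3-pymysql is assumed here for a Python 3 interpreter):
- name: Make sure pymysql is present (system package)
  become: true
  apt:
    name: python3-pymysql
    state: present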
| Ansible | 56,313,083 | 21 |
With Ansible, please advise how I could download the latest release binary from a GitHub repository. As per my current understanding the steps would be:
a. get URL of latest release
b. download the release
For a. I have something like the following, which does not provide the actual release (e.g. v0.11.53):
- name: get latest Gogs release
local_action:
module: uri
url: https://github.com/gogits/gogs/releases/latest
method: GET
follow_redirects: no
status_code: 301
register: release_url
For b. I have the below, which works but needs constant updating. Instead of the hard-coded version I would need a variable set in a.:
- name: download latest
become: yes
become_user: "{{ gogs_user }}"
get_url:
url: https://github.com/gogs/gogs/releases/download/v0.11.53/linux_amd64.tar.gz
dest: "/home/{{gogs_user}}/linux_amd64.tar.gz"
Thank you!
| GitHub has an API to manipulate releases, which is documented.
So imagine you want to get the latest release of ansible (which belongs to the project ansible); you would
call the url https://api.github.com/repos/ansible/ansible/releases/latest
and get a json structure like this
{
"url": "https://api.github.com/repos/ansible/ansible/releases/5120666",
"assets_url": "https://api.github.com/repos/ansible/ansible/releases/5120666/assets",
"upload_url": "https://uploads.github.com/repos/ansible/ansible/releases/5120666/assets{?name,label}",
"html_url": "https://github.com/ansible/ansible/releases/tag/v2.2.1.0-0.3.rc3",
"id": 5120666,
"node_id": "MDc6UmVsZWFzZTUxMjA2NjY=",
"tag_name": "v2.2.1.0-0.3.rc3",
"target_commitish": "devel",
"name": "THESE ARE NOT OUR OFFICIAL RELEASES",
...
},
"prerelease": false,
"created_at": "2017-01-09T16:49:01Z",
"published_at": "2017-01-10T20:09:37Z",
"assets": [
],
"tarball_url": "https://api.github.com/repos/ansible/ansible/tarball/v2.2.1.0-0.3.rc3",
"zipball_url": "https://api.github.com/repos/ansible/ansible/zipball/v2.2.1.0-0.3.rc3",
"body": "For official tarballs go to https://releases.ansible.com\n"
}
get the value of the key tarball_url
download the value of the key retrieved just above
In Ansible code that would be
- hosts: localhost
tasks:
- uri:
url: https://api.github.com/repos/ansible/ansible/releases/latest
return_content: true
register: json_response
- get_url:
url: "{{ json_reponse.json.tarball_url }}"
dest: ./ansible-latest.tar.gz
I let you adapt the proper parameters to answer your question :)
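For the Gogs case from the question, where you want a specific release asset rather than the source tarball, a hedged, untested sketch (the asset-name filter 'linux_amd64' and the paths are assumptions) could be:
- uri:
    url: https://api.github.com/repos/gogs/gogs/releases/latest
    return_content: true
  register: gogs_release
- get_url:
    url: "{{ gogs_release.json.assets | selectattr('name', 'search', 'linux_amd64') | map(attribute='browser_download_url') | first }}"
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"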
| Ansible | 50,966,777 | 21 |
I'm trying multiple concatenations when performing with_items for the destination section.
Right now it looks like this:
- name: create app except+lookup
copy: content="" dest="{{ dir.comp ~ '/config/con2dd/' ~ item.name ~ 'File.txt' }}" force=no group=devops owner=devops mode: 0755
with_items:
...
I get:
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
I tried a couple of approaches but none resulted in something that works.
Is it possible to concatenate the variables with the strings?
| Don't mix pure YAML and key=value syntaxes for parameters. And always use YAML syntax for complex arguments:
- name: create app except+lookup
copy:
content: ""
dest: "{{ dir.comp }}/config/con2dd/{{ item.name }}File.txt"
force: no
group: devops
owner: devops
mode: 0755
with_items:
...
| Ansible | 45,400,851 | 21 |
I am trying to download and extract a tar archive on the remote machine, and the remote destination must be created if it does not exist. BUT it is not happening.
ERROR: destination directory doesn't exist
MYCODE:
- unarchive:
src: http://apache.mirrors.ionfish.org/tomcat/tomcat-8/v8.5.15/bin/apache-tomcat-8.5.15.tar.gz
dest: /opt/tomcat/
creates: yes
remote_src: True
NOTE:
* running the play as root.
thanks in advance
| While using the unarchive module, the dest path should be a path to an existing directory, and creates should be a path to a file and not a boolean.
- name: ensure tomcat directory exists
file:
path: /opt/tomcat
state: directory
- unarchive:
src: http://apache.mirrors.ionfish.org/tomcat/tomcat-8/v8.5.15/bin/apache-tomcat-8.5.15.tar.gz
dest: /opt/tomcat/ # already existing path
creates: /opt/tomcat/config # some path to make sure that the archive has already been unpacked
remote_src: yes
| Ansible | 44,719,303 | 21 |
When I run this on the command line it works fine:
echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdb
But in Ansible it does not want to run in shell:
- name: partition new disk
shell: echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdb
It does not come back with an error, but it does not create the partition either.
I checked that Ansible and LVM will not do what I need.
Any advice?
| With Ansible 2.3 and above, you can use parted module to create partitions from a block device.
For example:
- parted:
device: /dev/sdb
number: 1
flags: [ lvm ]
state: present
To format the partition just use filesystem module as shown below:
- filesystem:
fstype: ext2
dev: /dev/sdb1
To mount the partition to, let's say, /work folder just use mount module as shown below:
- mount:
fstype: ext2
src: /dev/sdb1
path: /work
state: mounted
| Ansible | 42,348,098 | 21 |
I am trying to install a Node.js version using nvm with the Ansible YAML file below.
I get an error saying the "source /home/centos/.nvm/nvm.sh" file was not found. But if I do the same by logging into the machine using SSH then it works fine.
- name: Install nvm
git: repo=https://github.com/creationix/nvm.git dest=~/.nvm version={{ nvm.version }}
tags: nvm
- name: Source nvm in ~/.profile
lineinfile: >
dest=~/.profile
line="source ~/.nvm/nvm.sh"
create=yes
tags: nvm
- name: Install node {{ nvm.node_version }}
command: "{{ item }}"
with_items:
- "source /home/centos/.nvm/nvm.sh"
- nvm install {{ nvm.node_version }}
tags: nvm
Error:
failed: [172.29.4.71] (item=source /home/centos/.nvm/nvm.sh) => {"cmd": "source /home/centos/.nvm/nvm.sh", "failed": true, "item": "source /home/centos/.nvm/nvm.sh", "msg": "[Errno 2] No such file or directory", "rc": 2}
failed: [172.29.4.71] (item=nvm install 6.2.0) => {"cmd": "nvm install 6.2.0", "failed": true, "item": "nvm install 6.2.0", "msg": "[Errno 2] No such file or directory", "rc": 2}
| Regarding the "no such file" error:
source is an internal shell command (see for example Bash Builtin Commands), not an external program which you can run. There is no executable named source in your system and that's why you get No such file or directory error.
Instead of the command module use shell which will execute the source command inside a shell.
Regarding the sourcing problem:
In a with_items loop Ansible will run the shell twice and both processes will be independent of each other. Variables set in one will not be seen by the other.
You should run the two commands in one shell process, for example with:
- name: Install node {{ nvm.node_version }}
shell: "source /home/centos/.nvm/nvm.sh && nvm install {{ nvm.node_version }}"
tags: nvm
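The shell module runs commands through /bin/sh by default; on systems where /bin/sh is not bash (e.g. Debian/Ubuntu's dash), source may not be available, so you may additionally need to point the module at bash (a hedged variant, only needed on such systems):
- name: Install node {{ nvm.node_version }}
  shell: "source /home/centos/.nvm/nvm.sh && nvm install {{ nvm.node_version }}"
  args:
    executable: /bin/bash
  tags: nvm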
Other remarks:
Use {{ ansible_env.HOME }} instead of ~ in the git task. Either one will work here, but tilde expansion is the functionality of shell and you are writing code for Ansible.
| Ansible | 41,379,083 | 21 |
I'm having trouble running a full playbook because some of the facts later plays depend on are modified in earlier plays, but ansible doesn't update facts mid-run.
Running ansible somehost -m setup when the whole playbook starts against a new VPS:
"ansible_selinux": {
"status": "disabled"
},
My playbook contains a play that installs SELinux and reboots the server (while ansible wait_for's), and a later task uses the conditional when: ansible_selinux.status != 'disabled'. However, even though SELinux is now installed and enforcing (which required the reboot), the facts for the system still show SELinux as disabled, so that conditional fails and the task is skipped.
Running the playbook again of course works because facts are updated and now return:
"ansible_selinux": {
"config_mode": "enforcing",
"mode": "enforcing",
"policyvers": 28,
"status": "enabled",
"type": "targeted"
}
Is there any way to make facts refresh mid-playbook? Maybe the hack is to set_fact on ansible_selinux.status myself after the reboot?
Update:
Well that was too easy, thanks to BruceP I added this task to fetch updated facts at the end of my SELinux play
- name: SELinux - Force ansible to regather facts
setup: filter='ansible_selinux'
| Add this in your playbook to use the setup module to update the facts.
For example, I added another interface with DHCP; now I want to know what address it has, so I do this:
- name: do facts module to get latest information
setup:
| Ansible | 35,185,635 | 21 |
I have a host_var in Ansible with a dict of all interfaces:
---
interfaces:
vlan0:
ip: 127.0.0.1
mask: 255.255.255.0
state: true
vlan2:
ip: 127.0.1.1
mask: 255.255.255.0
state: true
And I want to check if the dict has a key vlan1: if so, put vlan1.ip into the template, else put vlan2.ip.
{% if interfaces.vlan1 %}
# and also I try {% if 'vlan1' in interfaces %}
{{ interfaces.vlan1.ip }};
{% else %}
{{ interfaces.vlan2.ip|default("127.0.0.1") }};
{% endif %};
But i have an error:
fatal: [127.0.0.1] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'vlan1'", 'failed': True}
I found that it should work in Jinja2, but it seems it doesn't work in Ansible. Maybe someone has another way of solving this problem?
When I define vlan1 it works fine. Ansible version 1.9.2.
I tried to reproduce it in Python and get no error if my dictionary does not have the key vlan1 (thanks to @GUIDO):
>>> from jinja2 import Template
>>> b = Template("""
... {% if interfaces.vlan1 %}
... {{ interfaces.vlan1.ip }}
... {% else %}
... {{ interfaces.vlan2.ip|default("127.0.3.1") }}
... {% endif %}""")
>>> b.render(interfaces={'vlan3':{'ip':'127.0.1.1'},'vlan2':{'ip':'127.0.2.1'}})
u'\n\n127.0.2.1\n'
>>> b.render(interfaces={'vlan1':{'ip':'127.0.1.1'},'vlan2':{'ip':'127.0.2.1'}})
u'\n\n127.0.1.1\n'
| The answer is simple, and it is shown in the Ansible error message: first of all, I need to check whether the var is defined.
{% if interfaces.vlan1 is defined %}
{{ interfaces.vlan1.ip }}
{% else %}
{{ interfaces.vlan2.ip|default("127.0.3.1") }}
{% endif %}
This combination works well.
| Ansible | 33,400,771 | 21 |
I have a specific ansible variable structure that I want to get from the vault into a yaml file on my hosts.
Let's assume a structure like this:
secrets:
psp1:
username: this
password: that
secret_key: 123
...
I need something like a "generic" template to output whatever "secrets" contains at the moment, since the content changes almost completely based on the current environment.
The easiest solution I can think of is to output the whole structure in an template like this:
# config/secrets.yml
{{ secrets | to_yaml }}
But the Jinja2 to_yaml filter only "yamlifies" the first level; deeper nestings are output in JSON.
Can I work around that problem somehow? Is there an easier way to achieve what I want?
Thanks for any help!
|
As jwodder said, it's valid.
If you're using to_yaml (instead of to_nice_yaml) you have a fairly old install of Ansible; it's time to upgrade.
Use to_nice_yaml
It's possible to pass your own kwargs to filter functions, which usually pass them on to the underlying Python module call, as in this case. So something like:
{{ secrets | to_nice_yaml( width=50, explicit_start=True, explicit_end=True) }}
only catch is you can't override indent=4,* allow_unicode=True, default_flow_style=False
* Note that indent can now be overridden, at least as of Ansible 2.2.0 (I use it to indent 2 spaces to follow coding standards for one project).
Better documentation for to_nice_yaml can be found here.
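With that, the template from the question becomes a one-liner, for example (indent=2 is just an assumption to match common YAML style and needs a reasonably recent Ansible):
# config/secrets.yml
{{ secrets | to_nice_yaml(indent=2) }}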
| Ansible | 28,963,751 | 21 |
In Ansible, if I try to use a variable as a parameter name, or a key name, it is never resolved. For example, if I have {{ some_var }}: true, or:
template: "{{ resolve_me_to_src }}": "some_src"
the variables will just be used literally and never resolve. My specific use case is using this with the ec2 module, where some of my tag names are stored as variables:
- name: Provision a set of instances
ec2:
group: "{{ aws_security_group }}"
instance_type: "{{ aws_instance_type }}"
image: "{{ aws_ami_id }}"
region: "{{ aws_region }}"
vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
key_name: "{{ aws_key_name }}"
wait: true
count: "{{ num_machines }}"
instance_tags: { "{{ some_tag }}": "{{ some_value }}", "{{ other_tag }}": "{{ other_value }}" }
Is there any way around this? Can I mark that I want to force evaluation somehow?
| Will this work for you?
(rc=0)$ cat training.yml
- hosts: localhost
tags: so5
gather_facts: False
vars: [
k1: 'key1',
k2: 'key2',
d1: "{
'{{k1}}': 'value1',
'{{k2}}': 'value2',
}",
]
tasks:
- debug: msg="{{item}}"
with_dict: "{{d1}}"
(rc=0)$ ansible-playbook training.yml -t so5
PLAY [localhost] ****************************************************************
PLAY [localhost] ****************************************************************
TASK: [debug msg="{{item}}"] **************************************************
ok: [localhost] => (item={'key': 'key2', 'value': 'value2'}) => {
"item": {
"key": "key2",
"value": "value2"
},
"msg": "{'value': 'value2', 'key': 'key2'}"
}
ok: [localhost] => (item={'key': 'key1', 'value': 'value1'}) => {
"item": {
"key": "key1",
"value": "value1"
},
"msg": "{'value': 'value1', 'key': 'key1'}"
}
PLAY RECAP ********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
(rc=0)$
The trick is to wrap the dict declaration with double quotes. Ansible applies this undocumented (but consistent) and crappy translation (Ansible's equivalent of shell variable expansion) to most (not all) YAML values (everything RHS of ':') in the playbook. It is some combination of putting these strings through the Jinja2 engine, the Python interpreter and the Ansible engine in some unknown order.
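Applied to the ec2 task from the question, the instance_tags value would then become something like the following (an untested sketch using the same quoting trick):
instance_tags: "{
  '{{ some_tag }}': '{{ some_value }}',
  '{{ other_tag }}': '{{ other_value }}'
}"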
| Ansible | 27,805,976 | 21 |
I would like to know if there is a way to print information while a module is executing -- primarily as a means to demonstrate that the process is working and has not hung. Specifically, I am trying to get feedback during the execution of the cloudformation module. I tried modifying the (Python) source code to include the following:
def debug(msg):
print json.dumps({
"DEBUG" : msg
})
...
debug("The stack operation is still working...")
What this did, of course, was store all this output and only print it all after the module had finished executing. So for particularly large cloudformation templates, this means that I wait around for 5 minutes or so, and then suddenly see a large amount of text appear on the screen at the end. What I was expecting was to see "The stack operation is still working..." printed every x seconds.
It would seem that the Asynchronous Actions and Polling are what I'm looking for... but this didn't work, either. The entire task, "Launch CloudFormation for {{ stackname }}", was skipped entirely. See below for the relevant (YAML) snippet from my playbook:
- name: Launch CloudFormation for {{ stackname }}
cloudformation: >
stack_name="{{ stackname }}" state=present
region="{{ region }}" disable_rollback=true
template="{{ template }}"
register: cloud
args:
template_parameters:
KeyName: "{{ keyName }}"
Region: "{{ region }}"
SecurityGroup: "{{ securityGroup }}"
BootStrapper: "{{ bootStrapper }}"
BootStrapCommand: "powershell.exe -executionpolicy unrestricted -File C:\\{{ bootStrapper }} {{ region }}"
S3Bucket: "{{ s3Bucket }}"
async: 3600
poll: 30
This tells me that async is meant for typical shell commands, and not complex modules such as cloudformation. OR -- I may have done something wrong.
Could anyone shed some light on this situation? Again, for large cloudformation tasks that take a while, I would like some periodic indication that the task is still running, and not hanging. I appreciate the help!
| My approach for localhost module:
...
module.log(msg='test!!!!!!!!!!!!!!!!!')
...
Then on another window:
$ tail -f /var/log/messages
Nov 29 22:32:44 nfvi-ansible-xxxx python2: ansible-test-module test!!!!!!!!!!!!!!!!!
| Ansible | 25,918,068 | 21 |
I am trying to use multiple inventory files and dynamic inventory with Ansible 1.4 and the dev branch. Ansible returns No hosts matched.
I have a simulated scenario with two hosts files in a directory test; the content of the directory is listed below.
hosts1.ini
[group1]
test1 ansible_ssh_host=127.0.0.1
test2 ansible_ssh_host=127.0.0.2
[group2]
test3 ansible_ssh_host=127.0.0.3
hosts2.ini
[group3]
test4 ansible_ssh_host=127.0.0.4
[group4]
test5 ansible_ssh_host=127.0.0.4
test6 ansible_ssh_host=127.0.0.5
If I run ansible -i test --list-hosts all it returns No hosts matched.
I dug into the code and found dir.py; with a small amendment I got it to work. But I think I must have done something wrong and the hack should not be required.
Any ideas on how to solve it?
| Remove the .ini from your file names:
$ ls test/
hosts1 hosts2
$ ansible -i test --list-hosts all
test1
test2
test3
test5
test6
test4
| Ansible | 21,638,996 | 21 |
Example scenario: config files for a certain service are kept under version control on a private github repo. I want to write a playbook that fetches one of these files on the remote node and puts it into the desired location.
I can think of several solutions to this:
do a checkout on the machine that runs ansible (local_action) and then use the copy module
do a checkout on the remote node (with the git module), copy the files to the desired location with command: cp src dest creates=dest (perhaps do this with a handler - only when repo has changes to be pulled)
use the url module or command: wget https://raw.github.com/repo/.../file creates=file in the playbook to only download the file of interest. Is the command module actually going to check if the file to be created is different from the one that may already exist or does it just check the file exists?
use wget on the machine that runs ansible (local_action) and then use the copy module to push it to the remote node
What are the advantages/disadvantages of these. Which (if any) of these could be considered good practice. What is the best general solution to this?
| I'll start by saying that we chose the 2nd solution for our production environment and I guarantee one thing - it just works. Now for the longer version:
Solution no. 1:
Simple and robust - will just work
Does not "contaminate" production servers with irrelevant files (other configuration files)
Does not load production servers with I/O to GitHub (probably negligible)
Solution no. 2:
Simple and robust - will just work
To reduce contamination, we clone the configuration repo to /tmp and delete it at the end of the playbook
Solution no. 3/4:
My guess is it will work, but it feels a bit strange to have your configuration in source control and then not really use source control features.
The advantage of these solutions is that you can "cherry pick" which configuration files you want to download rather than cloning the whole repository. This also reduces I/O against github as cloning becomes heavier over time.
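For reference, a rough sketch of what the 2nd approach (clone to /tmp, copy, clean up) might look like; the repo URL and file paths here are hypothetical:
- name: check out the config repo to a temporary location
  git:
    repo: git@github.com:example/service-configs.git
    dest: /tmp/service-configs
- name: copy the relevant config into place
  copy:
    remote_src: yes
    src: /tmp/service-configs/service.conf
    dest: /etc/service/service.conf
- name: remove the temporary checkout
  file:
    path: /tmp/service-configs
    state: absent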
| Ansible | 21,590,906 | 21 |
I have some Terraform code with an aws_instance and a null_resource:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
It kinda works, but sometimes there is a bug (probably when the instance is in a pending state). When I rerun Terraform, it works as expected.
Question: How can I run local-exec only when the instance is running and accepting an SSH connection?
| The null_resource is currently only going to wait until the aws_instance resource has completed which in turn only waits until the AWS API returns that it is in the Running state. There's a long gap from there to the instance starting the OS and then being able to accept SSH connections before your local-exec provisioner can connect.
One way to handle this is to use the remote-exec provisioner on the instance first as that has the ability to wait for the instance to be ready. Changing your existing code to handle this would look like this:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "remote-exec" {
connection {
host = aws_instance.example.public_dns
user = "centos"
private_key = file("files/id_rsa")
}
inline = ["echo 'connected!'"]
}
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
This will first attempt to connect to the instance's public DNS address as the centos user with the files/id_rsa private key. Once it is connected it will then run echo 'connected!' as a simple command before moving on to your existing local-exec provisioner that runs Ansible against the instance.
Note that just being able to connect over SSH may not actually be enough for you to then provision the instance. If your Ansible script tries to interact with your package manager then you may find that it is locked from the instance's user data script running. If this is the case you will need to remotely execute a script that waits for cloud-init to be complete first. An example script looks like this:
#!/bin/bash
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
echo -e "\033[1;36mWaiting for cloud-init..."
sleep 1
done
| Ansible | 62,403,030 | 20 |
How can I run a local command on an Ansible control server, if that control server does not have an SSH daemon running?
If I run the following playbook:
- name: Test commands
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: Test local action
local_action: command echo "hello world"
I get the following error:
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host localhost port 22: Connection refused", "unreachable": true}
It seems that local_action is the same as delegate_to: 127.0.0.1, so Ansible tries to ssh to the localhost. However, there is no SSH daemon running on the local controller host (only on the remote machines).
So my immediate question is how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
Crucial addition, not in the original question:
My host_vars contained the following line:
ansible_connection: ssh
|
how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
connection: local is sufficient to make the tasks run in the controller without using SSH.
Try,
- name: Test commands
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: Test local action
command: echo "hello world"
| Ansible | 61,295,544 | 20 |
In my inventory I define hosts like this:
[server1]
141.151.176.223
I am looking for a variable which keeps the server1 name, as I am using it to define server hostname.
inventory_hostname is set to 141.151.176.223
ansible_hostname as well as inventory_hostname_short is set to 148.
To work around this problem I am setting my own variable like this:
[server1]
141.151.176.223 hostname=server1
but I am not satisfied with this approach.
Any ideas?
| Explanation
If the inventory file was defined this way:
[server1_group]
server1 ansible_host=141.151.176.223
Then you can access:
server1 with the inventory_hostname fact;
141.151.176.223 with the ansible_host fact;
server1_group with group_names|first (group_names fact contains a list of all groups server belongs to, first selects the first element from that list).
Regardless of the above, ansible_hostname fact contains the host name as defined on the host itself (the value is set during facts gathering).
Solution
You should use a standard ansible_host declaration to point to the target's IP address and set the inventory hostname to however you want to refer to the server in Ansible playbooks.
In particular you can skip groups definition altogether and define just:
server1 ansible_host=141.151.176.223
| Ansible | 48,367,708 | 20 |
I want to change the sudo session timeout according to this answer. I can edit an ordinary file:
lineinfile:
path: /etc/sudoers
regexp: ^Defaults env_reset
line: Defaults env_reset,timestamp_timeout=60
But the first line of my /etc/sudoers says: # This file MUST be edited with the 'visudo' command as root. How do I deal with that?
P.S.
Despite the fact that the short answer is yes, one should read Konstantin Suvorov's answer about the right way to do it with lineinfile and techraf's very interesting answer about possible pitfalls along the way.
| There's a safety-net option for such cases: validate.
The validation command to run before copying into place. The path to the file to validate is passed in via '%s' which must be present as in the example below. The command is passed securely so shell features like expansion and pipes won't work.
If you look at the examples section of lineinfile module, you'll see exactly what you need:
# Validate the sudoers file before saving
- lineinfile:
path: /etc/sudoers
state: present
regexp: '^%ADMIN ALL='
line: '%ADMIN ALL=(ALL) NOPASSWD: ALL'
validate: '/usr/sbin/visudo -cf %s'
| Ansible | 46,720,411 | 20 |
I'm trying to loop a dictionary through an ansible template using jinja2 to create a number of datasources but receive this error [{'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'value'", 'failed': True}]}
When running a debug task it does get the correct values back so I feel like my issue is in the template itself but I've been unable to figure out what I am doing wrong.
Ansible Task
- name: debug dictionary
debug: msg="{{ item.value.db_url }}"
with_dict: databases
- name: copy tomcat config files
template: src="{{ item.src }}" dest="{{ item.dest }}"
with_items:
- { src: 'context.xml.j2', dest: '/opt/tomcat/conf/context.xml'}
notify: restart tomcat
with_dict: databases
Ansible Dictionary
databases:
db1:
db_resource: jdbc/db1
db_maxidle: 50
db_maxconn: 350
db_maxwait: 10000
db_user: dbuser
db_pass: "{{ dbpass }}"
db_url: jdbc:postgresql://server:5432/dbname
db_driver: org.postgresql.Driver
Jinja2 Template
{% for items in databases %}
<resource name="{{ item.value.db_resource }}" auth="container" type="javax.sql.datasource" maxtotal="{{ item.value.db_maxconn }}" maxidle="{{ item.value.db_maxidle }}" maxwaitmillis="{{ item.value.db_maxwait }}" username="{{ item.value.db_user }}" password="{{ item.value.db_pass }}" driverclassname="{{ item.value.db_driver }}" url="{{ item.value.db_url }}" />
{% endfor %}
Debug Output
ok: [IP] => (item={'key': 'db1', 'value': {'db_maxwait': 10000, 'db_maxconn': 350, 'db_maxidle': 50, 'db_driver': 'org.postgresql.Driver', 'db_pass': u'REDACTED', 'db_resource': 'jdbc/db1', 'db_user': 'dbuser', 'db_url': 'jdbc:postgresql://server:5432/dbname'}}) => {
"item": {
"key": "db1",
"value": {
"db_driver": "org.postgresql.Driver",
"db_maxconn": 350,
"db_maxidle": 50,
"db_maxwait": 10000,
"db_pass": "REDACTED",
"db_resource": "jdbc/db1",
"db_url": "jdbc:postgresql://server:5432/db",
"db_user": "dbuser"
}
},
"msg": "jdbc:postgresql://server:5432/dbname"
}
| I discovered today that using dict.values() loops over each dict element's values rather than its keys. So you should be able to use something like this for your template.
{% for item in databases.values() %}
<resource name="{{ item.db_resource }}" auth="container" type="javax.sql.datasource" maxtotal="{{ item.db_maxconn }}" maxidle="{{ item.db_maxidle }}" maxwaitmillis="{{ item.db_maxwait }}" username="{{ item.db_user }}" password="{{ item.db_pass }}" driverclassname="{{ item.db_driver }}" url="{{ item.db_url }}" />
{% endfor %}
I know that it's way after the fact, but maybe somebody else searching for this answer can make use of this additional discovery.
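If you also need the dictionary key inside the loop (for instance to name something after it), iterating with items() is a closely related option; a shortened sketch:
{% for name, db in databases.items() %}
<resource name="{{ db.db_resource }}" username="{{ db.db_user }}" password="{{ db.db_pass }}" driverclassname="{{ db.db_driver }}" url="{{ db.db_url }}" />
{% endfor %}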
| Ansible | 37,756,586 | 20 |
Using Ansible I'm having a problem registering a variable the way I want. Using the implementation below I will always have to call .stdout on the variable - is there a way I can do better?
My playbook:
Note the unwanted use of .stdout - I just want to be able to use the variable directly without calling a propery...?
---
- name: prepare for new deployment
hosts: all
user: ser85
tasks:
- name: init deploy dir
shell: echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)
# http://docs.ansible.com/ansible/playbooks_variables.html
register: deploy_dir
- debug: var=deploy_dir
- debug: var=deploy_dir.stdout
- name: init scripts dir
shell: echo {{ deploy_dir.stdout }}/scripts
register: scripts_dir
- debug: var=scripts_dir.stdout
The output when I execute the playbook:
TASK [init deploy dir] *********************************************************
changed: [123.123.123.123]
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
"deploy_dir": {
"changed": true,
"cmd": "echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)",
"delta": "0:00:00.002898",
"end": "2016-05-27 10:53:38.122217",
"rc": 0,
"start": "2016-05-27 10:53:38.119319",
"stderr": "",
"stdout": "ansible-deploy-20160527-105338-121888719",
"stdout_lines": [
"ansible-deploy-20160527-105338-121888719"
],
"warnings": []
}
}
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
"deploy_dir.stdout": "ansible-deploy-20160527-105338-121888719"
}
TASK [init scripts dir] ********************************************************
changed: [123.123.123.123]
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
"scripts_dir.stdout": "ansible-deploy-20160527-105338-121888719/scripts"
}
Any help or insights appreciated - thank you :)
| If I understood it right, you want to assign deploy_dir.stdout to a variable that you can use without the stdout key. It can be done with the set_fact module:
tasks:
- name: init deploy dir
shell: echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)
# http://docs.ansible.com/ansible/playbooks_variables.html
register: deploy_dir
- set_fact: my_deploy_dir="{{ deploy_dir.stdout }}"
- debug: var=my_deploy_dir
| Ansible | 37,479,605 | 20 |
It seems that getting failures due to /var/lib/dpkg/lock is not very rare. Based on observations, 9 times out of 10 these are caused by a stale lock file or by a cron job running at the same time.
This means that a retry mechanism, combined with removal of the stale file, could be the solution.
How can I do this in Ansible?
| I'd try to solve this with the until feature of Ansible (http://docs.ansible.com/ansible/latest/playbooks_loops.html#do-until-loops)
- name: Apt for sure
apt: name=foobar state=installed
register: apt_status
# 2018 syntax:
# until: apt_status|success
# 2020 syntax:
until: apt_status is success
delay: 6
retries: 10
| Ansible | 36,630,299 | 20 |
Normally, you can ssh into a Vagrant-managed VM with vagrant ssh. There are two options:
You can use an insecure_private_key generated by Vagrant to authenticate.
Use your own private key - provided that config.ssh.forward_agent is set to true, and the VM is configured correctly
I use the second option. So when I run vagrant ssh, I SSH into the machine with my custom private key.
Now I need to let Ansible SSH into my Vagrant machine and I do not want to use Vagrantfile for it.
So I executed:
ansible-playbook -i hosts/development --private-key=~/.ssh/id_rsa -u vagrant dev.yml
And I have this error returned:
fatal: [192.168.50.5] => SSH Error: Permission denied (publickey).
while connecting to 192.168.50.5:22
The hosts/inventory file holds just the IP of my Vagrant VM (192.168.50.5).
I do not know why Ansible cannot ssh into the VM. It's using exactly the same user (vagrant) and key (id_rsa) as when executing vagrant ssh.
However, there is no problem SSHing in with vagrant ssh, while the Ansible command above does not work.
Any suggestions would be much appreciated.
| The problem probably lies within your hosts/inventory file. You need to add the proper connection configuration for Ansible therein, save and re-run.
192.168.50.5 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.ssh/id_rsa
If you are not using port 22, adjust the ansible_ssh_port in your hosts file accordingly.
It is also a possibility that you have not setup your pubkey in Vagrant, hence this would also not work. To test this, run:
vagrant ssh-config | grep IdentityFile
# result should be your private key and not
# .vagrant/machines/default/virtualbox/private_key
If you have not put your pubkey in the Vagrant vm, you will need to add that before you can try your private key.
Reference: http://docs.ansible.com/ansible/intro_inventory.html#list-of-behavioral-inventory-parameters
Reference: https://docs.vagrantup.com/v2/cli/ssh_config.html
| Ansible | 32,748,585 | 20 |
I am about to bump up the changelog of a lot of locally developed debian packages. I am using 'Ansible' to call 'dch' from the devscripts package. I am using Ansible because I already have the subversion paths to the packages listed in an Ansible variable. I would like to be able to enter the actual changelog message as a command line variable, but it seems that ansible cannot parse spaces in variables entered on the command line.
I have tried
ansible-playbook tag_changelog_on_packages.yml -e changelog_message="testing testing"
ansible-playbook tag_changelog_on_packages.yml -e changelog_message='testing testing'
ansible-playbook tag_changelog_on_packages.yml -e changelog_message=testing\ testing
ansible-playbook tag_changelog_on_packages.yml -e changelog_message="testing\ testing"
In all cases I only get the first "testing". The last try makes Ansible crash with ValueError: No escaped character, probably because the whitespace is stripped.
Am I missing anything?
Cheers
| That is because it is processed by shell first which eats the quotes and backslashes you used.
You can enclose the whole argument into single quotes which tells shell not to touch what is inside. Then the value of the variable can be enclosed into double quotes, which will remain there for ansible.
ansible-playbook tag_changelog_on_packages.yml -e 'changelog_message="testing testing"'
EDIT: As Ken Pronovici pointed out in the comment, reversing the quotation marks makes more sense as it enables the use of shell vars:
-e "name='${NAME}'"
| Ansible | 32,584,112 | 20 |
Jinja2 has the filter '|default()' to deal with undefined variables, but it does not work with dictionary values.
If D may or may not have the key foo (D[foo]), then:
{{ D[foo]|default ('no foo') }}
will print 'no foo' if D is undefined, but will cause an error ('dict object' has no attribute 'foo') if D is defined but D[foo] is undefined.
Is there any way to set a default for a dictionary item?
| This appears to be working properly for me using Ansible 1.7.2. Here's a test playbook I just wrote:
---
- hosts: localhost
vars:
D:
1 : "one"
2 : "two"
tasks:
- debug: var=D
- debug: msg="D[1] is {{ D[1]|default ('undefined') }}"
- debug: msg="D[3] is {{ D[3]|default ('undefined') }}"
And here is the output from running it:
TASK: [debug var=D] ***********************************************************
ok: [localhost] => {
"D": {
"1": "one",
"2": "two"
}
}
TASK: [debug msg="D[1] is one"] ***********************************************
ok: [localhost] => {
"msg": "D[1] is one"
}
TASK: [debug msg="D[3] is undefined"] *****************************************
ok: [localhost] => {
"msg": "D[3] is undefined"
}
| Ansible | 28,885,184 | 20 |
I am trying to write a task which runs a list of ldapmodify statements, and I only want it to fail if any of the return codes are not 0 or 68 (object already existed):
- name: add needed LDAP infrastructure
action: command ldapmodify -x -D '{{ ADMINDN }}' -w '{{ LDAPPW }}' -H {{ LDAPURI }} -c -f {{ item }}
register: result
failed_when: "result.results | rejectattr('rc', 'sameas', 0) | rejectattr('rc', 'sameas', 68) | list | length > 0"
# ignore_errors: true
with_items:
- a.ldif
- b.ldif
Does not work, producing the error:
error while evaluating conditional: result.results | rejectattr('rc', 'sameas', 0) | rejectattr('rc', 'sameas', 68) | list | length > 0
However if I comment the failed_when and use ignore_errors, the following tasks produce the correct results. While I can use this workaround to solve my problem, I would like to understand why the failed_when version is not working, as I would find that more elegant.
- debug: var="result.results | rejectattr('rc', 'sameas', 0) | rejectattr('rc', 'sameas', 68) | list | length > 0"
- fail: msg="failure during ldapmodify"
when: "result.results | rejectattr('rc', 'sameas', 0) | rejectattr('rc', 'sameas', 68) | list | length > 0"
Sidenote sameas might be equalto in other versions of jinja2, in case you are wondering.
| Well, it turns out I was making it much too complicated. The problem was that Ansible evaluates failed_when after every iteration of the loop, so I simply need to access result.rc:
- name: add needed LDAP infrastructure
action: command ldapmodify -x -D '{{ ADMINDN }}' -w '{{ LDAPPW }}' -H {{ LDAPURI }} -c -f {{ item }}
register: result
# As per comment from user "ypid"
failed_when: ( result.rc not in [ 0, 68 ] )
# failed_when: ( result.rc != 0 ) and ( result.rc != 68 )
with_items:
- a.ldif
- b.ldif
produces the wanted result.
After the loop the variable result is filled with a summary dictionary which has the details of each item in the results key.
But since I was not able to find any examples of using result.results with filter chains I will just leave this question up, hoping someone else might find it useful. ( I'm sure I will eventually want to look it up again some day ;) )
Thanks to sivel on #ansible for pointing this out.
| Ansible | 25,981,863 | 20 |
In my playbook I have
- name: Grab h5bp/server-configs-nginx
git: repo=https://github.com/h5bp/server-configs-nginx.git
dest=/tmp/server-configs-nginx
version="3db5d61f81d7229d12b89e0355629249a49ee4ac"
force=yes
- name: Copy over h5bp configuration
command: cp -r /tmp/server-configs-nginx/{{ item }} /etc/nginx/{{ item }}
with_items:
- "mime.types"
- "h5bp/"
Which raises the warning in ansible-lint:
[ANSIBLE0006] cp used in place of copy module
/Users/austinpray/Dropbox/DEV/opensauce/bedrock-ansible/roles/nginx/tasks/main.yml:0
Task/Handler: Copy over h5bp configuration
So this raises the question: is there a better way to do this with ansible modules rather than a command?
| You can use the synchronize module with mode='pull'
- name: Copy over h5bp configuration
synchronize: mode=pull src=/tmp/server-configs-nginx/{{ item }} dest=/etc/nginx/{{ item }}
with_items:
- "mime.types"
- "h5bp/"
Note: To copy remote-to-remote, use the same command and add delegate_to (as remote source) and current inventory_host (as remote dest)
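If you are on a newer Ansible (2.8 or later, where remote_src also handles directories), the copy module alone can do this remote-to-remote copy on the target host. This is a sketch reusing the paths from the question, not part of the original answer:
- name: Copy over h5bp configuration
  copy:
    src: /tmp/server-configs-nginx/{{ item }}
    dest: /etc/nginx/{{ item }}
    remote_src: yes
  with_items:
    - "mime.types"
    - "h5bp/"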
| Ansible | 25,576,871 | 20 |
I'm trying to get this task to run locally (on the machine that is running the playbook) :
- name: get the local repo's branch name
local_action: git branch | awk '/^\*/{print $2}'
register: branchName
I tried plenty of variations with no success
all other tasks are meant to run on the target host, which is why running the whole playbook locally is not an option
TASK: [get the local repo's branch name] **************************************
<127.0.0.1> REMOTE_MODULE git branch | awk '/^\*/{print $2}'
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172 && echo $HOME/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172']
<127.0.0.1> PUT /tmp/tmpQVocvw TO /home/max/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172/git
<127.0.0.1> EXEC ['/bin/sh', '-c', '/usr/bin/python /home/max/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172/git; rm -rf /home/max/.ansible/tmp/ansible-tmp-1407258765.57-75899426008172/ >/dev/null 2>&1']
failed: [portal-dev] => {"failed": true}
msg: this module requires key=value arguments (['branch', '|', 'awk', '/^\\*/{print $2}'])
FATAL: all hosts have already failed -- aborting
update:
I have followed bkan's suggestion (bellow), and got a bit further, but
- name: get the local repo's branch name
local_action: command git branch | (awk '/^\*/{print $2}')
sudo: no
register: branchName
now the git command gets launched but not correctly (see error below).
note that this command runs perfectly as a "shell" but unfortunately there is no local_shell equivalent of local_action ...
failed: [portal-dev] => {"changed": true, "cmd": ["git", "branch", "|", "(awk", "/^\\*/{print $2})"], "delta": "0:00:00.002980", "end": "2014-08-05 18:00:01.293632", "rc": 129, "start": "2014-08-05 18:00:01.290652"}
stderr: usage: git branch [options] [-r | -a] [--merged | --no-merged]
or: git branch [options] [-l] [-f] <branchname> [<start-point>]
or: git branch [options] [-r] (-d | -D) <branchname>...
or: git branch [options] (-m | -M) [<oldbranch>] <newbranch>
...
| The format for local_action is:
local_action: <module_name> <arguments>
In your example, Ansible thinks you are trying to use the git module and throws an error because you don't have the correct arguments for the git module. Here is how it should look:
local_action: shell git branch | awk '/^\*/{print $2}'
Source: http://docs.ansible.com/playbooks_delegation.html#delegation
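An equivalent form that some find easier to read is a normal task delegated to the control machine; a sketch using the same command:
- name: get the local repo's branch name
  shell: git branch | awk '/^\*/{print $2}'
  delegate_to: localhost
  register: branchName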
| Ansible | 25,144,608 | 20 |
I have a command in ubuntu as
sudo chown $(id -u):$(id -g) $HOME/.kube/config
I want to convert into ansible script. I have tried below
- name: Changing ownership
command: chown $(id -u):$(id -g) $HOME/.kube/config
become: true
but i am getting error as below
fatal: [ubuntu]: FAILED! => {"changed": t> fatal: [ubuntu]: FAILED! => {"changed": true, "cmd": ["chown", "$(id",
"-u):$(id", "-g)", "$HOME/.kube/config"], "delta": "0:00:00.003948",
"end": "2019-07-17 07:22:31.798773", "msg": "non-zero return code",
"rc": 1, "start": "2019-07-17 07:22:31.794825", "stderr": "chown:
invalid option -- 'u'\nTry 'chown --help' for more information.",
"stderr_lines": ["chown: invalid option -- 'u'", "Try 'chown --help'
for more information."], "stdout": "", "stdout_lines": []}rue, "cmd": ["chown", "$(id",
"-u):$(id", "-g)", "$HOME/.kube/config"], "delta": "0:00:00.003948",
"end": "2019-07-17 07:22:31.798773", "msg": "non-zero return code",
"rc": 1, "start": "2019-07-17 07:22:31.794825", "stderr": "chown:
invalid option -- 'u'\nTry 'chown --help' for more information.",
"stderr_lines": ["chown: invalid option -- 'u'", "Try 'chown --help'
for more information."], "stdout": "", "stdout_lines": []}
EDIT:
File module also did not work.
- name: Create a symbolic link
file:
path: $HOME/.kube
owner: $(id -u)
group: $(id -g)
| Assuming the file already exists and you just want to change permissions, you can retrieve user ID and group from Ansible facts and do something like:
- name: Change kubeconfig file permission
file:
path: $HOME/.kube/config
owner: "{{ ansible_effective_user_id }}"
group: "{{ ansible_effective_group_id }}"
You can also use ansible_real_group_id / ansible_real_user_id or ansible_user_gid/ansible_user_uid depending on your need.
Please don't forget the double quotes around the Ansible expression.
See this post for details on the difference between real and effective user
See Ansible docs on system variables for all available variables
| Ansible | 57,070,583 | 19 |
Can I print a warning message from Ansible, like Ansible does for an internal warning:
[WARNING]: Ignoring invalid attribute: xx
The targeted use is warnings that are not errors, so they should not end the playbook execution, but they should be clearly visible (in the standard Ansible purple color).
Example usage:
I have some hardcoded URL of the latest release.
The playbook downloads the latest available URL.
And print a warning if the URLs differ.
As the source is not trusted, the downloaded URL should be used only for comparison, not used directly.
| Here is a simple filter plugin that will emit a Warning message:
from ansible.utils.display import Display
class FilterModule(object):
def filters(self): return {'warn_me': self.warn_filter}
def warn_filter(self, message, **kwargs):
Display().warning(message)
return message
Place the above into a file in, for example, [playbook_dir]/filter_plugins/warn_me.py.
A contrived example playbook that calls this filter might look like this:
---
- name: Demonstrate warn_me filter plugin
gather_facts: no
hosts: all
tasks:
- meta: end_play
when: ('file XYZ cannot be processed' | warn_me())
delegate_to: localhost
run_once: yes
And running this playbook might produce this output:
$ ansible-playbook test_warnme.yml -l forwards
__________________________________________
< PLAY [Demonstrate warn_me filter plugin] >
------------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
[WARNING]: file XYZ cannot be processed
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
| Ansible | 48,033,923 | 19 |
I want to run this command via Ansible:
curl -sL https://deb.nodesource.com/setup | sudo bash -
How can I do it via Ansible? Now I have:
- name: Add repository
command: curl -sL https://deb.nodesource.com/setup | sudo bash -
But it throw error:
[WARNING]: Consider using get_url or uri module rather than running curl
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["curl", "-sL", "https://deb.nodesource.com/setup", "|", "sudo", "bash", "-"], "delta": "0:00:00.006202", "end": "2017-12-27 15:11:55.441754", "msg": "non-zero return code", "rc": 2, "start": "2017-12-27 15:11:55.435552", "stderr": "curl: option -: is unknown\ncurl: try 'curl --help' or 'curl --manual' for more information", "stderr_lines": ["curl: option -: is unknown", "curl: try 'curl --help' or 'curl --manual' for more information"], "stdout": "", "stdout_lines": []}
| You can:
- name: Add repository
shell: curl -sL https://deb.nodesource.com/setup | sudo bash -
args:
warn: no
shell to allow pipes, warn: no to suppress warning.
But if I were you, I'd use the apt_key + apt_repository Ansible modules to create a self-explaining playbook that also supports check_mode runs.
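A rough sketch of that apt_key + apt_repository approach; the key URL and the node_8.x release branch are assumptions, so check NodeSource's current instructions before using them:
- name: Add NodeSource apt key
  apt_key:
    url: https://deb.nodesource.com/gpgkey/nodesource.gpg.key
    state: present
- name: Add NodeSource repository
  apt_repository:
    repo: "deb https://deb.nodesource.com/node_8.x {{ ansible_distribution_release }} main"
    state: present
    update_cache: yes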
| Ansible | 47,994,497 | 19 |
I merged two lists from an Ansible inventory:
set_fact:
  fact1: "{{ groups['group1'] + groups['group2'] | list }}"
The output is:
fact1:
- server01
- server02
- server03
With the above results, I need to append https:// to the front, and a port number to the back of each element.
Then I need to convert it to a comma delimited list for a server config.
In this example I want: https://server01:8000,https://server02:8000,https://server03:8000.
I tried using a join:
set_fact:
fact2: "{{ fact1|join(':8000,') }}"
which partly worked but it left the last server without a port.
How can I achieve my goal?
| Solution
set_fact:
fact2: "{{ fact1 | map('regex_replace', '(.*)', 'https://\\1:8000') | join(',') }}"
Explanation
map filter applies a filter (regex_replace) to individual elements of the list;
regex_replace filter (with the following regular expression) adds a prefix and suffix to a string;
current_list | map('regex_replace', '(.*)', 'prefix\\1suffix')
join filter converts the list to comma-delimited string in the output.
Alternative
Another possible solution (builds on what you already know) would be to use Jinja2 to directly for the target string:
set_fact:
fact2: "{{ 'https://' + fact1|join(':8000,https://') + ':8000' }}"
| Ansible | 47,047,876 | 19 |
How can I run a playbook only on the first host in the group?
I am expecting something like this:
---
- name: playbook that only run on first host in the group
hosts: "{{ groups[group_name] | first }}"
tasks:
- debug:
msg: "on {{ inventory_hostname }}"
But this doesn't work; it gives the error:
'groups' is undefined
How can I make it work?
| You can use:
hosts: group_name[0]
Inventory hosts values (specified in the hosts directive) are processed with a custom parser, which does not allow Jinja2 expressions like the regular template engine does.
Read about Patterns.
| Ansible | 42,942,875 | 19 |
I have an Ansible script, and I am trying to get the filename of the newest item in a directory. I am using this Ansible script:
- name: Finding newest file in a folder
find:
paths: "/var/www/html/wwwroot/somefolder/"
age: "latest"
age_stamp: mtime
However, I am getting the following error -
FAILED! => {"age": "latest", "changed": false, "failed": true, "msg": "failed to process age"}
How can I get Ansible to retrieve the filename of the newest file in a directory?
| Pure Ansible solution:
- name: Get files in a folder
find:
paths: "/var/www/html/wwwroot/somefolder/"
register: found_files
- name: Get latest file
set_fact:
latest_file: "{{ found_files.files | sort(attribute='mtime',reverse=true) | first }}"
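For a quick check of the result, latest_file is one of the dicts returned by find, so its path key holds the filename:
- name: Show the newest file
  debug:
    msg: "Newest file is {{ latest_file.path }}"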
| Ansible | 41,971,169 | 19 |
I have the following vars inside of my Ansible playbook; the structure looks like this:
domains:
- { main: 'local1.com', sans: ['test.local1.com', 'test2.local.com'] }
- { main: 'local3.com' }
- { main: 'local4.com' }
And I have the following inside of my conf.j2:
{% for domain in domains %}
[[acme.domains]]
{% for key, value in domain.iteritems() %}
{% if value is string %}
{{ key }} = "{{ value }}"
{% else %}
{{ key }} = {{ value }}
{% endif %}
{% endfor %}
{% endfor %}
Now when I go in the VM and see the file I get the following:
Output
[[acme.domains]]
main = "local1.com
sans = [u'test.local1.com', u'test2.local.com']
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
Notice the u inside of the sans array.
Expected output
[[acme.domains]]
main = "local1.com"
sans = ["test.local1.com", "test2.local.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
Why is this happening and how can I fix it?
| You get u' ' because you print the object containing the Unicode strings and this is how Python renders it by default.
You can filter it with list | join filters:
{% for domain in domains %}
[[acme.domains]]
{% for key, value in domain.iteritems() %}
{% if value is string %}
{{ key }} = "{{ value }}"
{% else %}
{{ key }} = ["{{ value | list | join ('\',\'') }}"]
{% endif %}
{% endfor %}
{% endfor %}
Or you can rely on the fact that the string output after sans = is JSON and render it with the to_json filter:
{{ key }} = {{ value | to_json }}
Either will get you:
[[acme.domains]]
main = "local1.com"
sans = ["test.local1.com", "test2.local.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
But the first one is more versatile.
| Ansible | 41,521,138 | 19 |
The Ansible best practices documentation recommends to separate inventories:
inventories/
production/
hosts.ini # inventory file for production servers
group_vars/
group1 # here we assign variables to particular groups
group2 # ""
host_vars/
hostname1 # if systems need specific variables, put them here
hostname2 # ""
staging/
hosts.ini # inventory file for staging environment
group_vars/
group1 # here we assign variables to particular groups
group2 # ""
host_vars/
stagehost1 # if systems need specific variables, put them here
stagehost2 # ""
My staging and production environments are structured in the same way. I have in both environments the same groups. And it turns out that I have also the same group_vars for the same groups. This means redundancy I would like to wipe out.
Is there a way to share some group_vars between different inventories?
As a work-around I started to put shared group_vars into the roles.
my_var:
my_group:
- { var1: 1, var2: 2 }
This makes it possible to iterate over some vars by intersecting the groups of a host with the defined var:
with_items: "{{group_names | intersect(my_var.keys())}}"
But this is a bit complicated to understand, and I think roles should not know anything about groups.
I would like to separate most of the inventories but share some of the group_vars in an easy to understand way. Is it possible to merge global group_vars with inventory specific group_vars?
| I scrapped the idea of following Ansible's recommendation. Now one year later, I am convinced that Ansible's recommendation is not useful for my requirements. Instead I think it is important to share as much as possible among different stages.
Now I put all inventories in the same directory:
production.ini
reference.ini
And I take care that each inventory defines a group including all hosts with the name of the stage.
The file production.ini has the group production:
[production:children]
all_production_hosts
And the file reference.ini has the group reference:
[reference:children]
all_reference_hosts
I have just one group_vars directory in which I define a file for every staging group:
group_vars/production.yml
group_vars/reference.yml
And each file defines a stage variable. The file production.yml defines this:
---
stage: production
And the file reference.yml defines that:
---
stage: reference
This makes it possible to share everything else between production and reference. But the hosts are completely different. By using the right inventory the playbook runs either on production or on reference hosts:
ansible-playbook -i production.ini site.yml
ansible-playbook -i reference.ini site.yml
If it is necessary for the site.yml or the roles to behave slightly different in the production and reference environment, they can use conditions using the stage variable. But I try to avoid even that. Because it is better to move all differences into equivalent definitions in the staging files production.yml and reference.yml.
For example, if the group_vars/all.yml defines some users:
users:
- alice
- bob
- mallory
And I want to create the users in both environments, but I want to exclude mallory from the production environment, I can define a new group called effective_users. In the reference.yml it is identical to the users list:
effective_users: >-
{{ users }}
But in the production.yml I can exclude mallory:
effective_users: >-
{{ users | difference(['mallory']) }}
The playbook or the roles do not need to distinguish between the two stages, they can simply use the group effective_users. The group contains automatically the right list of users simply by selecting the inventory.
| Ansible | 40,606,890 | 19 |
I'd like to remove a single key from a dictionary in Ansible.
For example, I'd like this:
- debug: var=dict2
vars:
dict:
a: 1
b: 2
c: 3
dict2: "{{ dict | filter_to_remove_key('a') }}"
To print this:
ok: [localhost] => {
"dict2": {
"b": 2,
"c": 3
}
}
Please note that the dictionary is loaded from a json file and I POST it to the Grafana REST API. I'd like to allow saving an 'id' key in the file and remove the key before POSTing it.
This is closer to the actual use I have for the removal:
- name: Install Dashboards
uri:
url: "{{ grafana_api_url }}/dashboards/db"
method: POST
headers:
Authorization: Bearer {{ grafana_api_token }}
body:
overwrite: true
dashboard:
"{{ lookup('file', item) | from_json | removekey('id') }}"
    body_format: json
  with_fileglob:
- "dashboards/*.json"
- "../../../dashboards/*.json"
| - set_fact:
dict:
a: 1
b: 2
c: 3
dict2: {}
- set_fact:
dict2: "{{dict2 |combine({item.key: item.value})}}"
when: "{{item.key not in ['a']}}"
with_dict: "{{dict}}"
- debug: var=dict2
or create a filter plugin and use it.
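On Ansible 2.7 and later, where the dict2items and items2dict filters exist, the same result can be had in one expression; a sketch, not part of the original answer:
- set_fact:
    dict2: "{{ dict | dict2items | rejectattr('key', 'equalto', 'a') | items2dict }}"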
| Ansible | 40,496,021 | 19 |
I am getting the value of the variable "env" in a Jinja2 template file using a variable defined in group_vars like:
env: "{{ defined_variable.split('-')[0] }}"
The possible values of env are abc, def and xyz.
On the basis of this value I want to use a server URL, whose possible values I have defined inside defaults/main.yml as:
server_abc: https://xxxx.xxx.com
server_def: https://xxxxx.xxx.com
server_xyz: https://xxxx.xxx.com
In Jinja2 template, I am trying to do:
{% if 'abc' == "{{env}}" %}
serverURL: '{{ server_abc }}'
{% elif 'def' == "{{env}}" %}
serverURL: '{{ server_def}}'
{% elif 'xyz' == "{{env}}" %}
serverURL: '{{ server_xyz }}'
{% else %}
ServerURL: 'server Url not found'
{% endif %}
However it always ends up defining ServerURL = "server URL not found" even if env has the value abc, def or xyz.
If I hardcode the value in the Jinja2 template as below, the condition does evaluate to true:
{% if 'abc' == "abc" %}
serverURL: '{{ server_abc }}'
That implies the syntax is fine, but the value of "{{env}}" is not being evaluated at run time.
Any suggestion on what I can do to solve this?
| You don't need quotes and braces to refer to variables inside expressions. The correct syntax is:
{% if 'abc' == env %}
serverURL: '{{ server_abc }}'
{% elif 'def' == env %}
serverURL: '{{ server_def }}'
{% elif 'xyz' == env %}
serverURL: '{{ server_xyz }}'
{% else %}
ServerURL: 'server URL not found'
{% endif %}
Otherwise you compare two strings, for example abc and {{env}} and you always get a negative result.
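An alternative that avoids the if/elif chain entirely is to keep the URLs in a dict keyed by the environment. This is a sketch that assumes you can restructure defaults/main.yml, not something from the original answer:
# defaults/main.yml
server_urls:
  abc: https://xxxx.xxx.com
  def: https://xxxxx.xxx.com
  xyz: https://xxxx.xxx.com
# in the template
serverURL: '{{ server_urls[env] | default("server URL not found") }}'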
| Ansible | 40,086,613 | 19 |
I got puzzled setting up a server with the following CPU facts:
"ansible_processor": [
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2650L v3 @ 1.80GHz",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2650L v3 @ 1.80GHz"
],
"ansible_processor_cores": 1,
"ansible_processor_count": 2,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 2,
It seems to report the number of CPUs correctly, but what should I base my number of workers (threads) on? I was sure I'd use ansible_processor_cores, but it seems to report only one (1 - sic!) despite reporting two processors! How do you get the number of threads available for worker processes?
| Looking into the code, ansible_processor_vcpus should be your choice.
It should contain the number of processors listed in /proc/cpuinfo (which is actually the total number of threads, as per this answer).
| Ansible | 39,539,559 | 19 |
I have this in vars:
var1: "test1"
var2: "test2"
var3: "{{var1}}"
Now I want to dynamically change var3: "{{var2}}".
I can assign var3: "test2". But how can I assign var3: "{{var2}}"?
| My attempt at the interpretation of the phrase "dynamically change Ansible variable" based on your question:
---
- hosts: localhost
connection: local
vars:
var1: "test1"
var2: "test2"
var3: "{{var1}}"
tasks:
- debug: var=var3
- set_fact:
var3: "{{var2}}"
- debug: var=var3
Regarding the comment:
i was thinking set_fact makes vars as hostvars which don't have precedence over playbook vars
Variables assigned through a set_fact module are in their own class of variables which has a lower priority only to block vars, task vars and extra vars. See Variable Precedence.
| Ansible | 39,072,079 | 19 |
I'm fairly new in using Ansible and have been reading here and google and haven't found an answer yet.
My scenario is that I have 1 user on a server but 2-3 different pub keys that need to put in it's authorized_keys file.
I can successfully remove all keys, or add all keys with this script:
---
- hosts: all
tasks:
- name: update SSH keys
authorized_key:
user: <user>
key: "{{ lookup('file', item) }}"
state: present
#exclusive: yes
with_fileglob:
- ../files/pub_keys/*.pub
With the present flag it reads and adds all the keys. With the absent flag it removes all keys listed.
The problem is that I have an old key that is only on the server and I want to remove/overwrite it, and for future deployments overwrite any unauthorized keys that might be on the server and not in my playbook.
With the exclusive flag it only takes the last key and adds it. This would be fantastic if it would loop and recursively add all the keys. If there is a way to do this in Ansible I have not found it.
Is there any way to loop over pub files and use the exclusive option at the same time?
| Is there any way to loop over pub files and use the exclusive option at the same time?
No. There is a note about loops and exclusive in the docs:
exclusive: Whether to remove all other non-specified keys from the authorized_keys file. Multiple keys can be specified in a single key string value by separating them by newlines.
This option is not loop aware, so if you use with_ , it will be exclusive per iteration of the loop, if you want multiple keys in the file you need to pass them all to key in a single batch as mentioned above.
So you need to join all your keys and send them all at once.
Something like this:
- name: update SSH keys
authorized_key:
user: <user>
key: "{{ lookup('pipe','cat ../files/pub_keys/*.pub') }}"
state: present
exclusive: yes
Check this code before running in production!
| Ansible | 38,879,266 | 19 |
I'm new to Ansible. I have a requirement to pull the OS version of more than 450 Linux servers hosted in AWS. AWS does not provide this feature - it rather suggests getting it from Puppet or Chef.
I created a few simple playbooks which do not run:
---
- hosts: testmachine
user: ec2-user
sudo: yes
tasks:
- name: Update all packages to latest
yum: name=* state=latest
task:
- name: obtain OS version
shell: Redhat-release
The playbook should output a text file with hostname and OS version. Any insight on this will be highly appreciated.
| Use one of the following Jinja2 expressions:
{{ hostvars[inventory_hostname].ansible_distribution }}
{{ hostvars[inventory_hostname].ansible_distribution_major_version }}
{{ hostvars[inventory_hostname].ansible_distribution_version }}
where:
hostvars and ansible_... are built-in and automatically collected by Ansible
inventory_hostname is the host currently being processed by Ansible; ansible_distribution holds its distribution name
For example, assuming you are running the Ansible role test_role against the host host.example.com running a CentOS 7 distribution:
---
- debug:
msg: "{{ hostvars[inventory_hostname].ansible_distribution }}"
- debug:
msg: "{{ hostvars[inventory_hostname].ansible_distribution_major_version }}"
- debug:
msg: "{{ hostvars[inventory_hostname].ansible_distribution_version }}"
will give you:
TASK [test_role : debug] *******************************************************
ok: [host.example.com] => {
"msg": "CentOS"
}
TASK [test_role : debug] *******************************************************
ok: [host.example.com] => {
"msg": "7"
}
TASK [test_role : debug] *******************************************************
ok: [host.example.com] => {
"msg": "7.5.1804"
}
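To get the text file the question asks for (one line per host with hostname and OS version), here is a minimal sketch that gathers facts everywhere and writes a single report on the control machine; the output path is an assumption:
- name: Collect OS versions
  hosts: all
  gather_facts: yes
  tasks:
    - name: Write the report locally
      copy:
        dest: /tmp/os_report.txt
        content: |
          {% for h in ansible_play_hosts %}
          {{ h }} {{ hostvars[h].ansible_distribution }} {{ hostvars[h].ansible_distribution_version }}
          {% endfor %}
      delegate_to: localhost
      run_once: yes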
| Ansible | 38,078,247 | 19 |
I am trying to make Ansible work with --limit and to do that I need facts about other hosts, which I am caching with fact_caching. What command should I run so that it simply gathers all the facts on all the hosts and caches them, without running any tasks? Something like the setup module would be perfect if it cached the facts it gathered, but it seems like it does not.
| Here is how I'd solve the problem:
1.- Enable facts gathering on your playbook (site.yml):
gather_facts: yes
2.- Enable facts caching on ansible.cfg:
2.1.- Option 1 - Use this if you have the time to install redis:
[defaults]
gathering = smart
fact_caching = redis
# two hours timeout
fact_caching_timeout = 7200
2.2.- Option 2 - Use this to test right now; it is simple but slower than redis:
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/facts_cache
# two hours timeout
fact_caching_timeout = 7200
3.- Update or create the facts cache. To do this, create a new role (cache-update) with just one task: execute ping. We use ping because it is the simplest and fastest Ansible task, so it will help us update the cache really fast:
- name: Pinging server to update facts cache
ping:
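If you prefer not to add a role at all, a play with no tasks also does the job, since fact gathering alone is enough to populate the cache once caching is enabled; a minimal sketch:
- hosts: all
  gather_facts: yes
  tasks: []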
Greetings,
| Ansible | 32,703,874 | 19 |
I want to pretty print a registered object in ansible to help with debugging. How do I do it?
| You also have the to_nice_yaml and to_nice_json filters if you want to control the format itself. More details are in the Ansible filters documentation.
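A quick usage sketch, assuming the registered variable is called result:
- debug:
    msg: "{{ result | to_nice_json }}"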
| Ansible | 29,745,534 | 19 |
I need to do base64 encoding of something like: "https://myurl.com". Because there is a colon in that string, I need to enclose everything in quotes. So I have something like:
- name: do the encode
shell: 'echo "https://myurl.com" | /usr/bin/base64'
register: bvalue
But I get a blank when I use:
{{ bvalue.stdout }}
So I want to use the Ansible construct, but I don't know how and the documentation is not clear. It's something like:
- name: do the encode
shell: '{{ "https://myurl.com" | b64encode }}'
But I know that is wrong. And I can't find any examples. Help!
| I think this is how to do it. Define a variable in a playbook:
MYVAR: "https://myurl.com"
Then in the role, do:
- name: do the encode
shell: echo {{ MYVAR | b64encode }} > /tmp/output
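If you only need the encoded value in a variable rather than a file, the filter also works without shelling out; a sketch (encoded_url is just an illustrative name):
- name: do the encode without a shell
  set_fact:
    encoded_url: "{{ MYVAR | b64encode }}"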
| Ansible | 22,978,319 | 19 |
Having an issue with ansible.builtin.shell and ansible.builtin.command. Probably not using them right, but usage matches the docs examples.
Ansible version 2.10.3
In roles/rabbitmq/tasks/main.yml
---
# tasks file for rabbitmq
# If not shut down cleanly, the following will fix:
# systemctl stop rabbitmq-server
- name: Stop RabbitMQ service
ansible.builtin.service:
name: rabbitmq-server
state: stopped
become: yes
# rabbitmqctl force_boot
# https://www.rabbitmq.com/rabbitmqctl.8.html
# force_boot Ensures that the node will start next time, even if it was not the last to shut down.
- name: Force RabbitMQ to boot anyway
ansible.builtin.shell: /usr/sbin/rabbitmqctl force_boot
# systemctl start rabbitmq-server
- name: Stop RabbitMQ service
ansible.builtin.service:
name: rabbitmq-server
state: started
become: yes
Resulting in the following error:
ERROR! this task 'ansible.builtin.shell' has extra params, which is only allowed in the following modules: shell, command, ansible.windows.win_shell, ...
The error appears to be in '.../roles/rabbitmq/tasks/main.yml': line 15, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
# force_boot Ensures that the node will start next time, even if it was not the last to shut down.
- name: Force RabbitMQ to boot anyway
^ here
I've tried ansible.builtin.command, both with and without the cmd: parameter.
What don't I understand about the usage?
| Try this:
- name: Force RabbitMQ to boot anyway
command: "/usr/sbin/rabbitmqctl force_boot"
register: result
ignore_errors: True
I basically took out the ansible.builtin. prefix. It works for me.
register captures the output into a variable named result.
ignore_errors is useful so that if an error occurs Ansible will not stop.
You can output that variable with:
- debug: var=result
| Ansible | 66,178,118 | 18 |
I am creating a systemd service using template module
---
- name: Systemd service
template:
src: sonar.unit.j2
dest: /etc/systemd/system/sonarqube.service
when: "ansible_service_mgr == 'systemd'"
The contents of the sonarqube.service can change of course. On change I want to restart the service. How can I do this?
| There are two solutions.
Register + When changed
You can register template module output (with its status change),
register: service_conf
and then use when clause.
when: service_conf.changed
For example:
---
- name: Systemd service
template:
src: sonar.unit.j2
dest: /etc/systemd/system/sonarqube.service
when: "ansible_service_mgr == 'systemd'"
register: service_conf
- name: restart service
service:
name: sonarqube
state: restarted
when: service_conf.changed
Handler + Notify
You define your restart service task as handler. And then in your template task you notify the handler.
tasks:
- name: Add Sonarqube to Systemd service
template:
src: sonar.unit.j2
dest: /etc/systemd/system/sonarqube.service
when: "ansible_service_mgr == 'systemd'"
notify: Restart Sonarqube
- …
handlers:
- name: Restart Sonarqube
service:
name: sonarqube
state: restarted
More info can be found in Ansible Doc.
Difference between those 2?
In the first case, the service will restart directly. In the case of the handler the restart will happen at the end of the play.
Another difference: if you have several tasks whose changes require a restart of your service, you simply add the notify to all of them.
The handler will run if any of those tasks gets a changed status. With the first solution, you would have to register several return values, and it would generate a longer when: clause_1 or clause_2 or …
The handler will run only once even if notified several times.
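One more point: since the template writes a systemd unit file, systemd usually needs a daemon-reload before a restart picks up unit changes. A sketch of the handler using the systemd module, assuming the same service name:
handlers:
  - name: Restart Sonarqube
    systemd:
      name: sonarqube
      state: restarted
      daemon_reload: yes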
| Ansible | 57,571,765 | 18 |
Background:
This is an Ansible playbook using templates to CONSTRUCT a yaml file from a template. So basically I have a jinja2 template file with a line as such:
private_key: {{ myvar }}
Ansible uses yaml to define the variables. So I will fill in the myvar value something like this. Here I am using the | special character to define a multiline string:
myvar: |
- "-----BEGIN PRIVATE KEY-----"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "zzzzzzzzzzzzzzzzzz="
- "-----END PRIVATE KEY-----"
However the output trims off the indentation:
private_key:
- "-----BEGIN PRIVATE KEY-----"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "zzzzzzzzzzzzzzzzzz="
- "-----END PRIVATE KEY-----"
Since the output file is a yaml itself, I need to retain the indentation. It seems no matter what I'll lose the indent.
I need the end result to look EXACTLY like this:
private_key:
- "-----BEGIN PRIVATE KEY-----"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "zzzzzzzzzzzzzzzzzz="
- "-----END PRIVATE KEY-----"
| I found the answer in a Google search right after posting the question.
Essentially the yaml string will strip indents, so in this case we have to use Jinja to insert spaces where they were stripped. Luckily this is super easy to do:
In the template file, I replaced this:
private_key: {{ myvar }}
With this:
private_key: {{ myvar | indent( width=4, indentfirst=True) }}
The width field can be adjusted for how many spaces of indentation are needed.
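Note: on newer Jinja2 releases the indentfirst argument has been renamed to first, so the same line may need to be written as {{ myvar | indent(width=4, first=True) }}.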
The actual variable declaration is done exactly like I posted in the question. However, with the indent added in the template, I now have the desired output with indentation:
private_key:
- "-----BEGIN PRIVATE KEY-----"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "asdfasdfasdfasdfasdfasdfadfasdfasdfasdfasdfasdfssadf"
- "zzzzzzzzzzzzzzzzzz="
- "-----END PRIVATE KEY-----"
| Ansible | 55,411,080 | 18 |
My problem is with ansible and parsing stdout. I need to capture the stdout from an ansible play and parse this output for a specific substring within stdout and save into a var. My specific use case is below
- shell: "vault.sh --keystore EAP_HOME/vault/vault.keystore |
--keystore-password vault22 --alias vault --vault-block |
vb --attribute password --sec-attr 0penS3sam3 --enc-dir |
EAP_HOME/vault/ --iteration 120 --salt 1234abcd"
register: results
become: true
This generates output containing the following line. The goal is to capture the masked key that jboss vault generates and save it in an ansible var so I can use it to configure the standalone.xml template:
vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"/>
I need a way to parse this string, possibly with a regex, and save the "MASK-5dOaAVafCSd" substring into an ansible var using the set_fact module or any other ansible module.
Currently my code looks like this
#example stdout
results: vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5dOaAVafCSd\"/>
- name: JBOSS_VAULT:define keystore password masked value variable
set_fact:
masked_value: |
"{{ results.stdout |
regex_replace('^.+(MASK-.+?)\\.+','\\\1') }}"
This code is defining masked_value as the results.stdout, not the expected capture group.
| You are very close. I advise you to use regex101.com to test regular expressions.
Here is my solution:
---
- hosts: localhost
gather_facts: no
tasks:
- shell: echo 'vault-option name="KEYSTORE_PASSWORD" value="MASK-5dOaAVafCSd"'
register: results
- set_fact:
myvalue: "{{ results.stdout | regex_search(regexp,'\\1') }}"
vars:
regexp: 'value=\"([^"]+)'
- debug:
var: myvalue
result:
ok: [localhost] => {
"myvalue": [
"MASK-5dOaAVafCSd"
]
}
Update:
regex_search returns a list of found matches, so to get only first one use:
{{ results.stdout | regex_search(regexp,'\\1') | first }}
| Ansible | 45,740,777 | 18 |
When running a playbook Ansible randomly sets a node as first, second and third.
ok: [node-p02]
ok: [node-p03]
ok: [node-p01]
Q: How can I configure Ansible to let it execute with the hosts in sorted order? Example:
ok: [node-p01]
ok: [node-p02]
ok: [node-p03]
Serial: 1 is not an option, since it slows down the play, and my playbook is meant for 3 nodes in a single play.
| Applicable for Ansible 2.4 and higher:
This is now the default behaviour, ansible will play the hosts in the order they were mentioned in the inventory file. Ansible also provides a few built in ways you can control it with order:
- hosts: all
order: sorted
gather_facts: False
tasks:
- debug:
var: inventory_hostname
Possible values of order are:
inventory: The default. The order is ‘as provided’ by the inventory
reverse_inventory: As the name implies, this reverses the order ‘as provided’ by the inventory
sorted: Hosts are alphabetically sorted by name
reverse_sorted: Hosts are sorted by name in reverse alphabetical order
shuffle: Hosts are randomly ordered each run
Source: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html#ordering-execution-based-on-inventory
| Ansible | 42,506,865 | 18 |
Is it possible to use variables on command or shell modules?
I have the following code, and I would like to use a variables file to provide some configurations:
I would like to read the Hadoop version from my variables file. With other Ansible modules I could use {{ansible_version}}, but with command or shell it doesn't work.
- name: start ZooKeeper HA
command: hadoop-2.7.1/bin/hdfs zkfc -formatZK -nonInteractive
- name: start zkfc
shell: hadoop-2.7.1/sbin/hadoop-daemon.sh start zkfc
I would like to convert to the following:
- name: Iniciar zkfc
command: {{ hadoop_version }}/sbin/hadoop-daemon.sh start zkfc
Because if I run it with this syntax it throws this error:
- name: inicializar estado ZooKeeper HA
command: {{hadoop_version}}/bin/hdfs zkfc -formatZK -nonInteractive
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
I have tried using the following, but same problem:
- name: Iniciar zkfc
command: "{{ hadoop_version }}"/sbin/hadoop-daemon.sh start zkfc
What is the correct syntax?
| Quote the full string in the command argument:
- name: Iniciar zkfc
command: "{{ hadoop_version }}/sbin/hadoop-daemon.sh start zkfc"
| Ansible | 41,567,196 | 18 |
I'm writing an Ansible template that needs to produce a list of IPs in a host group, excluding the current host's IP. I've searched around online and through the documentation but I could not find any filters that allow you to remove an item from a list. I have created the (hacky) for loop below to do this but was wondering if anyone knew a "best practice" way of filtering like this.
{% set filtered_list = [] %}
{% for host in groups['my_group'] if host != ansible_host %}
{{ filtered_list.append(host)}}
{% endfor %}
Let's say groups['my_group'] has 3 IPs (192.168.1.1, 192.168.1.2 and 192.168.1.3). When the template is generated for 192.168.1.1 it should only print the IPs 192.168.1.2 and 192.168.1.3.
| There is a difference filter for that:
- debug: var=item
with_items: "{{ groups['my_group'] | difference([inventory_hostname]) }}"
This will give you all hosts from my_group without the current host.
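Inside the template itself the original loop then reduces to the following; this sketch uses inventory_hostname, which matches the question's setup where the inventory entries are the IPs themselves:
{% for host in groups['my_group'] | difference([inventory_hostname]) %}
{{ host }}
{% endfor %}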
| Ansible | 40,696,130 | 18 |
This should be very simple. I want to make an Ansible statement to create a Postgres user that has connection privileges to a specific database and select/insert/update/delete privileges to all tables within that specific database. I tried the following:
- name: Create postgres user for my app
become: yes
become_user: postgres
postgresql_user:
db: "mydatabase"
name: "myappuser"
password: "supersecretpassword"
priv: CONNECT/ALL:SELECT,INSERT,UPDATE,DELETE
I get relation "ALL" does not exist
If I remove ALL:, I get Invalid privs specified for database: INSERT UPDATE SELECT DELETE
| What I had to do was first create the user and then grant the privileges separately. It's working like a charm.
- name: Create postgres user for my app
become: yes
become_user: postgres
postgresql_user:
name: "myappuser"
password: "supersecretpassword"
- name: Ensure we have access from the new user
become: yes
become_user: postgres
postgresql_privs:
db: mydatabase
role: myappuser
objs: ALL_IN_SCHEMA
privs: SELECT,INSERT,UPDATE,DELETE
| Ansible | 40,290,837 | 18 |
I'm new to the configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment tool for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, I know what Ansible is, the logic behind it and its purpose. I do not have the same level of understanding of Docker, but I get the idea. I went through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my Infrastructure as Code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files and starting the web server. Docker is, itself, a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Suppose we have a source code repository, the team members finish working on a feature and they push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites and if they all pass, it declares it a stable build. How does Docker fit in here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile kept within the app's source, build the image of the application, start the container and run all the tests against it, or does it run the tests the classic way and, if all is good, then build the Docker image from the Dockerfile and save it in a private place?
Should Jenkins tag the final image using x.y.z for example!?
Docker containers configuration:
Suppose we have an image built by Jenkins stored somewhere. How do we handle deploying the same image into different environments, and even with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this issue without breaking Docker principles? Are these configurations baked into the image when it gets built, or when the container based on it is started, and if so how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container, so how do I inject the configuration settings? Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
Excuse me if it was a long question or if I misspelled something, but this is my thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please, it would be very helpful to read your experiences and solutions, because this looks like a common workflow.
| I would like to answer in parts
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Since docker images are the same everywhere, you use your docker images as if they were production images. Therefore, when somebody commits code, you build your docker image, run tests against it, and when all tests pass you tag that image accordingly. Since docker is fast, this is a feasible workflow.
Docker changes are also incremental, so your images will have a minimal impact on storage. When your tests fail, you may choose to save that image too. That way, the developer can pull that image and easily investigate why the tests failed. Developers may also choose to run the tests on their own machine, since the docker images in Jenkins and on their machine are no different.
What this brings is that all developers will have the same environment and the same versions of all software, since you decide which ones are used in the docker images. I have come across bugs that were due to differences between developer machines. For example, on the same operating system, unicode settings may affect your code. But with docker images all developers test against the same settings and the same software versions.
Docker containers configuration:
If you are using a private repository (and you should use one), then configuration changes will not affect hard disk space much. Therefore, except for security configurations such as db passwords, you can apply configuration changes to docker images (baking the configuration into the container). Then you can use ansible to apply non-stored configurations to deployed images before/after startup, using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
Does Ansible have to log in the Docker image, before it's running (
this sounds insane to me ) and use its Jinja2 templates the same way
with a classic host!? If not, how is this handled?!
No, Ansible will not log into the Docker image, but Ansible with Jinja2 templates can be used to change the Dockerfile. You can change the Dockerfile with templates and inject your configuration into different files. Tag your files accordingly and you have configured images ready to spin up.
| Ansible | 37,499,514 | 18 |
Im trying to install ansible-galaxy roles on Mac OS X El Capitan via CLI
$ ansible-galaxy install -r requirements.yml
I am getting this error:
ERROR! Unexpected Exception: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3'))
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 73, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
File "/Library/Python/2.7/site-packages/ansible/cli/galaxy.py", line 38, in <module>
from ansible.galaxy.role import GalaxyRole
File "/Library/Python/2.7/site-packages/ansible/galaxy/role.py", line 35, in <module>
from ansible.playbook.role.requirement import RoleRequirement
File "/Library/Python/2.7/site-packages/ansible/playbook/__init__.py", line 25, in <module>
from ansible.playbook.play import Play
File "/Library/Python/2.7/site-packages/ansible/playbook/play.py", line 27, in <module>
from ansible.playbook.base import Base
File "/Library/Python/2.7/site-packages/ansible/playbook/base.py", line 35, in <module>
from ansible.parsing.dataloader import DataLoader
File "/Library/Python/2.7/site-packages/ansible/parsing/dataloader.py", line 32, in <module>
from ansible.parsing.vault import VaultLib
File "/Library/Python/2.7/site-packages/ansible/parsing/vault/__init__.py", line 67, in <module>
from cryptography.hazmat.primitives.hashes import SHA256 as c_SHA256
File "/Library/Python/2.7/site-packages/cryptography/hazmat/primitives/hashes.py", line 15, in <module>
from cryptography.hazmat.backends.interfaces import HashBackend
File "/Library/Python/2.7/site-packages/cryptography/hazmat/backends/__init__.py", line 7, in <module>
import pkg_resources
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2797, in <module>
parse_requirements(__requires__), Environment()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 580, in resolve
raise VersionConflict(dist,req) # XXX put more info here
VersionConflict: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3'))
| Run the following to upgrade setuptools under the python user:
pip install --upgrade setuptools --user python
For some reason, the way things are installed inside OS X (and in my case, under CentOS 7 inside a Docker container), the setuptools package doesn't get installed correctly under the right user.
| Ansible | 36,958,125 | 18 |
This is a known issue and I found a solution but it's not working for me.
First I had:
fatal: [openshift-node-compute-e50xx] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
So I created a ~/.ansible.cfg. The content of it:
[ssh_connection]
control_path=%(directory)s/%%h-%%r
But after rerunning my ansible I still have an error about 'too long'.
fatal: [openshift-master-32axx] => SSH Error: unix_listener: "/Users/myuser/.ansible/cp/ec2-xx-xx-xx-xx.eu-central-1.compute.amazonaws.com-centos.AAZFTHkT5xXXXXXX" too long for Unix domain socket
while connecting to 52.xx.xx.xx:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Why is it still too long?
| The limit is 104 or 108 characters. (I found different statements on the web)
You XXXed out some sensitive information in the error message so it's not clear how long your path actually is.
I guess %(directory)s is replaced with the .ansible directory in your user's folder. Removing that and using your home folder directly would save you 12 characters:
control_path=~/%%h-%%r
Sure, that will spam your home directory with control sockets.
Depending on the actual length of your username, you could see if you can just create another directory or find a shorter path anywhere. For example, I use ~/.ssh/tmp/%%h_%%r
Only 3 chars less but it's enough.
Finally, if none of that helps, you could still fall back to using /tmp for storing the sockets. But be aware that anyone with access to /tmp on that machine might then be able to use your sockets.
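With a reasonably recent OpenSSH (6.7+), another option is to let ssh hash the connection details into a short fixed-length token, which sidesteps the length limit entirely; a sketch for ansible.cfg, assuming your OpenSSH supports the %C token:
[ssh_connection]
control_path = %(directory)s/%%C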
| Ansible | 35,970,686 | 18 |
I'm trying to use ansible to loop over a list of lists to install some packages. But {{item}} is returning every element in the sublists rather than the sublist itself. I have a yaml file which comes from a manifest list outside ansible and it looks like this:
---
modules:
- ['module','version','extra']
- ['module2','version','extra']
- ['module3','version','extra']
My task looks like this:
task:
- include_vars: /path/to/external/file.yml
- name: install modules
yum: name={{item.0}} state=installed
with_items: "{{ modules }}"
When I run that I get:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! int object has no element 0"}
When I try:
- debug: msg="{{item}}"
with_items: "{{module}}"
it prints every element (module, version, extra, and so on), not just the sublist (which is what I would expect)
| An alternative way to solve this issue is to use a complex item instead of a list of lists. Structure your variables like this:
- modules:
- {name: module1, version: version1, info: extra1}
- {name: module2, version: version2, info: extra2}
- {name: module3, version: version3, info: extra3}
Then you can still use with_items, like this:
- name: Printing Stuffs...
shell: echo This is "{{ item.name }}", "{{ item.version }}" and "{{ item.info }}"
with_items: "{{modules}}"
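If you would rather keep the original list-of-lists, note that the flattening is a with_items behaviour: loop (or with_list), available since Ansible 2.5, iterates the outer list as-is, so item.0 works. A sketch, not part of the original answer:
- name: install modules
  yum:
    name: "{{ item.0 }}"
    state: installed
  loop: "{{ modules }}"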
| Ansible | 35,662,388 | 18 |
I am new to Ansible and I am using a template statement in my playbook to copy a file from my local machine to a remote machine. I get an error saying the destination directory does not exist, but it very much exists.
I am using CentOS 6.5 (on both my local and remote machines). Any help is appreciated.
| Create the destination directory with Ansible -
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /etc/some_directory
state: directory
mode: '0755'
Ref: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html
| Ansible | 31,352,169 | 18 |
I am creating a playbook which first creates a new username. I then want to run "moretasks.yml" as that new user that I just created. Currently, I'm setting remote_user for every task. Is there a way I can set it for the entire set of tasks once? I couldn't seem to find examples of this, nor did any of my attempts to move remote_user around help.
Below is main.yml:
---
- name: Configure Instance(s)
hosts: all
remote_user: root
gather_facts: true
tags:
- config
- configure
tasks:
- include: createuser.yml new_user=username
- include: moretasks.yml new_user=username
- include: roottasks.yml #some tasks unrelated to username.
moretasks.yml:
---
- name: Task1
copy:
src: /vagrant/FILE
dest: ~/FILE
remote_user: "{{newuser}}"
- name: Task2
copy:
src: /vagrant/FILE
dest: ~/FILE
remote_user: "{{newuser}}"
| You could split this up into two separate plays (playbooks can contain multiple plays).
---
- name: PLAY 1
hosts: all
remote_user: root
gather_facts: true
tasks:
- include: createuser.yml new_user=username
- include: roottasks.yml #some tasks unrelated to username.
- name: PLAY 2
hosts: all
remote_user: username
gather_facts: false
tasks:
- include: moretasks.yml new_user=username
There is a gotcha using separate plays: you can't use variables set with register: or set_fact: in the first play to do things in the second play (this statement is not entirely true, the variables are available in hostvars, but I recommend not using variables between roles). Defined variables like in group_vars and host_vars work just fine.
Another tip I'd like to give is to look into using roles http://docs.ansible.com/playbooks_roles.html. While it might seem more complicated at first, it's much easier to re-use them (as you seem to be doing with the "createuser.yml"). Looking at the type of things you are trying to achieve, the 'include all the things' path won't last much longer.
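If the new user cannot log in over SSH yet (for example, no key installed), another option for the second play is to keep connecting as root and switch users with become; a sketch, assuming sudo is available on the targets:
- name: PLAY 2 (connect as root, run tasks as the new user)
  hosts: all
  remote_user: root
  become: yes
  become_user: username
  gather_facts: false
  tasks:
    - include: moretasks.yml new_user=username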
| Ansible | 27,307,773 | 18 |
I am using Ansible and I want to run a task only once. I followed the documentation about how to configure and run a task only once:
- name: apt update
shell: apt-get update
run_once: true
But when I run Ansible, it always runs this task. How can I run my task only once?
| The run_once option will run every time your Playbook/tasks runs, but will only run it once during the specific run itself. So every time you run the play, it will run, but only on the first host in the list. If you're looking for a way to only run that command once, period, you'll need to use the creates argument. Using your example, this can be achieved by using the following -
- name: apt update
shell: apt-get update && touch /root/.aptupdated
args:
creates: /root/.aptupdated
In this case the file /root/.aptupdated is created. The task will now check to see if that exists, and if it does it will not run.
On a related note if the task you are trying to run is the apt-get update, you may want to use the native apt module. You can then do something like this -
- name: apt update
apt: update_cache=yes cache_valid_time=86400
Now this will only run if the cache is older than one day.
| Ansible | 26,409,164 | 18 |
we can change the path of roles by modifying roles_path in ansible.cfg.
But the documentation doesn't seem to mention anything about changing the path of group_vars and host_vars.
How can I change those paths?
I will integrate the Ansible-related files with my Rails app repository.
I want to gather the roles and vars directories under a single directory, but leave the hosts file and ansible.cfg at the top directory, so that the top directory stays easy to read and I can still run ansible-playbook from the top directory without moving into a deeper directory.
Thanks in advance.
| You cannot change the path for host_vars nor group_vars.
Those paths are always relative to your hostfile. You can set a standard hostfile in your ansible config:
hostfile = /path/to/hostfile/hostfile.ini
in this case your default host_vars are to be found at
/path/to/hostfile/host_vars/
You might as well use multiple hostfiles, assume you got:
/path/to/your/project/inventory/inventory.ini
with the host_vars at
/path/to/your/project/inventory/host_vars/
In this case you might call ansible from anywhere using:
ansible -i /path/to/your/project/inventory/inventory.ini my_playbook.yml
Just remember: the host_vars and group_vars are related to your inventory (hostfile), and therefore you can change your inventory location and put the corresponding host_vars below it.
| Ansible | 25,346,796 | 18 |
I can't figure out how to write a task that answers the mysql_secure_installation script's questions.
I only have
shell: mysql_secure_installation <<< '1111' executable=/bin/bash
and I have no idea how to continue answering the prompts.
What would be the best way to solve this? Thanks in advance!
| I think you best bet is to write a playbook (or better, change your mysql role) that will reproduce mysql_secure_installation script. There are several reasons for this :
the script will always return 'changed', everytime you run your playbook, which is not something you want
writing tasks is more flexible : you can add, remove, change and adapt what you want to do according to your setup
you can learn in the process
Basically, mysql_secure_installation does this :
sets the root password
removes anonymous users
removes root remote access
removes the test database
Assuming you have set up mysql_root_password, and added python-mysqldb like so :
- name: Adds Python MySQL support on Debian/Ubuntu
apt: pkg="python-mysqldb" state=present
when: ansible_os_family == 'Debian'
- name: Adds Python MySQL support on RedHat/CentOS
yum: name=MySQL-python state=present
when: ansible_os_family == 'RedHat'
this can be accomplished like this :
Setting the root password
- name: Sets the root password
mysql_user: user=root password="{{ mysql_root_password }}" host=localhost
no_log: yes
Removing anonymous users
- name: Deletes anonymous MySQL server user for ansible_fqdn
mysql_user: user="" host="{{ ansible_fqdn }}" state="absent"
- name: Deletes anonymous MySQL server user for localhost
mysql_user: user="" state="absent"
Removing root remote access
- name: Secures the MySQL root user for IPV6 localhost (::1)
mysql_user: user="root" password="{{ mysql_root_password }}" host="::1"
no_log: yes
- name: Secures the MySQL root user for IPV4 localhost (127.0.0.1)
mysql_user: user="root" password="{{ mysql_root_password }}" host="127.0.0.1"
no_log: yes
- name: Secures the MySQL root user for localhost domain (localhost)
mysql_user: user="root" password="{{ mysql_root_password }}" host="localhost"
no_log: yes
- name: Secures the MySQL root user for server_hostname domain
mysql_user: user="root" password="{{ mysql_root_password }}" host="{{ ansible_fqdn }}"
no_log: yes
Removing the test database
- name: Removes the MySQL test database
mysql_db: db=test state=absent
This should do it. Note that I took a quick glance at the mysql_secure_installation on my system. I might have skipped something or there might be other steps included in other versions. YMMV!
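One practical detail worth adding: once root has a password, later runs of the mysql_user/mysql_db tasks need credentials to log in. A common approach (a sketch, not part of the original answer) is to drop a root .my.cnf right after setting the password:
- name: Writes /root/.my.cnf so later MySQL tasks can authenticate as root
  copy:
    dest: /root/.my.cnf
    content: |
      [client]
      user=root
      password={{ mysql_root_password }}
    owner: root
    group: root
    mode: "0600"
  no_log: yes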
| Ansible | 25,136,498 | 18 |
My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both :
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one, done.
The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are :
Bake a new fully-deployed AMI for each deploy, create a new AS Launch config and update the AS group with that. Sounds very, very cumbersome, but also very reliable because of the clean slate approach, and will ensure that any system changes the code requires will be here. Also, no additional steps needed on instance bootup, so up & running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git upon bootup, start webserver. Once it's up just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (new package, permissions, etc) ? Looks like you have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice ?
Use Docker ? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image using Ansible that contains system packages + latest code, rather than having Ansible just do it directly on an EC2 instance ?
How do you do it ? Any insights / best practices ?
Thanks !
| This question is very opinion based. But just to give you my take, I would just go with prebaking the AMIs with Ansible and then use CloudFormation to deploy your stacks with Autoscaling, Monitoring and your pre-baked AMIs. The advantage of this is that if you have most of the application stack pre-baked into the AMI autoscaling UP will happen faster.
Docker is another approach but in my opinion it adds an extra layer in your application that you may not need if you are already using EC2. Docker can be really useful if you say want to containerize in a single server. Maybe you have some extra capacity in a server and Docker will allow you to run that extra application on the same server without interfering with existing ones.
Having said that some people find Docker useful not in the sort of way to optimize the resources in a single server but rather in a sort of way that it allows you to pre-bake your applications in containers. So when you do deploy a new version or new code all you have to do is copy/replicate these docker containers across your servers, then stop the old container versions and start the new container versions.
My two cents.
| Ansible | 23,056,177 | 18 |
In the lineinfile module, it replaces the full line.
If the line is long I have to repeat the whole line again.
Let us suppose I want to replace the single word in the file:
#abc.conf
This is my horse
this is the playbook:
- lineinfile: dest=abc.conf
state=present
regexp='horse'
line='This is my dog'
backup=yes
Is there any way to achieve something like sed 's/horse/dog/g'?
| A new replace module is available since version 1.6:
- replace:
dest=abc.conf
regexp='horse'
replace='dog'
backup=yes
| Ansible | 22,398,302 | 18 |
I am currently searching for a good distributed file system.
It should:
be open-source
be horizontally scalable (replication and sharding)
have no single point of failure
have a relatively small footprint
Here are the four most promising candidates in my opinion:
GridFS (based on MongoDB)
GlusterFS
Ceph
HekaFS
The filesystem will be used mainly for media files (images and audio). There are very small as well as medium sized files (1 KB - 10 MB). The amount of files should be around several millions.
Are there any benchmarks regarding performance, CPU-load, memory-consumption and scalability? What are your experiences using these or other distributed filesystems?
| I'm not sure your list is quite correct. It depends on what you mean by a file system.
If you mean a file system that is mountable in an operating system and usable by any application that reads and writes files using POSIX calls, then GridFS doesn't really qualify. It is just how MongoDB stores BSON-formatted objects. It is an Object system rather than a File system.
There is a project to make GridFS mountable, but it is a little weird because GridFS doesn't have concepts for things like hierarchical directories, although paths are allowed. Also, I'm not sure how distributed writes on gridfs-fuse would be.
GlusterFS and Ceph are comparable and are distributed, replicable mountable file systems. You can read a comparison between the two here (and followup update of comparison), although keep in mind that the benchmarks are done by someone who is a little biased. You can also watch this debate on the topic.
As for HekaFS, it is GlusterFS that is set up for cloud computing, adding encryption and multitenancy as well as an administrative UI.
| Ceph | 17,425,153 | 42 |
Ceph teuthology installation fails with the following error on Ubuntu 14.04, kernel 4.4.0-51-generic:
ImportError: <module 'setuptools.dist' from '/usr/lib/python2.7/dist-packages/setuptools/dist.pyc'> has no 'check_specifier' attribute
| It was due to an older setuptools version. I updated setuptools as follows:
sudo pip install setuptools --upgrade
It installed setuptools-31.0.0 and that worked.
| Ceph | 41,141,657 | 26 |
I'm running Proxmox and I'm trying to remove a pool which I created by mistake.
However it keeps giving this error:
mon_command failed - pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool1_U (500)
OK
But:
root@kvm-01:~# ceph -n mon.0 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# ceph -n mon.1 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# ceph -n mon.2 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.0.0.0/24
filestore xattr use omap = true
fsid = 41fa3ff6-e751-4ebf-8a76-3f4a445823d2
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.0.0.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.0]
host = kvm-01
mon addr = 10.0.0.1:6789
mon allow pool delete = true
[mon.2]
host = kvm-03
mon addr = 10.0.0.3:6789
mon allow pool delete = true
[mon.1]
host = kvm-02
mon addr = 10.0.0.2:6789
mon allow pool delete = true
So that's my full config. Any idea why I am unable to delete my pools?
| Another approach:
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm test-pool test-pool --yes-i-really-really-mean-it
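Note that injectargs only changes the running monitors; the setting is lost on the next monitor restart. To make it permanent you would also add it to ceph.conf (for example under a [mon] section) on the monitor nodes and restart them:
[mon]
mon allow pool delete = true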
| Ceph | 45,012,905 | 16 |
The WAL (Write-Ahead Log) technology has been used in many systems.
The mechanism of a WAL is that when a client writes data, the system does two things:
Write a log to disk and return to the client
Write the data to disk, cache or memory asynchronously
There are two benefits:
If some exception occurs (i.e. power loss) we can recover the data from the log.
The performance is good because we write data asynchronously and can batch operations
Why not just write the data to disk directly? Make every write go directly to disk. On success, you tell the client it succeeded; if the write fails you return a failure response or a timeout.
In this way, you still have those two benefits.
You do not need to recover anything in case of a power loss, because every success response returned to the client means the data is really on disk.
Performance should be the same. Although we touch the disk frequently, the WAL does that too (every successful WAL write means it succeeded on disk).
So what is the advantage of using a WAL?
| Performance.
Step two in your list is optional. For busy records, the value might not make it out of the cache and onto the disk before it is updated again. These writes do not need to be performed, with only the log writes performed for possible recovery.
Log writes can be batched into larger, sequential writes. For busy workloads, delaying a log write and then performing a single write can significantly improve throughput.
This was much more important when spinning disks were the standard technology because seek times and rotational latency were a big issue. This is the physical process of getting the right part of the disk under the read/write head. With SSDs those considerations are not so important, but avoiding some writes and using large sequential writes still help.
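To make the batching point concrete, here is a minimal, illustrative sketch (not a real WAL implementation): writes are appended sequentially to the log, a single fsync makes the whole batch durable, and the in-memory structures are flushed to their final location later.
import os

class TinyWAL:
    """Illustrative only: append records to a log, fsync once per batch."""
    def __init__(self, path):
        self.log = open(path, "ab")
        self.state = {}       # stands in for the in-memory page cache
        self.pending = []     # records written to the log but not yet fsync'd

    def put(self, key, value):
        self.log.write(f"{key}={value}\n".encode())  # 1) sequential append to the WAL
        self.pending.append((key, value))

    def commit(self):
        self.log.flush()
        os.fsync(self.log.fileno())                  # one fsync covers the whole batch
        for key, value in self.pending:
            self.state[key] = value                  # 2) apply to cache; data files are flushed later
        self.pending.clear()

wal = TinyWAL("wal.log")
wal.put("a", "1")
wal.put("b", "2")
wal.commit()   # durable after one sequential write and one fsync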
Update:
SSDs also have better performance with large sequential writes but for different reasons. It is not as simple as saying "no seek time or rotational latency therefore just randomly write". For example, writing large blocks into space the SSD knows is "free" (eg. via the TRIM command to the drive) is better than read-modify-write, where the drive also needs to manage wear levelling and potentially mapping updates into different internal block sizes.
| Ceph | 58,694,102 | 16 |
I am getting both of these errors at the same time. I can't decrease the pg count and I can't add more storage.
This is a new cluster, and I got these warnings when I uploaded about 40GB to it. I guess it is because radosgw created a bunch of pools.
How can ceph have too many pgs per osd, yet have more objects per pg than average with a too-few-pgs suggestion?
HEALTH_WARN too many PGs per OSD (352 > max 300);
pool default.rgw.buckets.data has many more objects per pg than average (too few pgs?)
osds: 4 (2 per site 500GB per osd)
size: 2 (cross site replication)
pg: 64
pgp: 64
pools: 11
Using rbd and radosgw, nothing fancy.
| I'm going to answer my own question in hopes that it sheds some light on the issue or similar misconceptions of ceph internals.
Fixing HEALTH_WARN too many PGs per OSD (352 > max 300) once and for all
When balancing placement groups you must take into account:
Data we need
pgs per osd
pgs per pool
pools per osd
the crush map
reasonable default pg and pgp num
replica count
I will use my set up as an example and you should be able to use it as a template for your own.
Data we have
num osds : 4
num sites: 2
pgs per osd: ???
pgs per pool: ???
pools per osd: 10
reasonable default pg and pgp num: 64 (... or is it?)
replica count: 2 (cross site replication)
the crush map
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
root ourcompnay
site a
rack a-esx.0
host prdceph-strg01
osd.0 up 1.00000 1.00000
osd.1 up 1.00000 1.00000
site b
rack a-esx.0
host prdceph-strg02
osd.2 up 1.00000 1.00000
osd.3 up 1.00000 1.00000
Our goal is to fill in the '???' above with what we need to serve a HEALTH OK cluster. Our pools are created by the rados gateway when it initialises.
We have a single default.rgw.buckets.data pool where all data is stored; the rest of the pools are administrative and internal to Ceph's metadata and bookkeeping.
PGs per osd (what is a reasonable default anyway???)
The documentation would have us use this calculation to determine our pg count per osd:
(osd * 100)
----------- = pgs UP to nearest power of 2
replica count
It is stated that to round up is optimal. So with our current setup it would be:
(4 * 100)
----------- = (200 to the nearest power of 2) 256
2
osd.1 ~= 256
osd.2 ~= 256
osd.3 ~= 256
osd.4 ~= 256
This is the recommended max number of pgs per osd. So... what do you actually have currently? And why isn't it working? And if you set a
'reasonable default' and understand the above WHY ISN'T IT WORKING!!! >=[
Likely, a few reasons. We have to understand what those 'reasonable defaults' above actually mean, how ceph applies them and to where. One might misunderstand from the above that I could create a new pool like so:
ceph osd pool create <pool> 256 256
or I might even think I could play it safe and follow the documentation which states that (128 pgs for < 5 osds) can use:
ceph osd pool create <pool> 128 128
This is wrong, flat out, because it in no way explains the relationship or balance between these numbers and what ceph is actually doing with them.
technically the correct answer is:
ceph osd pool create <pool> 32 32
And let me explain why:
If like me you provisioned your cluster with those 'reasonable defaults' (128 pgs for < 5 osds) as soon as you tried to do anything with rados it created a whole bunch of pools and your cluster spazzed out.
The reason is because I misunderstood the relationship between everything mentioned above.
pools: 10 (created by rados)
pgs per pool: 128 (recommended in docs)
osds: 4 (2 per site)
10 * 128 / 4 = 320 pgs per osd
This ~320 could be the number of pgs per osd on my cluster. But ceph might distribute these differently, which is exactly what's happening, and
it is way over the 256 max per osd stated above. My cluster's HEALTH WARN is HEALTH_WARN too many PGs per OSD (368 > max 300).
Using this command we're able to see better the relationship between the numbers:
pool :17 18 19 20 21 22 14 23 15 24 16 | SUM
------------------------------------------------< - *total pgs per osd*
osd.0 35 36 35 29 31 27 30 36 32 27 28 | 361
osd.1 29 28 29 35 33 37 34 28 32 37 36 | 375
osd.2 27 33 31 27 33 35 35 34 36 32 36 | 376
osd.3 37 31 33 37 31 29 29 30 28 32 28 | 360
-------------------------------------------------< - *total pgs per pool*
SUM :128 128 128 128 128 128 128 128 128 128 128
There's a direct correlation between the number of pools you have and the number of placement groups that are assigned to them.
I have 11 pools in the snippet above and they each have 128 pgs and that's too many!! My reasonable defaults are 64! So what happened??
I was misunderstanding how the 'reasonable defaults' were being used. When I set my default to 64, you can see ceph has taken my crush map into account, where
I have a failure domain between site a and site b. Ceph has to ensure that everything that's on site a is at least accessible on site b.
WRONG
site a
osd.0
osd.1 TOTAL of ~ 64pgs
site b
osd.2
osd.3 TOTAL of ~ 64pgs
We needed a grand total of 64 pgs per pool so our reasonable defaults should've actually been set to 32 from the start!
If we use ceph osd pool create <pool> 32 32, then the relationship between our pgs per pool, our pgs per osd, those 'reasonable defaults' and the recommended max pgs per osd starts to make sense:
So you broke your cluster ^_^
Don't worry, we're going to fix it. The procedure here, I'm afraid, might vary in risk and time depending on how big your cluster is. But the only way
to get around altering this is to add more storage, so that the placement groups can redistribute over a larger surface area. OR we have to move everything over to
newly created pools.
I'll show an example of moving the default.rgw.buckets.data pool:
old_pool=default.rgw.buckets.data
new_pool=new.default.rgw.buckets.data
create a new pool, with the correct pg count:
ceph osd pool create $new_pool 32
copy the contents of the old pool the new pool:
rados cppool $old_pool $new_pool
remove the old pool:
ceph osd pool delete $old_pool $old_pool --yes-i-really-really-mean-it
rename the new pool to 'default.rgw.buckets.data'
ceph osd pool rename $new_pool $old_pool
Now it might be a safe bet to restart your radosgws.
FINALLY CORRECT
site a
osd.0
osd.1 TOTAL of ~ 32pgs
site b
osd.2
osd.3 TOTAL of ~ 32pgs
As you can see my pool numbers have incremented since they are added by pool id and are new copies. And our total pgs per osd is way under the ~256 which gives us room to add custom pools if required.
pool : 26 35 27 36 28 29 30 31 32 33 34 | SUM
-----------------------------------------------
osd.0 15 18 16 17 17 15 15 15 16 13 16 | 173
osd.1 17 14 16 15 15 17 17 17 16 19 16 | 179
osd.2 17 14 16 18 12 17 18 14 16 14 13 | 169
osd.3 15 18 16 14 20 15 14 18 16 18 19 | 183
-----------------------------------------------
SUM : 64 64 64 64 64 64 64 64 64 64 64
Now you should test your ceph cluster with whatever is at your disposal. Personally I've written a bunch of Python over boto that tests the infrastructure and returns bucket stats and metadata rather quickly. It has reassured me that the cluster is back in working order without any of the issues it suffered from previously. Good luck!
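For illustration, a rough sketch of that kind of check using boto3 against the radosgw S3 endpoint (the endpoint URL and credentials are placeholders, and this is not the script mentioned above):
import boto3

# endpoint, keys and the printout are all placeholders for illustration
s3 = boto3.client(
    "s3",
    endpoint_url="http://radosgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    listing = s3.list_objects_v2(Bucket=name)
    print(name, listing.get("KeyCount", 0), "objects")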
Fixing pool default.rgw.buckets.data has many more objects per pg than average (too few pgs?) once and for all
This quite literally means you need to increase the pg and pgp num of your pool. So... do it, with everything mentioned above in mind. When you do this, however, note that the cluster will start backfilling; you can watch the progress with watch ceph -s in another terminal window or screen.
ceph osd pool set default.rgw.buckets.data pg_num 128
ceph osd pool set default.rgw.buckets.data pgp_num 128
Armed with the knowledge and confidence in the system provided in the above segment we can clearly understand the relationship and the influence of such a change on the cluster.
pool : 35 26 27 36 28 29 30 31 32 33 34 | SUM
----------------------------------------------
osd.0 18 64 16 17 17 15 15 15 16 13 16 | 222
osd.1 14 64 16 15 15 17 17 17 16 19 16 | 226
osd.2 14 66 16 18 12 17 18 14 16 14 13 | 218
osd.3 18 62 16 14 20 15 14 18 16 18 19 | 230
-----------------------------------------------
SUM : 64 256 64 64 64 64 64 64 64 64 64
Can you guess which pool id is default.rgw.buckets.data? haha ^_^
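On reasonably recent Ceph releases you can also sanity-check the result with:
ceph osd df     # the PGS column shows placement groups per OSD
ceph -s         # overall health once backfilling has completed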
| Ceph | 39,589,696 | 14 |
I configured Ceph with the recommended values (using a formula from the docs). I have 3 OSDs, and my config (which I've put on the monitor node and all 3 OSDs) includes this:
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 150
osd pool default pgp num = 150
When I run ceph status I get:
health HEALTH_WARN
too many PGs per OSD (1042 > max 300)
This is confusing for two reasons. First, because the recommended formula did not satisfy Ceph. Second, and most puzzling, is that it says I have 1042 PGs per OSD, when my configuration says 150.
What am I doing wrong?
| Before setting PG count you need to know 3 things.
1. Number of OSD
ceph osd ls
Sample Output:
0
1
2
Here the total number of OSDs is three.
2. Number of Pools
ceph osd pool ls or rados lspools
Sample Output:
rbd
images
vms
volumes
backups
Here the total number of pools is five.
3. Replication Count
ceph osd dump | grep repli
Sample Output:
pool 0 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 38 flags hashpspool stripe_width 0
pool 1 'images' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 40 flags hashpspool stripe_width 0
pool 2 'vms' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 42 flags hashpspool stripe_width 0
pool 3 'volumes' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 36 flags hashpspool stripe_width 0
pool 4 'backups' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 30 pgp_num 30 last_change 44 flags hashpspool stripe_width 0
You can see each pool has a replication count of two.
Now Let get into calculation
Calculations:
Total PGs Calculation:
Total PGs = (Total_number_of_OSD * 100) / max_replication_count
This result must be rounded up to the nearest power of 2.
Example:
No of OSD: 3
No of Replication Count: 2
Total PGs = (3 * 100) / 2 = 150. The nearest power of 2 above 150 is 256.
So the maximum recommended number of PGs is 256.
You can set PG for every Pool
Total PGs per pool Calculation:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool count
This result must be rounded up to the nearest power of 2.
Example:
No of OSD: 3
No of Replication Count: 2
No of pools: 5
Total PGs = ((3 * 100) / 2) / 5 = 150 / 5 = 30. The nearest power of 2 above 30 is 32.
So the total number of PGs per pool is 32.
Power of 2 Table:
2^0 1
2^1 2
2^2 4
2^3 8
2^4 16
2^5 32
2^6 64
2^7 128
2^8 256
2^9 512
2^10 1024
Useful Commands
ceph osd pool create <pool-name> <pg-number> <pgp-number> - To create a new pool
ceph osd pool get <pool-name> pg_num - To get number of PG in a pool
ceph osd pool get <pool-name> pgp_num - To get number of PGP in a pool
ceph osd pool set <pool-name> pg_num <number> - To increase number of PG in a pool
ceph osd pool set <pool-name> pgp_num <number> - To increase number of PGP in a pool
*usually the pg and pgp numbers are the same
| Ceph | 40,771,273 | 12 |
Keycloak configuration and data is stored in a relational database, which is usually persisted to the hard disk. This includes data like realm settings, users, group- and role-memberships, auth flows and so on. But the user sessions will only be stored in an ephemeral in-memory Infinispan cache. Therefore the session data in this cache is lost when the Keycloak server restarts.
There are many reasons why a restart of the Keycloak server is required. Major OS upgrades, Keycloak server upgrades to new versions, applying changes to keycloak e-mail templates or re-scheduling keycloak pods to other worker nodes in kubernetes or other cloud-based environments.
How can the session data be persisted so that it survives restarts? Ideally without having to maintain a custom Infinispan server or using Keycloak "offline sessions".
One solution could be to simply use so-called keycloak "offline sessions", but these sessions also have huge disadvantages:
they remain valid, even if the user logs out
logging out users with the keycloak admin console is no longer possible
See: https://www.keycloak.org/docs/latest/server_admin/#_offline-access
Will this problem still be present when Keycloak > 17 is out and uses the all-new Quarkus distribution? I ask because the following articles claim goals like Container-First Approach, Zero-Downtime Upgrade and Storage re-architecture.
https://www.keycloak.org/2021/10/keycloak-x-update
https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc
| As you already wrote in your question, using Infinispan would be the go-to solution. Infinispan can be used in two ways:
Infinispan running within Keycloak, which is the OOTB way of Keycloak --> Problem: when all Keycloak instances shut down, Infinispan is also down and sessions are lost
Infinispan running as an external cluster, with Keycloak connecting to this external cluster --> Problem: when all Infinispan nodes shut down (e.g. because of an incident or because of an update), then sessions are also lost
Only having Infinispan does not solve the session loss problem. But Infinispan caches can be backed by a database, e.g. a Postgres DB. Then sessions are stored in the ISPN cache and in a corresponding DB table. This can prevent session loss, both for the internal and the external Infinispan cluster.
So a solution to your problem could be to configure the internal ISPN caches to use a database persistence. This could look like this in your cache-ispn.xml:
<cache-container name="keycloak">
<!-- other config ... -->
<replicated-cache name="sessions">
<expiration/>
<persistence>
<string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:14.0" dialect="POSTGRES">
<connection-pool
connection-url="${env.ISPN_DATABASE_CONNECTION_URL}"
username="${env.ISPN_DATABASE_USERNAME:ispn}"
password="${env.INFINISPAN_DB_OWNER_PASSWORD:ispn}"
driver="org.postgresql.Driver"
properties-file="conf/ispn/infinispan-cache.properties"/>
<string-keyed-table create-on-start="true" prefix="ispn">
<id-column name="id" type="VARCHAR(255)"/>
<data-column name="data" type="bytea"/>
<timestamp-column name="ts" type="BIGINT"/>
<segment-column name="seg" type="INT"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</persistence>
</replicated-cache>
<!-- other config ... -->
</cache-container>
| Keycloak | 70,581,540 | 14 |
I have deployed Keycloak on my EKS cluster, I am able to access the dashboard successfully, and I have already created a new realm.
So I thought of testing my keycloak, and went to https://www.keycloak.org/app/ for testing.
I have created a client with the root URL "https://www.keycloak.org/app/" and created one User also.
I have successfully tested my user using the account login of my realm.
Then I went to https://www.keycloak.org/app/, entered my Keycloak URL as https://keycloak.test.nip.io, the realm as Test (the same name as my realm), and the client name as portal (the same client I created in Keycloak).
When I hit Sign in, it redirects to my Keycloak URL but shows We are Sorry... Page not found.
Does anyone know why I am receiving this error and how I can avoid it?
| I too had this error. I followed instructions somewhere for configuring the keycloak client application's url, realm, and clientId properties. In the instructions it said to configure the url to http://localhost:8080/auth. I think this must have changed somewhere along the way.
Changing the url property to http://localhost:8080 fixed the error :)
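In other words, for a recent Keycloak (Quarkus distribution, no /auth prefix) the keycloak-js adapter config would look something like this (the realm and client names are placeholders):
const keycloak = new Keycloak({
  url: 'http://localhost:8080',   // no trailing /auth on Keycloak 17+
  realm: 'myrealm',
  clientId: 'my-client'
});
keycloak.init({ onLoad: 'login-required' });
On older pre-17 (WildFly) distributions the url would still need the /auth suffix.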
| Keycloak | 65,711,806 | 14 |
I am managing a Keycloak realm with only a single, fully-trusted external IdP added that is intended to be the default authentication mechanism for users.
I do not want to allow users to register, i.e. I want to manually create a local Keycloak user, and that user should then be allowed to link his external IdP account to the pre-existing Keycloak account, using the email address as the common identifier. Users with access to the external IdP but without an existing Keycloak account should not be allowed to connect.
I tried the following First Broker Login settings, but whenever a user tries to login, he gets an error message (code: invalid_user_credentials).
Do you have any idea what my mistake might be?
| According to the doc: https://www.keycloak.org/docs/latest/server_admin/index.html#detect-existing-user-first-login-flow, you must create a new flow like this:
et voilà :)
| Keycloak | 52,382,531 | 14 |
I have a use case where a user should be disabled when he enters the wrong password 5 consecutive times.
I can't find any Keycloak password policy to disable the user when he enters the wrong password 5 consecutive times.
| To enable Consecutive Failed Login Defence you need to enable "Max Login Failures" from Brute Force Detection.
Steps:
Login to Keycloak Admin Console
Select Realms from List
Go To Realm Settings >> Security Defenses >> Brute Force Detection
Enable Brute Force Detection
Set Max Login Failures to 5
Refer screenshot for steps
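If you prefer to script this instead of clicking through the console, the same realm settings can be set with the admin CLI (kcadm.sh); the attribute names come from the realm representation, the realm name below is a placeholder, and on pre-17 distributions the server URL would include /auth:
kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin
kcadm.sh update realms/myrealm -s bruteForceProtected=true -s failureFactor=5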
| Keycloak | 69,254,441 | 13 |
I created users and roles in Keycloak which I want to export.
When I tried to export them using the realm's "Export" button in UI I got a JSON file downloaded.
But I couldn't find any users or roles in the exported file realm.json
How can I export a realm JSON including users and roles from Keycloak?
| Update: The /auth path was removed starting with Keycloak 17 Quarkus distribution. So you might need to remove the /auth from the endpoint calls presented on this answer.
You will not be able to do that using the export functionality. However, you can get that information using the Keycloak Admin REST API; to call that API, you need an access token from a user with the proper permissions. For now, I will be using the admin user from the master realm, but later I will explain how you can use another user:
curl https://$KEYCLOAK_HOST/auth/realms/master/protocol/openid-connect/token \
-d "client_id=admin-cli" \
-d "username=$ADMIN_NAME" \
-d "password=$ADMIN_PASSWORD" \
-d "grant_type=password"
You will get a JSON response with the admin's token. Extract the value of property access_token from that response. Let us save it in the variable $ACCESS_TOKEN for later reference.
To get the list of users from your realm $REALM_NAME:
curl -X GET https://$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME/users \
-H "Content-Type: application/json" \
-H "Authorization: bearer $ACCESS_TOKEN"
To get the realm roles:
curl -X GET https://$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME/roles \
-H "Content-Type: application/json" \
-H "Authorization: bearer $ACCESS_TOKEN"
Now you just need to save the JSON responses from those endpoints into JSON files.
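For example, piping through jq (if available) keeps the saved files readable:
curl -s https://$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME/users \
  -H "Authorization: bearer $ACCESS_TOKEN" | jq . > users.json
curl -s https://$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME/roles \
  -H "Authorization: bearer $ACCESS_TOKEN" | jq . > roles.json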
Assigning the proper user permissions
For those that do not want to get an access token from the master admin user, you can get it from another user but that user needs the permission view-users from the realm-management client. For that you can:
(OLD Keycloak UI)
Go to Users, and then the user in question
Go to the tab Role Mappings
In client roles select realm-management
Select the role view-users and click on Add selected
(New Keycloak UI)
Go to Users, and then the user in question
Go to the tab Role Mappings
Click on Assign role
In Search by role name type view-users
Select the role and assign it
| Keycloak | 65,200,310 | 13 |
My company uses Keycloak for authentication, connected to LDAP and returning a user object filled with corporate data.
Yet in this period we are all working from home, and in my daily work having to authenticate against my corporate server every time I reload the app has proven to be an expensive overhead. Especially with intermittent internet connections.
How can I fake the Keycloak call and make keycloak.protect() work as if it had succeeded?
I can install a Keyclock server in my machine, but I'd rather not do that because it would be another server running in it besides, vagrant VM, Postgres server, be server, and all the other things I leave open.
It would be best to make a mock call and return a fixed hard-coded object.
My project's app-init.ts is this:
import { KeycloakService } from 'keycloak-angular';
import { KeycloakUser } from './shared/models/keycloakUser';
<...>
export function initializer(
keycloak: KeycloakService,
<...>
): () => Promise<any> {
return (): Promise<any> => {
return new Promise(async (res, rej) => {
<...>
await keycloak.init({
config: environment.keycloakConfig,
initOptions: {
onLoad: 'login-required',
// onLoad: 'check-sso',
checkLoginIframe: false
},
bearerExcludedUrls: [],
loadUserProfileAtStartUp: false
}).then((authenticated: boolean) => {
if (!authenticated) return;
keycloak.getKeycloakInstance()
.loadUserInfo()
.success(async (user: KeycloakUser) => {
// ...
// load authenticated user data
// ...
})
}).catch((err: any) => rej(err));
res();
});
};
I just need one fixed logged-in user. But it has to return some fixed customized data with it. Something like this:
{ username: '111111111-11', name: 'Whatever Something de Paula',
email: '[email protected]', department: 'sales', employee_number: 7777777 }
EDIT
I tried to look at the idea of @BojanKogoj, but AFAIU from the Angular Interceptor page and other examples and tutorials, it has to be injected in a component. Keycloak initialization is called on app initialization, not in a component. Also, Keycloak's return is not the direct return of the init() method. It passes through other objects in the .getKeycloakInstance().loadUserInfo().success() sequence.
Or maybe it's just me that didn't fully understand it. If anyone can come with an example of an interceptor that can intercept the call and return the correct result, that could be a possibility.
Edit2
Just to add: what I need is for the whole Keycloak system to work. Please notice that the (user: KeycloakUser) => { function is passed to the success method of Keycloak's internal system. As I said above, routes have a keycloak.protect() that must work. So it's not just a simple case of returning a promise with a user. The whole .getKeycloakInstance().loadUserInfo().success() chain has to be mocked. Or at least that's how I understand it.
I included an answer with the solution I made based on @yurzui's answer
I will wait a couple of days before awarding the bounty, to see if someone can come up with an even better solution (which I doubt).
| You can leverage Angular environment(or even process.env) variable to switch between real and mock implementations.
Here is a simple example of how to do that:
app-init.ts
...
import { environment } from '../environments/environment';
export function initializer(
keycloak: KeycloakService
): () => Promise<any> {
function authenticate() {
return keycloak
.init({
config: {} as any,
initOptions: {onLoad: 'login-required', checkLoginIframe: false},
bearerExcludedUrls: [],
loadUserProfileAtStartUp: false
})
.then(authenticated => {
return authenticated ? keycloak.getKeycloakInstance().loadUserInfo() : Promise.reject();
});
}
// we use 'any' here so you don't have to define keyCloakUser in each environment
const { keyCloakUser } = environment as any;
return () => {
return (keyCloakUser ? Promise.resolve(keyCloakUser) : authenticate()).then(user => {
// ...
// do whatever you want with user
// ...
});
};
}
environment.ts
export const environment = {
production: false,
keyCloakUser: {
username: '111111111-11',
name: 'Whatever Something de Paula',
email: '[email protected]',
}
};
environment.prod.ts
export const environment = {
production: true,
};
Update
If you want to mock KeycloakService on client side then you can tell Angular dependency injection to handle that:
app.module.ts
import { environment } from '../environments/environment';
import { KeycloakService, KeycloakAngularModule } from 'keycloak-angular';
import { MockedKeycloakService } from './mocked-keycloak.service';
@NgModule({
...
imports: [
...
KeycloakAngularModule
],
providers: [
{
provide: KeycloakService,
useClass: environment.production ? KeycloakService : MockedKeycloakService
},
{
provide: APP_INITIALIZER,
useFactory: initializer,
multi: true,
deps: [KeycloakService]
}
],
bootstrap: [AppComponent]
})
export class AppModule { }
mocked-keycloak.service.ts
import { Injectable} from '@angular/core';
import { KeycloakService } from 'keycloak-angular';
@Injectable()
class MockedKeycloakService extends KeycloakService {
init() {
return Promise.resolve(true);
}
getKeycloakInstance() {
return {
loadUserInfo: () => {
let callback;
Promise.resolve().then(() => {
callback({
userName: 'name'
});
});
return {
success: (fn) => callback = fn
};
}
} as any;
}
}
| Keycloak | 61,917,978 | 13 |
Recently my application using Keycloak stopped working with a 400 token request after authenticating.
What I found so far is that within the token request, the Keycloak cookies (AUTH_SESSION_ID, KEYCLOAK_IDENTITY, KEYCLOAK_SESSION) are not sent within the request headers causing the request for a token to fail and the application gets a session error.
By digging more, I found that Chrome blocks now cookies without SameSite attribute set, which is the case for the keycloak cookies and that's why they are never parsed within the token acquisition request after authenticating.
The error I get:-
https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
https://adzerk.com/blog/chrome-samesite/
This is very serious as it blocks applications secured by Keycloak library to be able to communicate with the keycloak server.
Update: with Google Chrome's new SameSite cookie attribute handling, any cookie set by a third-party library without the SameSite attribute properly set will be ignored.
https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
https://www.chromium.org/updates/same-site
| If you are facing this issue while using keycloak-js adapter.
Then the reason for this issue:
By default, the JavaScript adapter creates a hidden iframe that is used to detect if a Single-Sign Out has occurred. This does not require any network traffic, instead the status is retrieved by looking at a special status cookie.
Workaround (Not a fix):
This feature can be disabled by setting checkLoginIframe: false in the options passed to the init method.
eg.,
keycloak.init({
onLoad: 'login-required',
checkLoginIframe: false
})
| Keycloak | 60,622,192 | 13 |
I'm trying to use keycloak in my node project but I'm always getting the following message : Cannot read property 'keycloak-token' of undefined.
Before that, I've launched keycloak, I created a new realm 'Test' and a new user 'id_a' which for I've set a new password. I can access to the account with keycloak but I can't with my code. Can you please help me ?
var session = require('express-session');
var Keycloak = require('keycloak-connect');
const express = require('express');
var memoryStore = new session.MemoryStore();
let kcConfig = {
clientId: "id_a",
bearerOnly: true,
serverUrl: 'http://localhost:8080/auth',
realm: 'Test'
};
let keycloak = new Keycloak({ store: memoryStore }, kcConfig);
var app = express();
app.use( keycloak.middleware() );
app.get( '/complain', keycloak.protect(), function(req, res) {
res.send('hello world');
});
app.listen(3001, function(){
console.log('Server started on port 3001...')
});
the output is : 'TypeError: Cannot read property 'keycloak-token' of undefined' and I expect hello world
| I had the same issue, but now it's working. You need to instantiate a session before running the keycloak middleware. The missing piece of code is as follows:
// Create a session-store to be used by both the express-session
// middleware and the keycloak middleware.
var memoryStore = new session.MemoryStore();
app.use(session({
secret: 'some secret',
resave: false,
saveUninitialized: true,
store: memoryStore
}));
// Provide the session store to the Keycloak so that sessions
// can be invalidated from the Keycloak console callback.
//
// Additional configuration is read from keycloak.json file
// installed from the Keycloak web console.
var keycloak = new Keycloak({
store: memoryStore
});
app.use(keycloak.middleware({
logout: '/logout',
admin: '/'
}));
Checkout the keycloak quick start for nodeJS for full code reference:
https://github.com/keycloak/keycloak-quickstarts/blob/latest/service-nodejs/app.js
| Keycloak | 56,286,958 | 13 |
Which is the best option for SSO implementation Keycloack Vs CAS Vs Okta? I'm specifically looking for the disadvantages of each service to identify the best suitability for my system.
| Both Keycloak and Okta should provide what you're looking for. I'm not sure about CAS as I haven't used it in 10 years. Since both Keycloak and Okta use OAuth 2.0/OIDC, you might even be able to use Keycloak in development, and Okta in production.
I've implemented OAuth 2.0 / OIDC support in JHipster. It uses Keycloak (in a Docker container) by default, and provides instructions for switching to Okta. Thanks to the power of Spring Security and Spring Boot, you only need to override some properties to switch between the two!
| Keycloak | 53,650,790 | 13 |
I would like to let my users have a choice which authentication method to use. For example, they could be presented with a menu to pick an option (username/pass, username/pass+OTP, etc).
Then, Keycloak should, based on their choice, assign specific scope to the token.
Is this possible to do with Keycloak (probably by somehow utilizing auth methods chaining) and how? I couldn’t find this in the documentation but it seems as a reasonable use-case to me.
| Here is what I did and it works.
My goal was to give the client the ability to choose the authentication flow: OTP via email or via SMS.
I created a new authentication flow, see screenshot :
select 'Alternative' on both flows.
On the login form a new link will appear: 'Try another way'.
Now the client can choose between the flows. See screenshot:
| Keycloak | 51,980,195 | 13 |
I'm using Keycloak version 1.6.1, newly installed as a standalone application.
Keycloak should act as an IdP (Identity provider) for an SP (Service Provider) called Tableau.
I have read from this page: http://blog.keycloak.org/2015/03/picketlink-and-keycloak-projects-are.html
... Keycloak from being Identity Broker grew into being fully fledged
Identity Provider
While it was an Identity Broker, it is now also an Identity Provider.
My question is then:
I have exported the SP XML Metadata from Tableau, which I imported into Keycloak, but when it comes to the export of the IdP XML Metadata from Keycloak (which should be imported into Tableau) I cannot find the button/command/guide anything about how to export this XML file.
I have worked with other IdPs and they all support this export of IdP Metadata which you can see an example of here: https://docs.oracle.com/cd/E19636-01/819-7664/g2enua/index.html
If I search for Keycloak and the keyword IDPSSODescriptor I find this:
grepcode.com/file/repo1.maven.org/maven2/org.keycloak/keycloak-saml-protocol/1.1.0.Beta2/idp-metadata-template.xml
Which is exactly the 'template' I need, with the correct links on all ${idp.sso.HTTP-POST} etc. places.
Should I create the file manually - if so how do I find the correct POST, REDIRECT etc. URLs?
Or is there some way of exporting this file I haven't seen?
| Sometimes it's a good thing to specify in writing what you need - which I did here on Stack Overflow.
I found the URL to where on Keycloak one can export the IdP XML
https://keycloak-url/realms/{REALM-NAME}/protocol/saml/descriptor
That gave me the IDPSSODescriptor.
I'll leave this thread here, so people can benefit from my mistakes.
| Keycloak | 33,542,812 | 13 |
I'm currently trying to develop a Spring Boot Rest Api which is secured with keycloak.
I get an error when I try to call a api which the user has to be identify.
The error message is following:
2020-04-10 16:09:00.324 WARN 44525 --- [nio-8080-exec-7]
o.keycloak.adapters.KeycloakDeployment : Failed to load URLs from
https://{{keycloakserver}}.de/auth/realms/{{realm}}/.well-known/openid-configuration
java.lang.RuntimeException: java.lang.RuntimeException: Stub!
at org.keycloak.adapters.KeycloakDeployment.getClient(KeycloakDeployment.java:327) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.KeycloakDeployment.getOidcConfiguration(KeycloakDeployment.java:219) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.KeycloakDeployment.resolveUrls(KeycloakDeployment.java:178) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.KeycloakDeployment.getRealmInfoUrl(KeycloakDeployment.java:232) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.rotation.AdapterTokenVerifier.createVerifier(AdapterTokenVerifier.java:107) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.rotation.AdapterTokenVerifier.verifyToken(AdapterTokenVerifier.java:47) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.BearerTokenRequestAuthenticator.authenticateToken(BearerTokenRequestAuthenticator.java:103) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.BearerTokenRequestAuthenticator.authenticate(BearerTokenRequestAuthenticator.java:88) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.RequestAuthenticator.authenticate(RequestAuthenticator.java:67) [keycloak-adapter-core-9.0.2.jar:9.0.2]
at org.keycloak.adapters.springsecurity.filter.KeycloakAuthenticationProcessingFilter.attemptAuthentication(KeycloakAuthenticationProcessingFilter.java:154) [keycloak-spring-security-adapter-9.0.2.jar:9.0.2]
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:212) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.keycloak.adapters.springsecurity.filter.KeycloakPreAuthActionsFilter.doFilter(KeycloakPreAuthActionsFilter.java:96) [keycloak-spring-security-adapter-9.0.2.jar:9.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:97) [spring-web-5.1.10.RELEASE.jar:5.1.10.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) [spring-web-5.1.10.RELEASE.jar:5.1.10.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) [spring-web-5.1.10.RELEASE.jar:5.1.10.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) [spring-security-web-5.1.6.RELEASE.jar:5.1.6.RELEASE]
I don't know what Failed to load URLs from means. I can access this side when I click on the link and the configuration file is shown.
Setup
Keycloak:
The Keycloak server is on the web, so not localhost.
I have a realm (test-realm) created
I have a client (test-client) created
I have a user (test-user) created
I have a role in the client (ADMIN) created
I have assigned the role (ADMIN) to the user (test-user)
The client protocol for the client is openid-connect and the access type is confidental.
Spring Boot:
The Spring Boot rest application is running on localhost:8080.
I added the following Keycloak configs in application.properties.
keycloak.realm={{test-realm}}
keycloak.auth-server-url = https://{{keycloakserver}}.de/auth
keycloak.resource = {{test-client}}
keycloak.ssl-required=external
keycloak.bearer-only=true
keycloak.principal-attribute=preferred_username
keycloak.use-resource-role-mappings = true
To make sure the test-user can only access one API call I use the following config.
@Override
protected void configure(HttpSecurity http) throws Exception {
super.configure(http);
http.authorizeRequests()
.antMatchers("/getTest")
.hasRole("ADMIN")
.anyRequest()
.authenticated();
}
Tests
When I call http://localhost:8080/getTest with Postman I get a correct 401 Unauthorized.
Then I called the same URL with Authorization and the access token of the logged in test-user.
With this second call I get the error message above.
Does anybody know anything about this?
If I missed a config value that you need to know, just ask.
Thanks for your help.
Edit:
SecurityConfig.java
import org.keycloak.adapters.KeycloakConfigResolver;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.KeycloakSecurityComponents;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
/**
* Created by johannes on 07.04.20 for test App.
*/
@EnableWebSecurity
@ComponentScan(basePackageClasses = KeycloakSecurityComponents.class)
@Configuration
@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
auth.authenticationProvider(keycloakAuthenticationProvider);
}
@Bean
@Override
protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
return new RegisterSessionAuthenticationStrategy(
new SessionRegistryImpl());
}
@Bean
@Primary
public KeycloakConfigResolver keycloakConfigResolver(KeycloakSpringBootProperties properties) {
return new CustomKeycloakSpringBootConfigResolver(properties);
}
@Override
protected void configure(HttpSecurity http) throws Exception {
super.configure(http);
http.authorizeRequests()
.antMatchers("/getTest")
.hasRole("ADMIN")
.anyRequest()
.authenticated();
}
}
CustomKeycloakSpringBootConfigResolver:
import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.KeycloakDeploymentBuilder;
import org.keycloak.adapters.spi.HttpFacade;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.springframework.context.annotation.Configuration;
/**
* Created by johannes on 10.04.20 for test App.*/
@Configuration
public class CustomKeycloakSpringBootConfigResolver extends KeycloakSpringBootConfigResolver {
private final KeycloakDeployment keycloakDeployment;
public CustomKeycloakSpringBootConfigResolver(KeycloakSpringBootProperties properties) {
keycloakDeployment = KeycloakDeploymentBuilder.build(properties);
}
@Override
public KeycloakDeployment resolve(HttpFacade.Request facade) {
return keycloakDeployment;
}
}
TestController.java (this is just the test getter):
@GetMapping("/getTest")
public @ResponseBody ResponseEntity getTest() {
return ResponseEntity.status(ResponseValues.ITEMDELETEFAILED.getResponseCode()).body(ResponseValues.ITEMDELETEFAILED.getResponseMessage());
}
Request was made with postman, this is the code:
curl --location --request GET 'http://localhost:8080/getTest' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUI...' \
--header 'Cookie: JSESSIONID=41E8E82178FA181817...'
| I had the same problem, and I tried hard to find the answer on Google, Stack Overflow, etc.
Finally I found the clue: to make it work, just remove the path from keycloak.auth-server-url, i.e. use http://192.168.0.119:8080 instead of http://192.168.0.119:8080/auth or anything else.
... : Loaded URLs from http://192.168.0.119:8080/realms/spmia-realm/.well-known/openid-configuration
keycloak.realm=spmia-realm
keycloak.auth-server-url=http://192.168.0.119:8080
keycloak.ssl-required=external
keycloak.resource=ostock
keycloak.credentials.secret=FnUBprsgArHa7PkmR9HPWeXY0nJ22Ks1
keycloak.use-resource-role-mappings=true
keycloak.bearer-only=true
...
keycloak:
image: quay.io/keycloak/keycloak:18.0
restart: on-failure
environment:
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
KC_DB: postgres
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: keycloak
KC_DB_URL: jdbc:postgresql://database:5432/keycloak
command:
- "start-dev"
depends_on:
database:
condition: service_healthy
ports:
- "8080:8080"
...
...
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-spring-boot-starter</artifactId>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.keycloak.bom</groupId>
<artifactId>keycloak-adapter-bom</artifactId>
<version>18.0.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
...
| Keycloak | 61,142,611 | 12 |
We run Keycloak docker image in AWS ECS and we need a way to export a realm and all users for automation purposes using ansible. We can run the following command with ansible to run the export
docker exec -i 702f2fd7858d \
/bin/bash -c "export JDBC_PARAMS=?currentSchema=keycloak_service &&
/opt/jboss/keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=100 \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=API \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/tmp/my_realm.json"
but the docker container continues to run after the export. We cannot grep the logs looking for the export process finishing, as we use an AWS log driver for Docker that prevents access to any logs. It's a pity that the Keycloak REST API does not support including users in the existing partial-export endpoint, or at least provide an endpoint that triggers the export of a realm including users into a mounted file system.
| I was facing the same problem a few days ago and implemented a working solution:
#!/usr/bin/env bash
#
# backup-keycloak.sh
# Copy the export bash script to the (already running) keycloak container
# to perform an export
docker cp docker-exec-cmd.sh keycloak:/tmp/docker-exec-cmd.sh
# Execute the script inside of the container
docker exec -it keycloak /tmp/docker-exec-cmd.sh
# Grab the finished export from the container
docker cp keycloak:/tmp/realms-export-single-file.json .
The Bash script to perform the export inside of the container is the following:
#!/usr/bin/env bash
#
# docker-exec-cmd.sh
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
# If something goes wrong, this script does not run forever, but times out
TIMEOUT_SECONDS=300
# Logfile for the keycloak export instance
LOGFILE=/tmp/standalone.sh.log
# destionation export file
JSON_EXPORT_FILE=/tmp/realms-export-single-file.json
# Remove files from old backups inside the container
# You could also move the files or change the name with timestamp prefix
rm -f ${LOGFILE} ${JSON_EXPORT_FILE}
# Start a new keycloak instance with exporting options enabled.
# Use the port offset argument to prevent port conflicts
# with the "real" keycloak instance.
timeout ${TIMEOUT_SECONDS}s \
/opt/jboss/keycloak/bin/standalone.sh \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.file=${JSON_EXPORT_FILE} \
-Djboss.socket.binding.port-offset=99 \
> ${LOGFILE} &
# Grab the keycloak export instance process id
PID="${!}"
# Wait for the export to finish.
# It waits until it sees the log line that indicates
# a successfully finished export.
# If it takes too long (>TIMEOUT_SECONDS), it is stopped.
timeout ${TIMEOUT_SECONDS}s \
grep -m 1 "Export finished successfully" <(tail -f ${LOGFILE})
# Stop the keycloak export instance
kill ${PID}
| Keycloak | 60,766,292 | 12 |
I am trying to create a user via the Keycloak API, and I would like to assign a realm-level role to them when they are first added. However, it doesn't seem to work like the documentation says it should.
I know that I could simply make a second add-role-to-user API request after the initial create-user one, but:
The documentation indicates that I shouldn't need to do this.
The second API request could fail, leaving the user in an "incomplete" state.
It would make the code I'm writing more complex than it needs to be.
To test this in irb, using the keycloak Ruby gem, I first request an access token from Keycloak:
require 'keycloak'
json = Keycloak::Client.get_token_by_client_credentials
access_token = JSON.parse(json)['access_token']
All of the following create a user within Keycloak, but without the "owner" role:
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: ['owner'] }, access_token)
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: ['1fff5f5f-7357-4f73-b45d-65ccd01f3bc8'] }, access_token)
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: ['{"id":"1fff5f5f-7357-4f73-b45d-65ccd01f3bc8","name":"owner","description":"Indicates that a user is the owner of an organisation.","composite":false,"clientRole":false,"containerId":"MyRealmName"}'] }, access_token)
Attempting to use a role-hash instead of a string causes an error:
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: [{"id"=>"1fff5f5f-7357-4f73-b45d-65ccd01f3bc8", "name"=>"owner", "description"=>"Indicates that a user is the owner of an organisation.", "composite"=>false, "clientRole"=>false, "containerId"=>"MyRealmName"}] }, access_token)
Traceback (most recent call last):
16: from /home/thomas/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/irb-1.0.0/exe/irb:11:in `<top (required)>'
15: from (irb):8
14: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/keycloak-3.0.0/lib/keycloak.rb:541:in `generic_post'
13: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/keycloak-3.0.0/lib/keycloak.rb:943:in `generic_request'
12: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/keycloak-3.0.0/lib/keycloak.rb:915:in `block in generic_request'
11: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient.rb:71:in `post'
10: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
9: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
8: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
7: from /home/thomas/.rvm/rubies/ruby-2.6.3/lib/ruby/2.6.0/net/http.rb:920:in `start'
6: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
5: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/request.rb:807:in `process_result'
4: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/keycloak-3.0.0/lib/keycloak.rb:916:in `block (2 levels) in generic_request'
3: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/keycloak-3.0.0/lib/keycloak.rb:958:in `rescue_response'
2: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
1: from /home/thomas/.rvm/gems/ruby-2.6.3/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
RestClient::InternalServerError (500 Internal Server Error)
Keycloak prints the following, indicating that - as expected - the roles should be an array of strings, not hashes:
08:53:27,889 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-22) Uncaught server error: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.String` out of START_OBJECT token
at [Source: (io.undertow.servlet.spec.ServletInputStreamImpl); line: 1, column: 37] (through reference chain: org.keycloak.representations.idm.UserRepresentation["realmRoles"]->java.util.ArrayList[0])
The same thing happens if I pass a single string instead of an array, like:
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: 'owner' }, access_token)
Am I doing something wrong, or is this simply a bug in the Keycloak API?
Reference
https://www.keycloak.org/docs-api/9.0/rest-api/index.html#_createuser
https://www.keycloak.org/docs-api/9.0/rest-api/index.html#_userrepresentation
Similar questions
Keycloak : unable to map user roles when creating user for api
Keycloak: roles not assigned when user is created via CLI
| You did nothing wrong. It is a bug in the Keycloak API.
According to the documentation, this request should work:
Keycloak::Admin.generic_post('users', nil, { username: 'someone', realmRoles: ['owner'] }, access_token)
Unfortunately the API documentation is wrong because the 'realmRoles' attribute doesn't work when trying to create/update a user/group.
You can find more information about the behavior on the official Keycloak bug tracker:
https://issues.jboss.org/browse/KEYCLOAK-3410
https://issues.jboss.org/browse/KEYCLOAK-10876
https://issues.jboss.org/browse/KEYCLOAK-5038
...
For now the only solution is to make multiple requests on the API, using the RoleMappers to map a role to a user.
Documentation about those operations: https://www.keycloak.org/docs-api/18.0/rest-api/index.html#_role_mapper_resource
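For illustration, a hedged sketch of what that follow-up request can look like with curl; the realm name, user id, role name and token are placeholders, and it uses the role-by-name lookup plus the realm-level role-mappings resource of the admin REST API:
# Sketch: assign an existing realm role to a freshly created user (all values are placeholders).
KEYCLOAK_URL="http://localhost:8080/auth"
REALM="my-realm"
USER_ID="..."          # id returned in the Location header of the create-user call
TOKEN="..."            # admin access token

# Fetch the full role representation by name, then POST it to the user's realm role mappings.
ROLE_JSON=$(curl -s -H "Authorization: Bearer ${TOKEN}" \
  "${KEYCLOAK_URL}/admin/realms/${REALM}/roles/owner")

curl -s -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "[${ROLE_JSON}]" \
  "${KEYCLOAK_URL}/admin/realms/${REALM}/users/${USER_ID}/role-mappings/realm"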
| Keycloak | 57,390,389 | 12 |
Well, as the title suggests, this is more of an issue record. I was trying to follow the instructions on this README file of Keycloak docker server images, but encountered a few blockers.
After pulling the image, the command below to start a standalone instance failed.
docker run jboss/keycloak
The error stack trace:
-b 0.0.0.0
=========================================================================
Using PostgreSQL database
=========================================================================
...
04:45:06,084 INFO [io.smallrye.metrics] (MSC service thread 1-5) Converted [2] config entries and added [4] replacements
04:45:06,096 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 33) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "datasources"),
("data-source" => "KeycloakDS")
]) - failure description: "WFLYCTL0113: '' is an invalid value for parameter user-name. Values must have a minimum length of 1 characters"
...
Caused by: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:382)
...
Caused by: javax.naming.NameNotFoundException: datasources/KeycloakDS -- service jboss.naming.context.java.jboss.datasources.KeycloakDS
at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:106)
...
I was wondering how it uses a PostgreSQL database, and assumed it might spin up its own instance. But the error looks like it has a problem connecting to the database.
Changing to the embedded H2 DB made it work.
docker run -e DB_VENDOR="h2" --name docker-keycloak-h2 jboss/keycloak
The docker-entrypoint.sh file shows that it uses below logic to determine what DB to use.
if (getent hosts postgres &>/dev/null); then
export DB_VENDOR="postgres"
...
And further down the flow, this change-database.cli file indicates that it's actually expecting a running PostgreSQL instance to use.
connection-url=jdbc:postgresql://${env.DB_ADDR:postgres}:${env.DB_PORT:5432}/${env.DB_DATABASE:keycloak}${env.JDBC_PARAMS:}
So I began wondering how PostgreSQL was chosen as a default initially. Executing below commands in a running Keycloak docker container revealed some interesting things.
[root@71961b81189c bin]# getent hosts postgres
69.172.201.153 postgres.mbox.com
[root@71961b81189c bin]# echo $?
0
Not sure what this postgres.mbox.com is but apparently it's not an expected PostgreSQL server to be resolved by getent. Not sure whether this is a recent linux issue either. The hosts entry in the Name Service Switch Configuration file /etc/nsswitch.conf looks like below inside the container.
hosts: files dns myhostname
It is the dns data source that resolved postgres to postgres.mbox.com.
This is why the DB vendor determination logic failed, which eventually caused the container to fail to start. The instructions in this README file do not work as of the day this post is published.
Below are the working commands to start a Keycloak server in docker properly with PostgreSQL as the database.
docker network create keycloak-network
docker run -d --name postgres --net keycloak-network -e POSTGRES_DB=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password postgres
docker run --name docker-keycloak-postgres --net keycloak-network -e DB_USER=keycloak -e DB_PASSWORD=password jboss/keycloak
| I ran into the same issue. As it turned out, the key to the solution was the missing parameter "DB_USER=keycloak".
The Application tried to authenticate against the database using the username ''. This was indicated by the first error message.
WFLYCTL0113: '' is an invalid value for parameter user-name
Possibly the 4.x and 5.0.0 versions set the default user name to "keycloak" which was no longer the case in 6.0.0.
After adding the parameter DB_USER=keycloak to the list of environment variables, keycloak started up without any problems.
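For reference, a hedged example of a full docker run invocation with the database parameters spelled out; the network name, hostnames and credentials are assumptions and should match your Postgres container:
# Sketch: start Keycloak against an existing Postgres container (values are placeholders).
docker run --name keycloak --net keycloak-network \
  -e DB_VENDOR=postgres \
  -e DB_ADDR=postgres \
  -e DB_PORT=5432 \
  -e DB_DATABASE=keycloak \
  -e DB_USER=keycloak \
  -e DB_PASSWORD=password \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak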
| Keycloak | 56,180,225 | 12 |
How to add custom attributes in Keycloak via REST API?
| I guess you mean adding user attributes to the admin console by extending the theme - https://www.keycloak.org/docs/3.1/server_development/topics/custom-attributes.html. Since that configures the admin console itself, it does involve some configuration of files loaded by the Keycloak app for a custom theme, so I don't think the REST API alone will be enough.
As @Xtreme Biker points out, anything you can do via clicks in the admin console you can do via the REST API as the console uses that API. You can perform the relevant actions in the admin console and check the network tab in the browser console to see what the REST calls are (note you may need to tell your browser not to clear the log between page loads). So if you can do it just with clicks in the browser then the REST API is enough. If you also need to modify configuration files then you'll need to do that part outside of the REST API.
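For the attribute values themselves (as opposed to displaying them in the console), the kind of call the network tab will show is an update of the user representation. A hedged curl sketch, where the realm, user id, attribute name and token are placeholders:
# Sketch: set a custom attribute on an existing user via the admin REST API (values are placeholders).
curl -s -X PUT \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"myCustomAttribute": ["some-value"]}}' \
  "http://localhost:8080/auth/admin/realms/${REALM}/users/${USER_ID}"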
| Keycloak | 53,883,487 | 12 |
Currently I try to create a user from curl command via Keycloak's Admin REST API.
I can authenticate myself as an admin and get a good response, but when I want to create a user, I get an error: "404 - Not Found".
Here are my curl commands:
#!/bin/bash
echo "* Request for authorization"
RESULT=`curl --data "username=pierre&password=pierre&grant_type=password&client_id=admin-cli" http://localhost:8080/auth/realms/master/protocol/openid-connect/token`
echo "\n"
echo "* Recovery of the token"
TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`
echo "\n"
echo "* Display token"
echo $TOKEN
echo "\n"
echo " * user creation\n"
curl http://localhost:8080/apiv2/users -H "Authorization: bearer $TOKEN" --data '{"firstName":"xyz","lastName":"xyz", "email":"[email protected]", "enabled":"true"}'
I used the official API documentation, located at this address: https://www.keycloak.org/docs-api/4.4/rest-api/index.html
I have this error (screenshot omitted), and my realm is good (screenshot omitted).
How can I fix it?
Thanks in advance.
| try this, I added the Content-Type header and modified the URL:
#!/bin/bash
echo "* Request for authorization"
RESULT=`curl --data "username=admin&password=Pa55w0rd&grant_type=password&client_id=admin-cli" http://localhost:8080/auth/realms/master/protocol/openid-connect/token`
echo "\n"
echo "* Recovery of the token"
TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`
echo "\n"
echo "* Display token"
echo $TOKEN
echo "\n"
echo " * user creation\n"
curl -v http://localhost:8080/auth/admin/realms/apiv2/users -H "Content-Type: application/json" -H "Authorization: bearer $TOKEN" --data '{"firstName":"xyz","lastName":"xyz", "email":"[email protected]", "enabled":"true"}'
| Keycloak | 52,440,546 | 12 |
I'm trying to deploy a very simple REST service secured with keycloak and am getting the following error:
Caused by:
org.keycloak.authorization.client.util.HttpResponse.Exception:
Unexpected response from server: 400 / Bad Request / Response from
server: ("error":"invalid_client","error_description":"Bearer-only not
allowed")
What does this error mean? How can I fix it?
| Since you have not shared your keycloak config, I am guessing the above error is because you created a bearer only client in keycloak.
Keycloak doesn't allow "bearer only" clients to obtain tokens from the server. Try to change your client to "confidential" on the server and set bearer-only on your adapter configuration (keycloak.json).
You can refer to this thread for more info: http://keycloak-user.88327.x6.nabble.com/keycloak-user-can-we-use-authorization-with-bearer-only-td2123.html
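For reference, a minimal bearer-only adapter configuration for the REST service could look roughly like the following (written here as a shell heredoc; the realm, server URL, resource name and file location are assumptions and depend on your adapter setup):
# Sketch: minimal bearer-only keycloak.json for the service (values are placeholders).
cat > keycloak.json <<'EOF'
{
  "realm": "my-realm",
  "auth-server-url": "http://localhost:8080/auth",
  "ssl-required": "external",
  "resource": "my-rest-service",
  "bearer-only": true
}
EOF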
| Keycloak | 52,414,165 | 12 |
I have not been able to divine the way I might add extra claims from my application database. Given my limited understanding, I see two ways:
After successful authentication have keycloak pull extra claims from the application database somehow. This app database is postgres, for example.
Have the application update the jwt with extra claims using a shared key.
I would like some feedback on both paths. I feel that the first option may be safer. However, I am not sure where to begin that implementation journey.
| Answering my own question here. I cross-posted this question to the Keycloak users mailing list here (http://lists.jboss.org/pipermail/keycloak-user/2017-April/010315.html) and got an answer that seems reasonable.
This is pasted from the answer I received there.
I use the first option. I do it with a protocol mapper, which is a convenient place to do it because there the token is already built by Keycloak but hasn't been signed yet. This is the procedure:
User logs in
My custom protocol mapper gets called, where I overwrite the transformAccessToken method
Here I log in to Keycloak as a service from within the protocol mapper. Don't forget to use a different client ID than the one you're building the protocol mapper for, otherwise you'll enter an endless recursion.
I get the access token into the protocol mapper and I call the rest endpoint of my application to grab the extra claims, which is secured
Get the info returned by the endpoint and add it as extra claims
| Keycloak | 43,376,233 | 12 |
I'd like to authenticate a legacy java (6) application against a node-js one currently secured using keycloak OIDC bearer only (both apps belonging to same realm).
I've been told to use the keycloak-authz-client library, resolving a Keycloak OIDC JSON like the one below:
{
"realm": "xxx",
"realm-public-key": "fnzejhbfbhafbazhfzafazbfgeuizrgyez...",
"bearer-only": true,
"auth-server-url": "http://xxx:80/auth",
"ssl-required": "external",
"resource": "resourceName"
}
However, the Keycloak Java client requires Java 8 and my current runtime is a JRE 6. Recompiling the lib including transitive dependencies does not look like a good idea, so I end up using the Keycloak OAuth2 REST endpoints.
As far as I understand OAuth2, I would go with the client_credentials flow, exchanging a client secret for an access_token once at application initialization and refreshing/renewing it when expired.
Coming to keycloak documentation :
Access Type
This defines the type of the OIDC client.
confidential
Confidential access type is for server-side clients that need to perform a browser login and require a client secret when they turn an
access code into an access token, (see Access Token Request in the
OAuth 2.0 spec for more details). This type should be used for
server-side applications. public
Public access type is for client-side clients that need to perform a browser login. With a client-side application there is no way to
keep a secret safe. Instead it is very important to restrict access by
configuring correct redirect URIs for the client. bearer-only
Bearer-only access type means that the application only allows bearer token requests. If this is turned on, this application cannot
participate in browser logins.
It seems that the confidential access type is the one suitable for my needs (should be used for server-side applications), however I don't get how it is related to browser login (which in my mind is related to authenticating using third-party identity providers such as Facebook and co).
The confidential client settings also require a valid redirect URI the browser will redirect to after successful login or logout. As the client I want to authenticate is an application, I don't see the point.
Generally speaking I don't get the whole access type thing. Is it related only to the client or to the resource owner also? (Is my node.js application stuck with bearer-only as existing clients use this access type? Will it accept bearer authentication using the access_token obtained with the client_credentials flow? I suppose it will.)
Can someone clarify keycloak OIDC access type and where I went wrong if I did ?
What is the proper way to delegate access for my legacy application to some resources (not limited to a specific user ones) of another application using keycloak ?
| You are mixing up the OAuth 2.0 concepts of Client Types and Grants. Those are different, albeit interconnected, concepts. The former refers to the application architecture, whereas the latter to the appropriate grant to handle a particular Authorization/Authentication use-case.
One chooses and combines those options; first one chooses the client type (e.g., public, confidential), and then the grant (e.g., Authorization Code flow). Both client types share some of the same grants, with the caveat that the confidential client will also require a client secret to be provided during the execution of the Authentication/Authorization grant.
From the Oauth 2.0 specification:
OAuth defines two client types, based on their ability to
authenticate securely with the authorization server (i.e., ability to
maintain the confidentiality of their client credentials):
confidential
Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
restricted access to the client credentials), or capable of secure
client authentication using other means.
public
Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the
resource owner, such as an installed native application or a web
browser-based application), and incapable of secure client
authentication via any other means.
As one can read the client type refers to the type of the application architecture. Why do you need those types? The answer is to add an extra layer of security.
Let us look at the example of the Authorization Code Grant. Typically the flow is as follows:
The user goes to an application;
The user gets redirected to the Keycloak login page;
The user authenticates itself;
Keycloak checks the username and password and, if correct, sends back to the application an authorization code;
The application receives that code and calls Keycloak in order to exchange the code for tokens.
One of the "security issues" with that flow is that the exchange of code for tokens happens on the frontend channel, which, due to the nature of browsers, is susceptible to a hacker intercepting that code and exchanging it for the tokens before the real application does. There are ways of mitigating this but they are out of the scope of this question.
Now, if your application is a single-page application, then it cannot safely store a secret, therefore we have to use a public client type. However, if the application has a backend where the client secret can be safely stored, then we could use a confidential client.
So for the same flow (i.e., Authorization Code Grant), one can make it more secure by using a confidential client. This is because the application will now have to send a client secret to Keycloak as well, and this happens on the backend channel, which is more secure than the frontend channel.
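As an illustration, that backend-channel exchange is just a form POST to the realm's token endpoint. A hedged curl sketch, where the realm, client id/secret, code and redirect URI are placeholders:
# Sketch: exchange an authorization code for tokens as a confidential client (values are placeholders).
curl -s -X POST \
  -d "grant_type=authorization_code" \
  -d "code=${AUTH_CODE}" \
  -d "redirect_uri=https://my-app.example.com/callback" \
  -d "client_id=my-confidential-client" \
  -d "client_secret=${CLIENT_SECRET}" \
  "http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token"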
What is the proper way to delegate access for my legacy application to
some resources (not limited to a specific user ones) of another
application using keycloak ?
The proper grant is to use the so called Client Credential Grant:
4.4. Client Credentials Grant
The client can request an access token using only its client
credentials (or other supported means of authentication) when the
client is requesting access to the protected resources under its
control, or those of another resource owner that have been previously
arranged with the authorization server (the method of which is beyond
the scope of this specification).
Since this grant uses the client credentials (e.g., client secret) you can only use it if you have selected confidential as the client type.
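In Keycloak terms, once the legacy application is registered as a confidential client with service accounts enabled, obtaining a token boils down to a single call to the realm's token endpoint. A hedged curl sketch, where the realm, client id and secret are placeholders:
# Sketch: obtain an access token with the client credentials grant (values are placeholders).
curl -s -X POST \
  -d "grant_type=client_credentials" \
  -d "client_id=legacy-app" \
  -d "client_secret=${CLIENT_SECRET}" \
  "http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token"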
| Keycloak | 41,695,223 | 12 |
Context: I'm creating a cloud platform to support multiple applications with SSO. I'm using Keycloak for authentication and Netflix Zuul for authorization (API Gateway) thru Keycloak Spring Security Adapter.
Each microservice expects an Authorization header, which contains a valid JWT, from which it will take the username (sub) to process the request. Each microservice-to-microservice call should go thru Netflix Zuul first, passing the Authorization header to maintain a stateless validation. That strategy allows every microservice to know which user (sub) is invoking it indirectly.
Problem/Question 1: What happens if a microservice is invoked from a queue message? One idea that I had is to store in the queue the information related to the message + userInfo, and create a dedicated microservice to process that kind of message; with that approach this special microservice would read the userInfo from the queue and process the message.
UPDATE 1: Per an email reply from another forum, storing the JWT in a queue isn't a good idea, since it could be mined easily.
Problem/Question 2: But what happens if the previous special microservice wants to call another normal microservice which expects to receive a JWT in a header? Should this special microservice create a JWT by itself to impersonate the user and be able to call the regular microservices?
Another solution I thought of was to store the original JWT in the queue, but what happens if the queue calls the special microservice later, after the JWT is no longer valid (it expired) and the called microservice rejects the request?
Possible solutions: (Updated per João Angelo discussion, see below)
I should authenticate the requests from my users (Authorization Code flow) and my services (Client Credentials grant); both requests should contain user information in the payload. When the request comes from a user, I need to validate that the payload user info matches the JWT claims. When the request comes from a service, I just need to trust that service (as long as it is under my control).
I will appreciate very much your help. Thanks.
| Disclaimer: I never used Keycloak, but the tag wiki says it's compliant with OAuth2 so I'll trust that information.
At a really high-level view, you seem to have two requirements:
authenticate actions triggered by an end user while he's using your system.
authenticate actions triggered by your system at an unknown time and where there is no requirement for an end-user to be online.
You already met the first one, by relying on a token-based authentication system and I would do exactly the same for the second point, the only difference would be that the tokens would be issued to your system using the OAuth2 client credentials grant instead of the other grants that are targeted at scenarios where there is an end-user.
[Diagram of the Client Credentials Grant flow omitted] (source: Client Credentials Grant)
In your case, Keycloak would play the role of Auth0 and your client applications are microservices which can maintain client secrets used to authenticate themselves in the authorization server and obtain access tokens.
One thing to have in mind is that if your system relies on the sub claim for much more than authentication and authorization then you may need to make some adjustments. For example, I've seen systems where performing action A required to know that it was targeted at user X and Y, but the payload for the action only received user Y and assumed user X was the current authenticated principal. This works fine when everything is synchronous, but by merely switching the payload to specify both users would mean that the action could be done asynchronously by a system authenticated principal.
| Keycloak | 40,458,770 | 12 |
I am creating an email theme for keycloak.
So, when a user forgets their password and requests for a link to reset it; an email is sent to the user.
Now, I am customizing the email that he/she gets. I want to add the user's name.
Can I do that?
I do have access to variables including:
link to reset password
link expiration time
realm name
How do I get the user's name so that the email template says
Hello John,
blah blah blah
| You can add user.username in your .ftl file
Open the (email/text) *.ftl file and add user.username as one of the parameters, like
${msg("passwordResetBody",link, linkExpiration, realmName,user.username)}
and then update the actual message body in (email/messages/messages_en.properties) with the parameter number, like {2} or {3} (in this case it is {3}).
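For example, a hedged sketch of what the updated message entry could look like, written here as a shell heredoc; the theme path and the surrounding wording are assumptions, only the placeholder indexes ({0} link, {1} expiration, {2} realm, {3} username) follow the parameter order above:
# Sketch: add a greeting that uses the extra parameter (path and wording are assumptions).
cat >> themes/mytheme/email/messages/messages_en.properties <<'EOF'
passwordResetBody=Hello {3},\n\nSomeone has requested to reset the password for your {2} account. Use the link below to reset it.\n\n{0}\n\nThis link will expire within {1}.
EOF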
| Keycloak | 37,199,200 | 12 |
I have a nextjs application with next-auth to manage the authentication.
Here my configuration
....
export default NextAuth({
// Configure one or more authentication providers
providers: [
KeycloakProvider({
id: 'my-keycloack-2',
name: 'my-keycloack-2',
clientId: process.env.NEXTAUTH_CLIENT_ID,
clientSecret: process.env.NEXTAUTH_CLIENT_SECRET,
issuer: process.env.NEXTAUTH_CLIENT_ISSUER,
profile: (profile) => ({
...profile,
id: profile.sub
})
})
],
....
Authentication works as expected, but when I try to log out using the next-auth signOut function it doesn't work. The next-auth session is destroyed but Keycloak maintains its session.
| After some research I found a reddit conversation https://www.reddit.com/r/nextjs/comments/redv1r/nextauth_signout_does_not_end_keycloak_session/ that describes the same problem.
Here is my solution.
I wrote a custom function to log out:
const logout = async (): Promise<void> => {
const {
data: { path }
} = await axios.get('/api/auth/logout');
await signOut({ redirect: false });
window.location.href = path;
};
And I defined an API route /api/auth/logout to obtain the path that destroys the session on Keycloak:
export default (req, res) => {
  const path = `${process.env.NEXTAUTH_CLIENT_ISSUER}/protocol/openid-connect/logout?redirect_uri=${encodeURIComponent(process.env.NEXTAUTH_URL)}`;
res.status(200).json({ path });
};
UPDATE
In the latest versions of Keycloak (19.*.* at the time of this update -> https://github.com/keycloak/keycloak-documentation/blob/main/securing_apps/topics/oidc/java/logout.adoc) the redirect URI becomes a bit more complex:
export default async (req, res) => {
  const session = await getSession({ req });
  let path = `${process.env.NEXTAUTH_CLIENT_ISSUER}/protocol/openid-connect/logout?post_logout_redirect_uri=${encodeURIComponent(process.env.NEXTAUTH_URL)}`;
if(session?.id_token) {
path = path + `&id_token_hint=${session.id_token}`
} else {
path = path + `&client_id=${process.env.NEXTAUTH_CLIENT_ID}`
}
res.status(200).json({ path });
};
Note that you need to include either the client_id or the id_token_hint parameter whenever post_logout_redirect_uri is included.
| Keycloak | 71,872,587 | 11 |
We can use PostgreSQL or MySQL as DB for keycloak but I want to use mongo DB as database for keycloak.
Is there any way to implement this?
| Although MongoDB was once supported in Keycloak, it has since been removed. Per the official Keycloak documentation, a relational database is required for the persistent datastore:
https://www.keycloak.org/docs/latest/server_installation/index.html#_database
| Keycloak | 62,259,957 | 11 |
I want to use Keycloak in a microservices based environment, where authentication is based on OpenID endpoints REST calls ("/token", no redirection to keycloak login page), a flow that I thought of would be something like this:
1. Front-end SPA retrieves the tokens from the "/token" endpoint and stores in browser's localStorage, then sends it with every request.
2. Gateway-level authentication: the Access Token is passed from the front end to the gateway; the gateway consults the Keycloak server to check if the token is still valid (not invalidated by a logout end-point call).
3. Micro-service based authorization: the Access Token is passed from the Gateway to the microservices; using the Spring Boot adapter the microservices check the signature of the token offline (bearer-only client?) and then do the authorization based on the roles in the token.
My questions are: Does this flow make sense or can you suggest another flow? What type of Keycloak clients to use? What's an ideal way to pass Tokens using Spring Boot Adapter, and should it be done like that in the first place? Please keep in mind that I am not a Keycloak expert, I've done my research but I still have doubts.
| Your front-end SPA should be a public client, the Spring Boot microservices should be bearer-only clients, and the gateway could be a confidential client.
You can check the Keycloak-provided OIDC adapters. For Spring Boot you can use the Keycloak-provided adapter.
A similar solution using an API gateway is discussed here.
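Regarding point 2 (the gateway consulting Keycloak to check that a token is still valid), that online check maps to the OAuth2 token introspection endpoint, which a confidential client such as the gateway can call. A hedged curl sketch, where the realm, gateway client credentials and token are placeholders:
# Sketch: introspect an access token from the gateway (values are placeholders).
curl -s -X POST \
  -u "gateway-client:${CLIENT_SECRET}" \
  -d "token=${ACCESS_TOKEN}" \
  "http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token/introspect"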
| Keycloak | 60,153,377 | 11 |