question | answer | tag | question_id | score
---|---|---|---|---|
I am trying to explore the features of the Keycloak server and want to get information about an access token by using the /openid-connect/token/introspect endpoint.
So, I am sending the following request to this endpoint:
curl -v --data "token_type_hint=access_token&client_id=product-app&username=user&token=MY ACCESS TOKEN" http://localhost:8080/auth/realms/springdemo/protocol/openid-connect/token/introspect
So this is what I have as a response:
{"error":"invalid_request","error_description":"Client not allowed."}*
Connection #0 to host localhost left intact
and this is what I see in KeyCloak's logs:
12:00:18,045 WARN [org.keycloak.events] (default task-13) type=INTROSPECT_TOKEN_ERROR, realmId=springdemo, clientId=product-app, userId=null, ipAddress=127.0.0.1, error=invalid_request, detail='Client not allowed.', client_auth_method=client-secret
So, I can't get it: how should I properly make this request to Keycloak, given that product-app has the public access type? Please explain!
| It seems like you are not able to use the /openid-connect/token/introspect endpoint when your client has the public access type.
So, switch to the confidential access type and use:
curl -v --data "client_secret=YOUR_SECRET&client_id=product-app&username=user&token=YOUR_TOKEN" http://localhost:8080/auth/realms/springdemo/protocol/openid-connect/token/introspect
It works fine.
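For reference, a successful introspection returns a JSON document as defined by RFC 7662. A minimal sketch of such a response (the field values here are illustrative, not taken from the original post):

{
  "active": true,
  "username": "user",
  "client_id": "product-app",
  "exp": 1531500000,
  "scope": "profile email"
}

An invalid or expired token simply yields {"active": false}.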
| Keycloak | 51,132,711 | 28 |
I have an application which is getting Auth from Keycloak.
My Access Type is public so I do not have any client secret.
I have set "Direct Access Grants Enabled" to ON.
I am getting an error from Postman as below:
{
  "error": "invalid_grant",
  "error_description": "Invalid user credentials"
}
Note: My credentials are right, as I am using the same ones to log in to Keycloak.
I have also checked the keys and they seem correct, as per the reference below:
https://gist.github.com/rac021/623e4f4c87069acd0c38d952568f8a3d
I have set this up as per my understanding.
Please let me know if I am missing anything or if you need any more details.
Any workaround will be appreciated!
| The error message "Invalid user credentials" is reliable. That is, you either specified a wrong username or a wrong password.
Check that the user really exists in the realm you are addressing with the URL, particularly if it is not the master realm, which is usually the one used to log in to the Keycloak admin console.
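For reference, a direct access grant (resource owner password credentials) request looks like the following. This is a sketch; the realm and client id are placeholders to be replaced with your own:

curl -d "client_id=my-client" \
  -d "username=myuser" \
  -d "password=mypassword" \
  -d "grant_type=password" \
  "http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token"

If this returns the same invalid_grant error, the username/password pair really is being rejected by the realm you are targeting.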
| Keycloak | 48,146,410 | 27 |
I realize there are many iterations of this question, but I can't seem to understand the answers correctly.
We have secured our RabbitMQ and REST endpoints with an OAuth2 Spring server, similar to this post. But it doesn't have all of the features we need and want, so we would like to use Keycloak. I have been successful with securing the REST endpoint by just going to the new version of Spring Security 5.1 and specifying the security.oauth2.resource.jwk.key-set-uri property and setting the necessary dependencies and configuration.
While trying to secure RabbitMQ, I have been running into problems checking the bearer token from the message header, because the Keycloak JWKS endpoint isn't returning the RSA public key itself.
RabbitMQ uses the CustomMessageListenerContainer to get the token from the message header and uses the DefaultTokenServices to check the token.
From my understanding, the endpoint that responds with the key is https://keycloak-server/auth/realms/my-realm/protocol/openid-connect/certs
Doing a HttpGet on this endpoint, I get a response that looks like the following
{
"keys": [{
"kid": "7JUbcl_96GNk2zNh4MAORuEz3YBuprXilmTXjm0gmRE",
"kty": "RSA",
"alg": "RS256",
"use": "sig",
"n": "nE9gEtzZvV_XisnAY8Hung399hwBM_eykZ9J57euboEsKra8JvDmE6w7SSrk-aTVjdNpjdzOyrFd4V7tFqev1vVJu8MJGIyQlbPv07MTsgYE5EPM4DxdQ7H6_f3vQjq0hznkFvC-hyCqUhxPTXM5NgvH86OekL2C170xnd50RLWw8FbrprP2oRjgBnXMAif1Dd8kwbKKgf5m3Ou0yTVGfsCRG1_LSj6gIEFglxNHvGz0RejoQql0rGMxcW3MzCvc-inF3FCafQTrG5eWHqp5xXEeMHz0JosQ7BcT8MVp9lHT_utiazhQ1uKZEb4uoYOyy6mDDkx-wExpZkOx76bk_Yu-N25ljY18hNllnV_8gVMkX46_vcc-eN3DRZGNJ-Asd_sZrjbXbAvBbKwVxZeOTaXiUdvl8O0G5xX2xPnS_WA_1U4b_V1t28WtnX4bqGlOejW2kkjLvNrpfQ5fnvLjkl9I2B16Mbh9nS0LJD0RR-AkBsv3rKEnMyEkW9UsfgYKLFKuH32x_CXi9uyvNDas_q8WS3QvYwAGEMRO_4uICDAqupCVb1Jcs9dvd1w-tUfj5MQOXB-srnQYf5DbFENTNM1PK390dIjdLJh4k2efCJ21I1kYw2Qr9lHI4X2peTinViaoOykykJiol6LMujUcfqaZ1qPKDy_UnpAwGg9NyFU",
"e": "AQAB"
}
]
}
From my understanding, the field with key "n" is supposed to be an RSA256 key. Adding it to an RSAVerifier eventually produces the error "Caused by: org.springframework.security.jwt.codec.InvalidBase64CharacterException: Bad Base64 input character decimal 95 in array position 2."
However, if I log in to the Keycloak admin page and go into Realm Settings -> Keys and click the public key, a popup shows the public key minus the "-----BEGIN PUBLIC KEY-----" and "-----END PUBLIC KEY-----" header and footer. Hard-coding this enables everything to work.
Is the key encoded?
I've tried doing a Base64Utils.decodeFromUrlSafeString and a Base64Utils.decodeFromString. The first returns something smaller that doesn't look like the key, and the latter throws an IllegalArgumentException: Illegal base64 character 5f.
Update:
The n being returned is the modulus and e is the public exponent of the public key. But how does one get the actual key string?
| The keys are also available directly at https://keycloak-server/auth/realms/my-realm,
in a format directly usable in your code:
{
  "realm": "my-realm",
  "public_key": "MIIBI...",
  "token-service": "https://keycloak-server/auth/realms/my-realm/protocol/openid-connect",
  "account-service": "https://keycloak-server/auth/realms/my-realm/account",
  "tokens-not-before": 0
}
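Alternatively, the n (modulus) and e (exponent) values from the JWKS endpoint can be turned into a java.security.PublicKey directly; both are Base64URL-encoded unsigned big-endian integers. This also explains the "Illegal base64 character 5f" / "decimal 95" errors above: 0x5f is the underscore, which only exists in the URL-safe Base64 alphabet. A minimal sketch (not from the original answer):

import java.math.BigInteger;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.RSAPublicKeySpec;
import java.util.Base64;

public class JwkToKey {
    // Builds an RSA public key from the "n" and "e" fields of a JWK
    public static PublicKey fromJwk(String n, String e) throws Exception {
        byte[] modulus = Base64.getUrlDecoder().decode(n);
        byte[] exponent = Base64.getUrlDecoder().decode(e);
        RSAPublicKeySpec spec = new RSAPublicKeySpec(
                new BigInteger(1, modulus), new BigInteger(1, exponent));
        return KeyFactory.getInstance("RSA").generatePublic(spec);
    }
}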
| Keycloak | 54,318,633 | 27 |
I'm trying to run the example app from:
https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-springboot
I'm getting error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 1 of method setKeycloakSpringBootProperties in org.keycloak.adapters.springboot.KeycloakBaseSpringBootConfiguration required a bean of type 'org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver' that could not be found.
Action:
Consider defining a bean of type 'org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver' in your configuration.
Process finished with exit code 1
| I don't have a solution at the moment, but I can see that the exact same issue has been registered on the Keycloak Jira a couple of months ago: https://issues.jboss.org/browse/KEYCLOAK-10595. The problem seems to be caused by the code delivered with this PR: https://github.com/keycloak/keycloak/pull/6075.
The author of the PR described the problem in this way:
"The only remaining problem is, that the resolver is usually contained in the configuration using the KeycloakAutoConfiguration (in my example the SharedConfiguration) so you are trying to access the bean while the configuration is stil being created. This can be solved by moving the resolver bean into another configuration which has to be loaded before the KeycloakAutoConfiguration."
(source: https://issues.jboss.org/browse/KEYCLOAK-10334?focusedCommentId=13738518&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13738518)
UPDATE (OLD)
On the issue from the Keycloak Jira (https://issues.jboss.org/browse/KEYCLOAK-11282), a temporary workaround has been suggested.
@Configuration
public class MyKeycloakSpringBootConfigResolver extends KeycloakSpringBootConfigResolver {

    private final KeycloakDeployment keycloakDeployment;

    public MyKeycloakSpringBootConfigResolver(KeycloakSpringBootProperties properties) {
        keycloakDeployment = KeycloakDeploymentBuilder.build(properties);
    }

    @Override
    public KeycloakDeployment resolve(HttpFacade.Request facade) {
        return keycloakDeployment;
    }
}
LATEST UPDATE
A simpler way to solve the problem is to declare a KeycloakSpringBootConfigResolver in a separate configuration class. This option will fix issues with both Spring Boot and Spring Security.
@Configuration
public class KeycloakConfig {

    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}
| Keycloak | 57,787,768 | 27 |
I switched to an M1 Mac a week ago and cannot get my application up and running with Docker, because the jboss/keycloak image is not working as expected. I am getting the following messages from the container when trying to access localhost:8080:
12:08:12,456 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-5) MSC000001: Failed to start service org.wildfly.network.interface.private: org.jboss.msc.service.StartException in service org.wildfly.network.interface.private: WFLYSRV0082: failed to resolve interface private
12:08:12,526 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("interface" => "private")]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.network.interface.private" => "WFLYSRV0082: failed to resolve interface private"}}
12:08:13,463 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started (with errors) in 20826ms - Started 483 of 925 services (54 services failed or missing dependencies, 684 services are lazy, passive or on-demand)
Tried with all image versions and all behave the same. Has anyone managed to run this image without issues? Thanks
| You can also build the Keycloak Docker image locally; I was able to start Keycloak after doing that. Here are the steps I follow:
Clone the Keycloak containers repository: git clone git@github.com:keycloak/keycloak-containers.git
Open the server directory (cd keycloak-containers/server)
Check out the desired version, e.g. git checkout 12.0.4
Build the Docker image: docker build -t jboss/keycloak:12.0.4 .
Run Keycloak: docker run --rm -p 9080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak:12.0.4
| Keycloak | 67,044,893 | 27 |
I would like to ask if somebody knows why there are no roles within the user details in the REST Admin API request. I saw some posts dealing with this topic, but there were either no clear answers or they proposed to use keycloak-admin-client, which seems not very convenient. Maybe I need to map the roles in the admin console or use claims? Roles are one of the most important user attributes, so what's the reason they are not retrieved like other user attributes? Any suggestions? Thanks
GET /auth/admin/realms/{realm}/users
{
  "id": "efa7e6c0-139f-44d8-baa8-10822ed2a9c1",
  "createdTimestamp": 1516707328588,
  "username": "testuser",
  "enabled": true,
  "totp": false,
  "emailVerified": false,
  "firstName": "Test",
  "lastName": "User",
  "email": "[email protected]",
  "attributes": {"xxx": ["123456"]},
  "disableableCredentialTypes": ["password"],
  "requiredActions": []
}
| You are not getting roles in the user details because the REST API is strictly resource-based, and roles are separate objects that are just associated with a user. The following REST URLs can be used to get a user's roles.
Getting the associated realm roles:
GET /auth/admin/realms/{realm}/users/{user-uuid}/role-mappings/realm
Getting the associated role of a specific client:
GET /auth/admin/realms/{realm}/users/{user-uuid}/role-mappings/clients/{client-uuid}
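For example, to fetch the realm roles of the user from the question above (a sketch; the realm name and bearer token are placeholders):

curl -H "Authorization: Bearer <access_token>" \
  "http://localhost:8080/auth/admin/realms/my-realm/users/efa7e6c0-139f-44d8-baa8-10822ed2a9c1/role-mappings/realm"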
| Keycloak | 48,458,138 | 26 |
I am using the latest Keycloak image in Docker and can access the standard admin console at http://localhost:9080. However, I can't seem to access any of the paths specified in the documentation for the Admin REST API: for instance, with the base path /auth, the resource "Get clients belonging to the realm" (/{realm}/clients) returns a 404, as does any other method in the documentation. The only path returning a valid 200 JSON response is http://localhost:9080/auth/realms/{realm-name}/, which according to the documentation should be reachable at basepath + "/{realm-name}". Am I missing something, or am I using a wrong base path? The Keycloak version in Docker is 3.4.3.Final, which is the latest version of Keycloak according to the documentation.
| I'm almost sure you are trying to call the endpoint like this:
http://localhost:9080/auth/admin/realms/demo/clients
However, you've missed this part: /auth/admin/realms
Please, don't forget to authorize your call first as stated here
UPDATE
Here are my steps to see the results:
$ docker run -d -p 9080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
Getting access_token:
$ curl -X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'username=admin&password=admin&client_id=admin-cli&grant_type=password' \
http://localhost:9080/auth/realms/master/protocol/openid-connect/token
EDIT: With keycloak 17.0+ the /auth path segment should be omitted, so the correct URL is http://localhost:9080/realms/master/protocol/openid-connect/token
Reference: https://stackoverflow.com/a/71634718/3692110
Copy and paste the obtained access_token into the Authorization header:
$ curl -X GET \
-H 'Authorization: Bearer <access_token_goes_here>' \
http://localhost:9080/auth/admin/realms/master/clients
| Keycloak | 48,507,224 | 26 |
I am trying to setup Keycloak as a IdP (Identity Provider) and Nextcloud as a service. I want to setup Keycloak as to present a SSO (single-sign-on) page.
I am running a Linux server with an Intel-compatible CPU. What is the correct configuration?
Keycloak will be running at https://kc.example.com
Nextcloud will be running at https://nc.example.com
| Prerequisite:
To use this answer you will need to replace example.com with an actual domain you own. Also, replace the placeholder e-mail address mail@example.com with a working e-mail address.
It is assumed you have docker and docker-compose installed and running.
Setup your services with Docker
In addition to keycloak and nextcloud I use:
nginx as a reverse-proxy
letsencyrpt to generate the SSL-certificates for the sub-domains.
I'm setting up all the needed services with docker and docker-compose. This is how the docker-compose.yml looks:
version: '2'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/etc/nginx/vhost.d"
      - "./proxy-default.conf:/etc/nginx/conf.d/my-proxy.default.conf:ro"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "./le-cert:/etc/nginx/certs:ro"
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"

  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    depends_on:
      - nginx-proxy
    container_name: le-proxy-companion
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./le-cert:/etc/nginx/certs:rw"
    volumes_from:
      - nginx-proxy

  keycloak:
    image: jboss/keycloak
    links:
      - keycloak-postgres:postgres
    ports:
      - 8080:8080
    volumes:
      - ./keycloak:/opt/jboss/keycloak
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - "PROXY_ADDRESS_FORWARDING=true"
      - VIRTUAL_PORT=8080
      - VIRTUAL_HOST=kc.example.com
      - LETSENCRYPT_HOST=kc.example.com
      - LETSENCRYPT_EMAIL=mail@example.com

  keycloak-postgres:
    image: postgres
    environment:
      - POSTGRES_DB=keycloak
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=keycloak

  nextcloud:
    image: hoellen/nextcloud
    environment:
      - UPLOAD_MAX_SIZE=10G
      - APC_SHM_SIZE=128M
      - OPCACHE_MEM_SIZE=128
      - CRON_PERIOD=15m
      - TZ=Europe/Berlin
      - DOMAIN=nc.example.com
      - ADMIN_USER=admin
      - ADMIN_PASSWORD=admin
      - DB_TYPE=mysql
      - DB_NAME=nextcloud
      - DB_USER=nextcloud
      - DB_PASSWORD=nextcloud
      - DB_HOST=nc-db
      - VIRTUAL_HOST=nc.example.com
      - LETSENCRYPT_HOST=nc.example.com
      - LETSENCRYPT_EMAIL=mail@example.com
    volumes:
      - ./nc/nc-data:/data
      - ./nc/nc-config:/config
      - ./nc/nc-apps:/apps2
      - ./nc/nc-themes:/nextcloud/themes

  nc-db:
    image: mariadb
    volumes:
      - ./nc/nc-db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=nextcloud
      - MYSQL_PASSWORD=nextcloud
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
I put my Docker files in a folder docker and, within this folder, a project-specific folder, here keycloak. Create them with:
mkdir -p ~/docker/keycloak
Create the docker-compose.yml-File with your preferred editor in this folder. Start the services with:
cd ~/docker/keycloak
docker-compose up -d
Wait a moment to let the services download and start. Check if everything is running with:
docker-compose ps
If a service isn't running, issue a second docker-compose up -d and check again.
Configure Keycloak, add a new Realm
Open a browser and go to https://kc.example.com. Click on Administration Console. As specified in your docker-compose.yml, username and password are admin.
On the top-left of the page, you need to create a new Realm. Click Add. Enter my-realm as the name. Click Save.
Click on the Keys-tab. Look at the RSA-entry. We will need to copy the Certificate of that line. Click on Certificate and copy-paste the content to a text editor for later use.
Prepare a Private Key and Certificate for Nextcloud
Open a terminal and issue:
openssl req -nodes -new -x509 -keyout private.key -out public.cert
This creates two files: private.key and public.cert which we will need later for the nextcloud service.
Configure Nextcloud
Open a browser and go to https://nc.example.com. As specified in your docker-compose.yml, username and password are admin.
You need to activate the SSO & SAML authentication app, which is disabled by default.
Important: From here on, don't close your current browser window until the setup is tested and running. If you close the browser before everything works, you will probably not be able to change your settings in Nextcloud anymore. In such a case you will need to stop the nextcloud and nextcloud-db containers, delete their respective folders, recreate them and start all over again.
Click on the top-right gear symbol and then on the + Apps sign. On the left you now see a menu bar with the entry Security. Click it. You now see all security-related apps. Click on the Activate button below the SSO & SAML authentication app.
Click on the top-right gear-symbol again and click on Admin. Click on SSO & SAML authentication.
Use the following values:
Attribute to map UID to: username
Enable "Use SAML auth for the Nextcloud desktop clients (requires user re-authentication)"
Copy the content of public.cert into the 'X.509 Certificate'-field
Copy the content of private.key into the 'Private key of Service Provider'-field.
Identifier of the IdP: https://kc.example.com/auth/realms/my-realm
URL Target of the IdP where the SP will send the Authentication Request Message: https://kc.example.com/auth/realms/my-realm/protocol/saml
URL Location of the IdP where the SP will send the SLO Request: https://kc.example.com/auth/realms/my-realm/protocol/saml
Public X.509 certificate of the IdP: Copy the certificate from Keycloak from the Keys-tab of my-realm. You will need to add '-----BEGIN CERTIFICATE-----' in front of the key and '-----END CERTIFICATE-----' to the end of it.
In the Service Provider Data section:
Attribute, displayname: username
Attribute, email address: email
Security Settings, enable the following options:
Indicates whether the <samlp:AuthnRequest> messages sent by this SP will be signed. [Metadata of the SP will offer this info]
Indicates whether the <samlp:logoutRequest> messages sent by this SP will be signed.
Indicates whether the <samlp:logoutResponse> messages sent by this SP will be signed.
Indicates a requirement for the <samlp:Response>, <samlp:LogoutRequest> and <samlp:LogoutResponse> elements received by this SP to be signed.
Indicates a requirement for the <saml:Assertion> elements received by this SP to be signed. [Metadata of the SP will offer this info]
Check that there is a Metadata valid notice beside the Download metadata XML button.
Click the Download metadata XML-Button. This generates and sends an XML file. Save it for use in the next step.
Configure Keycloak, Client
Access the Administrator Console again. Click on Clients and on the top-right click on the Create-Button.
Next to Import, click the Select File button. Select the XML file you created in the last step in Nextcloud, and click Save.
You are presented with a new screen. Change the following fields:
Name: Nextcloud
Valid Redirect URIs: https://nc.example.com/*
Click Save
On the Tab Mappers:
Click Delete-Button on the preassigned role list (if it exists)
Click Create
Name: username
Mapper Type: User Property
Property: username
SAML Attribute Name: username
SAML Attribute NameFormat: Basic
Click Save
Click Create
Name: email
Mapper Type: User Property
Property: email
SAML Attribute Name: email
SAML Attribute NameFormat: Basic
Click Save
Click Create
Name: Roles
Mapper Type: Role List
Role attribute name: Roles
Friendly Name: roles
SAML Attribute NameFormat: Basic
Single Role Attribute: On
Click Save
Configure Keycloak, Add user
On the left side, click on Users
On the top-right, click Add users
Set the following values:
Username: user
Email: user@example.com (any placeholder address)
Click Save
On the tab Credentials:
New Password: user
Password Confirmation: user
Temporary: Off
Click Reset Password
A window pops up:
Click Change Password
Test run
Open a new browser window in incognito/private mode, e.g. for Google Chrome press Ctrl-Shift-N, in Firefox press Ctrl-Shift-P. Keep the other browser window with the Nextcloud setup page open, else you might lock yourself out.
Access https://nc.example.com with the incognito/private browser window. You are presented with the Keycloak username/password page. Enter user as both the name and the password. You should be greeted with the Nextcloud welcome screen.
Acknowledgement
This guide wouldn't have been possible without the wonderful http://int128.hatenablog.com/entry/2018/01/16/194048 blog entry. I've read it with Google Translator in English.
Thanks also go to RMM. His wiki entry allowed me to create correct keys for Nextcloud and enable message signing, thus improving this answer.
| Keycloak | 48,400,812 | 25 |
I understand that keycloak has built-in clients and we add the users later on.
But in general, what is the difference between a client and a user in Keycloak?
| According to the Keycloak documentation
User - Users are entities that are able to log into your system.
Client - Clients are entities that can request Keycloak to authenticate a user. Most often, clients are applications and services that want to use Keycloak to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that are secured by Keycloak.
| Keycloak | 49,107,701 | 25 |
Can someone please explain the cookies set by Keycloak:
KEYCLOAK_SESSION, Oauth_token_request_state, KEYCLOAK_IDENTITY.
What is the relevance of each cookies?
| They are cookies for internal use of Keycloak.
KEYCLOAK_IDENTITY contains a token (JWT) with the user id. You can view its content using jwt.io (for example). This cookie lives with your browser session and can also be refreshed with SSO (for example, if you change some of your personal data in "Manage my account").
KEYCLOAK_SESSION contains your session id associated with the concerned realm.
Oauth_token_request_state is part of the OAuth spec, in order to prevent tampering with the redirect link after login.
| Keycloak | 50,589,548 | 25 |
I have tried to access the Keycloak API from Postman, but it is showing a 400 Bad Request.
I was calling the API in the below format:
http://{hostname}:8080/auth/realms/master/protocol/openid-connect/token?username=admin&password=admin&client_id=admin-cli&grant_type=password
In the headers I have set the Content-Type as application/x-www-form-urlencoded.
I am getting the response below:
{
  "error": "invalid_request",
  "error_description": "Missing form parameter: grant_type"
}
Can anyone help me? Any help will be appreciated. Thanks in advance.
| A bit late for this question, but you did ask about postman and not curl.
So you have to put the parameters in the request body as x-www-form-urlencoded, not in the query string:
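In Postman that means selecting Body > x-www-form-urlencoded and entering the key/value pairs there, equivalent to the following curl call (a sketch reusing the values from the question):

curl -d "grant_type=password" \
  -d "client_id=admin-cli" \
  -d "username=admin" \
  -d "password=admin" \
  "http://{hostname}:8080/auth/realms/master/protocol/openid-connect/token"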
| Keycloak | 49,313,554 | 23 |
How can I get a user's Keycloak attributes (username, firstname, email...) based on the user id?
The user I'm using in the Keycloak session already has the role view-users assigned, so I should be able to list at least all users. Is there any Keycloak class that I can use?
What I'm trying to achieve here is to avoid replicating the Keycloak users database to another local database, but it doesn't seem possible to access any other user's info besides the one in the current session...
| You can use the Admin REST API. The detailed description of the relevant API is available here. You can also use the Java wrapper API. Please find a couple of examples below.
Example 1, REST:
Get an access token:
curl \
-d "client_id=admin-cli" \
-d "username=admin" \
-d "password=secret" \
-d "grant_type=password" \
"http://localhost:8080/auth/realms/master/protocol/openid-connect/token"
Get all users:
curl \
-H "Authorization: bearer eyJhbGciOiJSUzI...." \
"http://localhost:8080/auth/admin/realms/master/users"
Sample output:
[
  {
    "id": "349f67de-36e6-4552-ac54-e52085109616",
    "username": "admin",
    "enabled": true,
    ...
  },
  {
    "id": "08afb701-fae5-40b4-8895-e387ba1902fb",
    "username": "lbalev",
    "enabled": true,
    ...
  }
]
Get a user by user id:
curl \
-H "Authorization: bearer eyJhbGciOiJSU...." \
"http://localhost:8080/auth/admin/realms/master/users/349f67de-36e6-4552-ac54-e52085109616"
Example 2, JAVA API:
Get a user based on user ID:
public class TestUserAccess {

    private static final String SERVER_URL = "http://localhost:8080/auth";
    private static final String REALM = "master";
    private static final String USERNAME = "admin";
    private static final String PASSWORD = "secret";
    private static final String CLIENT_ID = "admin-cli";

    public static void main(String[] args) {
        Keycloak keycloak = KeycloakBuilder
                .builder()
                .serverUrl(SERVER_URL)
                .realm(REALM)
                .username(USERNAME)
                .password(PASSWORD)
                .clientId(CLIENT_ID)
                .resteasyClient(new ResteasyClientBuilder().connectionPoolSize(10).build())
                .build();

        UsersResource usersResource = keycloak.realm(REALM).users();
        UserResource userResource = usersResource.get("08afb701-fae5-40b4-8895-e387ba1902fb");

        System.out.println(userResource.toRepresentation().getEmail());
    }
}
The relevant dependencies for the example above are (please note that the versions might not be up-to-date):
dependencies {
    compile group: 'org.keycloak', name: 'keycloak-admin-client', version: '3.3.0.CR2'
    compile group: 'org.jboss.resteasy', name: 'resteasy-jaxrs', version: '3.1.4.Final'
    compile group: 'org.jboss.resteasy', name: 'resteasy-client', version: '3.1.4.Final'
    compile group: 'org.jboss.resteasy', name: 'resteasy-jackson2-provider', version: '3.1.4.Final'
}
| Keycloak | 55,643,277 | 23 |
I'm trying to interact with Keycloak via its REST API. I have the master realm with the default admin user, and a test realm. First, I get an access token for the admin account and test realm:
let data = {
  grant_type : 'password',
  client_id : 'test-realm',
  username : 'admin',
  password : 'admin'
};

let headers = {
  'Content-Type': 'application/x-www-form-urlencoded'
};

axios.post(
  'https://someurl.com:8080/auth/realms/master/protocol/openid-connect/token',
  qs.stringify(data),
  headers
)
That works OK. Then I try to make a call to create a user (or do anything else) and I get a 401 Unauthorized error:
headers = {
  'Content-Type': 'application/x-www-form-urlencoded',
  'Authorization': `Bearer ${accessToken}`
};

data = {
  rep: {
    email: "[email protected]",
    username: "[email protected]"
  },
  path: 'test-realm'
};

axios.post('https://someurl.com:8080/auth/admin/realms/test-realm/users',
  qs.stringify(data),
  headers
)
Is that not the correct way to include the token? Is the access token the one you use for authenticating other API calls? Shouldn't the admin account's token work for authenticating calls to other clients within the master realm? Would it be some setting in the master realm that I have to change in the admin console? Any help appreciated.
| I got a 401 error because I generated the offline token using http://localhost:8080 and then tried to request the API using http://keycloak:8080, which is not allowed. Unfortunately the log doesn't tell you that.
To debug JWT tokens I recommend https://jwt.io/
| Keycloak | 46,882,610 | 22 |
I want to write unit tests for my Spring controller. I'm using Keycloak's OpenID flow to secure my endpoints.
In my tests I'm using the @WithMockUser annotation to mock an authenticated user. My problem is that I'm reading the userId from the token of the principal. My unit test now fails because the userId I read from the token is null:
if (principal instanceof KeycloakAuthenticationToken) {
    KeycloakAuthenticationToken authenticationToken = (KeycloakAuthenticationToken) principal;
    SimpleKeycloakAccount account = (SimpleKeycloakAccount) authenticationToken.getDetails();
    RefreshableKeycloakSecurityContext keycloakSecurityContext = account.getKeycloakSecurityContext();
    AccessToken token = keycloakSecurityContext.getToken();
    Map<String, Object> otherClaims = token.getOtherClaims();
    userId = otherClaims.get("userId").toString();
}
Is there anything to easily mock the KeycloakAuthenticationToken?
| @WithMockUser configures the security context with a UsernamePasswordAuthenticationToken. This can be just fine for most use cases, but when your app relies on another Authentication implementation (like your code does), you have to build or mock an instance of the right type and put it in the test security context: SecurityContextHolder.getContext().setAuthentication(authentication);
Of course, you'll soon want to automate this, building your own annotation or RequestPostProcessor
... or ...
take one "off the shelf", like in this lib of mine, which is available from maven-central:
<dependency>
    <!-- just enough for @WithMockKeycloackAuth -->
    <groupId>com.c4-soft.springaddons</groupId>
    <artifactId>spring-security-oauth2-test-addons</artifactId>
    <version>3.0.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <!-- required only for WebMvc "fluent" API -->
    <groupId>com.c4-soft.springaddons</groupId>
    <artifactId>spring-security-oauth2-test-webmvc-addons</artifactId>
    <version>3.0.1</version>
    <scope>test</scope>
</dependency>
You can use it either with @WithMockKeycloackAuth annotations:
@RunWith(SpringRunner.class)
@WebMvcTest(GreetingController.class)
@ContextConfiguration(classes = GreetingApp.class)
@ComponentScan(basePackageClasses = { KeycloakSecurityComponents.class, KeycloakSpringBootConfigResolver.class })
public class GreetingControllerTests extends ServletUnitTestingSupport {

    @MockBean
    MessageService messageService;

    @Test
    @WithMockKeycloackAuth("TESTER")
    public void whenUserIsNotGrantedWithAuthorizedPersonelThenSecretRouteIsNotAccessible() throws Exception {
        mockMvc().get("/secured-route").andExpect(status().isForbidden());
    }

    @Test
    @WithMockKeycloackAuth("AUTHORIZED_PERSONNEL")
    public void whenUserIsGrantedWithAuthorizedPersonelThenSecretRouteIsAccessible() throws Exception {
        mockMvc().get("/secured-route").andExpect(content().string(is("secret route")));
    }

    @Test
    @WithMockKeycloakAuth(
            authorities = { "USER", "AUTHORIZED_PERSONNEL" },
            claims = @OpenIdClaims(
                    sub = "42",
                    email = "[email protected]",
                    emailVerified = true,
                    nickName = "Tonton-Pirate",
                    preferredUsername = "ch4mpy",
                    otherClaims = @Claims(stringClaims = @StringClaim(name = "foo", value = "bar"))))
    public void whenAuthenticatedWithKeycloakAuthenticationTokenThenCanGreet() throws Exception {
        mockMvc().get("/greet")
                .andExpect(status().isOk())
                .andExpect(content().string(startsWith("Hello ch4mpy! You are granted with ")))
                .andExpect(content().string(containsString("AUTHORIZED_PERSONNEL")))
                .andExpect(content().string(containsString("USER")));
    }
}
Or MockMvc fluent API (RequestPostProcessor):
@RunWith(SpringRunner.class)
@WebMvcTest(GreetingController.class)
@ContextConfiguration(classes = GreetingApp.class)
@ComponentScan(basePackageClasses = { KeycloakSecurityComponents.class, KeycloakSpringBootConfigResolver.class })
public class GreetingControllerTest extends ServletKeycloakAuthUnitTestingSupport {

    @MockBean
    MessageService messageService;

    @Test
    public void whenUserIsNotGrantedWithAuthorizedPersonelThenSecretMethodIsNotAccessible() throws Exception {
        mockMvc().with(authentication().roles("TESTER")).get("/secured-method").andExpect(status().isForbidden());
    }

    @Test
    public void whenUserIsGrantedWithAuthorizedPersonelThenSecretMethodIsAccessible() throws Exception {
        mockMvc().with(authentication().roles("AUTHORIZED_PERSONNEL")).get("/secured-method")
                .andExpect(content().string(is("secret method")));
    }
}
| Keycloak | 49,144,953 | 22 |
We deployed Keycloak server 4.6.0.Final for authentication for our web application. How can I get the server logs? I cannot find any logs in the server.log or audit.log files. Do I need to configure anything to show the Keycloak server log details?
| When starting the Keycloak instance you can pass an environment variable to set the log level for Keycloak.
docker run -e KEYCLOAK_LOGLEVEL=DEBUG jboss/keycloak
For the Kubernetes Deployment:
Add the following env variables to the Kubernetes deployment manifest:
keycloak:
  extraEnv: |
    - name: KEYCLOAK_LOGLEVEL
      value: DEBUG
    - name: WILDFLY_LOGLEVEL
      value: DEBUG
More information: https://github.com/devsu/docker-keycloak/blob/master/server/README.md
| Keycloak | 58,060,682 | 22 |
There is a web server running locally, and I want to have the Keycloak (on another domain) login page inside an iframe. I tried the following setting in Keycloak under Realm Settings > Security Defenses > Headers > Content-Security-Policy:
frame-src 'self' http://127.0.0.1 http://192.168.1.140 http://localhost *.home-life.hub http://trex-macbook.home-life.hub localhost; frame-ancestors 'self'; object-src 'none';
Basically, I put my local IP addresses and host names as sources in frame-src.
The login page is not shown and I get this error in the browser console:
Refused to display 'http://keycloak.example.com:8080/auth/realms/master/protocol/openid-connect/auth?client_id=es-openid&response_type=code&redirect_uri=https%3A%2F%2Fkibana.example.com%3A5601%2Fauth%2Fopenid%2Flogin&state=3RV-_nbW-RvmB8EfUwgkJq&scope=profile%20email%20openid' in a frame because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self'".
My custom headers are present
My server and UI (server rendered) code:
'use strict';

const Hapi = require('@hapi/hapi');

const init = async () => {
  // Run server on all interfaces
  const server = Hapi.server({
    port: 3000,
  });

  await server.start();

  // server.ext('onPreResponse', (req, h) => {
  //   req.response.header('Content-Security-Policy', "default-src 'self' *.example.com");
  //   console.log('req.response.headers', req.response.headers);
  //   return h.continue;
  // });

  server.route({
    method: 'GET',
    path: '/home',
    handler: () => {
      return `<html>
        <head>
          <title>searchguard kibana openid keycloak</title>
        </head>
        <body>
          <p>
            <iframe src="https://kibana.example.com:5601" width="800" height="600"></iframe>
          </p>
        </body>
      </html>`;
    },
  });

  server.route({
    method: '*',
    path: '/{path*}',
    handler: (req, h) => {
      return h.redirect('/home');
    },
  });

  console.log('Server running on %s', server.info.uri);
};

process.on('unhandledRejection', (err) => {
  console.log(err);
  process.exit(1);
});

init();
The iframe should show a page on kibana.example.com in the end. Keycloak is used as an identity provider for kibana.example.com.
| Try to change:
frame-ancestors 'self';
to
frame-ancestors 'self' http://127.0.0.1 http://192.168.1.140 http://localhost *.home-life.hub http://trex-macbook.home-life.hub localhost;
Generally, tweak frame-ancestors CSP configuration.
| Keycloak | 60,659,225 | 22 |
I have secured an enterprise application with Keycloak using the standard WildFly-based Keycloak adapters. The issue I am facing is that the REST web services, when invoked, need to know the username that is currently logged in. How do I get the logged-in user information from Keycloak?
I tried using SecurityContext, WebListener etc., but none of them are able to give me the required details.
| You get all user information from the security context.
Example:
public class Greeter {

    @Context
    SecurityContext sc;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String sayHello() {
        // this will set the user id as userName
        String userName = sc.getUserPrincipal().getName();

        if (sc.getUserPrincipal() instanceof KeycloakPrincipal) {
            KeycloakPrincipal<KeycloakSecurityContext> kp = (KeycloakPrincipal<KeycloakSecurityContext>) sc.getUserPrincipal();
            // this is how to get the real userName (or rather the login name)
            userName = kp.getKeycloakSecurityContext().getIdToken().getPreferredUsername();
        }

        return "{ message : \"Hello " + userName + "\" }";
    }
}
For the security context to be propagated you have to have a security domain configured as described in the:
JBoss/Wildfly Adapter configuration
| Keycloak | 31,864,062 | 21 |
I am implementing a custom login page for Keycloak (version 2.5) by following this guide. I added my own custom styling; now I am trying to add the Dutch locale. Currently no Dutch locale is provided, so I provided the following properties files:
themes/mytheme/login/messages/messages_en.properties
themes/mytheme/account/messages/messages_en.properties
themes/mytheme/email/messages/messages_en.properties
with the locale_nl=Nederlands property. After that I added the messages_nl.properties files with the translation strings.
Next I added the locales=en,nl,de property to the following files:
themes/mytheme/login/theme.properties
themes/mytheme/account/theme.properties
themes/mytheme/email/theme.properties
There is only one thing left to do: add the Dutch locale in the admin console. But I can't select the NL locale after enabling internationalization. I can only select the English and German locales; my just-created Dutch locale is not available:
According to Multilingual support and adding custom Locales in Keycloak, I should be able to add my own locale by just typing the locale and hitting 'enter', but that does not do anything.
Am I missing a step here?
| After reading the code, I understood only adding the files is not enough. You need to enable your theme not only for the login theme, but also for the account and email themes:
As I only changed the login theme to my own 'custom-theme', the Dutch locale did not show up.
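For completeness, the two key entries from the question end up looking like this (a sketch):

# themes/mytheme/login/theme.properties (likewise for account and email)
locales=en,nl,de

# themes/mytheme/login/messages/messages_en.properties (and the other messages_*.properties)
locale_nl=Nederlands

With those entries in place, the missing piece was only the theme selection for the account and email themes described above.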
| Keycloak | 41,855,394 | 21 |
I am trying to implement Keycloak as an SSO for my company. I have created two realms, realm A and realm B. I want to use the same set of users for both realms, i.e. I need to give users access to both realms. Is it possible to do this in Keycloak?
| No, that's not possible. Users are always realm-specific. The only way would be to keep the users in an external store and integrate this external store via federation (UserStorageSpi) into both realms. But then you'll have to do all user management on the external store, as it is the primary source of your user data.
| Keycloak | 47,121,643 | 21 |
Does anyone know how to obtain the id_token with Keycloak?
I have been working with Keycloak in Java (Spring, JEE) and Postman.
The basics work fine, but I need the id_token since some claims are not present in the access_token but are present in the id_token.
Using the keycloak-core library I could obtain the Keycloak context, but the id_token attribute is always null.
Any ideas?
| If you are using Keycloak version 3.2.1, then the mail thread below will help you.
Hi All
I am using the below curl command:
curl -k https://IP-ADDRESS:8443/auth/realms/Test123/protocol/openid-connect/token -d "grant_type=client_credentials" -d "client_id=SURE_APP" -d "client_secret=ca3c4212-f3e8-43a4-aa14-1011c7601c67"
In the above command's response the id_token is missing, which is required for Kong to tell who I am.
In my Keycloak realm -> client -> Full Scope Allowed -> True
Ok, I found it: we have to add
scope=openid
then only it will work
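Applied to the curl command above, that means adding one more form parameter (a sketch):

curl -k https://IP-ADDRESS:8443/auth/realms/Test123/protocol/openid-connect/token \
  -d "grant_type=client_credentials" \
  -d "client_id=SURE_APP" \
  -d "client_secret=ca3c4212-f3e8-43a4-aa14-1011c7601c67" \
  -d "scope=openid"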
| Keycloak | 49,322,417 | 21 |
I'd like to manage Keycloak from my own application: create users & clients, display users & clients. As this is not a real user but a machine, I would like to use a service account with a client credentials grant as proposed in How to get Keycloak users via REST without admin account. To realize this I:
created a realm
inside the realm created a client
configured the access type of the client to "confidential", saved, and activated the "Service Accounts Enabled" option that appears after the save
enabled under scopes the client roles of the "realm-management" client (see screenshot)
requested an access token with the "username:password" base64-encoded in the header
curl -X POST 'http://accounts.d10l.de/auth/realms/d10l/protocol/openid-connect/token' \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Authorization: Basic ZGV2ZWxvcGVyLXBvcnRhbDpmZGRmYzM4Yy05MzAyLTRlZmQtYTM3Yy1lMWFmZGEyMmRhMzc=" \
-d 'grant_type=client_credentials' \
| jq -r '.access_token'
Try to access the users using the access token:
curl -I GET 'http://accounts.d10l.de/auth/admin/realms/d10l/users/' \
-H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIxRVdoNENFUjIweDY5SlBCekU4dU9GdXF4R2NVNlVfWmpTNTQ5bmd2QjNjIn0.eyJqdGkiOiI0NDM0ZDFhNS0xZTA5LTQ4MzQtYWI2Yy0zOTk1YmEwMTgxMzAiLCJleHAiOjE1MzY0MzYwMDEsIm5iZiI6MCwiaWF0IjoxNTM2NDM1NzAxLCJpc3MiOiJodHRwOi8vYWNjb3VudHMuZDEwbC5kZS9hdXRoL3JlYWxtcy9kMTBsIiwiYXVkIjoiZGV2ZWxvcGVyLXBvcnRhbCIsInN1YiI6IjliYWI0YWM1LTRiNWMtNGIxOS05ZTc3LWFjOWFmNzlkNzFhZiIsInR5cCI6IkJlYXJlciIsImF6cCI6ImRldmVsb3Blci1wb3J0YWwiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiIyOWM2YWI3Mi05N2RiLTQ2NWUtYTE1Yy03ZWE5NzA0NmZlYzQiLCJhY3IiOiIxIiwiYWxsb3dlZC1vcmlnaW5zIjpbXSwicmVzb3VyY2VfYWNjZXNzIjp7fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIiwiY2xpZW50SWQiOiJkZXZlbG9wZXItcG9ydGFsIiwiY2xpZW50SG9zdCI6IjE3Mi4xNy4wLjEiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6InNlcnZpY2UtYWNjb3VudC1kZXZlbG9wZXItcG9ydGFsIiwiY2xpZW50QWRkcmVzcyI6IjE3Mi4xNy4wLjEiLCJlbWFpbCI6InNlcnZpY2UtYWNjb3VudC1kZXZlbG9wZXItcG9ydGFsQHBsYWNlaG9sZGVyLm9yZyJ9.D_XnpF1rwCayup8h4UXM4AGWkY_xQo40X-yIlWhmqaxkVh1FQy24932VDRCAmxYHcrwazRMqO7snXmre3_8YF5R9Dt8GYjiBorECvQ9X_nBwunmHqnGxIeE64c2GXiz6zSjdgQJQE8fH10NsLyFWHQ-lBPsBwZBsrkKQ5QUEU2qjE7rDRPtYLJPB94BSE4QGfedmRIbvg39snVkClBDUmuBTq_Rc4p7kV69h0a2Mb1sgEr3MdB4RcsOe3gJPZVVtu7gZuGqcAQKMYgtybArF3OXz37w8hjUp6FABxDcvY7K-jsGxXn0hSU0OB7wxAWY9vP4ar4tQYlKxNjs46rPLWw"
But the response is a 403:
curl: (6) Could not resolve host: GET
HTTP/1.1 403 Forbidden
content-length: 0
date: Sat, 08 Sep 2018 19:42:06 GMT
How/Is it possible accessing the Admin REST API from a new service account through a client credential grant?
| Keycloak differentiates between the scopes/scope mapping and the roles management.
The Scopes tab you see in the question above only manages the roles that a client is allowed to request.
For the client credentials grant to work, these roles must also be assigned to the client in the "Service Account Roles" tab.
So in the end the client receives a token that is the intersection of both of those configurations.
Source: https://www.keycloak.org/docs/latest/server_admin/index.html#_service_accounts
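Side note on the 403 reproduction above: curl -I GET treats GET as a host name (hence "Could not resolve host: GET"). A corrected sketch of the call, once the service account roles are assigned:

curl -X GET \
  -H "Authorization: Bearer <access_token>" \
  "http://accounts.d10l.de/auth/admin/realms/d10l/users/"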
| Keycloak | 52,238,839 | 21 |
I create a token using the http://localhost:8080/auth/realms/{realm_name}/protocol/openid-connect/token endpoint:
grant_type=client_credentials
client-id: ------------
client-secret: 78296d38-cc82-4010-a817-65c283484e51
Now I want to get the users of the realm, so I send a request to the http://localhost:8080/auth/admin/realms/{realm_name}/users?username=demo endpoint with the token.
But I got a 403 Forbidden response with "error": "unknown_error". How can I solve it?
| The service account associated with your client needs to be allowed to view the realm users.
Go to http://localhost:8080/auth/admin/{realm_name}/console/#/realms/{realm_name}/clients
Select your client (which must be a confidential client)
In the Settings tab, switch Service Account Enabled to ON
Click on Save; the Service Account Roles tab will appear
In Client Roles, select realm-management
Scroll through the available roles until you can select view-users
Click on Add selected
You should have something like this:
Your client is now allowed to access users through the REST API.
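A sketch of the follow-up calls, reusing the values from the question (client id and realm name are placeholders):

TOKEN=$(curl -d "grant_type=client_credentials" \
  -d "client_id=<your-client>" \
  -d "client_secret=78296d38-cc82-4010-a817-65c283484e51" \
  "http://localhost:8080/auth/realms/{realm_name}/protocol/openid-connect/token" | jq -r .access_token)

curl -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/auth/admin/realms/{realm_name}/users?username=demo"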
| Keycloak | 66,452,108 | 21 |
I have surfed through Google without finding any concrete answers or examples, so I'm trying my luck here again (I often get lucky).
The problem
I have a single Spring Boot RESTful service running behind an Apache reverse proxy. This RESTful service is running HTTP only. Say it's running on local IP 172.s, port 8080.
I have also configured an Apache reverse proxy. Say it's running on local IP 172.a and public IP 55.a. This proxy responds on both port 80 and 443, but all HTTP traffic is automatically redirected to 443.
I have another server running a standalone Keycloak server. This server is also configured to be publicly accessible through the reverse proxy. Say it's running on local IP 172.k. This Keycloak server is running HTTP only; the HTTPS requests are handled with SSL at the reverse proxy.
Last, I have another frontend webapp running on local IP 172.f. This frontend webapp is running under Node.js and is also configured through the reverse proxy. It also runs only HTTP, but the client (browser) uses SSL through the reverse proxy, just as for Keycloak and the RESTful service. This frontend consumes the RESTful service and is also configured to authenticate using the Keycloak JavaScript adapter.
The RESTful service is configured as bearer-only using the Spring Boot Keycloak adapter, while the frontend app is configured with access type public.
The RESTful service server, Keycloak server, and frontend server are not publicly accessible; they are accessible only through the reverse proxy. But they can communicate with each other (since they are in the same private network).
In the frontend keycloak.json file, the auth-server-url is set to the proxy URL https://example.com/auth, and the frontend is able to successfully get a valid token. Now when I try to consume the RESTful service, I get an error in the RESTful adapter saying that the token issuer is invalid. In the HTTP header I am, of course, sending Authorization: Bearer <token>. The reason I am getting this error is that in the RESTful Keycloak configuration I have configured the auth-server-url to use the local URL http://172.k:9080/auth, so this URL is different from the one in the token (which is https://example.com/auth).
Question
I cannot include the same auth-server-url in the RESTful service as in the frontend, because that would require me to also set up HTTPS on the RESTful service (since that URL is https), and that would complicate things a lot, including the need to set up certificates. Also, I think it's inefficient and impractical to set up SSL on local-only servers.
So my question is how I can make the adapter talk to Keycloak without going through the reverse proxy. I want the RESTful adapter to talk to the Keycloak server for token verification through auth-server-url: http://172.k:9080/auth.
Earlier there was a different URL for the backend, but it got removed: https://issues.jboss.org/browse/KEYCLOAK-2623
| You need to inform Keycloak about the location of the reverse proxy. Then in its responses it will set the location to there instead of its local address. To do that, in the latest Keycloak, set the environment variable KEYCLOAK_FRONTEND_URL to point to the string https://example.com/auth (yes, it needs the whole address).
To make this work, also set PROXY_ADDRESS_FORWARDING to the value true.
If it's a Docker container, that means:
environment:
  ...
  PROXY_ADDRESS_FORWARDING: "true"
  KEYCLOAK_FRONTEND_URL: "https://example.com/auth"
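Or, equivalently, as a plain docker run (a sketch with the same values):

docker run -e PROXY_ADDRESS_FORWARDING=true \
  -e KEYCLOAK_FRONTEND_URL=https://example.com/auth \
  jboss/keycloak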
Alternately, you can set KEYCLOAK_HOSTNAME to example.com, but that only sets the host name and leaves the port number as-is (not sure how to handle that part yet; if you find out, please let me know).
EDIT: Note that in some cases you might want to set this only for a specific client. While configuring each client inside Keycloak, you can set its Frontend URL from the first options tab.
| Keycloak | 42,342,367 | 20 |
I have been looking through the Keycloak documentation but cannot see how to do this. With Java, I'd like to take a valid userid and password and then generate a token. How can I do this?
| --EDIT 2018-08-31--
You can use the Authorization Client Java API. Once you have created an AuthzClient object, you can pass the username and password to the AuthzClient#authorization(username, password) or AuthzClient#obtainAccessToken(username, password) method to authenticate the user and get the access token (and/or ID token in the first case):
// create a new instance based on the configuration defined in keycloak-authz.json
AuthzClient authzClient = AuthzClient.create();
// send the authorization request to the server in order to
// obtain an access token granted to the user
AccessTokenResponse response = authzClient.obtainAccessToken("alice", "alice");
On a side note, if possible, you'd rather reuse one of the Keycloak Java adapters to cover more features, such as other authentication methods (the user is typically redirected to the Keycloak web UI, where you can enforce very flexible authentication and authorization policies).
| Keycloak | 52,085,735 | 20 |
I am trying to achieve a fairly simple use case of role-based client application (VueJS multi-page applications) access control using Keycloak.
As shown in the image, I have three different roles and three different clients in a single realm.
The arrows in the image represent which role can access which client.
So my main objectives are,
Users with the role Viewer should only be able to log in to the Viewer Application. If such a user tries to access the Operator Application or Admin Application, then Keycloak should simply deny this.
The same rules should apply for users with the Admin and Operator roles. Users with the Admin role should be able to log in to any of these applications.
To achieve this use case I tried the following ways:
First, appropriate role mapping to users and role creation in the clients. In this case, I created realm-level roles and then client-level roles, then assigned appropriate roles to the users created in the Users section.
Enabling Authorization. In the policies, I removed the default policy that grants all users access to the client, and created a User policy and a Client policy to restrict access to the client application.
Also tried a group-based authorization policy. In this case, I created a group with the client role, assigned users to these groups, and enabled them from the Authorization group policy.
But unfortunately none of this works, meaning my user with the Viewer role can log in to my Admin application, which is just strange.
| You can do this without extensions.
Copy the desired flow (e.g. the browser flow)
Create a new sub flow (e.g. for the browser forms) and call it Access By Role and select generic as type.
For the new sub flow ensure that CONDITIONAL is selected in the flow overview.
For the new sub flow add execution Condition - User Role, make it REQUIRED and configure it:
alias: admin-role-missing
role: admin (or whatever your role is)
negate: true
Add another execution: Deny Access and make it REQUIRED as well.
The final result should look similar to this:
This will deny access if the condition "admin-role-missing" is true.
You can also learn more from the docs: explicitly-deny-allow-access-in-conditional-flows
Also, don't forget to go to your client and select the flow in the authentication overrides.
| Keycloak | 57,287,497 | 20 |
I'm using Angular 8.0.3 and Keycloak 6.0.1 for front-end authentication.
Problem
I managed to get to the Keycloak login page from my application. After logging in with my login details, an error occurs:
- localhost/:1 Access to XMLHttpRequest at 'https://localhost:8080/auth/realms/pwe-realm/protocol/openid-connect/token' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
- Keycloak init failed: An error happened during Keycloak initialization.
Could you help me please?
My Keycloak configuration:
1 realm: pwe-realm
2 clients:
-pwe-api (for my back end)
-pwe-web (for my authentication front end)
pwe-web configuration:
Client Protocol: openid-connect
Access Type: public
Valid Redirect URIs: http://localhost:4200/ (I also tried "*")
My code (I am using this library: keycloak-angular):
environments.ts:

import { KeycloakConfig } from 'keycloak-angular';

const keycloakConfig: KeycloakConfig = {
  url: 'https://localhost:8080/auth',
  realm: 'pwe-realm',
  clientId: 'pwe-web'
};

export const environment = {
  production: false,
  keycloakConfig
};
app.module.ts:

//imports

const keycloakService = new KeycloakService();

@NgModule({
  declarations: [
    AppComponent,
    ...
  ],
  imports: [
    KeycloakAngularModule,
    BrowserModule,
    AppRoutingModule,
    ...
  ],
  providers: [
    {
      provide: KeycloakService,
      useValue: keycloakService,
    }
  ],
  entryComponents: [AppComponent]
})
export class AppModule implements DoBootstrap {
  async ngDoBootstrap(app) {
    const { keycloakConfig } = environment;

    try {
      await keycloakService.init({ config: keycloakConfig });
      app.bootstrap(AppComponent);
    } catch (error) {
      console.error('Keycloak init failed', error);
    }
  }
}
| I wasted half a day on this while developing with Vue.js against a server on localhost.
You probably need to set the Web Origins on your Keycloak server for your Keycloak client:
Login to the Keycloak admin screen, select the realm pwe-realm and then your client pwe-web.
Scroll to the Web Origin settings and type the plus sign. Do not click on the (+) button, but literally type + . That adds all the Valid Redirect URIs that you defined above to the Web Origins headers. You will also need to add the URI of your angular application to the Valid Redirect URIs list.
Press Save at the bottom of the screen.
It should work immediately.
| Keycloak | 59,018,604 | 20 |
The role uma_authorization is apparently created by default in Keycloak. What is this role? Can I safely delete it?
| UMA - User-Managed Access.
Keycloak docs.
| Keycloak | 65,941,530 | 20 |
Simple question:
Why does the following code work... (it returns the access token just fine)
curl --data "grant_type=client_credentials&client_id=synchronization_tool&client_secret=8f6a6e73-66ca-4f8f-1234-ab909147f1cf" http://localhost:8080/auth/realms/master/protocol/openid-connect/token
And this one doesn't?
curl -d '{"grant_type":"client_credentials","client_secret":"8f6a6e73-66ca-4f8f-1234-ab909147f1cf","client_id":"synchronization_tool"}' http://localhost:8080/auth/realms/master/protocol/openid-connect/token -H "Content-Type: application/json"
It gives me:
{"error":"invalid_request","error_description":"Missing form parameter: grant_type"}
Aren't they supposed to be two completely analogous requests?
| curl -d 'client_id=xxx' -d 'username=xxx' -d 'password=xxx' -d 'grant_type=password' \
  'http://localhost:8080/auth/realms/YOUR_REALM_NAME/protocol/openid-connect/token' | \
  python -m json.tool
This works for me, and it will give you the access_token and session_token. The token endpoint only accepts an application/x-www-form-urlencoded body, not JSON, which is why the JSON variant fails with "Missing form parameter: grant_type".
| Keycloak | 50,256,433 | 19 |
I'm having a few minor issues with role-based authorization in .NET Core 2.2.3 with Keycloak 4.5.0.
In Keycloak, I've defined a role 'tester' and a client role 'developer', with appropriate role mappings for an 'admin' user. After authenticating to Keycloak, if I look at the JWT in jwt.io, I can see the following:
{
  "realm_access": {
    "roles": [
      "tester"
    ]
  },
  "resource_access": {
    "template": {
      "roles": [
        "developer"
      ]
    },
    ...
  },
  ...
}
In .NET Core, I've tried a bunch of things, such as adding [Authorize(Roles = "tester")] or [Authorize(Roles = "developer")] to my controller method, as well as using policy-based authorization where I check context.User.IsInRole("tester") inside my AuthorizationHandler<TRequirement> implementation.
If I set some breakpoints in the auth handler, I can see the 'tester' and 'developer' roles listed as items in the context.User.Claims IEnumerable as follows:
{realm_access: {"roles":["tester"]}}
{resource_access: {"template":{"roles":["developer"]}}}
So I should be able to do the authorization in the auth handler by verifying the values for realm_access and resource_access in the context.User.Claims collection, but this would require me to deserialize the claim values, which just seem to be JSON strings.
I'm thinking there has to be a better way, or I'm not doing something quite right.
| "AspNetCore.Authorization" expects roles in a claim (field) named "roles". And this claim must be an array of string (multivalued). You need to make some configuration on Keycloak side.
The 1st alternative:
You can change the existing role path.
Go to your Keycloak Admin Console > Client Scopes > roles > Mappers > client roles
Change "Token Claim Name" as "roles"
Multivalued: True
Add to access token: True
The 2nd alternative:
If you don't want to touch the existing path, you can create a new Mapper to show the same roles at the root as well.
Go to your Keycloak Admin Console > Client Scopes > roles > Mappers > create
Name: "root client roles" (or whatever you want)
Mapper Type: "User Client Role"
Multivalued: True
Token Claim Name: "roles"
Add to access token: True
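Alternatively, if you would rather leave the Keycloak mappers untouched, you can tell the ASP.NET Core JwtBearer handler which claim carries the roles. A minimal sketch, assuming the claim is named "roles" as configured above (the authority URL is a placeholder):
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://<host>/auth/realms/<realm>"; // placeholder
        options.TokenValidationParameters = new TokenValidationParameters
        {
            // read roles from the flat "roles" claim instead of the default ClaimTypes.Role
            RoleClaimType = "roles"
        };
    });
Note this only helps once the roles are emitted as a flat multivalued claim; it will not parse the nested realm_access/resource_access JSON on its own.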
| Keycloak | 56,327,794 | 19 |
I'm upgrading my Keycloak from 16 to 20.
In 16 I could use this screen to add a custom mapper.
In 20.0.2 I can't find this form in the admin panel. There is a Client scopes tab for each client, and it has an add button, but it does not allow me to define my custom mapper - it just adds predefined mappers.
Where is that form?
How can I add custom mapper to a client in Keycloak 20.0.2?
| IMO the old UI was a bit more intuitive in this regard. With the new one you need to:
1. Go to your Realm.
2. Go to Clients and click on your client.
3. Switch to the 'Client Scopes' tab.
4. In the 'Assigned client scope' list, click on your <client-id>-dedicated scope.
5. On the dedicated scope's page, click 'Configure a new mapper' and select the mapper type you need (e.g. 'User Attribute') - that opens the same mapper configuration form you had in Keycloak 16.
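If you prefer not to click through the UI at all, the same mapper can also be created via the admin REST API. A sketch, assuming a built-in mapper type such as the user-attribute mapper (host, realm, the client's internal UUID, and the attribute/claim names are placeholders; the Quarkus distribution has no legacy /auth path prefix):
curl -X POST \
  -H 'Authorization: Bearer <ADMIN_ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "my-custom-mapper",
        "protocol": "openid-connect",
        "protocolMapper": "oidc-usermodel-attribute-mapper",
        "config": {
          "user.attribute": "myAttribute",
          "claim.name": "myClaim",
          "jsonType.label": "String",
          "access.token.claim": "true"
        }
      }' \
  'https://<HOST>/admin/realms/<REALM>/clients/<CLIENT_UUID>/protocol-mappers/models'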
| Keycloak | 74,872,231 | 19 |
I'm trying to test the Keycloak REST API.
I installed version 2.1.0.Final.
I can access the admin console through the browser with SSL without problems.
I'm using the code below:
Keycloak keycloakClient = KeycloakBuilder.builder()
.serverUrl("https://keycloak.intra.rps.com.br/auth")
.realm("testrealm")
.username("development")
.password("development")
.clientId("admin-cli")
.resteasyClient(new ResteasyClientBuilder().connectionPoolSize(10).build())
.build();
List<RealmRepresentation> rr = keycloakClient.realms().findAll();
And got the error:
javax.ws.rs.ProcessingException: RESTEASY003145: Unable to find a MessageBodyReader of content-type application/json and type class org.keycloak.representations.AccessTokenResponse
javax.ws.rs.client.ResponseProcessingException: javax.ws.rs.ProcessingException: RESTEASY003145: Unable to find a MessageBodyReader of content-type application/json and type class org.keycloak.representations.AccessTokenResponse
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.extractResult(ClientInvocation.java:141)
at org.jboss.resteasy.client.jaxrs.internal.proxy.extractors.BodyEntityExtractor.extractEntity(BodyEntityExtractor.java:60)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:104)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:76)
at com.sun.proxy.$Proxy20.grantToken(Unknown Source)
at org.keycloak.admin.client.token.TokenManager.grantToken(TokenManager.java:85)
at org.keycloak.admin.client.token.TokenManager.getAccessToken(TokenManager.java:65)
at org.keycloak.admin.client.token.TokenManager.getAccessTokenString(TokenManager.java:60)
at org.keycloak.admin.client.resource.BearerAuthFilter.filter(BearerAuthFilter.java:52)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:413)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:102)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:76)
at com.sun.proxy.$Proxy22.findAll(Unknown Source)
at br.com.rps.itsm.sd.SgpKeycloakClient.doGet(SgpKeycloakClient.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.ws.rs.ProcessingException: RESTEASY003145: Unable to find a MessageBodyReader of content-type application/json and type class org.keycloak.representations.AccessTokenResponse
at org.jboss.resteasy.core.interception.ClientReaderInterceptorContext.throwReaderNotFound(ClientReaderInterceptorContext.java:42)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.getReader(AbstractReaderInterceptorContext.java:75)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:52)
at org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.aroundReadFrom(GZIPDecodingInterceptor.java:59)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:55)
at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readFrom(ClientResponse.java:251)
at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readEntity(ClientResponse.java:181)
at org.jboss.resteasy.specimpl.BuiltResponse.readEntity(BuiltResponse.java:213)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.extractResult(ClientInvocation.java:105)
I added the dependencies below, but they do not solve my problem:
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<version>3.0.19.Final</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jackson-provider</artifactId>
<version>3.0.19.Final</version>
</dependency>
Any clues?
| I used this dependency to fix the issue:
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jackson2-provider</artifactId>
<version>3.1.0.Final</version>
</dependency>
| Keycloak | 39,861,900 | 18 |
| I am trying to create a user in Keycloak programmatically, but I am getting 403 as a status code. I am following the link below.
https://technology.first8.nl/programmatically-adding-users-in-keycloak/
Can anyone help me? Thanks In advance
I am using the following code to create a user:
Keycloak kc = Keycloak.getInstance(
"http://{server name}:8080/auth",
"{realm name}", // the realm to log in to
"{useraname}",
"{password}", // the user
"{client id}",
"{client secret key}");
CredentialRepresentation credential = new CredentialRepresentation();
credential.setType(CredentialRepresentation.PASSWORD);
credential.setValue("test123");
UserRepresentation user = new UserRepresentation();
user.setUsername("codeuser");
user.setFirstName("sampleuser1");
user.setLastName("password");
user.setCredentials(Arrays.asList(credential));
user.setEnabled(true);
Response result = kc.realm("{realm name}").users().create(user);
response.status is coming as 403
| I faced the same issue. This is how I fixed it:
Create a role that includes at least the manage-users role from the realm-management client, and assign it to the user (or service account) you authenticate with.
UI update for server 9.0.2:
Go to your client's Scope tab and add the role to your Realm Roles.
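Once the role is in place, the create call from your code should return 201. A small sketch for checking the result and extracting the new user's id from the Location header (standard JAX-RS Response API; the path parsing is just one common approach):
Response result = kc.realm("myrealm").users().create(user);
if (result.getStatus() == 201) {
    // the Location header ends with the new user's id
    String location = result.getHeaderString("Location");
    String userId = location.substring(location.lastIndexOf('/') + 1);
    System.out.println("Created user " + userId);
} else {
    System.out.println("Create failed: HTTP " + result.getStatus());
}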
| Keycloak | 49,511,606 | 18 |
I've been googling for a while to find documentation of all the available "variables" I can use in the various Keycloak templates.
By variable I mean all the ${xxx.yyy} things I can use to inject dynamic values into a template.
Through the documentation I can find some of them here and there (like ${user.attributes} or ${url.resourcesPath}), but are there others?
Does anyone have a reference link ?
Many Thanks
| You can look for the template providers in Keycloak's code.
All the templates are "ftl" files filled with a map called "attributes". Keycloak has a couple of classes which fill those templates with Beans depending on the page or action as CharlyP mentioned. For example:
FreeMarkerEmailTemplateProvider class fills the email templates.
FreeMarkerLoginFormsProvider class fills the login templates.
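As a small illustration of how such variables are consumed, here is a sketch of FreeMarker usage inside a theme template (the CSS path and attribute name are made up, and which beans are actually available depends on the template type):
<link rel="stylesheet" href="${url.resourcesPath}/css/custom.css">
<#-- guard against missing beans/attributes with FreeMarker's ?? operator -->
<#if user?? && user.attributes.department??>
    <p>Department: ${user.attributes.department}</p>
</#if>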
| Keycloak | 50,562,384 | 18 |
I'm kind of desperate to make this Keycloak setup work. I can authenticate, but for some reason my token introspection always fails.
For example if I try to authenticate:
curl -d 'client_id=flask_api' -d 'client_secret=98594477-af85-48d8-9d95-f3aa954e5492' -d '[email protected]' -d 'password=superpassE0' -d 'grant_type=password' 'http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token'
I get my access_token as expected:
{
"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA",
"expires_in":300,
"refresh_expires_in":1800,
"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJlYmY4ZDVlOC01MTM4LTRiNTUtYmZhNC02YzcwMzBkMTIwM2YifQ.eyJqdGkiOiI3NWQ1ODgyMS01NzJkLTQ1NDgtOWQwYS0wM2Q3MGViYWE4NGEiLCJleHAiOjE1NDQ1MjM3NDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6Imh0dHA6Ly9rZXljbG9hay5kZXYubG9jYWw6OTAwMC9hdXRoL3JlYWxtcy9za2lsbHRyb2NrIiwic3ViIjoiNzQwNDFjZDUtZWQwYS00MzJmLWE1NzgtMGM4YTEyMWU3ZmUwIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImZsYXNrX2FwaSIsImF1dGhfdGltZSI6MCwic2Vzc2lvbl9zdGF0ZSI6ImI4YjQzMDY3LWU3OWUtNDFmYi1iY2RiLTExOGIxNTY5ZTdkMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIn0.omhube2oe79dXlcChOD9AFRdUep53kKPjD0HF14QioY",
"token_type":"bearer",
"not-before-policy":0,
"session_state":"b8b43067-e79e-41fb-bcdb-118b1569e7d1",
"scope":"email profile"
}
But if I try to introspect the access_token as shown below, Keycloak always returns {"active":false}. I really don't understand this behavior.
curl -X POST -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA" http://localhost:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
return
{"active":false}
Where I am wrong? I'm totally lost
| You need to make sure that you introspect the token using the same DNS hostname/port that was used to obtain it. Keycloak compares the URL of the introspection request against the token's issuer (iss claim), so requesting the token via keycloak.dev.local:9000 and then introspecting via localhost:9000 makes the token look inactive. Unfortunately that's a not-widely-documented "feature" of Keycloak...
So use:
curl -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=<token>" http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
| Keycloak | 53,721,588 | 18 |
I'm using Keycloak to secure my REST API. I followed this tutorial on PROGRAMMATICALLY ADDING USERS, but I get this error message:
ERROR [io.undertow.request] (default task-9) UT005023: Exception handling request to /service/secured: org.jboss.resteasy.spi.UnhandledException: javax.ws.rs.client.ResponseProcessingException: javax.ws.rs.ProcessingException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "access_token" (class org.keycloak.representations.AccessTokenResponse), not marked as ignorable (9 known properties: "notBeforePolicy", "otherClaims", "tokenType", "token", "expiresIn", "sessionState", "refreshExpiresIn", "idToken", "refreshToken"])
at [Source: org.apache.http.conn.EofSensorInputStream@9d6aba2; line: 1, column: 18] (through reference chain: org.keycloak.representations.AccessTokenResponse["access_token"])
at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.keycloak.adapters.undertow.UndertowAuthenticatedActionsHandler.handleRequest(UndertowAuthenticatedActionsHandler.java:66)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.keycloak.adapters.undertow.ServletPreAuthActionsHandler.handleRequest(ServletPreAuthActionsHandler.java:69)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: javax.ws.rs.client.ResponseProcessingException: javax.ws.rs.ProcessingException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "access_token" (class org.keycloak.representations.AccessTokenResponse), not marked as ignorable (9 known properties: "notBeforePolicy", "otherClaims", "tokenType", "token", "expiresIn", "sessionState", "refreshExpiresIn", "idToken", "refreshToken"])
at [Source: org.apache.http.conn.EofSensorInputStream@9d6aba2; line: 1, column: 18] (through reference chain: org.keycloak.representations.AccessTokenResponse["access_token"])
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.extractResult(ClientInvocation.java:140)
at org.jboss.resteasy.client.jaxrs.internal.proxy.extractors.BodyEntityExtractor.extractEntity(BodyEntityExtractor.java:58)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:104)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:62)
at com.sun.proxy.$Proxy93.grantToken(Unknown Source)
at org.keycloak.admin.client.token.TokenManager.grantToken(TokenManager.java:59)
at org.keycloak.admin.client.token.TokenManager.getAccessToken(TokenManager.java:36)
at org.keycloak.admin.client.token.TokenManager.getAccessTokenString(TokenManager.java:31)
at org.keycloak.admin.client.resource.BearerAuthFilter.filter(BearerAuthFilter.java:31)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:384)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:102)
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:62)
at com.sun.proxy.$Proxy92.create(Unknown Source)
at org.keycloak.quickstart.jaxrs.Resource.getSecured(Resource.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
I searched the net and found this solution, but it doesn't work. I also found a discussion (I need more reputation to post the link: http://lists.jboss.org/pipermail/keycloak-user/2014-November/001120.html) that explains the cause of the problem, but I can't find a solution.
Can anyone help?
| The Keycloak Admin Client (starting from version 1.9.0) supports the resteasy-jackson2-provider JSON provider out of the box, and this resolves the issue.
Use it in place of resteasy-jackson-provider in your pom.
| Keycloak | 37,786,756 | 17 |
I've seen those two post that give a solution to this question but they do not provide detailed enough informations about how to do it for non Java developer like me:
Keycloak add extra claims from database / external source
How to register a custom ProtocolMapper in Keycloak?
Here is a recap of their solutions that could help others if filled with more details.
Process expected from 1st link
User logs in
My custom protocol mapper gets called, where I overwrite the transformAccessToken method
Here I log in to Keycloak as a service, from the client that hosts the protocol mapper. Don't forget to use a different client ID than the one you're building the protocol mapper for - otherwise you'll enter an endless recursion.
I get the access token into the protocol mapper and I call the rest endpoint of my application to grab the extra claims, which is
secured.
Get the info returned by the endpoint and add it as extra claims
Steps to achieve it from 2nd link
Implement the ProtocolMapper interface and add the file
"META-INF/services/org.keycloak.protocol.ProtocolMapper" containing the reference to the class.
At this point Keycloak recognizes the new implementation. And you
should be able to configure it via the admin console.
To add some data to the token add the following interfaces
org.keycloak.protocol.oidc.mappers.OIDCAccessTokenMapper
and implement the methods according to the interface
Then add the file "META-INF/jboss-deployment-structure.xml" with the
following content
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
<deployment>
<dependencies>
<module name="org.keycloak.keycloak-services"/>
</dependencies>
</deployment>
</jboss-deployment-structure>
And after doing all this the custom transformAccessToken() method is called
on every request to URL
http://:/auth/realms/testrealm/protocol/openid-connect/token
After reading this I have a few questions :
How do you ”Implement the ProtocolMapper”
Where do you add the files mentionned earlier ? ( can't see any META-INF/ directory in my Keycloak installation folder )
How and where do you ”add the following interfaces”
What does the custom transformAccessToken() looks like
Thank you all for your time.
Let me know if I mis-summarised their answers.
Edit :
I'm starting a bounty with the hope that someone will be able to give me detailed steps on how to add extra claims from a database in Keycloak 3.4.3 (detailed enough for a non-Java dev).
Edit 2
A method descibed here could do the trick but lack details.
Keycloak create a custom identity provider mapper
| I hope this step by step guide helps you
I'm using Keycloak 4.5.0 - because I have this newer version installed - but it should not make a big difference. And I implemented an OIDCProtocolMapper in the example.
Just to summarize it - for a quick overview for others - each step is described in more detail later:
You implement a CustomProtocolMapper class based on
AbstractOIDCProtocolMapper
META-INF/services File with the
name org.keycloak.protocol.ProtocolMapper must be available and
contains the name of your mapper
jboss-deployment-structure.xml need to be available to use
keycloak built in classes
Jar File is deployed in
/opt/jboss/keycloak/standalone/deployments/
Okay now more details :-)
Create your custom Mapper
I uploaded my Maven pom.xml (pom) - just import it into your IDE and all the dependencies should be loaded automatically. The dependencies use provided scope and will later be supplied by Keycloak directly at runtime.
Now i created a custom Protocol Mapper Class called CustomOIDCProtocolMapper. Find "full" code here
It should extend AbstractOIDCProtocolMapper and needs to implement all abstract methods. Maybe you want a SAML protocol mapper instead - then it's another base class (AbstractSAMLProtocolMapper).
One relevant method is transformAccessToken - here I set an additional claim on the AccessToken. Your own logic goes here - it depends on your database, etc. ;-)
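To make the shape of such a class concrete, here is a minimal sketch of what the mapper can look like against the Keycloak 4.5 API (method signatures differ slightly between Keycloak versions, so treat this as a starting point rather than copy-paste):
import java.util.ArrayList;
import java.util.List;

import org.keycloak.models.ClientSessionContext;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.ProtocolMapperModel;
import org.keycloak.models.UserSessionModel;
import org.keycloak.protocol.oidc.mappers.AbstractOIDCProtocolMapper;
import org.keycloak.protocol.oidc.mappers.OIDCAccessTokenMapper;
import org.keycloak.provider.ProviderConfigProperty;
import org.keycloak.representations.AccessToken;

public class CustomOIDCProtocolMapper extends AbstractOIDCProtocolMapper
        implements OIDCAccessTokenMapper {

    public static final String PROVIDER_ID = "oidc-customprotocolmapper";

    @Override
    public AccessToken transformAccessToken(AccessToken token, ProtocolMapperModel mappingModel,
            KeycloakSession session, UserSessionModel userSession,
            ClientSessionContext clientSessionCtx) {
        // your own logic goes here: look up the user's extra claims
        // (database query, REST call, ...) and put them into the token
        token.getOtherClaims().put("stackoverflowCustomToken", "hello world");
        return token;
    }

    @Override
    public String getDisplayCategory() {
        return TOKEN_MAPPER_CATEGORY; // shows up under the token mapper category in the admin UI
    }

    @Override
    public String getDisplayType() {
        return "Stackoverflow Custom Protocol Mapper"; // name shown in the mapper dropdown
    }

    @Override
    public String getHelpText() {
        return "Adds a custom claim to the access token";
    }

    @Override
    public List<ProviderConfigProperty> getConfigProperties() {
        return new ArrayList<>(); // no configurable options in this sketch
    }

    @Override
    public String getId() {
        return PROVIDER_ID; // must match the line in the services file
    }
}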
Services File
The services File is important for keycloak to find your custom-Implementation
Place a file with the fileName org.keycloak.protocol.ProtocolMapper inside \src\main\resources\META-INF\services\
Inside this file you write the name of your custom provider class, so Keycloak knows it is available as a protocol mapper.
In my example the file content is just one line:
com.stackoverflow.keycloak.custom.CustomOIDCProtocolMapper
Deployment Structure XML
In your custom mapper you use classes from Keycloak. In order to use them, we need to inform JBoss about this dependency.
Therefore create a file jboss-deployment-structure.xml inside \src\main\resources\META-INF\
Content:
<jboss-deployment-structure>
<deployment>
<dependencies>
<module name="org.keycloak.keycloak-services" />
</dependencies>
</deployment>
</jboss-deployment-structure>
Build and deploy your Extension
Build a jar File of your Extension (mvn clean package) - and place the jar in /opt/jboss/keycloak/standalone/deployments/ and restart keycloak
In the logfile you should see when it's deployed and (hopefully no) error messages
Now you can use your mapper - in my example I can create a mapper in the Keycloak admin UI and select 'Stackoverflow Custom Protocol Mapper' from the dropdown.
Just as info - this is not fully officially supported by Keycloak, so the interfaces could possibly change in later versions.
I hope it's understandable and you will be able to succesfully implement your own mapper
EDIT:
Exported eclipse file structure zip
| Keycloak | 53,089,776 | 17 |
How to get client secret via Keycloak API?
In documentation I see:
GET /admin/realms/{realm}/clients/{id}/client-secret
My code is the following:
data = {
"grant_type" : 'password',
"client_id" : 'myclientid',
"username" : 'myusername',
"password" : 'mypassword'
}
response = requests.get("https://mylink.com/auth/admin/realms/{myrealm}/clients/{myclientid}/client-secret", data=data, headers= {"Content-Type": "application/json"})
I always get a 401 error.
What am I doing wrong?
| You can't get a client_secret for public clients. Your client should have 'Access Type' = 'confidential':
Go to CLIENTS section of your realm admin panel (<protocol>://<host>:<port>/auth/admin/master/console/#/realms/<your realm>/clients/<your client code>)
Change Access Type to confidential
Press 'SAVE'
Go to the "Credentials" tab
Make sure that 'Client Authenticator' = 'Client Id and Secret'
Voila! Your client secret is now shown on the Credentials tab.
UPD P.S. Retrieving the client_secret via the API is possible through another client (one that has a role allowing it to view client info).
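For completeness, once the client is confidential, the flow from the question looks like this (note that {id} in the documented URL is the client's internal UUID as returned by /admin/realms/{realm}/clients, not the clientId; host, realm and credentials below are placeholders):
# 1) get an admin access token
curl -d 'client_id=admin-cli' -d 'grant_type=password' \
     -d 'username=<ADMIN>' -d 'password=<PASSWORD>' \
     'https://<HOST>/auth/realms/master/protocol/openid-connect/token'

# 2) read the secret, using the client's UUID
curl -H 'Authorization: Bearer <ACCESS_TOKEN>' \
     'https://<HOST>/auth/admin/realms/<REALM>/clients/<CLIENT_UUID>/client-secret'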
| Keycloak | 53,538,100 | 17 |
Hi I'm trying to use the Keycloak API but I don't understand very well how it works. I want to obtain all the users of a realm. So I first obtain a token using this endpoint: /realms/master/protocol/openid-connect/token with this params in the request body:
client_id
grant_type
username
password
client_secret
The first question is: What client should I use?
Then I call this endpoint: /admin/realms/master/users with the token in the Authorization header, but I get a 403 status code and I don't understand why.
Thanks
| You need two steps
first get an access token from the admin-cli client of the master realm
second call the admin rest api with the access token, set Bearer as prefix in the
Authorization header.
# get an access token
curl -X POST \
https://<HOST>/auth/realms/master/protocol/openid-connect/token \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'cache-control: no-cache' \
-d 'grant_type=password&username=<USERNAME>&password=<PASSWORD>&client_id=admin-cli'
# get all users of gateway realm, use the token from above and use Bearer as prefix
curl -X GET \
https://<HOST>/auth/admin/realms/gateway/users \
-H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkI...' \
-H 'cache-control: no-cache'
| Keycloak | 55,535,440 | 17 |
When a user is created through the Keycloak admin console, is there a way to notify the user via email that a profile has been created and that they can finish registration by following a link?
Currently, the user only gets an email about the profile being created if a password was set after creation - and only after an initial login attempt. But for that login attempt, the user would already need to know the password that was set.
| I accomplished the same thing by customizing the keycloak theme templates for the emails sent and for the login pages. Here's the Keycloak docs on how to customize the themes.
Here's the specifics of how I did it:
First, I customized the executeActions.ftl email template to be pretty and say "welcome to our application, click the link below to finish setting up your account". I continued to use the link and link expiration note from the default base template. You can see the default base template at https://github.com/keycloak/keycloak/blob/master/themes/src/main/resources/theme/base/email/html/executeActions.ftl
Second, we decided what standard keycloak actions would be "required" for new users. We decided that to finish registration, users would be required to do these actions:
Accept Terms and Condition
Enter their full name (Update their profile)
Setup a new password
Third, we setup our Keycloak realm to require all users go through the 3 steps.
In the Keycloak admin console, we set these up as "Required" actions (under Configure-->Authentication-->Required Actions), marking the "Terms and Conditions", "Update Profile" and "Update Password" actions as "Enabled" and "Default Action". We also put these actions in the exact order that we wanted them to appear in the "account setup" process that the user would go through screen by screen. For the other actions, we unchecked Default Action.
Fourth, I customized the following keycloak login templates that render the account setup pages. The keycloak-generated link that was embedded in the executeActions email (from step 1) will take the user to these "account setup" web pages:
info.ftl - The default is here. After clicking the link in the welcome email, the user ends up on a page generated by this template. This page usually renders web pages that display generic informational messages of all kinds,
but it also renders the FIRST and LAST page of the account setup process. So I modified it to check to see if the message.summary matched the first step or last step of the account setup process. If it was the first step, I'd render 'welcome' text on the page. If it was the last step, I'd render something like 'your account has been setup. Click here to login'. See below for how I modified info.ftl.
<!-- info.ftl -->
<#import "template.ftl" as layout>
<@layout.registrationLayout displayMessage=false; section>
<#if section = "header">
<#if messageHeader??>
${messageHeader}
<#else>
<#if message.summary == msg('confirmExecutionOfActions')>
${kcSanitize(msg('welcomeToOurApplication'))?no_esc}
<#elseif message.summary == msg('accountUpdatedMessage')>
${kcSanitize(msg('accountSuccessfullySetup'))?no_esc}
<#else>
${message.summary}
</#if>
</#if>
<#elseif section = "form">
<div id="kc-info-message">
<div class="kc-info-wrapper">
<#if message.summary == msg('confirmExecutionOfActions')>
${kcSanitize(msg('startSettingUpAccount'))?no_esc}
<#elseif message.summary == msg('accountUpdatedMessage')>
${kcSanitize(msg('accountIsReadyPleaseLogin'))?no_esc}
<#else>
${message.summary}
</#if>
</div>
<#if pageRedirectUri??>
... <!-- Omitted the rest because it's the same as the base template -->
I also customized the following templates, that correspond to steps in the account setup process.
terms.ftl - shows terms & conditions step
login-update-profile.ftl - shows the step where the user needs to enter/update his/her full name
login-updated-password.ftl - prompts user to change password.
Fifth, when the administrator creates a new user, he/she triggers the welcome email being sent to the user:
- In the Keycloak admin console, once you "Add" a new user, go to that user's "Credentials" tab, and under Credential Reset select the account setup actions required under "Reset Actions" and then click the "Send email" button.
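If you want to trigger the same email programmatically instead of through the console, the admin REST API exposes this as the execute-actions-email endpoint. A sketch (host, realm, user id and token are placeholders; the body lists the required-action aliases to include in the link):
curl -X PUT \
  -H 'Authorization: Bearer <ADMIN_ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '["UPDATE_PASSWORD","UPDATE_PROFILE"]' \
  'https://<HOST>/auth/admin/realms/<REALM>/users/<USER_ID>/execute-actions-email'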
Anyway, I hope this helps. I remember it taking me a little while to figure out because it is not a standard flow within keycloak.
| Keycloak | 57,393,543 | 17 |
I am trying to determine whether the JwtBearer service provided for .NET Core 3.0 actually uses the asymmetric signing key that is provided by my OIDC provider's well-known configuration.
I can't find any documentation around this.
.AddJwtBearer(opt =>
{
opt.Authority = "http://localhost:8180/auth/realms/master";
opt.TokenValidationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters
{
ValidateIssuer = true,
ValidateAudience = false,
ValidateLifetime = true,
ValidateIssuerSigningKey = true
};
I am using Keycloak 4.8.3 as my oidc provider. The closest documentation I could find was here. https://developer.okta.com/blog/2018/03/23/token-authentication-aspnetcore-complete-guide
The relevant piece is here:
If you let the JwtBearer middleware auto-configure via the discovery document, this all works automatically!
Did that above code do all that? Is this still relevant in 3.0 since we don't register the middleware anymore??
I bet a lot of people don't know about asymmetric signing keys and why they are so important. We have abstracted away so much from the developer that now I don't even know if my API is secure.
So the final question is:
Does the .AddJwtBearer service with "ValidateIssuerSigningKey" periodically check the well-known discovery document to grab the latest asymmetric signing key?
| I was wondering same - research/debugging showed that JwtBearer indeed trying to contact authority to get public key.
Here is function called during validation :
// System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler.cs
protected virtual SecurityKey ResolveIssuerSigningKey(string token, JwtSecurityToken jwtToken, TokenValidationParameters validationParameters)
{
if (validationParameters == null)
throw LogHelper.LogArgumentNullException(nameof(validationParameters));
if (jwtToken == null)
throw LogHelper.LogArgumentNullException(nameof(jwtToken));
return JwtTokenUtilities.FindKeyMatch(jwtToken.Header.Kid, jwtToken.Header.X5t, validationParameters.IssuerSigningKey, validationParameters.IssuerSigningKeys);
}
Obviously, this logic to contact the authority for the public key is only invoked when you set the OAuth authority in your configuration:
.AddJwtBearer(opt => {
opt.Authority = "https://authorityUri/";
});
The AddJwtBearer middleware handler internally appends the ".well-known/openid-configuration" string to opt.Authority and tries to fetch the JSON with the details of the authority server. (Google example: https://accounts.google.com/.well-known/openid-configuration).
The next step is to get the jwks_uri (in the case of Google, https://www.googleapis.com/oauth2/v3/certs) and fetch the JWKS file, which holds the data used for signature validation (public key, algorithm, initial vector).
After all these steps, JwtBearer validates the token signature.
Just for info - JwtBearer can validate a token without an authority if you configure it with your own signing key, like this:
.AddJwtBearer(opt => {
    opt.TokenValidationParameters.IssuerSigningKey = GetKey();
    // in this case you need to provide a valid audience or disable its validation
    opt.TokenValidationParameters.ValidateAudience = false;
    // in this case you need to provide a valid issuer or disable its validation
    opt.TokenValidationParameters.ValidateIssuer = false;
})
Microsoft.IdentityModel.Tokens.SecurityKey GetKey() {
    string key = "Secret_Pass";
    return new SymmetricSecurityKey(Encoding.UTF8.GetBytes(key));
}
In this case you need to either provide the issuer and audience or disable their validation.
This configuration can be used for B2B cases - server-to-server communication - when you don't have an OAuth server and issue tokens yourself using a shared secret.
For the full picture look at this configuration - both authority and issuer key set:
.AddJwtBearer(opt => {
opt.Authority = "https://authorityUri/";
opt.TokenValidationParameters.IssuerSigningKey = GetKey();
});
In this case the authority will not be contacted and your locally configured key will be used to validate the token, so TokenValidationParameters.IssuerSigningKey takes priority. That means there is no reason to set Authority here.
| Keycloak | 58,758,198 | 17 |
I cannot import any realms into Keycloak 18.0.0. That's the Quarkus, and not the Wildfly distribution anymore. Documentation here says it should be pretty simple, and by mounting my exported realm.json file into /opt/keycloak/data/import/...json it actually TRIES to import it, but it ends with :
"[org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled".
Known to be removed, and the old -Dkeycloak.profile.feature.upload_scripts=enabled won't work anymore. OK.
But then what's the way to do import any realms on startup? That'd be used to distribute a ready-made local stack without any handcrafting needed to launch. I could do it with running SQL commands, but that's way too hacky to my taste.
Compose file :
cp-keycloak:
image: quay.io/keycloak/keycloak:18.0.0
environment:
KC_DB: mysql
KC_DB_URL: jdbc:mysql://cp-keycloak-database:3306/keycloak
KC_DB_USERNAME: root
KC_DB_PASSWORD: root
KC_HOSTNAME: localhost
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
ports:
- 8082:8080
volumes:
- ./data/local_stack/init.keycloak.json:/opt/keycloak/data/import/main-realm.json:ro
entrypoint: "/opt/keycloak/bin/kc.sh start-dev --import-realm"
The output :
cp-keycloak_1 | 2022-05-05 14:07:26,801 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
cp-keycloak_1 | 2022-05-05 14:07:26,802 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to import realm: Main-Realm
cp-keycloak_1 | 2022-05-05 14:07:26,803 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled
Thanks
| This is likely caused by references inside your realm .json to configuration that uses the deprecated script-upload feature.
Try removing those references, export the JSON again, and then retry the import (this time without the upload-script feature).
From the comments (credits to jfrantzius):
See here for what you either need to remove or replace in your
realm-export.json:
https://github.com/keycloak/keycloak/issues/11664#issuecomment-1111062102
. We had to replace the entries, see also here
https://github.com/keycloak/keycloak/discussions/12041#discussioncomment-2768768
| Keycloak | 72,128,765 | 17 |
I am using Keycloak server to implement SSO. I am able to get access token for a specific client using client_credentials flow.
However, my observation is that the access token is granted for the internal service account of the client. I would like to get access tokens for other users present in the realm by providing some additional parameter to the token endpoint.
Below is the current request I make to token endpoint using Postman Chrome extension:
POST http://localhost:8080/auth/realms/<realm>/protocol/<protocol>/token
x-www-form-urlencoded
grant_type client_credentials
client_id <client_id>
client_secret <client_secret>
Please let me know if this is possible. Also, I'd like to mention that I am totally new to Keycloak and the OpenID Connect protocol.
| I think you're misunderstanding some Oauth concepts right here. The client_credentials grant should only be used for a service itself to grant access to an specific resource. Imagine this scenario:
End User -> Docs Service -> Docs Repo
The end user has access to some docs stored in the repo through the docs service. In this case, the service makes the decision to grant the user access to a specific document or not, since the repo is a mere content server. Obviously, both of them are secured through two different keycloak clients.
However, the docs service needs to have full access to the repo. He can access any document he requests. The solution is to give the docs service a service account role, let's say DOC_MANAGER and make the repo check for this role when a resource is requested. The service authenticates with client_credentials and gets access to the resource as a service.
But the end user will perform a standard login, using the Authorization code flow, for example, and get access to the doc through the service. The service will check for another role, let's say DOC_USER and check whether the user has access to this concrete resource or not, before going to the repo.
You can read more about keycloak service accounts here.
| Keycloak | 41,756,879 | 16 |
I have a single-page application built using AngularJS and integrated with Keycloak for authentication and authorization.
I am able to log in to my application, get the logged-in user's roles, etc. But the moment the refresh-token call runs, it always ends up in my else case and the user is logged out of the application, even though the token validity time is set very high.
I need to update the token while the user has the app open; in case of failure or an expired token I need to log the user out. if (refreshed) always returns false.
Below is the piece of code I am using:
var __env = {};
Object.assign(__env, window.__env);
var keycloakConfig = {
"url" : __env.keycloakUrl,
"realm" : __env.keycloakRealm,
"clientId" : __env.keycloakClientId,
"credentials" : {
"secret" : __env.keycloakSecret
}
};
var keycloak = Keycloak(keycloakConfig);
keycloak.init({
onLoad : 'login-required'
}).success(function(authenticated) {
if(authenticated){
keycloak.loadUserInfo().success(function(userInfo) {
bootstrapAngular(keycloak, userInfo, roles);
});
}
});
function bootstrapAngular(keycloak, userInfo, roles) {
angular.module('myApp').run(
function($rootScope, $http, $interval, $cookies) {
var updateTokenInterval = $interval(function() {
// refresh token if it's valid for less then 15 minutes
keycloak.updateToken(15).success(
function(refreshed) {
if (refreshed) {
$cookies.put('X-Authorization-Token',
keycloak.token);
}else{
$rootScope.logoutApp();
}
});
}, 600000);
updateTokenInterval;
$cookies.put('X-Authorization-Token', keycloak.token);
$rootScope.logoutApp = function() {
$cookies.remove('X-Authorization-Token');
$interval.cancel(updateTokenInterval);
keycloak.logout();
};
}
}
| I couldn't find it explained in the API docs, but the timeout argument of the keycloak.updateToken() function is expressed in seconds, not minutes.
So if the Access Token Lifespan on the server is at the default value of 5 minutes, you should use a value less than 300 seconds.
I learned it doing some experiments.
// Update the token when it will expire in less than 3 minutes
keycloak.updateToken(180)
Btw, I suggest you use a lifespan longer than 5 minutes for the token.
In your code you never see the token refreshed because updateToken(15) only refreshes when the token has less than 15 seconds of validity left - with your long token lifespan, the 10-minute interval never fires inside that window, so refreshed stays false and your else branch logs the user out.
Default port of Keycloak used to be on 8080. Now when I am starting keycloak using
./bin/standalone.sh
then it starts on port 9990. // So I guess Keycloak's default port is 9990 nowadays.
But the funny part is that whenever I give an explicit Keycloak port like below:
./bin/standalone.sh -Djboss.socket.binding.port-offset=8080
after this Keycloak starts on port 17101. So weird.
I am struggling to start Keycloak on port 8080. How can I do that?
And one more thing:
surprisingly, something called Undertow is running on port 8080. When I try to start Keycloak, I can see that in the stack trace:
YUT0006: Undertow HTTP listener default listening on 127.0.0.1:8080
| I changed the HTTP port of the Keycloak server (I'm using the 19.0.1 distribution powered by Quarkus) with the following steps:
Open : keycloak-folder/conf/keycloak.conf
Set the port number : http-port=8180
Note that you can also set the port number at the command line :
on Linux/Unix:
$ bin/kc.sh start-dev --http-port=8180
on Windows:
$ bin\kc.bat start-dev --http-port=8180
| Keycloak | 47,508,036 | 16 |
I connected our active directory to keycloak (4.0.0.Beta1) and imported the users - this works fine.
But the username should be filled from sAMAccountName. So i changed the Username LDAP attribute to that.
But after clicking Synchronize all users i am getting this error in the console window:
8:20:13,372 ERROR [org.keycloak.storage.ldap.LDAPStorageProviderFactory] (default task-119) Failed during import user from LDAP: org.keycloak.models.ModelException: User returned from LDAP has null username! Check configuration of your LDA
mappings. Mapped username LDAP attribute: cn, user DN: CN=Mustermann Max,OU=Normung,OU=Mech,OU=Konstruktion,OU=Abteilungen,DC=company,DC=org, attributes from LDAP: {whenChanged=[2017037125253.0Z], whenCreated=[20140520092805.0
], mail=[[email protected]], givenName=[Max], sn=[Mustermann], userAccountControl=[66048], pwdLastSet=[130750516258418527]}
at org.keycloak.storage.ldap.LDAPUtils.getUsername(LDAPUtils.java:113)
at org.keycloak.storage.ldap.LDAPStorageProviderFactory$3.run(LDAPStorageProviderFactory.java:521)
at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
at org.keycloak.storage.ldap.LDAPStorageProviderFactory.importLdapUsers(LDAPStorageProviderFactory.java:514)
at org.keycloak.storage.ldap.LDAPStorageProviderFactory.syncImpl(LDAPStorageProviderFactory.java:469)
at org.keycloak.storage.ldap.LDAPStorageProviderFactory.sync(LDAPStorageProviderFactory.java:407)
...
I tried some mappers (especially username) but with no luck. It seems that there are only a few attributes read from the ldap server (see attributes from LDAP:... in the output).
Namely: whenChanged, whenCreated, mail, givenName, sn, userAccountControl, pwdLastSet.
How can I get the sAMAccountName attribute as the username?
| I have just tested it in 4.1.0.Final and there it works when you change the Username LDAP attribute to sAMAccountName and additionally the
LDAP Attribute in the username mapper also to sAMAccountName.
I tried some mappers (especially username) but with no luck.
Your question suggests that you already tried doing something in the username mappers, so you were definitely on the right track. Either there was a bug in your version, or the two fields didn't match correctly.
| Keycloak | 50,168,751 | 16 |
I'm looking at the new Keycloak Beta 4 API. When I get the user's account information, what is referred to as 'id' in the web UI comes back as 'sub' in the account object.
{ sub: '25a37fd0-d10e-40ca-af6c-821f20e01be8',
name: 'Barrack Obama',
preferred_username: '[email protected]',
given_name: 'Barrack',
family_name: 'Obama',
email: '[email protected]' }
What is 'sub' and is this a safe uuid to map database objects to?
| As per the keycloak documentation
Anatomy of Action Token
Action token is a standard Json Web Token signed with active realm key where the payload contains several fields:
typ - Identification of the action (e.g. verify-email)
iat and exp - Times of token validity
sub - ID of the user
azp - Client name
iss - Issuer - URL of the issuing realm
aud - Audience - list containing URL of the issuing realm
asid - ID of the authentication session (optional)
nonce - Random nonce to guarantee uniqueness of use if the operation can only be executed once (optional)
Please refer the following link https://www.keycloak.org/docs/latest/server_development/index.html#_action_token_anatomy
The reason may be that they want to retain uniqueness in the name. And since sub is defined as the ID of the user, it is the same UUID you see as 'id' in the web UI - a stable identifier that is safe to map database objects to.
| Keycloak | 50,397,205 | 16 |
We've decided to move to KeyCloak for our identity and access management solution, rather than implement it entirely within our Java EE web app. We're creating a multi-tenant solution, and would prefer to create security realms/users/groups programmatically through our workflow, rather than leveraging KeyCloak's self-registration functionality or web UI so that we can do things like grab credit card details for payment, etc. I know that we could likely leverage the admin REST APIs to accomplish this, but I wasn't sure if there was a simpler way to do it besides hand-coding REST calls. Does KeyCloak provide an admin client library that we could use? Or are we stuck implementing a REST client for the admin APIs ourselves?
| I found some info around the Keycloak Java Admin Client. This gist has lots of useful examples showing how to manage users, realms, etc.
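For a quick flavor of the admin client, here is a minimal sketch that authenticates and creates a user, using the same KeycloakBuilder API shown elsewhere in this document (server URL, realm names, and credentials are placeholders):
Keycloak kc = KeycloakBuilder.builder()
        .serverUrl("http://localhost:8080/auth")  // placeholder
        .realm("master")
        .username("admin")                        // placeholder admin credentials
        .password("admin")
        .clientId("admin-cli")
        .build();

UserRepresentation user = new UserRepresentation();
user.setUsername("newuser");
user.setEnabled(true);

// returns a JAX-RS Response; status 201 means the user was created
Response result = kc.realm("myrealm").users().create(user);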
| Keycloak | 51,521,958 | 16 |
I have followed the guide https://ultimatesecurity.pro/post/okta-saml/ to configure Okta SAML with Keycloak. After this configuration, I see an Okta/SAML login button on the login page; clicking on it redirects the user to the Okta login/SSO.
Now, is there a way to avoid clicking this button every time, such that when the Keycloak login page appears, the user is automatically redirected to Okta SSO instead of being shown the Keycloak login form with the Okta redirect button?
If not, is it possible to enter the Okta username and password within the Keycloak form fields and have Keycloak validate them internally against Okta?
This requirement exists because the customer uses only Okta as IdP, does not have any other (like LDAP etc.), and clicking an extra button seems like overhead.
| The solution is pretty simple.
1. Go to Realm -> Configure -> Authentication
2. Go to Flows -> Browser -> Identity Provider Redirector -> Actions -> Config
3. Enter the alias of the SAML/Okta identity provider that you created for your realm
(in my case it was saml-okta-41)
| Keycloak | 51,925,423 | 16 |
I'm trying to securize an Angular app with a Keycloak server. I've followed some tutorials that give more or less the same instructions in order to do so, but I got stuck with the following error:
Timeout when waiting for 3rd party check iframe message.
I start my keycloak server using the following docker-compose configuration:
auth:
container_name: nerthus-tech-auth
image: quay.io/keycloak/keycloak:17.0.1
command: start-dev
environment:
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://database:5432/keycloak
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: str0ngP@ssword
ports:
- 10010:8080
I've created a realm (nerthus) and a client (blog) which is public and has the following configuration:
Root URL: http://localhost:4200
Valid redirect URLs: http://localhost:4200/* (which in my understanding I should be able to abbreviate as *)
Web origins: +
On the Angular app side, I've installed the keycloak-angular and keycloak-js dependencies:
"keycloak-angular": "^9.1.0",
"keycloak-js": "^16.1.1"
I've also registered an initializer for Keycloak:
providers: [
{
provide: APP_INITIALIZER,
useFactory: initializeKeycloak,
multi: true,
deps: [KeycloakService]
}
]
function initializeKeycloak(keycloakService: KeycloakService) {
return () => keycloakService.init({
config: {
url: 'http://localhost:10010',
realm: 'nerthus',
clientId: 'blog'
}
});
}
For that one I also tried to use initOptions.onLoad (with "login-required" and "check-sso"), but that causes the app to require authentication to access any page, which is not the intended behaviour.
I want only the guarded page to require authentication. Hence the guard I set up:
@Injectable({
providedIn: 'root'
})
export class AuthGuard extends KeycloakAuthGuard {
constructor(protected override readonly router: Router,
protected override readonly keycloakAngular: KeycloakService) {
super(router, keycloakAngular);
}
async isAccessAllowed(route: ActivatedRouteSnapshot,
state: RouterStateSnapshot): Promise<boolean | UrlTree> {
if (!this.authenticated) {
await this.keycloakAngular.login({ redirectUri: window.location.origin + state.url });
}
return this.authenticated;
}
}
Surely I'm missing something, but I can't make it work. I tried to be succinct, so if some important information is missing, please ask me.
| My solution is to set checkLoginIframe to false. Below is my configuration:
keycloak.init({
  config: {
    url: 'http://localhost:10010',
    realm: 'nerthus',
    clientId: 'blog'
  },
  initOptions: {
    pkceMethod: 'S256',
    // must match the configured value in Keycloak
    redirectUri: 'http://localhost:4200/your_url',
    // this solves the error
    checkLoginIframe: false
  }
});
Hope it helps. =)
| Keycloak | 72,019,588 | 16 |
I'm trying to get the key from Keycloak's OpenID Connect certs endpoint that allows me to validate a JWT token. The API to fetch the keys seems to work:
GET http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/certs
{
"keys": [
{
"kid": "MfFp7IWWRkFW3Yvhb1eVrtyQQNYqk6BG-6HZFpl_JxI",
"kty": "RSA",
"alg": "RS256",
"use": "sig",
"n": "qDWXUhNtfuHNh0lm3o-oTnP5S8ENpzsyi-dGrjSeewxV6GNiKTW5INJ4hDQ7ZWkUFfJJhfhQWJofqgN9rUBQgbRxXuUvEkrzXQiT9AT_8r-2XLMwRV3eV_t-WRIJhVWsm9CHS2gzbqbNP8HFoB_ZaEt2FYegQSoAFC1EXMioarQbFs7wFNEs1sn1di2xAjoy0rFrqf_UcYFNPlUhu7FiyhRrnoctAuQepV3B9_YQpFVoiUqa_p5THcDMaUIFXZmGXNftf1zlepbscaeoCqtiWTZLQHNuYKG4haFuJE4t19YhAZkPiqnatOUJv5ummc6i6CD69Mm9xAzYyMQUEvJuFw",
"e": "AQAB"
}
]
}
but where is the key and how do I decode it?
$.keys[0].n does not look like base64 and I cannot figure out what it is.
...if someone can tell me how to get the public key from that payload, that would be great!
| Looking at https://github.com/keycloak/keycloak/blob/master/core/src/main/java/org/keycloak/jose/jwk/JWKParser.java, it seems the returned key is a JWK that encodes the RSA public key as two base64url values:
n - the modulus
e - the exponent
Look at the mentioned Java class to build a public key in Java, or use https://github.com/tracker1/node-rsa-pem-from-mod-exp to get a PEM public key in JavaScript.
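To make that concrete, here is a minimal Java sketch that builds an RSA public key directly from the n and e values of the JWK, using only standard JDK APIs:
import java.math.BigInteger;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.RSAPublicKeySpec;
import java.util.Base64;

public class JwkToKey {
    public static PublicKey fromJwk(String n, String e) throws Exception {
        // n and e are base64url-encoded, unsigned big-endian integers
        BigInteger modulus = new BigInteger(1, Base64.getUrlDecoder().decode(n));
        BigInteger exponent = new BigInteger(1, Base64.getUrlDecoder().decode(e));
        return KeyFactory.getInstance("RSA")
                .generatePublic(new RSAPublicKeySpec(modulus, exponent));
    }
}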
| Keycloak | 39,890,232 | 15 |
So I want to have single sign-on across all our products using an auth server, but not only for employees - can Keycloak be used for that, like Auth0?
| There are also some advantages to Keycloak:
Keycloak is also available with support if you buy JBoss EAP (see http://www.keycloak.org/support.html). This might be cheaper than the enterprise version of Auth0. If you want to use custom DB, you need enterprise version of Auth0 anyway.
Keycloak has features which are not available in Auth0:
Fine-grained permissions and role-based access control (RBAC) and attribute-based access control (ABAC), configurable via the web admin console or custom code, or you can write your own Java and JavaScript policies. This can also be implemented in Auth0 via user rules (custom JavaScript) or the Authorization plugin (no code, fewer possibilities). In Keycloak you can do more without code (there are more types of security policies available out of the box, e.g. based on role, groups, current time, or the origin of the request) and there is good support for custom-developed access control modules. Here some more detailed research would be interesting to compare them.
Keycloak also offers a policy enforcer component - which you can connect to from your backend and verify whether the access token is sufficient to access a given resource. It works best with Java Web servers, or you can just deploy an extra Java Server with Keycloak adapter which will work as a gatekeeper and decide which request go through and which are blocked. All this happens based on the rules which you can configure via Keycloak web interface. I am not sure such policy enforcer is included in Auth0. On top of that, Keycloak can tell your client application which permissions you need when you want to access a given resource so you do not need to code this in your client. The workflow can be:
Client application wants to access resource R.
Client application asks Keycloak policy enforcer which permission it needs to access resource R.
Keycloak policy enforcer tells the client application which permission P it needs.
The client application requests an access token with permission P from Keycloak.
The client makes a request to the resource server with the access token containing permission P attached.
Policy enforcer which guards the resource server can ask Keycloak whether permission P is enough to access resource R.
When Keycloak approves, the resource can be accessed.
Thus, more can be centralized and configured in Keycloak. With this workflow, your client and resource server can outsource more security logic and code to Keycloak. In Auth0 you probably need to implement steps 2,3,6 on your own.
| Keycloak | 46,453,490 | 15 |
I am trying to use clustered Keycloak docker images behind an A10 load balancer, and I am trying to access all requests via HTTPS from the client application. My issue is that the same setup works when we try to access Keycloak over HTTP, but at the same time it does not work when we try to access it over HTTPS. Can anyone help me solve this issue? Please let me know whether the issue is at the Keycloak level or the A10 load balancer level.
| I know this is an older question, but I couldn't find a satisfying answer anywhere and I wanted to share my solution. This eventually worked for me in an AWS Environment with an Application Load Balancer:
Run the keycloak docker container with the environment variable PROXY_ADDRESS_FORWARDING=true
As seen in the keycloak docker documentation:
When running Keycloak behind a proxy, you will need to enable proxy address forwarding.
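For illustration, with the jboss/keycloak image this is just an extra -e flag when starting the container (the port mapping and image tag below are assumptions, adjust them to your setup):
docker run -e PROXY_ADDRESS_FORWARDING=true -p 8080:8080 jboss/keycloak
Also make sure the A10 load balancer forwards the X-Forwarded-For and X-Forwarded-Proto headers, since those are what Keycloak uses to reconstruct the original (HTTPS) request.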
| Keycloak | 47,068,266 | 15 |
I want to use keycloak to authenticate my users in our Superset environment.
Superset is using flask-openid, as implemented in flask-security:
http://flask-appbuilder.readthedocs.io/en/latest/_modules/flask_appbuilder/security/manager.html
https://pythonhosted.org/Flask-OpenID/
To enable a different user authentication than the regular one (database), you need to override the AUTH_TYPE parameter in your superset_config.py file. You will also need to provide a reference to your openid-connect realm and enable user registration. As I understand, it should look something like this:
from flask_appbuilder.security.manager import AUTH_OID
AUTH_TYPE = AUTH_OID
OPENID_PROVIDERS = [
{ 'name':'keycloak', 'url':'http://localhost:8080/auth/realms/superset' }
]
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
With this configuration, the login page changes to a prompt where the user can select the desired OpenID provider (in our case keycloak). We also have two buttons, one to sign in (for existing users) and one to register as a new user.
I would expect that either of these buttons would take me to my keycloak login page. However, this does not happen. Instead, I am redirected right back to the
login page.
In the case where I press the registration button, I get a message that says 'Not possible to register you at the moment, try again later'. When I press the sign in button, no message is displayed. The Superset logs show the request that loads the login page, but no requests to keycloak. I have tried the same using the Google OpenID provider, which works just fine.
Since I am seeing no requests to keycloak, this makes me think that I am either missing a configuration setting somewhere, or that I am using the wrong settings. Could you please help me figure out which settings I should be using?
| Update 03-02-2020
@s.j.meyer has written an updated guide which works with Superset 0.28.1 and up. I haven't tried it myself, but thanks @nawazxy for confirming this solution works.
I managed to solve my own question. The main problem was caused by a wrong assumption I made regarding the flask-openid plugin that superset is using. This plugin actually supports OpenID 2.x, but not OpenID-Connect (which is the version implemented by Keycloak).
As a workaround, I decided to switch to the flask-oidc plugin. Switching to a new authentication provider actually requires some digging work. To integrate the plugin, I had to follow these steps:
Configure flask-oidc for keycloak
Unfortunately, flask-oidc does not support the configuration format generated by Keycloak. Instead, your configuration should look something like this:
{
"web": {
"realm_public_key": "<YOUR_REALM_PUBLIC_KEY>",
"issuer": "http://<YOUR_DOMAIN>/auth/realms/<YOUR_REALM_ID>",
"auth_uri": "http://<YOUR_DOMAIN>/auth/realms/<YOUR_REALM_ID>/protocol/openid-connect/auth",
"client_id": "<YOUR_CLIENT_ID>",
"client_secret": "<YOUR_SECRET_KEY>",
"redirect_urls": [
"http://<YOUR_DOMAIN>/*"
],
"userinfo_uri": "http://<YOUR_DOMAIN>/auth/realms/<YOUR_REALM_ID>/protocol/openid-connect/userinfo",
"token_uri": "http://<YOUR_DOMAIN>/auth/realms/<YOUR_REALM_ID>/protocol/openid-connect/token",
"token_introspection_uri": "http://<YOUR_DOMAIN>/auth/realms/<YOUR_REALM_ID>/protocol/openid-connect/token/introspect"
}
}
Flask-oidc expects the configuration to be in a file. I have stored mine in client_secret.json. You can configure the path to the configuration file in your superset_config.py.
Extend the Security Manager
Firstly, you will want to make sure that Flask stops using flask-openid and starts using flask-oidc instead. To do so, you will need to create your own security manager that configures flask-oidc as its authentication provider. I have implemented my security manager like this:
from flask_appbuilder.security.manager import AUTH_OID
from flask_appbuilder.security.sqla.manager import SecurityManager
from flask_oidc import OpenIDConnect
class OIDCSecurityManager(SecurityManager):
def __init__(self,appbuilder):
super(OIDCSecurityManager, self).__init__(appbuilder)
if self.auth_type == AUTH_OID:
self.oid = OpenIDConnect(self.appbuilder.get_app)
self.authoidview = AuthOIDCView
To enable OpenID in Superset, you would previously have had to set the authentication type to AUTH_OID. My security manager still executes all the behaviour of the super class, but overrides the oid attribute with the OpenIDConnect object. Further, it replaces the default OpenID authentication view with a custom one. I have implemented mine like this:
from flask import redirect, request
from flask_appbuilder import expose
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote  # on Python 2 this was: from urllib import quote
class AuthOIDCView(AuthOIDView):
@expose('/login/', methods=['GET', 'POST'])
def login(self, flag=True):
sm = self.appbuilder.sm
oidc = sm.oid
@self.appbuilder.sm.oid.require_login
def handle_login():
user = sm.auth_user_oid(oidc.user_getfield('email'))
if user is None:
info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email'])
user = sm.add_user(info.get('preferred_username'), info.get('given_name'), info.get('family_name'), info.get('email'), sm.find_role('Gamma'))
login_user(user, remember=False)
return redirect(self.appbuilder.get_url_for_index)
return handle_login()
@expose('/logout/', methods=['GET', 'POST'])
def logout(self):
oidc = self.appbuilder.sm.oid
oidc.logout()
super(AuthOIDCView, self).logout()
redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
return redirect(oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))
My view overrides the behaviours at the /login and /logout endpoints. On login, the handle_login method is run. It requires the user to be authenticated by the OIDC provider. In our case, this means the user will first be redirected to Keycloak to log in.
On authentication, the user is redirected back to Superset. Next, we look up whether we recognize the user. If not, we create the user based on their OIDC user info. Finally, we log the user into Superset and redirect them to the landing page.
On logout, we will need to invalidate these cookies:
The superset session
The OIDC token
The cookies set by Keycloak
By default, Superset will only take care of the first. The extended logout method takes care of all three points.
Configure Superset
Finally, we need to add some parameters to our superset_config.py. This is how I've configured mine:
'''
AUTHENTICATION
'''
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = 'client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = False
OIDC_REQUIRE_VERIFIED_EMAIL = False
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
| Keycloak | 47,678,321 | 15 |
I have a Spring Boot project with keycloak integrated. Now I want to disable keycloak for testing purposes.
I tried adding keycloak.enabled=false to application.properties as mentioned in the Keycloak documentation, but it didn't work.
So how do I disable it?
| Update 2022
Please follow this excellent guide on Baeldung.
For anyone who might have the same trouble, here is what I did.
I didn't disable Keycloak but I made a separate a Keycloak config file for testing purposes.
Here is my config file
@Profile("test")
@Configuration
@EnableWebSecurity
public class SecurityTestConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().antMatchers("/**").permitAll();
http.headers().frameOptions().disable();
http.csrf().disable();
}
@Override
public void configure(WebSecurity web) throws Exception {
web.ignoring().antMatchers("/**");
}
@Bean
@Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public AccessToken accessToken() {
AccessToken accessToken = new AccessToken();
accessToken.setSubject("abc");
accessToken.setName("Tester");
return accessToken;
}
}
Please note it is important to use this only in a test environment and therefore I have annotated the config as @Profile("test"). I have also added an AccessToken bean since some of the auditing features in my application depend on it.
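To have Spring pick this configuration up in your tests, activate the profile on the test class (a minimal sketch; the test class name is illustrative):
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

@SpringBootTest
@ActiveProfiles("test") // selects SecurityTestConfig instead of the real Keycloak security config
public class MyApplicationIT {
    // your integration tests go here
}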
| Keycloak | 47,861,513 | 15 |
I'm using keycloak 3.4 and spring boot to develop a web app.
I'm using the Active Directory as User Federation to retrieve all users information.
But to use those information inside my web app I think I have to save them inside the "local-webapp" database.
So after the users are logged, how can I save them inside my database?
I'm thinking about a scenario like: "I have an object A which refers to the user B, so I have to put a relation between them. So I add a foreign key."
In that case I need to have the user on my DB. no?
EDIT
To avoid to get save all users on my DB I'm trying to use the Administrator API, so I added the following code inside a controller.
I also created another client called Test to get all users, in this way I can use client-id and client-secret. Or is there a way to use the JWT to use the admin API?
The client:
Keycloak keycloak2 = KeycloakBuilder.builder()
.serverUrl("http://localhost:8080/auth/admin/realms/MYREALM/users")
.realm("MYREALMM")
.username("u.user")
.password("password")
.clientId("Test")
.clientSecret("cade3034-6ee1-4b18-8627-2df9a315cf3d")
.resteasyClient(new ResteasyClientBuilder().connectionPoolSize(20).build())
.build();
RealmRepresentation realm2 = keycloak2.realm("MYREALMM").toRepresentation();
the error is:
2018-02-05 12:33:06.638 ERROR 16975 --- [nio-8080-exec-7] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.Error: Unresolved compilation problem:
The method realm(String) is undefined for the type AccessTokenResponse
] with root cause
java.lang.Error: Unresolved compilation problem:
The method realm(String) is undefined for the type AccessTokenResponse
Where am I going wrong?
EDIT 2
I also tried this:
@Autowired
private HttpServletRequest request;
public ResponseEntity listUsers() {
KeycloakAuthenticationToken token = (KeycloakAuthenticationToken) request.getUserPrincipal();
KeycloakPrincipal principal=(KeycloakPrincipal)token.getPrincipal();
KeycloakSecurityContext session = principal.getKeycloakSecurityContext();
Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl("http://localhost:8080/auth")
.realm("MYREALMM")
.authorization(session.getToken().getAuthorization().toString())
.resteasyClient(new ResteasyClientBuilder().connectionPoolSize(20).build())
.build();
RealmResource r = keycloak.realm("MYREALMM");
List<org.keycloak.representations.idm.UserRepresentation> list = keycloak.realm("MYREALMM").users().list();
return ResponseEntity.ok(list);
but the authorization is always null.
Why?
EDIT 3
Following you can find my spring security config:
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled=true)
@ComponentScan(basePackageClasses = KeycloakSecurityComponents.class)
@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
super.configure(http);
http.httpBasic().disable();
http
.csrf().disable()
.authorizeRequests()
.antMatchers("/webjars/**").permitAll()
.antMatchers("/resources/**").permitAll()
.anyRequest().authenticated()
.and()
.logout()
.logoutUrl("/logout")
.logoutRequestMatcher(new AntPathRequestMatcher("/logout", "GET"))
.permitAll()
.logoutSuccessUrl("/")
.invalidateHttpSession(true);
}
@Autowired
public KeycloakClientRequestFactory keycloakClientRequestFactory;
@Bean
public KeycloakRestTemplate keycloakRestTemplate() {
return new KeycloakRestTemplate(keycloakClientRequestFactory);
}
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) {
KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
SimpleAuthorityMapper simpleAuthorityMapper = new SimpleAuthorityMapper();
simpleAuthorityMapper.setPrefix("ROLE_");
simpleAuthorityMapper.setConvertToUpperCase(true);
keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(simpleAuthorityMapper);
auth.authenticationProvider(keycloakAuthenticationProvider);
}
@Bean
public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
return new KeycloakSpringBootConfigResolver();
}
@Bean
@Override
protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
}
@Override
public void configure(WebSecurity web) throws Exception {
web
.ignoring()
.antMatchers("/resources/**", "/static/**", "/css/**", "/js/**", "/images/**", "/webjars/**");
}
@Bean
@Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public AccessToken accessToken() {
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest();
return ((KeycloakSecurityContext) ((KeycloakAuthenticationToken) request.getUserPrincipal()).getCredentials()).getToken();
}
}
EDIT 4
These are the properties inside the applicatoin.properties
#######################################
# KEYCLOAK #
#######################################
keycloak.auth-server-url=http://localhost:8181/auth
keycloak.realm=My Realm
keycloak.ssl-required=external
keycloak.resource=AuthServer
keycloak.credentials.jwt.client-key-password=keystorePwd
keycloak.credentials.jwt.client-keystore-file=keystore.jks
keycloak.credentials.jwt.client-keystore-password=keystorePwd
keycloak.credentials.jwt.alias=AuthServer
keycloak.credentials.jwt.token-expiration=10
keycloak.credentials.jwt.client-keystore-type=JKS
keycloak.use-resource-role-mappings=true
keycloak.confidential-port=0
keycloak.principal-attribute=preferred_username
EDIT 5.
This is my keycloak config:
the user that I'm using to login with view user permission:
EDIT 6
This the log form keycloak after enabling logging:
2018-02-12 08:31:00.274 DEBUG 5802 --- [nio-8080-exec-1] o.k.adapters.PreAuthActionsHandler : adminRequest http://localhost:8080/utente/prova4
2018-02-12 08:31:00.274 DEBUG 5802 --- [nio-8080-exec-1] .k.a.t.AbstractAuthenticatedActionsValve : AuthenticatedActionsValve.invoke /utente/prova4
2018-02-12 08:31:00.274 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : AuthenticatedActionsValve.invoke http://localhost:8080/utente/prova4
2018-02-12 08:31:00.274 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : Policy enforcement is disabled.
2018-02-12 08:31:00.275 DEBUG 5802 --- [nio-8080-exec-1] o.k.adapters.PreAuthActionsHandler : adminRequest http://localhost:8080/utente/prova4
2018-02-12 08:31:00.275 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : AuthenticatedActionsValve.invoke http://localhost:8080/utente/prova4
2018-02-12 08:31:00.275 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : Policy enforcement is disabled.
2018-02-12 08:31:00.276 DEBUG 5802 --- [nio-8080-exec-1] o.k.adapters.PreAuthActionsHandler : adminRequest http://localhost:8080/utente/prova4
2018-02-12 08:31:00.276 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : AuthenticatedActionsValve.invoke http://localhost:8080/utente/prova4
2018-02-12 08:31:00.276 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.AuthenticatedActionsHandler : Policy enforcement is disabled.
2018-02-12 08:31:10.580 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.s.client.KeycloakRestTemplate : Created GET request for "http://localhost:8181/auth/admin/realms/My%20Realm%20name/users"
2018-02-12 08:31:10.580 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.s.client.KeycloakRestTemplate : Setting request Accept header to [application/json, application/*+json]
2018-02-12 08:31:10.592 DEBUG 5802 --- [nio-8080-exec-1] o.k.a.s.client.KeycloakRestTemplate : GET request for "http://localhost:8181/auth/admin/realms/My%20Realm%20name/users" resulted in 401 (Unauthorized); invoking error handler
2018-02-12 08:31:10.595 ERROR 5802 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.web.client.HttpClientErrorException: 401 Unauthorized] with root cause
org.springframework.web.client.HttpClientErrorException: 401 Unauthorized
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:85) ~[spring-web-4.3.13.RELEASE.jar:4.3.13.RELEASE]
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:707) ~[spring-web-4.3.13.RELEASE.jar:4.3.13.RELEASE]
| In order to access the whole list of users, you must verify that the logged-in user has at least the view-users role from the realm-management client; see this answer I wrote some time ago. Once the user has this role, the JWT she retrieves will contain it.
As I can infer from your comments, you seem to be missing some basics about the Authorization header. Once the user gets logged in, she gets the signed JWT from keycloak, so that every client in the realm can trust it without the need to ask Keycloak. This JWT contains the access token, which is later on required in the Authorization header for each of the user's requests, prefixed by the Bearer keyword (see Token-Based Authentication in https://auth0.com/blog/cookies-vs-tokens-definitive-guide/).
So when the user makes the request to your app in order to view the list of users, her access token containing the view-users role already goes into the request headers. Instead of having to parse it manually and create another request yourself to access the Keycloak user endpoint with the token attached (as you seem to be doing with KeycloakBuilder), note that the Keycloak Spring Security adapter already provides a KeycloakRestTemplate class, which is able to perform a request to another service on behalf of the current user:
SecurityConfig.java
@Configuration
@EnableWebSecurity
@ComponentScan(basePackageClasses = KeycloakSecurityComponents.class)
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {
...
@Autowired
public KeycloakClientRequestFactory keycloakClientRequestFactory;
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public KeycloakRestTemplate keycloakRestTemplate() {
return new KeycloakRestTemplate(keycloakClientRequestFactory);
}
...
}
Note the scope for the template is PROTOTYPE, so Spring will use a different instance for each of the requests being made.
Then, autowire this template and use it to make requests:
import java.util.Arrays;
import java.util.List;

import org.keycloak.adapters.springsecurity.client.KeycloakRestTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;

@Service
public class UserRetrievalService {

    @Autowired
    private KeycloakRestTemplate keycloakRestTemplate;

    public List<User> getUsers() {
        // keycloakUserListEndpoint is the admin REST endpoint,
        // e.g. http://<host>:<port>/auth/admin/realms/<realm>/users
        ResponseEntity<User[]> response = keycloakRestTemplate.getForEntity(keycloakUserListEndpoint, User[].class);
        return Arrays.asList(response.getBody());
    }
}
You will need to implement your own User class which matches the JSON response returned by the keycloak server.
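For reference, here is a minimal sketch of such a User class, mapping only a subset of the attributes Keycloak's user endpoint returns (the @JsonIgnoreProperties annotation keeps Jackson from failing on the attributes that are not mapped):
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public class User {
    private String id;
    private String username;
    private String firstName;
    private String lastName;
    private String email;

    // getters and setters are required for JSON (de)serialization
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}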
Note that, when the user is not allowed to access the list, a 403 response code is returned from the Keycloak server. You could even deny it beforehand yourself, using an annotation like: @PreAuthorize("hasRole('VIEW_USERS')").
Last but not least, I think @dchrzascik's answer is on point. To sum up, I would say there is actually another way to avoid either retrieving the whole user list from the keycloak server each time or having your users stored in your app database: you could cache them, so that you can update that cache whenever you do user management from your app.
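As a rough illustration of that caching idea using Spring's cache abstraction (the cache name and the eviction point are my assumptions, not part of the original answer; @EnableCaching must be present on a configuration class):
import java.util.List;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CachedUserService {

    private final UserRetrievalService delegate;

    public CachedUserService(UserRetrievalService delegate) {
        this.delegate = delegate;
    }

    @Cacheable("keycloakUsers") // first call hits Keycloak, later calls are served from the cache
    public List<User> getUsers() {
        return delegate.getUsers();
    }

    @CacheEvict(value = "keycloakUsers", allEntries = true) // invoke after managing users from your app
    public void evictUsers() {
    }
}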
EDIT
I've implemented a sample project to show how to obtain the whole list of users, uploaded to Github. It is configured for a confidential client (when using a public client, the secret should be deleted from the application.properties).
See also:
https://github.com/keycloak/keycloak-documentation/blob/master/securing_apps/topics/oidc/java/spring-security-adapter.adoc
| Keycloak | 48,583,361 | 15 |
I am trying to set up a Keycloak server for our organisation. I have a couple of questions.
How can we use our existing user database to authenticate users - User Federation. Keycloak only has LDAP/Kerberos options. Is there any custom plugin which can be used for MySQL user authentication, or can we use the existing connectors themselves (LDAP/Kerberos) via some adapter for the database?
Is it possible to have multiple identity providers within the Keycloak environment (have Keycloak as IDP for some services, while using the Google IDP via Keycloak for other services)?
I have followed the official documentation, but for some reason I am not able to view the content of the link. Any helpful links to a proper guide would be great.
| Check Keycloak Custom User Federation
It means you can use a different datasource (or process) for the Keycloak username / password login.
see =>
https://github.com/keycloak/keycloak/blob/main/docs/documentation/server_development/topics/user-storage/simple-example.adoc
https://tech.smartling.com/migrate-to-keycloak-with-zero-downtime-8dcab9e7cb2c github => (https://github.com/Smartling/keycloak-user-migration-provider)
First link => explains how to configure an external DB for Keycloak.
Second link (needs changes) => that example can be adapted like this:
you create a custom federation implementation,
it will call your service,
your service will query your DB,
your service will return the result.
The second example (my suggestion) will abstract your custom code (federation process, your service) from Keycloak. Keycloak only calls your service; everything else is your implementation.
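To make that concrete, here is a minimal sketch of the SPI entry point, following the simple-example guide linked above (class names are illustrative; MyUserStorageProvider is the hypothetical class where you would query your DB or call your service, and the factory must also be listed in META-INF/services/org.keycloak.storage.UserStorageProviderFactory):
import org.keycloak.component.ComponentModel;
import org.keycloak.models.KeycloakSession;
import org.keycloak.storage.UserStorageProviderFactory;

public class MyUserStorageProviderFactory
        implements UserStorageProviderFactory<MyUserStorageProvider> {

    @Override
    public MyUserStorageProvider create(KeycloakSession session, ComponentModel model) {
        // MyUserStorageProvider would implement UserStorageProvider,
        // UserLookupProvider and CredentialInputValidator to look users
        // up in your datasource and validate their passwords
        return new MyUserStorageProvider(session, model);
    }

    @Override
    public String getId() {
        // the provider name shown in the admin console under User Federation
        return "my-mysql-user-storage";
    }
}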
| Keycloak | 50,087,867 | 15 |
I am trying to deploy keycloak using docker image (https://hub.docker.com/r/jboss/keycloak/ version 4.5.0-Final) and facing an issue with setting up SSL.
According to the docs
Keycloak image allows you to specify both a
private key and a certificate for serving HTTPS. In that case you need
to provide two files:
tls.crt - a certificate tls.key - a private key Those files need to be
mounted in /etc/x509/https directory. The image will automatically
convert them into a Java keystore and reconfigure Wildfly to use it.
I followed the given steps and provided the volume mount setting with a folder containing the necessary files (tls.crt and tls.key), but I am facing issues with the SSL handshake, getting an
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
error that blocks keycloak from loading in the browser.
I have used letsencrypt to generate pem files and used openssl to create .crt and .key files.
I also tried using just openssl to create those files to narrow down the issue, and the behavior is the same. Some additional info, in case it matters:
By default, when I simply specify just the port binding -p 8443:8443 without specifying the cert volume mount /etc/x509/https, the keycloak server generates a self-signed certificate and I see no issue viewing the app in the browser.
I guess this might be more of a certificate creation issue than anything specific to keycloak, but I'm unsure how to get this working.
Any help is appreciated
I also faced the issue of getting an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error, using the jboss/keycloak Docker image and free certificates from letsencrypt, even after considering the advice from the other comments. Now, I have a working (and quite easy) setup, which might also help you.
1) Generate letsencrypt certificate
At first, I generated my letsencrypt certificate for domain sub.example.com using the certbot. You can find detailed instructions and alternative ways to gain a certificate at https://certbot.eff.org/ and the user guide at https://certbot.eff.org/docs/using.html.
$ sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): sub.example.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for sub.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/sub.example.com/privkey.pem
Your cert will expire on 2020-01-27. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
2) Prepare docker-compose environment
I use docker-compose to run keycloak via docker. The config and data files are stored in path /srv/docker/keycloak/.
Folder config contains the docker-compose.yml
Folder data/certs contains the certificates I generated via letsencrypt
Folder data/keycloack_db is mapped to the database container to make its data persistent.
Put the certificate files to the right path
When I first had issues using the original letscrypt certificates for keycloak, I tried the workaround of converting the certificates to another format, as mentioned in the comments of the former answers, which also failed. Eventually, I realized that my problem was caused by permissions set to the mapped certificate files.
So, what worked for me was simply to copy and rename the files provided by letsencrypt, and mount them into the container.
$ cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
$ cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
$ chmod 755 /srv/docker/keycloak/data/certs/
$ chmod 604 /srv/docker/keycloak/data/certs/*
docker-compose.yml
In my case, I needed to use the host network of my docker host. This is not best practice and should not be required for your case. Please find information about configuration parameters in the documentation at hub.docker.com/r/jboss/keycloak/.
version: '3.7'
networks:
default:
external:
name: host
services:
keycloak:
container_name: keycloak_app
image: jboss/keycloak
depends_on:
- mariadb
restart: always
ports:
- "8080:8080"
- "8443:8443"
volumes:
- "/srv/docker/keycloak/data/certs/:/etc/x509/https" # map certificates to container
environment:
KEYCLOAK_USER: <user>
KEYCLOAK_PASSWORD: <pw>
KEYCLOAK_HTTP_PORT: 8080
KEYCLOAK_HTTPS_PORT: 8443
KEYCLOAK_HOSTNAME: sub.example.ocm
DB_VENDOR: mariadb
DB_ADDR: localhost
DB_USER: keycloak
DB_PASSWORD: <pw>
network_mode: host
mariadb:
container_name: keycloak_db
image: mariadb
volumes:
- "/srv/docker/keycloak/data/keycloak_db:/var/lib/mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: <pw>
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: <pw>
network_mode: host
Final directory setup
This is how my final file and folder setup looks like.
$ cd /srv/docker/keycloak/
$ tree
.
├── config
│ └── docker-compose.yml
└── data
├── certs
│ ├── tls.crt
│ └── tls.key
└── keycloak_db
Start container
Finally, I was able to start my software using docker-compose.
$ cd /srv/docker/keycloak/config/
$ sudo docker-compose up -d
We can double-check the mounted certificates within the container.
## open internal shell of keycloak container
$ sudo docker exec -it keycloak_app /bin/bash
## open directory of certificates
$ cd /etc/x509/https/
$ ll
-rw----r-- 1 root root 3586 Oct 30 14:21 tls.crt
-rw----r-- 1 root root 1708 Oct 30 14:20 tls.key
Considering the setup from the docker-compose.yml, keycloak is now available at https://sub.example.com:8443
| Keycloak | 52,674,979 | 15 |
Recently, I hardened my Keycloak deployment to use a dedicated Infinispan cluster as a remote-store for an extra layer of persistence for Keycloak's various caches. The change itself went reasonably well, although after making this change, we started seeing a lot of login errors due to the expired_code error message:
WARN [org.keycloak.events] (default task-2007) type=LOGIN_ERROR, realmId=my-realm, clientId=null, userId=null, ipAddress=192.168.50.38, error=expired_code, restart_after_timeout=true
This error message is typically repeated dozens of times all within a short period of time and from the same IP address. The cause of this appears to be the end-user's browser infinitely redirecting on login until the browser itself stops the loop.
I have seen various GitHub issues (https://github.com/helm/charts/issues/8355) that also document this behavior, and the consensus seems to be that this is caused by the Keycloak cluster not being able to correctly discover its members via JGroups.
This explanation makes sense when you consider that some of the Keycloak caches are distributed across the Keycloak nodes in the default configuration within standalone-ha.xml. However, I have modified these caches to be local caches with a remote-store pointing to my new Infinispan cluster, and I believe I have made some incorrect assumptions about how this works, causing this error to start happening.
Here is how my Keycloak caches are configured:
<subsystem xmlns="urn:jboss:domain:infinispan:7.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="sessions">
<remote-store cache="sessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="authenticationSessions">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineSessions">
<remote-store cache="offlineSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="clientSessions">
<remote-store cache="clientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineClientSessions">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="loginFailures">
<remote-store cache="loginFailures" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="actionTokens">
<remote-store cache="actionTokens" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<replicated-cache name="work">
<remote-store cache="work" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</replicated-cache>
</cache-container>
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport lock-timeout="60000"/>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<invalidation-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</invalidation-cache>
<replicated-cache name="timestamps"/>
</cache-container>
</subsystem>
Note that most of this cache configuration is unchanged when compared to the default standalone-ha.xml configuration file. The changes I have made here are changing the following caches to be local and pointing them to my remote Infinispan cluster:
sessions
authenticationSessions
offlineSessions
clientSessions
offlineClientSessions
loginFailures
actionTokens
work
Here is the configuration for my remote-cache server:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<!-- Default socket bindings from standalone-ha.xml are not listed here for brevity -->
<outbound-socket-binding name="remote-cache">
<remote-destination host="${env.INFINISPAN_HOST}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
</socket-binding-group>
Here is how my caches are configured on the Infinispan side:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default">
<transport lock-timeout="60000"/>
<global-state/>
<replicated-cache-configuration name="replicated-keycloak" mode="SYNC">
<locking acquire-timeout="3000" />
</replicated-cache-configuration>
<replicated-cache name="work" configuration="replicated-keycloak"/>
<replicated-cache name="sessions" configuration="replicated-keycloak"/>
<replicated-cache name="authenticationSessions" configuration="replicated-keycloak"/>
<replicated-cache name="clientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineClientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="actionTokens" configuration="replicated-keycloak"/>
<replicated-cache name="loginFailures" configuration="replicated-keycloak"/>
</cache-container>
</subsystem>
I believe I have made some incorrect assumptions about how local caches with remote stores work, and I was hoping someone would be able to clear this up for me. My intention was to make the Infinispan cluster the source of truth for all of Keycloak's caches. By making every cache local, I assumed that data would be replicated to each Keycloak node through the Infinispan cluster, such that a write to the local authenticationSessions cache on keycloak-0 would be synchronously persisted to keycloak-1 through the Infinispan cluster.
What I believe is happening is that the write to a local cache on Keycloak is not synchronous with respect to persisting that value to the remote Infinispan cluster. In other words, when a write is performed to the authenticationSessions cache, it does not block while waiting for this value to be written to the Infinispan cluster, so an immediate read for this data on another Keycloak node results in a cache miss, locally and in the Infinispan cluster.
I'm looking for some help with identifying why my current configuration is causing this issue, and some clarification on the behavior of a remote-store - is there a way to get cache writes to a local cache backed by a remote-store to be synchronous? If not, is there a better way to do what I'm trying to accomplish here?
Some other potentially relevant details:
Both Keycloak and Infinispan are deployed to the same namespace in a Kubernetes cluster.
I am using KUBE_PING for JGroups discovery.
Using the Infinispan console, I am able to verify that all of the caches replicated to all of the Infinispan nodes have some amount of entries in them - they aren't completely unused.
If I add a new realm to one Keycloak node, it successfully shows up on other Keycloak nodes, which leads me to believe that the work cache is being propagated across all Keycloak nodes.
If I log in to one Keycloak node, my session remains on other Keycloak nodes, which leads me to believe that the session related caches are being propagated across all Keycloak nodes.
I'm using sticky sessions for Keycloak as a temporary fix for this, but I believe fixing these underlying cache issues is a more permanent solution.
Thanks in advance!
| I will try to clarify some points to keep in mind when you configure Keycloak in a cluster.
On the subject of "infinite redirects", I experienced a similar problem in development environments years ago. While the keycloak team has corrected several bugs related to infinite loops (e.g. KEYCLOAK-5856, KEYCLOAK-5022, KEYCLOAK-4717, KEYCLOAK-4552, KEYCLOAK-3878), sometimes it happens due to configuration issues.
One thing to check, if the site is served over HTTPS, is that the Keycloak server is also accessed over HTTPS.
I remember suffering a similar infinite-loop problem when Keycloak was placed behind an HTTPS reverse proxy and the needed headers were not propagated to Keycloak (the X-FORWARDED-* headers). It was solved by setting up the environment properly. A similar problem can happen when node discovery in the cluster (JGroups) does not work correctly.
About the error message "expired_code", I would verify that the clocks of each node are synchronized since it can lead to this kind of expired token / code error.
Now that I understand your configuration better, it does not seem inappropriate to use the "local-cache" mode with a remote-store pointing to the infinispan cluster.
Although the shared store (such as a remote-cache) is usually used with an invalidation-cache, where replicating the complete data across the cluster is avoided (see the comment that can be applied here https://developer.jboss.org/message/986847#986847), there may not be big differences with a distributed or invalidation cache.
I believe that a distributed-cache with a remote-store would apply better (or an invalidation-cache to avoid replicating heavy data to the owners) however I could not ensure how a "local-cache" works with a remote storage (shared) since I have never tried this kind of configuration.
I would first choose to test a distributed-cache or an invalidation-cache, given how each works with evicted / invalidated data. Normally, local caches do not synchronize with other remote nodes in the cluster. If this kind of implementation keeps a local map in memory, it is likely that even if the data in the remote storage is modified, these changes may not be reflected in some situations.
I can give you a Jmeter test file that you can use so that you can try to perform your own tests with both configurations.
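If you also want to verify programmatically that entries are actually landing in the remote Infinispan cluster (in addition to the console), a minimal Hot Rod client check could look like this (host, port and cache name are assumptions based on your configuration):
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class RemoteCacheCheck {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("infinispan-host").port(11222); // your INFINISPAN_HOST
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        RemoteCache<Object, Object> sessions = manager.getCache("sessions");
        // number of session entries currently held by the remote cluster
        System.out.println("sessions entries: " + sessions.size());
        manager.stop();
    }
}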
Returning to the topic of your configuration, you also have to take into account that replicated caches have certain limitations and are usually a little slower than distributed ones, which only replicate the data to the defined owners (replicated caches write to all the nodes). There is also a variant called scattered-cache that performs better but, for example, lacks Transaction support (you can see a comparative chart here: https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use).
Replication usually only performs well in small clusters (under 8 or 10 servers), due to the number of replication messages that need to be sent. Distributed caches allow Infinispan to scale linearly by defining a number of replicas per entry.
The main reason to make a configuration of the type you are trying, instead of one similar to that proposed by Keycloak (standalone-ha.xml), is when you have a requirement to scale the infinispan cluster independently of the application, or to use infinispan as a persistent store.
I will explain how Keycloak manages its cache and how it divides it into two or three groups, basically so you can better understand the configuration you need.
Usually, to configure Keycloak in a cluster, you simply bring up and configure Keycloak in HA mode just as you would do with a traditional instance of Wildfly. If one observes the differences between the standalone.xml and the standalone-ha.xml that come with the keycloak installation, one notices that basically support is added for "Jgroups" and "modcluster", and the caches (which were previously local) are distributed between the nodes in Wildfly / Keycloak (HA).
In detail:
jgroups subsystem is added, which will be responsible for connecting the cluster nodes and carrying out the messaging / communication in the cluster. JGroups provides network communication capabilities, reliable communications and other features like node discovery, point-to-point communications, multicast communication, failure detection, and data transfer between cluster nodes.
the EJB3 cache goes from a SIMPLE cache (in local memory, without transaction handling) to a DISTRIBUTED one. However, in my experience extending this project, Keycloak does not seem to require EJB3.
cache: "realms", "users", "authorization", and "keys" are kept local since they are only used to reduce the load on the database.
cache: "work" becomes REPLICATED since it is the one that Keycloak uses to notify to the cluster nodes that an entry of the cache must be evicted/invalidated since its status has been modified.
cache "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineSessions", "loginFailures", and "actionTokens" becomes DISTRIBUTED because they perform better than replicated-cache (see https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use) because you only have to replicate the data to the owners.
The other changes proposed by keycloak for its default HA configuration are to distributing"web" and "ejb" (and above) cache container, and to change "hibernate" cache to an "invalidation-cache" (like a local cache but with invalidation sync).
I think that your cache configuration should be defined as "distributed-cache" for caches like "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineClientSessions", "loginFailures" and "actionTokens" (instead of "local"). However, because you use a remote shared store, you should test it to see how it works as I said before.
Also, cache named "work" should be "replicated-cache" and the others ("keys", "authorization", "realms" and "users") should be defined as "local-cache".
In your infinispan cluster you can define it as "distributed-cache" (or "replicated-cache").
Remember that:
In a replicated cache all nodes in a cluster hold all keys i.e. if a
key exists on one node, it will also exist on all other nodes. In a
distributed cache, a number of copies are maintained to provide
redundancy and fault tolerance, however this is typically far fewer
than the number of nodes in the cluster. A distributed cache provides
a far greater degree of scalability than a replicated cache.
A distributed cache is also able to transparently locate keys across a
cluster, and provides an L1 cache for fast local read access of state
that is stored remotely. You can read more in the relevant User Guide
chapter.
Infinispan doc. ref: cache mode
As the Keycloak (6.0) documentation says:
Keycloak has two types of caches. One type of cache sits in front of
the database to decrease load on the DB and to decrease overall
response times by keeping data in memory. Realm, client, role, and
user metadata is kept in this type of cache. This cache is a local
cache. Local caches do not use replication even if you are in the
cluster with more Keycloak servers. Instead, they only keep copies
locally and if the entry is updated an invalidation message is sent to
the rest of the cluster and the entry is evicted. There is separate
replicated cache work, which task is to send the invalidation messages
to the whole cluster about what entries should be evicted from local
caches. This greatly reduces network traffic, makes things efficient,
and avoids transmitting sensitive metadata over the wire.
The second type of cache handles managing user sessions, offline
tokens, and keeping track of login failures so that the server can
detect password phishing and other attacks. The data held in these
caches is temporary, in memory only, but is possibly replicated across
the cluster.
Doc. Reference: cache configuration
If you want to read another good document, you can take a look at the "cross-dc" section (cross-dc mode), especially section "3.4.6 Infinispan cache" (infinispan cache).
I tried with Keycloak 6.0.1 and Infinispan 9.4.11.Final, here is my test configuration (based on standalone-ha.xml file).
Keycloak infinispan subsystem:
<subsystem xmlns="urn:jboss:domain:infinispan:8.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<distributed-cache name="sessions" owners="1" remote-timeout="30000">
<remote-store cache="sessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="1" remote-timeout="30000">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="clientSessions" owners="1" remote-timeout="30000">
<remote-store cache="clientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineClientSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="loginFailures" owners="1" remote-timeout="30000">
<remote-store cache="loginFailures" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<replicated-cache name="work"/>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<distributed-cache name="actionTokens" owners="1" remote-timeout="30000">
<remote-store cache="actionTokens" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
</distributed-cache>
</cache-container>
Keycloak socket bindings:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="jgroups-mping" interface="private" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
<socket-binding name="jgroups-tcp" interface="private" port="7600"/>
<socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
<socket-binding name="modcluster" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<outbound-socket-binding name="remote-cache">
<remote-destination host="my-server-domain.com" port="11222"/>
</outbound-socket-binding>
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
</socket-binding-group>
Infinispan cluster configuration:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional">
<transaction mode="NON_XA" locking="PESSIMISTIC"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="async" mode="ASYNC"/>
<replicated-cache-configuration name="replicated"/>
<distributed-cache-configuration name="persistent-file-store">
<persistence>
<file-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="indexed">
<indexing index="LOCAL" auto-config="true"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="memory-bounded">
<memory>
<binary size="10000000" eviction="MEMORY"/>
</memory>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-passivation">
<memory>
<object size="10000"/>
</memory>
<persistence passivation="true">
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-write-behind">
<persistence>
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-rocksdb-store">
<persistence>
<rocksdb-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-jdbc-string-keyed">
<persistence>
<string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleDS" fetch-state="true" preload="false" purge="false" shared="false">
<string-keyed-table prefix="ISPN">
<id-column name="id" type="VARCHAR"/>
<data-column name="datum" type="BINARY"/>
<timestamp-column name="version" type="BIGINT"/>
</string-keyed-table>
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</string-keyed-jdbc-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache name="default"/>
<replicated-cache name="repl" configuration="replicated"/>
<replicated-cache name="work" configuration="replicated"/>
<replicated-cache name="sessions" configuration="replicated"/>
<replicated-cache name="authenticationSessions" configuration="replicated"/>
<replicated-cache name="clientSessions" configuration="replicated"/>
<replicated-cache name="offlineSessions" configuration="replicated"/>
<replicated-cache name="offlineClientSessions" configuration="replicated"/>
<replicated-cache name="actionTokens" configuration="replicated"/>
<replicated-cache name="loginFailures" configuration="replicated"/>
</cache-container>
</subsystem>
P.S. Change attribute "owners" from 1 to your favorite value.
I hope this is helpful.
| Keycloak | 57,467,224 | 15 |
In Keycloak, by default, users are able to change their first and last name in the account manager page. However, is it possible to disable this behavior?
Removing both fields in the theme results in those values not being sent and the form failing, and a hand-crafted POST request would defeat this method anyway.
| I came across a similar problem and, after reading this SO post, learned that although you can disable/hide fields in the .ftl templates, you cannot disable the form validation.
For example, I hid the firstname field but still could not submit the form; disabling the field gave the same result.
I am not aware of any other way to disable a particular field. However, there is a workaround: you can disable the entire account modification flow (passwords can still be changed via the Forgot Password option).
By default, account modification is enabled, but you can disable it for a particular realm under Realms -> Clients -> Account.
The result is that the account page becomes inaccessible.
| Keycloak | 57,528,936 | 15 |
I have a Keycloak server running in an EKS cluster that I'm trying to configure for production instead of dev mode.
I've managed to get SSL working with a reverse proxy, but when I go to the login page for the admin console it just loads indefinitely.
Here's my configuration:
Dockerfile
FROM --platform=linux/arm64 quay.io/keycloak/keycloak:19.0.1 as builder
ENV KC_DB=postgres
ENV KC_PROXY=edge
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_HTTP_RELATIVE_PATH=/auth
RUN /opt/keycloak/bin/kc.sh build
FROM --platform=linux/arm64 quay.io/keycloak/keycloak:19.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
## Install custom providers
COPY auth-identione-extension/target/auth-identione-extension-1.0.0-SNAPSHOT.jar /opt/keycloak/providers
ENV KC_HOSTNAME_STRICT=false
ENV KC_KEYCLOAK_USER={user}
ENV KC_KEYCLOAK_PASSWORD={password}
ENV KC_DB_URL={dburl}
ENV KC_DB_USERNAME={dbusername}
ENV KC_DB_PASSWORD={dbpassword}
ENV KC_HTTP_ENABLED=true
ENV KC_HOSTNAME=auth.identione.com
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: default
name: keycloak-deployment
spec:
selector:
matchLabels:
app.kubernetes.io/name: keycloak-app
replicas: 1
template:
metadata:
labels:
app.kubernetes.io/name: keycloak-app
spec:
containers:
- image: {keycloak-img-url}
name: keycloak-app
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1024Mi"
cpu: "1000m"
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: default
name: keycloak-service
spec:
ports:
- port: 8180
targetPort: 8080
protocol: TCP
type: NodePort
selector:
app.kubernetes.io/name: keycloak-app
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: default
name: keycloak-service-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/certificate-arn: {certificate-arn}
alb.ingress.kubernetes.io/ssl-redirect: 'https'
spec:
rules:
- host: auth.identione.com
http:
paths:
- path: /*
backend:
serviceName: keycloak-service
servicePort: 8180
| Found the issue.
I had to move the ENV KC_PROXY=edge line in the Dockerfile to after the build step.
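For anyone hitting the same problem, a minimal sketch of the corrected ordering, trimmed to the relevant lines (image tag and DB vendor taken from the question):
# Builder stage: KC_PROXY no longer set here
FROM quay.io/keycloak/keycloak:19.0.1 as builder
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build

# Runtime stage: proxy mode set after the build step
FROM quay.io/keycloak/keycloak:19.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENV KC_PROXY=edge
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]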
| Keycloak | 73,883,728 | 15 |
We're currently evaluating Keycloak as our SSO solution and while it works for our servlet-based applications there's a question regarding our (React-based) SPAs.
What our designers want: as an example let's say we have an email client spa. The user is in the process of writing an email but then gets distracted. When he returns the SSO session has already timed out and a re-login is required. The user should now be presented with a login form and after login it should be possible to send the email that's still in the SPA's local storage (i.e. re-login without restarting the SPA or losing data).
AFAIK Keycloak doesn't provide an authentication API (for good reasons) and uses a redirect to the login page and back to the application (as I understand it, for mobile apps the system browser would be used). If I'm not mistaken, that redirect would mean the SPA is reinitialized and thus the data would be lost.
So here's the question: is what our designers want possible to do with Keycloak?
If yes, how would it be done? Directly posting to the login-url that Keycloak is using seems like a bad idea since the tokens would probably not be stored correctly and there might be same-origin policy problems. Would doing it inside an iframe or popup-window work?
| For someone who comes back to this question,
I think it's better to stick to the best practice for OAuth2/OpenID Connect for SPAs, which is currently the "Authorization Code Flow" with PKCE.
https://oauth.net/2/pkce/
https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics-13
A normal flow here needs a complete redirect to the auth server and back, so your app will completely re-initialize. Alternatively, you can use check-sso in silent mode, as Sébastien already mentioned.
https://github.com/keycloak/keycloak-documentation/blob/master/securing_apps/topics/oidc/javascript-adapter.adoc
You can configure a silent check-sso option. With this feature enabled, your browser won’t do a full redirect to the {project_name} server and back to your application, but this action will be performed in a hidden iframe, so your application resources only need to be loaded and parsed once by the browser when the app is initialized and not again after the redirect back from {project_name} to your app. This is particularly useful in case of SPAs (Single Page Applications).
This way the login will happen in an iframe and the app initializes only once and should preserve state.
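A minimal sketch of that setup with the keycloak-js adapter (URL, realm, and client ID are placeholders; silent-check-sso.html is a static page you host yourself, as described in the linked docs):
// Placeholder configuration values
const keycloak = new Keycloak({
  url: 'https://auth.example.com/auth',
  realm: 'myrealm',
  clientId: 'my-spa'
});

keycloak.init({
  onLoad: 'check-sso',
  silentCheckSsoRedirectUri: window.location.origin + '/silent-check-sso.html'
}).then(authenticated => {
  console.log(authenticated ? 'session restored' : 'not logged in');
});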
| Keycloak | 44,028,912 | 14 |
I'm working on a multi-tenant project where usernames are actually email addresses and the domain of the email serves as a tenant identifier.
Now in keycloak I'll have different realms per tenant, but I want to have a single login page for all tenants and the actual realm that will do the authentication to be somehow resolved by the username (email address).
How do I go about doing that?
I found a thread on the mailing list (that I can't find now...) that discussed the same problem. It was something along the lines of: create a main realm that will "proxy" to the others, but I'm not quite sure how to do that.
| I think Michał Łazowik's answer is on the right track, but for Single-Sign-On to work, it needs to be extended a little.
Keep in mind that, because of KEYCLOAK-4593, with more than 100 realms we may also need multiple Keycloak servers.
We'll need:
A separate HTTP server specifically for this purpose, auth-redirector.example.com.
An algorithm to determine the Keycloak server and realm from a username (email address).
Here would be the entire OAuth2 Authorization Code Flow:
An application discovers the user wants to log in. Before multiple realms, the realm's name would be a constant, so the application would redirect to:
https://keycloak.example.com/auth/realms/realname/protocol/openid-connect/auth?$get_params
Instead, it redirects to
https://auth-redirector.example.com/?$get_params
auth-redirector determines if it itself has a valid access token for this session, perhaps having to refresh the access token first from the Keycloak server that issued it (the user could have logged out and is trying to log in as a different user that is served by a different realm).
If it has a valid access token, we can determine the Keycloak server and realm from the username or email address in the access token and redirect to:
https://$keycloak_server/auth/realms/$realm/protocol/openid-connect/auth?$get_params
from here, the OAuth2 Authorization Code Flow proceeds as usual.
Else, if it doesn't have a valid access token, the auth-redirector stores the original app's $get_params as session data. It presents a form to the user asking for a username. When the user submits that, we can determine the Keycloak server and realm to use, and then auth-redirector itself logs in to the Keycloak server using its own $get_params. Once the auth-redirector gets a call-back, it retrieves the access+refresh token from the Keycloak server and stores them in session data. It then, finally, redirects back to that same Keycloak server and realm with the caller's original $get_params (from session data). And the OAuth2 Authorization Code Flow proceeds as usual.
This is definitely a hack! But I think it could work. I'd love to try it out some day, time permitting.
Other hacks/solutions are needed for other OAuth2 flows...
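The "determine the Keycloak server and realm from a username" step could be as simple as a lookup table keyed by email domain. A minimal Java sketch (the domains and URLs are hypothetical; in practice the mapping might live in a database shared by all auth-redirector instances):
import java.util.Map;

public class RealmResolver {
    // Hypothetical domain-to-realm mapping (requires Java 9+ for Map.of)
    private static final Map<String, String> DOMAIN_TO_REALM_URL = Map.of(
            "tenant-a.com", "https://kc1.example.com/auth/realms/tenant-a",
            "tenant-b.com", "https://kc2.example.com/auth/realms/tenant-b");

    public static String resolve(String email) {
        String domain = email.substring(email.indexOf('@') + 1);
        return DOMAIN_TO_REALM_URL.get(domain);
    }
}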
| Keycloak | 44,583,833 | 14 |
I'm trying to export a realm from a Keycloak docker container, but I'm not able to because the server is running when I execute this command:
bin/standalone.sh -Dkeycloak.migration.action=export
-Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=<DIR TO EXPORT TO>
I tried modifying docker-entrypoint.sh and deleted the command which launches the server:
#!/bin/bash
if [ $KEYCLOAK_USER ] && [ $KEYCLOAK_PASSWORD ]; then
keycloak/bin/add-user-keycloak.sh --user $KEYCLOAK_USER --password $KEYCLOAK_PASSWORD
fi
if [ "$DB_VENDOR" == "POSTGRES" ]; then
databaseToInstall="postgres"
elif [ "$DB_VENDOR" == "MYSQL" ]; then
databaseToInstall="mysql"
elif [ "$DB_VENDOR" == "H2" ]; then
databaseToInstall=""
else
if (printenv | grep '^POSTGRES_' &>/dev/null); then
databaseToInstall="postgres"
elif (printenv | grep '^MYSQL_' &>/dev/null); then
databaseToInstall="mysql"
fi
fi
if [ "$databaseToInstall" != "" ]; then
echo "[KEYCLOAK DOCKER IMAGE] Using the external $databaseToInstall database"
/bin/sh /opt/jboss/keycloak/bin/change-database.sh $databaseToInstall
else
echo "[KEYCLOAK DOCKER IMAGE] Using the embedded H2 database"
fi
exit $?
However I got a CrashLoopBackOff when I ran the Keycloak pod. Is there any way to perform the export inside the docker container without the server running?
| You can start a temporary container. I'm using swarm and an attachable network, but replacing the --network flag with a --link to the DB container should do it for a vanilla docker container:
docker run --rm --network=naq\
--name keycloak_exporter\
-v /tmp:/tmp/keycloak-export\
-e POSTGRES_DATABASE=keycloak\
-e POSTGRES_PASSWORD=password\
-e POSTGRES_USER=keycloak\
-e DB_VENDOR=POSTGRES\
-e POSTGRES_PORT_5432_TCP_ADDR=keycloakdb\
jboss/keycloak:3.4.3.Final\
-Dkeycloak.migration.action=export\
-Dkeycloak.migration.provider=dir\
-Dkeycloak.migration.dir=/tmp/keycloak-export\
-Dkeycloak.migration.usersExportStrategy=SAME_FILE\
-Dkeycloak.migration.realmName=Naq
You'll then find export files in the /tmp dir on your host.
| Keycloak | 49,075,205 | 14 |
I'm using Spring Boot (v1.5.10.RELEASE) to create a backend for an application written in Angular. The back is secured using spring security + keycloak. Now I'm adding a websocket, using STOMP over SockJS, and wanted to secure it. I'm trying to follow the docs at Websocket Token Authentication, and it shows the following piece of code:
if (StompCommand.CONNECT.equals(accessor.getCommand())) {
Authentication user = ... ; // access authentication header(s)
accessor.setUser(user);
}
I'm able to retrieve the bearer token from the client using:
String token = accessor.getNativeHeader("Authorization").get(0);
My question is, how can I convert that to an Authentication object? Or how to proceed from here? Because I always get 403. This is my websocket security config:
@Configuration
public class WebSocketSecurityConfig extends
AbstractSecurityWebSocketMessageBrokerConfigurer {
@Override
protected void configureInbound(MessageSecurityMetadataSourceRegistry
messages) {
messages.simpDestMatchers("/app/**").authenticated().simpSubscribeDestMatchers("/topic/**").authenticated()
.anyMessage().denyAll();
}
@Override
protected boolean sameOriginDisabled() {
return true;
}
}
And this is the Web security configuration:
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
@Configuration
public class WebSecurityConfiguration extends KeycloakWebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.csrf().disable()
.authenticationProvider(keycloakAuthenticationProvider())
.addFilterBefore(keycloakAuthenticationProcessingFilter(), BasicAuthenticationFilter.class)
.sessionManagement()
.sessionCreationPolicy(SessionCreationPolicy.STATELESS)
.sessionAuthenticationStrategy(sessionAuthenticationStrategy())
.and()
.authorizeRequests()
.requestMatchers(new NegatedRequestMatcher(new AntPathRequestMatcher("/management/**")))
.hasRole("USER");
}
@Override
protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
return new NullAuthenticatedSessionStrategy();
}
@Bean
public KeycloakConfigResolver KeycloakConfigResolver() {
return new KeycloakSpringBootConfigResolver();
}
}
Any help or ideas are welcome.
| I was able to enable token-based authentication following the recommendations by Raman on this question. Here's the final code to make it work:
1) First, create a class that represents the JWS auth token:
public class JWSAuthenticationToken extends AbstractAuthenticationToken implements Authentication {
private static final long serialVersionUID = 1L;
private String token;
private User principal;
public JWSAuthenticationToken(String token) {
this(token, null, null);
}
public JWSAuthenticationToken(String token, User principal, Collection<GrantedAuthority> authorities) {
super(authorities);
this.token = token;
this.principal = principal;
}
@Override
public Object getCredentials() {
return token;
}
@Override
public Object getPrincipal() {
return principal;
}
}
2) Then, create an authenticator that handles the JWSAuthenticationToken, validating it against Keycloak. User is my own app class that represents a user:
@Slf4j
@Component
@Qualifier("websocket")
@AllArgsConstructor
public class KeycloakWebSocketAuthManager implements AuthenticationManager {
private final KeycloakTokenVerifier tokenVerifier;
@Override
public Authentication authenticate(Authentication authentication) throws AuthenticationException {
JWSAuthenticationToken token = (JWSAuthenticationToken) authentication;
String tokenString = (String) token.getCredentials();
try {
AccessToken accessToken = tokenVerifier.verifyToken(tokenString);
List<GrantedAuthority> authorities = accessToken.getRealmAccess().getRoles().stream()
.map(SimpleGrantedAuthority::new).collect(Collectors.toList());
User user = new User(accessToken.getName(), accessToken.getEmail(), accessToken.getPreferredUsername(),
accessToken.getRealmAccess().getRoles());
token = new JWSAuthenticationToken(tokenString, user, authorities);
token.setAuthenticated(true);
} catch (VerificationException e) {
log.debug("Exception authenticating the token {}:", tokenString, e);
throw new BadCredentialsException("Invalid token");
}
return token;
}
}
3) The class that actually validates the token against Keycloak by calling the certs endpoint to verify the token signature, based on this gist. It returns a Keycloak AccessToken:
@Component
@AllArgsConstructor
public class KeycloakTokenVerifier {
private final KeycloakProperties config;
/**
* Verifies a token against a keycloak instance
* @param tokenString the string representation of the jws token
* @return a validated keycloak AccessToken
* @throws VerificationException when the token is not valid
*/
public AccessToken verifyToken(String tokenString) throws VerificationException {
RSATokenVerifier verifier = RSATokenVerifier.create(tokenString);
PublicKey publicKey = retrievePublicKeyFromCertsEndpoint(verifier.getHeader());
return verifier.realmUrl(getRealmUrl()).publicKey(publicKey).verify().getToken();
}
@SuppressWarnings("unchecked")
private PublicKey retrievePublicKeyFromCertsEndpoint(JWSHeader jwsHeader) {
try {
ObjectMapper om = new ObjectMapper();
Map<String, Object> certInfos = om.readValue(new URL(getRealmCertsUrl()).openStream(), Map.class);
List<Map<String, Object>> keys = (List<Map<String, Object>>) certInfos.get("keys");
Map<String, Object> keyInfo = null;
for (Map<String, Object> key : keys) {
String kid = (String) key.get("kid");
if (jwsHeader.getKeyId().equals(kid)) {
keyInfo = key;
break;
}
}
if (keyInfo == null) {
return null;
}
KeyFactory keyFactory = KeyFactory.getInstance("RSA");
String modulusBase64 = (String) keyInfo.get("n");
String exponentBase64 = (String) keyInfo.get("e");
Decoder urlDecoder = Base64.getUrlDecoder();
BigInteger modulus = new BigInteger(1, urlDecoder.decode(modulusBase64));
BigInteger publicExponent = new BigInteger(1, urlDecoder.decode(exponentBase64));
return keyFactory.generatePublic(new RSAPublicKeySpec(modulus, publicExponent));
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
public String getRealmUrl() {
return String.format("%s/realms/%s", config.getAuthServerUrl(), config.getRealm());
}
public String getRealmCertsUrl() {
return getRealmUrl() + "/protocol/openid-connect/certs";
}
}
4) Finally, inject the authenticator into the WebSocket configuration and complete the piece of code as recommended by the Spring docs:
@Slf4j
@Configuration
@EnableWebSocketMessageBroker
@AllArgsConstructor
public class WebSocketConfiguration extends AbstractWebSocketMessageBrokerConfigurer {
@Qualifier("websocket")
private AuthenticationManager authenticationManager;
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
config.enableSimpleBroker("/topic");
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/ws-paperless").setAllowedOrigins("*").withSockJS();
}
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
registration.interceptors(new ChannelInterceptorAdapter() {
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
StompHeaderAccessor accessor = MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor.class);
if (StompCommand.CONNECT.equals(accessor.getCommand())) {
Optional.ofNullable(accessor.getNativeHeader("Authorization")).ifPresent(ah -> {
String bearerToken = ah.get(0).replace("Bearer ", "");
log.debug("Received bearer token {}", bearerToken);
JWSAuthenticationToken token = (JWSAuthenticationToken) authenticationManager
.authenticate(new JWSAuthenticationToken(bearerToken));
accessor.setUser(token);
});
}
return message;
}
});
}
}
I also changed my security configuration a bit. First, I excluded the WS endpoint from Spring web security, and also left the connection-related message types open to anyone in the WebSocket security config:
In WebSecurityConfiguration:
@Override
public void configure(WebSecurity web) throws Exception {
web.ignoring()
.antMatchers("/ws-endpoint/**");
}
And in the class WebSocketSecurityConfig:
@Configuration
public class WebSocketSecurityConfig extends AbstractSecurityWebSocketMessageBrokerConfigurer {
@Override
protected void configureInbound(MessageSecurityMetadataSourceRegistry messages) {
messages.simpTypeMatchers(CONNECT, UNSUBSCRIBE, DISCONNECT, HEARTBEAT).permitAll()
.simpDestMatchers("/app/**", "/topic/**").authenticated().simpSubscribeDestMatchers("/topic/**").authenticated()
.anyMessage().denyAll();
}
@Override
protected boolean sameOriginDisabled() {
return true;
}
}
So the final result is: anybody on the local network can connect to the socket, but to actually subscribe to any channel you have to be authenticated, so you need to send the Bearer token with the original CONNECT message or you'll get an UnauthorizedException. Hope it helps others with this requirement!
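On the client side, the bearer token goes into the headers of the CONNECT frame. A minimal sketch with SockJS and stomp.js (the topic name is hypothetical, and token is assumed to be the access token obtained from the Keycloak adapter):
const socket = new SockJS('/ws-paperless');
const stompClient = Stomp.over(socket);

// Pass the token as a native STOMP header on CONNECT
stompClient.connect({ Authorization: 'Bearer ' + token }, frame => {
  stompClient.subscribe('/topic/updates', msg => console.log(msg.body));
});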
| Keycloak | 50,573,461 | 14 |
We have developed a REST API using RESTEasy (deployed in WildFly 10).
Basically these REST APIs are called internally from another application, and the endpoints are secured with Keycloak.
But one endpoint is exposed to an outside party (that endpoint is also secured with Keycloak).
But since the outside party can't provide the Keycloak Authorization code, we have an implementation where the client is registered with an application-generated auth_key and the client calls the endpoint with that auth_key.
Then in a web filter (a javax.servlet.Filter), using that auth_key we fetch the relevant Keycloak authentication Bearer token. If needed (e.g. the token expired) we also call the Keycloak server. Once it is received we add that Authorization token to the HTTP request within the web filter and proceed to the endpoint.
But the problem is that Keycloak authentication runs before the web filter.
What I'm looking for is: how can the web filter be invoked before Keycloak authentication?
EDIT :
Now I'm trying the approach mentioned here: Setting Request Header to Request Before Authentication Happens in Keycloak. With it I could intercept the call before authentication happens.
But I'm unable to set the Request Header there.
web.xml
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://java.sun.com/xml/ns/javaee"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
<display-name>Restful Web Application</display-name>
<context-param>
<param-name>resteasy.scan</param-name>
<param-value>true</param-value>
</context-param>
<!-- keycloak -->
<context-param>
<param-name>keycloak.config.resolver</param-name>
<param-value>package.to.HeaderBasedKeycloakConfigResolver</param-value>
</context-param>
<security-constraint>
<web-resource-collection>
<web-resource-name>REST endpoints</web-resource-name>
<url-pattern>/ep-name/resource-name</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>resource-name</role-name>
</auth-constraint>
</security-constraint>
<!-- more security-constraint -->
<!-- more security-constraint -->
<!-- more security-constraint -->
<login-config>
<auth-method>KEYCLOAK</auth-method>
<realm-name>realm-name</realm-name>
</login-config>
<security-role>
<role-name>role-name-for-resource-1</role-name>
<role-name>role-name-for-resource-2</role-name>
<!-- more security-role -->
<!-- more security-role -->
<!-- more security-role -->
</security-role>
<listener>
<listener-class>
org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap</listener-class>
</listener>
<servlet>
<servlet-name>resteasy-servlet</servlet-name>
<servlet-class>
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher
</servlet-class>
<init-param>
<param-name>resteasy.servlet.mapping.prefix</param-name>
<param-value>/ep-name</param-value>
</init-param>
</servlet>
<servlet-mapping>
<servlet-name>resteasy-servlet</servlet-name>
<url-pattern>/ep-name/*</url-pattern>
</servlet-mapping>
<filter>
<filter-name>WebFilter</filter-name>
<filter-class>package.to.filter.WebFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>WebFilter</filter-name>
<url-pattern>/desired-ep-name/*</url-pattern>
</filter-mapping>
</web-app>
| Have you tried changing the order of the elements in web.xml (e.g. putting the filter definitions BEFORE the servlet definitions)?
Not sure it will work, but the doc says:
"The order of the filters in the chain is the same as the order that filter mappings appear in the web application deployment descriptor"
The principle may also hold for the ordering between servlets and filters...
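Applied to the web.xml from the question, the idea would be to move the filter declarations up so they precede the servlet and login-config declarations, i.e. something along these lines (a sketch showing element order only):
<filter>
    <filter-name>WebFilter</filter-name>
    <filter-class>package.to.filter.WebFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>WebFilter</filter-name>
    <url-pattern>/desired-ep-name/*</url-pattern>
</filter-mapping>
<!-- servlet, servlet-mapping and login-config declarations follow from here -->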
| Keycloak | 51,387,730 | 14 |
Is it necessary to have a keycloak.json file even if we have configured everything in application.properties for a Spring Boot application?
| If you are using the Spring Security adapter, add a KeycloakConfigResolver bean in your configuration file. It will use application.properties instead of WEB-INF/keycloak.json:
@Bean
public KeycloakConfigResolver KeycloakConfigResolver() {
return new KeycloakSpringBootConfigResolver();
}
See: https://developers.redhat.com/blog/2017/05/25/easily-secure-your-spring-boot-applications-with-keycloak/ Creating a SecurityConfig class section
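With that bean in place, the equivalent of keycloak.json lives entirely in application.properties. A minimal sketch with placeholder values (the secret is only needed for confidential clients):
keycloak.realm=myrealm
keycloak.auth-server-url=http://localhost:8080/auth
keycloak.resource=my-client
keycloak.credentials.secret=my-client-secret
keycloak.ssl-required=external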
| Keycloak | 53,533,088 | 14 |
I am using keycloak as an identity broker to SAML identity provider in order to login to web application.
To get it to work I have created a new authentication flow which looks like: "Create User If Unique", "Automatically Link Brokered Account".
Keycloak redirects correctly to the identity provider's login page. After login the identity provider redirects as expected to Keycloak and then to my web application, but Keycloak also creates a local user.
Is it possible to use an external IdP without local user creation?
The problem with local users: I have a "custom user federation" implementation which fetches users from my application, and once a local user is created it's no longer possible to log in to Keycloak through the custom user federation; Keycloak will just attempt the login against the local user.
| Unfortunately, it is currently not possible to skip the creation of a local user account. According to the Keycloak team, they are deferring the support "as we are planning on some larger work to the storage layer which will make it possible to deliver on this capability".
See Feature Request https://issues.jboss.org/browse/KEYCLOAK-4429.
| Keycloak | 56,492,501 | 14 |
Consider the following environment:
one docker container is keycloak
another docker container is our web app that uses keycloak for authentication
The web app is a Spring Boot application with "keycloak-spring-boot-starter" applied. In application.properties:
keycloak.auth-server-url = http://localhost:8028/auth
A user accessing our web app will be redirected to keycloak using the URL for the exposed port of the keycloak docker container. Login is done without problems in keycloak and the user (browser) is redirected to our web app again. Now, the authorization code needs to be exchanged for an access token. Hence, our web app (keycloak client) tries to connect to the same host and port configured in keycloak.auth-server-url. But this is a problem because the web app resides in a docker container and not on the host machine, so it should rather access http://keycloak:8080 or something where keycloak is the linked keycloak docker container.
So the question is: How can I configure the keycloak client to apply different URLs for browser redirection and access token endpoints?
| There used to be another property, auth-server-url-for-backend-requests, but it was removed by pull request #2506 as a solution to issue #2623 on Keycloak's JIRA. In the description of this issue, you'll find the reasons why and possible workarounds: it should be solved at the DNS level or by adding entries to the host file.
So there is not much you can do in the client configuration, unless you change the code and make your own version of the adapter, but there is something you can do at the Docker level. For this to work properly, first I suggest you use a fully qualified domain name instead of localhost for the public hostname, as you would in production anyway, eg. keycloak.mydomain.com. You can use a fake one (not registered in DNS servers) if you just add it to the host's /etc/hosts file (or Windows equivalent) as an alias next to localhost.
Then, if you are using Docker Compose, you can set aliases (alternative hostnames) for the keycloak service on the docker network to which the containers are connected (see doc: Compose File reference / Service configuration reference / networks / aliases). For example:
version: "3.7"
services:
keycloak:
image: jboss/keycloak
networks:
# Replace 'mynet' with whatever user-defined network you are using or want to use
mynet:
aliases:
- keycloak.mydomain.com
webapp:
image: "nginx:alpine"
networks:
- mynet
networks:
mynet:
If you are just using plain Docker, you can do the equivalent with --alias flag of docker network connect command (see doc: Container networking / IP address and hostname).
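With plain Docker, that would look roughly like this (a sketch; mynet and the container name keycloak are the same placeholders as above):
docker network create mynet
docker network connect --alias keycloak.mydomain.com mynet keycloak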
| Keycloak | 57,213,611 | 14 |
I run keycloak standalone using a command for docker docker run -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin quay.io/keycloak/keycloak:15.0.2
How can I mount a volume so the data is preserved after the container is stopped?
| working docker-compose.yml
version: "3.7"
volumes:
keycloak:
services:
keycloak:
image: quay.io/keycloak/keycloak:18.0.2
ports:
- 8080:8080
environment:
- KEYCLOAK_ADMIN=admin
- KEYCLOAK_ADMIN_PASSWORD=admin
volumes:
- keycloak:/opt/keycloak/data/
restart: always
command:
- "start-dev"
| Keycloak | 69,812,281 | 14 |
I have a Spring Boot project in which I want to either exclude snakeyaml 1.30 or upgrade it to 1.31, in order to avoid the issue Fortify reports.
snakeyaml 1.30 has a known security vulnerability.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.3</version>
</parent>
Below is seen on the effective pom.xml of the project
<dependency>
<groupId>org.yaml</groupId>
<artifactId>snakeyaml</artifactId>
<version>1.30</version>
<scope>compile</scope>
</dependency>
Is there any way to upgrade, given that the remediation says to upgrade to snakeyaml 1.31?
Ref : https://security.snyk.io/vuln/SNYK-JAVA-ORGYAML-2806360
| SnakeYAML is a managed dependency in Spring Boot, so you can simply add the following to the properties section of pom.xml to have Spring Boot 2.7.3 use SnakeYAML 1.31 instead of 1.30:
<snakeyaml.version>1.31</snakeyaml.version>
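In the pom.xml from the question, that property sits alongside the parent declaration (sketch):
<properties>
    <snakeyaml.version>1.31</snakeyaml.version>
</properties>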
| Snyk | 73,707,768 | 22 |
When trying to figure out how to configure an aws_instance within an AWS VPC, the following errors occur:
* Error launching source instance: InvalidParameterCombination: The parameter groupName cannot be used with the parameter subnet
status code: 400, request id: []
or
* Error launching source instance: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
| This is due to how a security group is associated with an instance.
Without a subnet it is OK to associate it using the security group's name:
resource "aws_instance" "server" {
...
security_groups = [ "${aws_security_group.my_security_group.name}" ]
}
In the case where a subnet is also associated you cannot use the name, but should instead use the security group's ID:
security_groups = [ "${aws_security_group.my_security_group.id}" ]
subnet_id = "${aws_subnet.my_subnet.id}"
The above assumes you've created a security group named my_security_group and a subnet named my_subnet.
| Terraform | 31,569,910 | 17 |
I have a list of objects from a variable in terraform
variable "persons" {
type = list(object({
name = string,
phonenumber = string,
tshirtSize = string
}))
description = "List of person"
}
Now I want a list of the persons' names so I can use it to define an AWS resource.
How can I convert this object list to a list of names?
["bob", "amy", "jane"]
I'm on terraform 0.12.24, though can upgrade if needed
| Updated Answer:
Use the splat expression
var.persons[*].name
https://developer.hashicorp.com/terraform/language/expressions/splat
Original Answer:
I was able to do this in a locals file:
locals {
names = [
for person in var.persons:
person.name
]
}
For additional reading
SEE: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
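For completeness, a sketch of the splat expression in use, given the variable from the question:
output "names" {
  # Yields ["bob", "amy", "jane"] for the example input
  value = var.persons[*].name
}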
| Terraform | 62,836,424 | 17 |
Question and details
How can I allow a Kubernetes cluster in Azure to talk to an Azure Container Registry via terraform?
I want to load custom images from my Azure Container Registry. Unfortunately, I encounter a permissions error at the point where Kubernetes is supposed to download the image from the ACR.
What I have tried so far
My experiments without terraform (az cli)
It all works perfectly after I attach the ACR to the AKS via az cli:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acrName>
My experiments with terraform
This is my terraform configuration; I have stripped some other stuff out. It works by itself.
terraform {
backend "azurerm" {
resource_group_name = "tf-state"
storage_account_name = "devopstfstate"
container_name = "tfstatetest"
key = "prod.terraform.tfstatetest"
}
}
provider "azurerm" {
}
provider "azuread" {
}
provider "random" {
}
# define the password
resource "random_string" "password" {
length = 32
special = true
}
# define the resource group
resource "azurerm_resource_group" "rg" {
name = "myrg"
location = "eastus2"
}
# define the app
resource "azuread_application" "tfapp" {
name = "mytfapp"
}
# define the service principal
resource "azuread_service_principal" "tfapp" {
application_id = azuread_application.tfapp.application_id
}
# define the service principal password
resource "azuread_service_principal_password" "tfapp" {
service_principal_id = azuread_service_principal.tfapp.id
end_date = "2020-12-31T09:00:00Z"
value = random_string.password.result
}
# define the container registry
resource "azurerm_container_registry" "acr" {
name = "mycontainerregistry2387987222"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Basic"
admin_enabled = false
}
# define the kubernetes cluster
resource "azurerm_kubernetes_cluster" "mycluster" {
name = "myaks"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "mycluster"
network_profile {
network_plugin = "azure"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
# Use the service principal created above
service_principal {
client_id = azuread_service_principal.tfapp.application_id
client_secret = azuread_service_principal_password.tfapp.value
}
tags = {
Environment = "demo"
}
windows_profile {
admin_username = "dingding"
admin_password = random_string.password.result
}
}
# define the windows node pool for kubernetes
resource "azurerm_kubernetes_cluster_node_pool" "winpool" {
name = "winp"
kubernetes_cluster_id = azurerm_kubernetes_cluster.mycluster.id
vm_size = "Standard_B2s"
node_count = 1
os_type = "Windows"
}
# define the kubernetes name space
resource "kubernetes_namespace" "namesp" {
metadata {
name = "namesp"
}
}
# Try to give permissions, to let the AKR access the ACR
resource "azurerm_role_assignment" "acrpull_role" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.tfapp.object_id
skip_service_principal_aad_check = true
}
This code is adapted from https://github.com/terraform-providers/terraform-provider-azuread/issues/104.
Unfortunately, when I launch a container inside the kubernetes cluster, I receive an error message:
Failed to pull image "mycontainerregistry.azurecr.io/myunittests": [rpc error: code = Unknown desc = Error response from daemon: manifest for mycontainerregistry.azurecr.io/myunittests:latest not found: manifest unknown: manifest unknown, rpc error: code = Unknown desc = Error response from daemon: Get https://mycontainerregistry.azurecr.io/v2/myunittests/manifests/latest: unauthorized: authentication required]
Update / note:
When I run terraform apply with the above code, the creation of resources is interrupted:
azurerm_container_registry.acr: Creation complete after 18s [id=/subscriptions/000/resourceGroups/myrg/providers/Microsoft.ContainerRegistry/registries/mycontainerregistry2387987222]
azurerm_role_assignment.acrpull_role: Creating...
azuread_service_principal_password.tfapp: Still creating... [10s elapsed]
azuread_service_principal_password.tfapp: Creation complete after 12s [id=000/000]
azurerm_kubernetes_cluster.mycluster: Creating...
azurerm_role_assignment.acrpull_role: Creation complete after 8s [id=/subscriptions/000/resourceGroups/myrg/providers/Microsoft.ContainerRegistry/registries/mycontainerregistry2387987222/providers/Microsoft.Authorization/roleAssignments/000]
azurerm_kubernetes_cluster.mycluster: Still creating... [10s elapsed]
Error: Error creating Managed Kubernetes Cluster "myaks" (Resource Group "myrg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ServicePrincipalNotFound" Message="Service principal clientID: 000 not found in Active Directory tenant 000, Please see https://aka.ms/aks-sp-help for more details."
on test.tf line 56, in resource "azurerm_kubernetes_cluster" "mycluster":
56: resource "azurerm_kubernetes_cluster" "mycluster" {
I think, however, that this is just because it takes a few minutes for the service principal to be created. When I run terraform apply again a few minutes later, it goes beyond that point without issues.
| (I upvoted the answer above)
Just adding a simpler way, for anyone else that might need it, where you don't need to create a service principal.
resource "azurerm_kubernetes_cluster" "kubweb" {
name = local.cluster_web
location = local.rgloc
resource_group_name = local.rgname
dns_prefix = "${local.cluster_web}-dns"
kubernetes_version = local.kubversion
# used to group all the internal objects of this cluster
node_resource_group = "${local.cluster_web}-rg-node"
# azure will assign the id automatically
identity {
type = "SystemAssigned"
}
default_node_pool {
name = "nodepool1"
node_count = 4
vm_size = local.vm_size
orchestrator_version = local.kubversion
}
role_based_access_control {
enabled = true
}
addon_profile {
kube_dashboard {
enabled = true
}
}
tags = {
environment = local.env
}
}
resource "azurerm_container_registry" "acr" {
name = "acr1"
resource_group_name = local.rgname
location = local.rgloc
sku = "Standard"
admin_enabled = true
tags = {
environment = local.env
}
}
# add the role to the identity the kubernetes cluster was assigned
resource "azurerm_role_assignment" "kubweb_to_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.kubweb.kubelet_identity[0].object_id
}
| Terraform | 59,978,060 | 17 |
I've saved terraform plan -out=my-plan and intend to commit it to source control and feed it into a custom tool for ingestion, testing, etc.
However, the file contents of my-plan are jumbled and I'm wondering what the encoding used is.
What is the encoding being used for the Terraform plan file?
| While the other tools mentioned here are useful, things change regularly in the Terraform space and third-party tools often aren't able to be kept up-to-date.
For some time now Terraform has directly supported viewing a plan file in the same human-readable format that is displayed at the time you run plan:
terraform show <filename>
Since v0.12 you can now also view a plan file in JSON format, which you could save to work on further with other tools:
terraform show -json <filename>
There's an explanation of the JSON schema at https://www.terraform.io/docs/internals/json-format.html. As of writing, note that:
The output ... currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version ... We do not anticipate any significant breaking changes to the format before its first major version, however.
| Terraform | 49,385,346 | 17 |
Use Case
Trying to provision a (Docker Swarm or Consul) cluster where initializing the cluster first occurs on one node, which generates some token, which then needs to be used by other nodes joining the cluster. Key thing being that nodes 1 and 2 shouldn't attempt to join the cluster until the join key has been generated by node 0.
E.g. on node 0, running docker swarm init ... will return a join token. Then on nodes 1 and 2, you'd need to pass that token to the join command, like docker swarm join --token ${JOIN_TOKEN} ${NODE_0_IP_ADDRESS}:${SOME_PORT}. And magic, you've got a neat little cluster...
Attempts So Far
Tried initializing all nodes with the AWS SDK installed, storing the join key from node 0 on S3, then fetching that join key on the other nodes. This is done via a null_resource with 'remote-exec' provisioners. Due to the way Terraform executes things in parallel, there are race conditions, and predictably nodes 1 and 2 frequently attempt to fetch a key from S3 that's not there yet (e.g. node 0 hasn't finished its stuff yet).
Tried using the 'local-exec' provisioner to SSH into node 0 and capture its join key output. This hasn't worked well or I sucked at doing it.
I've read the docs. And stack overflow. And Github issues, like this really long outstanding one. Thoroughly. If this has been solved elsewhere though, links appreciated!
PS - this is directly related to and is a smaller subset of this question, but wanted to re-ask it in order to focus the scope of the problem.
| You can redirect the outputs to a file:
resource "null_resource" "shell" {
provisioner "local-exec" {
command = "uptime 2>stderr >stdout; echo $? >exitstatus"
}
}
and then read the stdout, stderr and exitstatus files with local_file
The problem is that if the files disappear, then terraform apply will fail.
In terraform 0.11 I made a workaround by reading the file with an external data source and storing the results in a null_resource's triggers (!)
resource "null_resource" "contents" {
triggers = {
stdout = "${data.external.read.result["stdout"]}"
stderr = "${data.external.read.result["stderr"]}"
exitstatus = "${data.external.read.result["exitstatus"]}"
}
lifecycle {
ignore_changes = [
"triggers",
]
}
}
But in 0.12 this can be replaced with file()
and then finally I can use / output those with:
output "stdout" {
value = "${chomp(null_resource.contents.triggers["stdout"])}"
}
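In Terraform 0.12+, a sketch of reading the same files directly with file() would be (assuming the files land in the module directory, as with the local-exec example above):
output "stdout_direct" {
  value = chomp(file("${path.module}/stdout"))
}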
See the module https://github.com/matti/terraform-shell-resource for full implementation
| Terraform | 44,509,997 | 17 |
Is terraform destroy needed before terraform apply? If not, what is a workflow you follow when updating existing infrastructure and how do you decide if destroy is needed?
| That would be pretty non-standard, in my opinion. Terraform destroy is only used in cases where you want to completely wipe your infrastructure. One of the biggest features of terraform is that it can do an intelligent delta of your desired infrastructure and your existing infrastructure and only make the changes needed. By performing a refresh, plan and apply you can ensure that terraform:
refresh - Has an up-to-date understanding of your current infrastructure. This is important in case anything was changed manually, outside of your terraform script.
plan - Prepares a list for you to review of what terraform intends to modify, or delete (or leave alone).
apply - Performs the changes laid out in the plan.
By executing these 3 commands in sequence terraform will only perform the changes necessary, in the order required, to bring your environments in line with any changes to your terraform file.
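Concretely, that sequence (using a saved plan file, which is optional but recommended) looks like:
terraform refresh
terraform plan -out=tfplan
terraform apply tfplan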
Where I find destroy to be useful is in non-production environments or in cases where you are performing a restructure that's so invasive that starting from scratch would ensure a safer build.
*There are also edge cases where terraform may fail to understand the correct order of operations (do I modify a security group first or a security group rule?), or it will find itself in a dependency cycle and will be unable to perform an operation. In those cases, however, running destroy is a nuclear solution. In general, I would perform the problem change manually (via command line, or AWS Console, if I'm in AWS), to nudge it along and then run a refresh, plan, apply sequence to get back on track.
| Terraform | 37,464,875 | 17 |
I'm new to the Terraform world. I'm trying to run a shell script using Terraform.
Below is the main.tf file
#Executing shell script via Null Resource
resource "null_resource" "install_istio" {
provisioner "local-exec" {
command = <<EOT
"chmod +x install-istio.sh"
"./install-istio.sh"
EOT
interpreter = ["/bin/bash", "-c"]
working_dir = "${path.module}"
}
}
Below is the install-istio.sh file which it needs to run
#!/bin/sh
# Download and install the Istio istioctl client binary
# Specify the Istio version that will be leveraged throughout these instructions
ISTIO_VERSION=1.7.3
curl -sL "https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istioctl-$ISTIO_VERSION-linux-amd64.tar.gz" | tar xz
sudo mv ./istioctl /usr/local/bin/istioctl
sudo chmod +x /usr/local/bin/istioctl
# Install the Istio Operator on EKS
istioctl operator init
# The Istio Operator is installed into the istio-operator namespace. Query the namespace.
kubectl get all -n istio-operator
# Install Istio components
istioctl profile dump default
# Create the istio-system namespace and deploy the Istio Operator Spec to that namespace.
kubectl create ns istio-system
kubectl apply -f istio-eks.yaml
# Validate the Istio installation
kubectl get all -n istio-system
I'm getting below warning:
Warning: Interpolation-only expressions are deprecated
on .terraform/modules/istio_module/Istio-Operator/main.tf line 10, in resource "null_resource" "install_istio":
10: working_dir = "${path.module}"
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
The above script in main.tf does run the command in the background.
Can someone help me with the missing part? How can I run multiple commands using local-exec? Also, how can I get rid of the warning message?
Appreciate all your help, Thanks!
| I think there are two separate things going on here which are actually not related.
The main problem here is in how you've written your local-exec script:
command = <<EOT
"chmod +x install-istio.sh"
"./install-istio.sh"
EOT
This will become the following shell script to run:
"chmod +x install-istio.sh"
"./install-istio.sh"
By putting the first command line in quotes, you're telling the shell to try to run a program that is called chmod +x install-istio.sh without any arguments. That is, the shell will try to find an executable in your PATH called chmod +x install-istio.sh, rather than trying to run a command called just chmod with some arguments as I think you intended.
Remove the quotes around the command lines to make this work. Quotes aren't needed here because neither of these commands contain any special characters that would require quoting:
command = <<-EOT
chmod +x install-istio.sh
./install-istio.sh
EOT
The warning message about interpolation-only expressions is unrelated to the problem of running these commands. This is telling you that you've used a legacy syntax that is still supported for backward compatibility but no longer recommended.
If you are using the latest version of Terraform at the time of writing (one of the v0.15 releases, or later) then you may be able to resolve this and other warnings like it by switching into this module directory and running terraform fmt, which is a command that updates your configuration to match the expected style conventions.
Alternatively, you could manually change what that command would update automatically, which is to remove the redundant interpolation markers around path.module:
working_dir = path.module
| Terraform | 67,297,188 | 16 |
I used ami-0fd3c3c68a2a8066f from the ap-south-1 region (http://cloud-images.ubuntu.com/locator/ec2/), but I am unable to use the t2.micro instance type with it.
Error: Error launching source instance: InvalidParameterValue: The architecture 'x86_64' of the specified instance type does not match the architecture 'arm64' of the specified AMI. Specify an instance type and an AMI that have matching architectures, and try again. You can use 'describe-instance-types' or 'describe-images' to discover the architecture of the instance type or AMI.
How can I find the list of applicable instance types for an AMI before trying to launch an instance using Terraform?
| Using AWS CLI you can use describe-instance-types:
aws ec2 describe-instance-types --filters Name=processor-info.supported-architecture,Values=arm64 --query "InstanceTypes[*].InstanceType" --output text
E.g. output:
r6gd.large m6g.metal m6gd.medium c6gd.metal m6gd.12xlarge c6g.16xlarge r6g.large r6gd.medium r6g.8xlarge m6gd.metal r6gd.xlarge t4g.medium r6gd.2xlarge m6gd.xlarge c6g.xlarge c6g.12xlarge r6g.medium a1.medium m6g.xlarge m6gd.4xlarge t4g.nano r6g.16xlarge
t4g.2xlarge m6g.12xlarge r6gd.8xlarge a1.large m6g.4xlarge c6gd.16xlarge t4g.xlarge c6g.large m6g.large c6gd.xlarge a1.metal m6g.8xlarge m6gd.16xlarge a1.xlarge r6g.12xlarge r6gd.metal t4g.micro r6g.4xlarge t4g.small a1.2xlarge r6gd.4xlarge t4g.large
m6g.16xlarge c6g.4xlarge m6gd.2xlarge c6gd.medium c6gd.8xlarge r6gd.16xlarge m6gd.8xlarge c6g.2xlarge r6gd.12xlarge a1.4xlarge c6g.8xlarge r6g.2xlarge m6g.2xlarge m6g.medium c6gd.large c6g.medium c6gd.2xlarge r6g.metal c6gd.4xlarge m6gd.large r6g.xlarge
I don't see any equivalent in TF for that. In the worst case, you could define an external data source for it.
update
There is no single call to get the list of instance types based on an AMI. It must be done in two steps.
Use aws_ami data source to get architecture of a given AMI.
Use describe-instance-types to get instance types for that architecture.
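A sketch of the first step in Terraform (the owner ID 099720109477 is an assumption for Canonical's Ubuntu AMIs; adjust for other publishers):
data "aws_ami" "selected" {
  owners = ["099720109477"]
  filter {
    name   = "image-id"
    values = ["ami-0fd3c3c68a2a8066f"]
  }
}

output "ami_architecture" {
  # e.g. "arm64" for the AMI in the question
  value = data.aws_ami.selected.architecture
}
The resulting architecture can then be fed into the describe-instance-types filter shown above.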
| Terraform | 67,276,619 | 16 |
I want to create a map like this:
variable "test" {
type = map(any)
default = {
param1 = "sdfsdf",
param2 = "sdfsfd",
param3 = {
mylist = [
"aaaaa",
"bbbbb",
"ccccc"
]
    }
  }
}
I get this error:
This default value is not compatible with the variable's type constraint: all
map elements must have the same type.
Does that mean I haven't defined the var right, or does Terraform just not allow this?
| What you've encountered here is the difference in the Terraform language between collection types and structural types.
"Map" is a collection type kind, and a map value is any number of elements each identified by an arbitrary string but all values of the same type.
In your case it sounds like you want to declare that you need a set of specific attributes, each of which has its own type constraint. Fixed structures with an individual type for each element are represented by structural types, and "object" is the structural type kind that is most similar to a map.
The following is an object type constraint that would accept the example value you included in your question:
variable "test" {
type = object({
param1 = string
param2 = string
param3 = object({
mylist = list(string)
})
})
}
If you want to allow the caller to set an arbitrary number of keys in param3 and they will all be lists of strings then you could instead set that one to be a map of lists of strings:
variable "test" {
type = object({
param1 = string
param2 = string
param3 = map(list(string))
})
}
In most cases the module will expect a particular data type and would fail if the given value is not of that type, in which case it's helpful to write out that type in full to give guidance to the person calling the module and allow Terraform to validate the value. However, in some cases you really do want to just accept any arbitrary value -- for example, if you intend to just JSON encode the value and send it verbatim to the argument of a resource -- and so in that case you can set the type constraint to any, which will accept any type of value at all:
variable "test" {
type = any
}
In this case, Terraform will not check the incoming value at all, and so your module shouldn't make any assumptions about its type.
| Terraform | 63,180,277 | 16 |
I have just run Terraform upgrade. My code was updated but now it shows some errors. The first was:
variable "s3_bucket_name" {
type = list(string)
default = [
"some_bucket_name",
"other_bucket_name",
...
]
}
It doesn't like list(string). I went back to square one and redid the entire Getting Started tutorial. It said that I could either explicitly state type = list or I could implicitly state it by leaving out type and just using the [square brackets].
I saw here: unknown token IDENT list error for IP address variable that I could use "list" (quotes) but I can't find any information on list(string).
So I commented out my list(string) which moved the error along to the next part.
provider "aws" {
region = var.aws_region
}
The tutorial indicates that this is the correct way to create a region tag (there's actually part of the tutorial with that exact code).
Can anyone help me to understand what Unknown token IDENT means as it's throughout my code but it's not helping me to understand what I should do to fix it.
| This error appears when you execute terraform 0.12upgrade and your code syntax is already in Terraform 0.12x, or is a mix of syntax versions <= 0.11x and 0.12x. The Unknown token IDENT error can also happen when the version installed on your local machine (or on the remote CI/CD server) is 0.11x, your code syntax is 0.12x, and you run a terraform command such as terraform init.
variable "var1" {
type = "list"
...
}
This is Terraform 0.11x syntax; the 0.12x alternative is type = list(string).
To reproduce your error, I took Terraform code that was already in 0.12x syntax and executed terraform 0.12upgrade; the unknown token: IDENT error showed up!
In sum, I think your first code iteration is already in the correct syntax, so there's no need to upgrade.
To avoid this kind of error you can add a new version.tf file to your code with this content:
terraform {
required_version = ">= 0.12"
}
Upgrading tips:
Don’t mix the syntaxes in the same Terraform code, if so, downgrade manually your code to 0.11x
Put all your Terraform code syntax in 0.11x
Then run: terraform 0.12upgrade
| Terraform | 59,471,513 | 16 |
I'm writing the terraform for creating an IAM role for AWS StepFunctions.
What should be the value for Principal in assume_role_policy?
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "stepfunctions.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
I'm getting the error
Error: Error creating IAM Role my_utility_sfn: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"stepfunctions.amazonaws.com"
| The AWS documentation for service endpoints should hold the answer.
Looks like it is states.<region>.amazonaws.com
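Applied to the policy in the question, the trust relationship would then look like this (a sketch; us-east-1 is a placeholder region):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "states.us-east-1.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}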
| Terraform | 58,762,133 | 16 |
I have set up automation through GitHub/Jenkins to post the output of terraform plan for the repo, via Jenkins, as a comment on the pull request in GitHub. The entire orchestration works great except for the fact that the output of terraform plan is not very human-readable in this kind of automation and doesn't look the way it does when you run it in a terminal.
I tried several approaches, like using terraform show on the plan file, then writing that to a custom file and posting it as a comment on the GitHub PR. In every case the output contains some binary characters.
i even used the terraform-plan-parser
https://github.com/lifeomic/terraform-plan-parser
but that doesn't work for terraform 0.12 and relates to the issue below:
https://github.com/lifeomic/terraform-plan-parser/issues/31
What's the best way to retrieve the output of any terraform plan in automation so that it can be inspected before the apply is done? It looks to me like it only works well in a terminal.
Any help or suggestions here will be greatly appreciated as always.
| By default Terraform uses terminal escape sequences to highlight parts of the output with simple formatting such as colors or a bold typeface.
Reproducing that result exactly in the context of GitHub would require translating the terminal escape sequences into a form that GitHub is able to render.
Unfortunately GitHub comments are written in GitHub-flavored Markdown, which doesn't support any direct way to create colored text similar to Terraform's plan output at the time of writing. Therefore I know of no easy way to reproduce the text formatting from the Terraform plan output in a GitHub comment.
If you run terraform plan with the -no-color option then it will skip the terminal escape sequences and produce plain-text output that you could include in a preformatted text block in your Markdown comment. However, that output will therefore not include the text formatting you normally see in your terminal.
If you are willing to write some custom formatting code to present the Terraform plan in a different format for your GitHub comments, you can obtain a JSON representation of the plan by saving the plan to disk and then reading it with terraform show:
terraform plan -out=tfplan
terraform show -json tfplan
This will produce a JSON representation of the plan that you could parse in a program of your own design and emit whatever result format you want. This will, however, be considerably more work than just interpreting the terminal escape sequences from Terraform's normal output, because it's a JSON representation of the data that Terraform uses to produce the plan rendering, not of the plan rendering itself.
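As a quick illustration, assuming jq is available, a one-line summary of planned actions can be pulled out of the documented resource_changes structure like so:
terraform plan -out=tfplan
terraform show -json tfplan | jq -r '.resource_changes[] | (.change.actions | join("/")) + " " + .address'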
| Terraform | 58,757,379 | 16 |
Is it still not possible to use a variable provider in Terraform v0.12.6? In *.tfvars I have a list variable supplier:
supplier = ["azurerm.core-prod","azurerm.core-nonprod"]
and providers defined in provider.tf:
provider "azurerm" {
...
alias = "core-prod"
}
provider "azurerm" {
...
  alias = "core-nonprod"
}

Then I want to reference it in *.tf. The example below uses 'data', but the same applies to 'resource':
data "azurerm_public_ip" "pip" {
count = "${var.count}"
....
provider = "${var.supplier[count.index]}"
}
What is a workaround to the error:
Error: Invalid provider configuration reference
...
The provider argument requires a provider type name, optionally followed by a period and then a configuration alias.
| It is not possible to dynamically associate a resource with a provider. Similar to how in statically-typed programming languages you typically can't dynamically switch a particular symbol to refer to a different library at runtime, Terraform needs to bind resource blocks to provider configurations before expression evaluation is possible.
What you can do is write a module that expects to receive a provider configuration from its caller and then statically select a provider configuration per instance of that module:
provider "azurerm" {
# ...
alias = "core-prod"
}
module "provider-agnostic-example" {
source = "./modules/provider-agnostic-example"
providers = {
# This means that the default configuration for "azurerm" in the
# child module is the same as the "core-prod" alias configuration
# in this parent module.
azurerm = azurerm.core-prod
}
}
In this case, the module itself is provider agnostic, and so it can be used in both your production and non-production settings, but any particular use of the module must specify which it is for.
A common approach is to have a separate configuration for each environment, with shared modules representing any characteristics the environments have in common but giving the opportunity for representing any differences that might need to exist between them. In the simplest case, this might just be two configurations that consist only of a single module block and a single provider block, each with some different arguments representing the configuration for that environment, and with the shared module containing all of the resource and data blocks. In more complex systems there might be multiple modules integrated together using module composition techniques.
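As a concrete sketch of that composition, the non-production environment would select its alias the same way, reusing the same provider-agnostic module:

module "provider-agnostic-example-nonprod" {
  source = "./modules/provider-agnostic-example"

  providers = {
    # The child module's default "azurerm" is now the non-production configuration.
    azurerm = azurerm.core-nonprod
  }
}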
| Terraform | 57,512,884 | 16 |
I have the following terraform script which creates an API gateway that passes requests to a lambda function.
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
#
region = "${var.region}"
version = "~> 2.6"
}
resource "aws_api_gateway_rest_api" "MyDemoAPI" {
name = "MyDemoAPI"
description = "This is my API for demonstration purposes"
}
resource "aws_api_gateway_resource" "MyDemoResource" {
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
parent_id = "${aws_api_gateway_rest_api.MyDemoAPI.root_resource_id}"
path_part = "mydemoresource"
}
resource "aws_api_gateway_method" "MyDemoMethod" {
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
resource_id = "${aws_api_gateway_resource.MyDemoResource.id}"
http_method = "POST"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "MyDemoIntegration" {
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
resource_id = "${aws_api_gateway_resource.MyDemoResource.id}"
http_method = "${aws_api_gateway_method.MyDemoMethod.http_method}"
integration_http_method = "POST"
type = "AWS_PROXY"
uri = "arn:aws:apigateway:ap-southeast-1:lambda:path/2015-03-31/functions/${aws_lambda_function.test_lambda_function.arn}/invocations"
content_handling = "CONVERT_TO_TEXT"
}
resource "aws_api_gateway_method_response" "200" {
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
resource_id = "${aws_api_gateway_resource.MyDemoResource.id}"
http_method = "${aws_api_gateway_method.MyDemoMethod.http_method}"
status_code = "200"
response_models {
"application/json" = "Empty"
}
}
resource "aws_lambda_function" "test_lambda_function" {
filename = "lambda.zip"
description = "test build api gateway and lambda function using terraform"
function_name = "test_lambda_function"
role = "arn:aws:iam::123456789123:role/my_labmda_role"
handler = "gateway.lambda_handler"
runtime = "python3.6"
memory_size = 128
timeout = 60
}
The Method Response section of the API gateway resource displays "Select an integration response...".
But if I create the same API gateway using AWS console, the Method Response section displays something different:
Why does this happen?
The following steps are how I use AWS console to create the API gateway:
Select Create Method under the resource.
Select POST method.
Select the desired options.
I've tried creating the above resources manually first, then executing terraform apply. Then terraform tells me that nothing needs to be changed.
terraform apply
aws_api_gateway_rest_api.MyDemoAPI: Refreshing state... (ID: 1qa34vs1k7)
aws_lambda_function.test_lambda_function: Refreshing state... (ID: test_lambda_function)
aws_api_gateway_resource.MyDemoResource: Refreshing state... (ID: 4xej81)
aws_api_gateway_method.MyDemoMethod: Refreshing state... (ID: agm-1qa34vs1k7-4xej81-POST)
aws_api_gateway_method_response.200: Refreshing state... (ID: agmr-1qa34vs1k7-4xej81-POST-200)
aws_api_gateway_integration.MyDemoIntegration: Refreshing state... (ID: agi-1qa34vs1k7-4xej81-POST)
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
This seems to mean that the manually built structure is the same as the structure built by terraform.
| Because API Gateway is a complex AWS component and you can control pretty much everything on it (basically every single part of it is managed independently, giving you a lot of control over what you create but also making things harder to deal with).
See that it says "Select an Integration Response", but since your Terraform code didn't create one, it is therefore empty.
I had come across this very same problem a few weeks ago and I found the solution on Terraform's GitHub. I think Terraform should better document this as you're not the first one nor will you be the last to come up with this question. Well, at least we have this documented in StackOverflow now :)
Long story short, you need to add a aws_api_gateway_integration_response terraform resource to your API Gateway.
resource "aws_api_gateway_integration_response" "MyDemoIntegrationResponse" {
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
resource_id = "${aws_api_gateway_resource.MyDemoResource.id}"
http_method = "${aws_api_gateway_method.MyDemoMethod.http_method}"
status_code = "${aws_api_gateway_method_response.200.status_code}"
response_templates = {
"application/json" = ""
}
}
If you can, however, I suggest you use a proper framework to hook events to your Lambda functions (like the Serverless Framework or AWS SAM) as it's very verbose and error prone to create them in Terraform.
Usually, I combine Terraform and Serverless Framework together: I use Terraform to create infrastructure resources - even if they are serverless - like DynamoDB tables, SQS queues, SNS topics, etc. and the Serverless Framework to create the Lambda functions and their corresponding events.
| Terraform | 56,071,536 | 16 |
I am trying to create an AWS lambda Function using terraform.
My terraform directory looks like
terraform
iam-policies
main.tf
lambda
files/
main.tf
main.tf
I have my lambda function stored inside /terraform/lambda/files/lambda_function.py.
Whenever I run terraform apply, I have a "null_resource" that executes some commands on the local machine to zip the python file:
variable "pythonfile" {
description = "lambda function python filename"
type = "string"
}
resource "null_resource" "lambda_preconditions" {
triggers {
always_run = "${uuid()}"
}
provisioner "local-exec" {
command = "rm -rf ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "mkdir -p ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "cp -R ${path.module}/files/${var.pythonfile} ${path.module}/files/zips/lambda_function.py"
}
provisioner "local-exec" {
command = "cd ${path.module}/files/zips && zip -r lambda.zip ."
}
}
My "aws_lambda_function" resource looks like this.
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
  source_code_hash = "${base64sha256(substr(format("%s/files/zips/lambda.zip", path.module), length(path.cwd) + 1, -1))}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
depends_on = [
"null_resource.lambda_preconditions"
]
}
Problem:
Whenever I change the lambda_function.py file and run terraform apply again, everything works fine but the actual code in the lambda function does not change.
Also if I delete all the terraform state files and apply again, the new change is propagated without any problem.
What could be the possible reason for this?
Instead of using null_resource, I used the archive_file data source, which creates the zip file automatically if new changes are detected. Then I referenced the archive_file data in the lambda resource's source_code_hash attribute.
archive_file data source
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "${path.module}/files/zips/lambda.zip"
source {
content = "${file("${path.module}/files/ebs_cleanup_lambda.py")}"
filename = "lambda_function.py"
}
}
The lambda resource
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
}
| Terraform | 53,849,288 | 16 |
What is the Google Cloud Platform mechanism for locking the state file when using Terraform?
Something like DynamoDB on AWS...
thanks
The gcs backend implements Terraform state locking by using a special lock file with a .tflock extension. This file is placed next to the Terraform state itself for the duration of the Terraform state operation. For example, if the state file is located at path
gs://BUCKET/PREFIX/WORKSPACE.tfstate
then the corresponding lock file will be located at path
gs://BUCKET/PREFIX/WORKSPACE.tflock
Source: hashicorp/terraform
The atomicity of locking is guaranteed by using the GCS feature called Precondition. Terraform itself makes use of DoesNotExist condition of GCP Go SDK which in turn uses the GCS Precondition. Underneath, this adds this HTTP header x-goog-if-generation-match: 0 to the GCS copy request.
According to GCS documentation:
When a Match precondition uses the value 0 instead of a generation number, the request only succeeds if there are no live objects in the Cloud Storage bucket with the name specified in the request.
Which is exactly what is needed for Terraform state locking.
| Terraform | 53,413,639 | 16 |
Working on a Terraform project in which I am creating an RDS cluster by grabbing and using the most recent production db snapshot:
# Get latest snapshot from production DB
data "aws_db_snapshot" "db_snapshot" {
most_recent = true
db_instance_identifier = "${var.db_instance_to_clone}"
}
#Create RDS instance from snapshot
resource "aws_db_instance" "primary" {
identifier = "${var.app_name}-primary"
snapshot_identifier = "${data.aws_db_snapshot.db_snapshot.id}"
instance_class = "${var.instance_class}"
vpc_security_group_ids = ["${var.security_group_id}"]
skip_final_snapshot = true
final_snapshot_identifier = "snapshot"
parameter_group_name = "${var.parameter_group_name}"
publicly_accessible = true
timeouts {
create = "2h"
}
}
The issue with this approach is that subsequent runs of the terraform code (once another snapshot has been taken) want to re-create the primary RDS instance (and subsequently, the read replicas) with the latest snapshot of the DB. I was thinking of something along the lines of a boolean count parameter that specifies the first run, but setting count = 0 on the snapshot resource causes issues with the snapshot_id parameter of the db resource. Likewise, setting count = 0 on the db resource would indicate that it should destroy the db.
The use case for this is to be able to make changes to other aspects of the production infrastructure that this terraform plan manages without having to re-create the entire RDS cluster, which is a very time-consuming resource to destroy/create.
| Try placing an ignore_changes lifecycle block within your aws_db_instance definition:
lifecycle {
ignore_changes = [
snapshot_identifier,
]
}
This will cause Terraform to only look for changes to the database's snapshot_identifier upon initial creation.
If the database already exists, Terraform will ignore any changes to the existing database's snapshot_identifier field -- even if a new snapshot has been created since then.
| Terraform | 51,486,890 | 16 |
When I used aws_cloudwatch_log_resource_policy in a configuration file, it was succesfully applied. I was expecting a policy to appear in IAM -> Policies list in the web console, but there was no sign of new policies.
What kind of resource does aws_cloudwatch_log_resource_policy create?
| Short answer: it creates a CloudWatch Logs Resource Policy!
Long answer: it's a misnomer from AWS as it doesn't actually get attached to a resource at all and appears to be a service-level access policy for CloudWatch logs.
The only reference to it I can find in the AWS docs (as of this writing) are the API call and CLI command descriptions - everything else is about adding resource policies to destinations which are a different thing.
There also does not appear to be any console support for it anywhere that I would expect it, however if you're creating an ElasticSearch domain in the console it will prompt you for one if you're setting up slow query logs.
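For example, a sketch of the Terraform resource that satisfies that Elasticsearch prompt could look like the following; the policy name and log group ARN pattern here are assumptions:

resource "aws_cloudwatch_log_resource_policy" "es_logging" {
  policy_name = "es-log-publishing" # hypothetical name

  # Service-level policy letting the Elasticsearch service write to matching log groups.
  policy_document = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "es.amazonaws.com" }
      Action    = ["logs:CreateLogStream", "logs:PutLogEvents"]
      Resource  = "arn:aws:logs:*:*:log-group:es-*" # assumed ARN pattern
    }]
  })
}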
And finally here's the actual error message that brought me here to make it easier to find this for people running into similar issues:
ValidationException: The Resource Access Policy specified for the CloudWatch Logs log group es-redacted-prod-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream. Please check the Resource Access Policy.
| Terraform | 48,912,529 | 16 |
It seems like I can use either user_data with a template file or a "remote-exec" provisioner with inline commands to bootstrap. So which one is considered more idiomatic?
| You should use user_data. The user data field is idiomatic because it's native to AWS, whereas the remote-exec provisioner is specific to Terraform, which is just one of many ways to call the AWS API.
Also, the user-data is viewable in the AWS console, and often an important part of using Auto Scaling Groups in AWS, where you want each EC2 Instance to execute the same config code when it launches. It's not possible to do that with Terraform's remote-exec provisioner.
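As a minimal sketch of the user_data approach (the AMI ID is a placeholder):

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t2.micro"

  # Runs once at first boot; identical for every instance an ASG launches.
  user_data = <<-EOF
              #!/bin/bash
              echo "bootstrapped at $(date)" > /tmp/bootstrap.log
              EOF
}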
| Terraform | 44,378,064 | 16 |
I am attempting to create a Route53 entry for a MySQL RDS instance but having issues with the :3306 at the end of the RDS endpoint returned from Terraform.
resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_route53_record" "database" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "database.example.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.default.endpoint}"]
}
Terraform puts a :3306 at the end of the endpoint and that gets entered into the Route53 Value of the CNAME.
When I then try to connect to the CNAME database.example.com with the MySQL client I get:
ERROR 2005 (HY000): Unknown MySQL server host 'database.example.com' (0)
Once I remove the :3306 via the AWS route53 console It seems work just fine.
Question is: How do I strip the :3306 from the Terraform RDS endpoint
| As well as an endpoint output, Terraform's aws_db_instance resource also outputs address that provides the FQDN of the instance.
So all you need to do is change your aws_route53_record resource to use address instead:
resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_route53_record" "database" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "database.example.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.mydb.address}"]
}
| Terraform | 39,855,872 | 16 |
I come from a background in Kubernetes and I'm trying to learn AWS/ECS. In Kubernetes, you can use ConfigMap resources to mount simple one-off config files onto containers quickly and easily without having to go through all the trouble of setting up volumes. This also makes it very easy to configure services from Terraform, which is what I'm trying to do.
Do AWS ECS Services have a feature like the Kubernetes Config Maps? I just need the dead-simplest way to insert arbitrary text files into my services on startup, which can be updated with Terraform quickly. I want to avoid having to rebuild the whole image every time this file changes.
Is this possible or do I need to create volumes for this? If so, what's the best type of volume configuration for this purpose? I can store and update the files in S3 easily, and these are just simple config files that only need read access, so would this be an acceptable case to just mount the S3 bucket?
| I've found a "best" solution, and fwiw - this problem vexed me for a year.
FIRST - there already an open feature request on the aws/containers-roadmap here:
https://github.com/aws/containers-roadmap/issues/56
Please go to that github issue to show support. ECS could make this so much more idiomatic, easier, and intuitive.
Now, here is my approach:
First, ecs file composer uses a sidecar pattern to write files to a volume.
https://github.com/compose-x/ecs-files-composer
note: you don't necessarily need to use that ecs-files-composer sidecar mechanism; it doesn't matter how you get the files onto the volume.
BUT ASSUMING you do, set up the ECS file composer sidecar as a non-essential INIT container, and give the essential container a "dependsOn" condition like this:
dependsOn: [
{ "containerName":"ecsFileComposerSidecar","condition":"COMPLETE" }
]
That pattern works well with terraform, you can create a module which accepts the parameters, with sane defaults, and a map of files you want to write, that reduces most of the boilerplate and outputs a sidecar container definition you can include in the ecs container definitions. That will make this pattern substantially more DRY (Don't repeat yourself). It also has the advantages of being able to use terraform for variable interpolation and/or injecting secrets into the environment of the ECS file composer init container using Jinja template syntax and the AWS secrets manager. All good stuff!
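As a rough sketch of what such a module might emit (the image URIs and names here are hypothetical, not the real ecs-files-composer coordinates):

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  volume {
    name = "config" # ephemeral volume shared between the two containers
  }

  container_definitions = jsonencode([
    {
      name        = "ecsFileComposerSidecar"
      image       = "example/files-composer:latest" # hypothetical image
      essential   = false                           # init container: writes files, then exits
      mountPoints = [{ sourceVolume = "config", containerPath = "/mnt/config" }]
    },
    {
      name        = "app"
      image       = "example/app:latest" # hypothetical image
      essential   = true
      mountPoints = [{ sourceVolume = "config", containerPath = "/mnt/config" }]
      # Only start once the sidecar has finished writing the files.
      dependsOn   = [{ containerName = "ecsFileComposerSidecar", condition = "COMPLETE" }]
    }
  ])
}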
Okay, so now you've 🤞 hopefully got an ECS volume with your files on it.
Next, refer to this documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html
That suggests that a bind mount in ECS isn't a mount at all; it performs a file copy into the container. This solution presumes the file already exists and you just want to overwrite it with a new copy. There are limitations and situations where this won't work, though (for example, when I try to mount a containerPath of /etc it won't even start the container and returns an obscure error).
If you want to write a new file that doesn't exist in the docker container, then you have two options:
make a copy of the docker image, add the file, publish to ECR, pull your version, then overwrite it using the copy/bindmount approach I just described.
mount the volume someplace else (e.g. /mnt/), and include a bash script that distributes the file(s), then run your bash script with a command to copy the file. That description is here: https://kichik.com/2020/09/10/mounting-configuration-files-in-fargate/
Hopefully this all makes sense.
The second solution of write a bash script that runs, and using that as your container "command" seems to work all the time so that is the pattern I personally use.
| Terraform | 71,699,327 | 15 |
With the following I can loop through a resource block to add route table associations to "all" of my subnets easily. However I need to create associations only for my public subnets.
How can I make this "if" statement work? Or any other way to filter on each.value.class == "pub" for that matter.
resource "aws_route_table_association" "rtb-pub" {
for_each = local.subnets_map
if each.value.class == "pub" ## <---- how?
route_table_id = aws_route_table.rtb-pub.id
subnet_id = aws_subnet.map["${each.value.subnet}"].id
}
Thanks in advance!
It depends on exactly what the structure of your local.subnets_map is. But the for_each should be something like the following:
resource "aws_route_table_association" "rtb-pub" {
for_each = {for key, val in local.subnets_map:
key => val if val.class == "pub"}
route_table_id = aws_route_table.rtb-pub.id
subnet_id = aws_subnet.map["${each.value.subnet}"].id
}
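For instance, this works as-is if the map has roughly the following shape (a hypothetical structure inferred from the question):

locals {
  subnets_map = {
    pub-a  = { class = "pub",  subnet = "pub-a" }
    pub-b  = { class = "pub",  subnet = "pub-b" }
    priv-a = { class = "priv", subnet = "priv-a" } # filtered out by the condition
  }
}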
| Terraform | 66,701,429 | 15 |
The idea is that I want to use the Terraform resource aws_secretsmanager_secret to create only three secrets (not workspace-specific secrets): one for the dev environment, one for preprod, and a third for the production env.
Something like:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "example-secret-dev"
}
resource "aws_secretsmanager_secret" "preprod_secret" {
name = "example-secret-preprod"
}
resource "aws_secretsmanager_secret" "prod_secret" {
name = "example-secret-prod"
}
But after creating them, I don't want to overwrite them every time I run terraform apply. Is there a way to tell Terraform that if any of the secrets already exist, it should skip creating them and not overwrite them?
I had a look at this page but it still doesn't offer a clear solution; any suggestion will be appreciated.
| You could have Terraform generate random secret values for you using:
data "aws_secretsmanager_random_password" "dev_password" {
password_length = 16
}
Then create the secret metadata using:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "dev-secret"
recovery_window_in_days = 7
}
And then by creating the secret version:
resource "aws_secretsmanager_secret_version" "dev_sv" {
secret_id = aws_secretsmanager_secret.dev_secret.id
secret_string = data.aws_secretsmanager_random_password.dev_password.random_password
lifecycle {
ignore_changes = [secret_string, ]
}
}
Adding the 'ignore_changes' lifecycle block to the secret version will prevent Terraform from overwriting the secret once it has been created. I tested this just now to confirm that a new secret with a new random value will be created, and subsequent executions of terraform apply do not overwrite the secret.
| Terraform | 66,670,949 | 15 |
I'm trying to deploy a beanstalk and use this as part of the aws_elastic_beanstalk_environment terraform resource:
setting {
namespace = "aws:elb:policies:PublicKey"
name = "PublicKey"
  value = var.PUBLICKEY
}
The value of the var.PUBLICKEY should be in this format:
-----BEGIN PUBLIC KEY-----
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
asdhjasd
-----END PUBLIC KEY-----
May I ask if you have tried to set a variable with this kind of format? Or does terraform allow using this kind of format as a variable in the tfvars section?
| Although the answer for this case was to make it a single-line string, the question here seems likely to attract searches for multi-line strings in Terraform variables in general, so this answer is for anyone who ends up here in a situation where you can't just make it a single-line string.
When setting variables within the Terraform language itself (inside module blocks) or in .tfvars files, you can use Heredoc Strings to write a multi-line string value:
example = <<-EOT
-----BEGIN PUBLIC KEY-----
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
asdhjasd
-----END PUBLIC KEY-----
EOT
When setting a variable value on the command line or via environment variables, Terraform just uses the value it recieves literally as given and so the solution would be to learn how to write a string containing newlines in the syntax of your shell or in some other programming language you're using to run Terraform with environment variables set. For typical Unix-style shells you can write quotes ' around a multi-line string in order to make the newlines be interpreted literally:
export TF_VAR_example='-----BEGIN PUBLIC KEY-----
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
sajldlkuewindasmASL/aisudoiasumasdnowqeuoi@kajsdlkausKJDolkejpwr
asdhjasd
-----END PUBLIC KEY-----
'
...but either way this ends up being a problem for whatever shell or other program you're launching Terraform from, not for Terraform itself, and so it's not possible to give a generic answer that would work for all situations. Writing a .tfvars file is the most predictable way to do it, because then Terraform itself is responsible for parsing it.
| Terraform | 66,609,641 | 15 |
I want a stage in an Azure DevOps pipeline to be executed depending on the content of a variable set in a previous stage.
Here is my pipeline:
stages:
- stage: plan_dev
jobs:
- job: terraform_plan_dev
steps:
- bash: echo '##vso[task.setvariable variable=terraform_plan_exitcode;isOutput=true]2'
name: terraform_plan
- stage: apply_dev
dependsOn: plan_dev
condition: eq(stageDependencies.plan_dev.terraform_plan_dev.outputs['terraform_plan.terraform_plan_exitcode'], '2')
jobs:
- deployment: "apply_dev"
...
The idea is to skip the apply_dev stage if the plan_dev stage shows no changes. The background is that we have a manual approval for the deployment in the plan_dev stage that we want to skip if there are no changes to be approved.
Unfortunately this doesn't seem to work. No matter whether the variable terraform_plan_exitcode is set with the expected value (2) or not, the apply_dev stage is skipped.
For the syntax, I followed the documentation here that says:
stageDependencies.StageName.JobName.outputs['StepName.VariableName']
| I have seen this same issue. You need to use the dependencies variable instead of the stageDependencies:
stages:
- stage: plan_dev
jobs:
- job: terraform_plan_dev
steps:
- bash: echo '##vso[task.setvariable variable=terraform_plan_exitcode;isOutput=true]2'
name: terraform_plan
- stage: apply_dev
dependsOn: plan_dev
condition: eq(dependencies.plan_dev.outputs['terraform_plan_dev.terraform_plan.terraform_plan_exitcode'], '2')
jobs:
- deployment: "apply_dev"
The following is a more complete example of something I have working with Terraform Plan + conditional Apply:
stages:
- stage: Build_zip_plan
displayName: Build portal, zip files and terraform plan
jobs:
- job: Build_portal_zip_files_terraform_plan
pool:
vmImage: 'ubuntu-latest'
steps:
- task: Cache@2
displayName: 'Register TF cache'
inputs:
key: terraform | $(Agent.OS) | $(Build.BuildNumber) | $(Build.BuildId) | $(Build.SourceVersion) | $(prefix)
path: ${{ parameters.tfExecutionDir }}
- task: TerraformInstaller@0
displayName: 'Install Terraform'
inputs:
terraformVersion: ${{ parameters.tfVersion }}
- task: TerraformTaskV1@0
displayName: 'Terraform Init'
inputs:
provider: 'azurerm'
command: 'init'
workingDirectory: ${{ parameters.tfExecutionDir }}
backendServiceArm: ${{ parameters.tfStateServiceConnection }}
backendAzureRmResourceGroupName: ${{ parameters.tfStateResourceGroup }}
backendAzureRmStorageAccountName: ${{ parameters.tfStateStorageAccount }}
backendAzureRmContainerName: ${{ parameters.tfStateStorageContainer }}
backendAzureRmKey: '$(prefix)-$(environment).tfstate'
- task: TerraformTaskV1@0
displayName: 'Terraform Plan'
inputs:
provider: 'azurerm'
command: 'plan'
commandOptions: '-input=false -out=deployment.tfplan -var="environment=$(environment)" -var="prefix=$(prefix)" -var="tenant=$(tenant)" -var="servicenow={username=\"$(servicenowusername)\",instance=\"$(servicenowinstance)\",password=\"$(servicenowpassword)\",assignmentgroup=\"$(servicenowassignmentgroup)\",company=\"$(servicenowcompany)\"}" -var="clientid=$(clientid)" -var="username=$(username)" -var="password=$(password)" -var="clientsecret=$(clientsecret)" -var="mcasapitoken=$(mcasapitoken)" -var="portaltenantid=$(portaltenantid)" -var="portalclientid=$(portalclientid)" -var="customerdisplayname=$(customerdisplayname)" -var="reportonlymode=$(reportonlymode)"'
workingDirectory: ${{ parameters.tfExecutionDir }}
environmentServiceNameAzureRM: ${{ parameters.tfServiceConnection }}
- task: PowerShell@2
displayName: 'Check Terraform plan'
name: "Check_Terraform_Plan"
inputs:
filePath: '$(Build.SourcesDirectory)/Pipelines/Invoke-CheckTerraformPlan.ps1'
arguments: '-TfPlan ''${{ parameters.tfExecutionDir }}/deployment.tfplan'''
pwsh: true
- stage:
dependsOn: Build_zip_plan
displayName: Terraform apply
condition: eq(dependencies.Build_zip_plan.outputs['Build_portal_zip_files_terraform_plan.Check_Terraform_Plan.TFChangesPending'], 'yes')
jobs:
- deployment: DeployHub
displayName: Apply
pool:
vmImage: 'ubuntu-latest'
environment: '$(prefix)'
strategy:
runOnce:
deploy:
steps:
- checkout: self
- task: Cache@2
displayName: 'Get Cache for TF Artifact'
inputs:
key: terraform | $(Agent.OS) | $(Build.BuildNumber) | $(Build.BuildId) | $(Build.SourceVersion) | $(prefix)
path: ${{ parameters.tfExecutionDir }}
- task: TerraformInstaller@0
displayName: 'Install Terraform'
inputs:
terraformVersion: ${{ parameters.tfVersion }}
- task: TerraformTaskV1@0
displayName: 'Terraform Apply'
inputs:
provider: 'azurerm'
command: 'apply'
commandOptions: 'deployment.tfplan'
workingDirectory: ${{ parameters.tfExecutionDir }}
environmentServiceNameAzureRM: ${{ parameters.tfServiceConnection }}
| Terraform | 65,219,176 | 15 |
I have the following folder structure:
infrastructure
└───security-groups
│ │ main.tf
│ │ config.tf
│ │ security_groups.tf
│
└───instances
│ main.tf
│ config.tf
│ instances.tf
I would like to reference the security group id instantiated in security-groups folder by reference.
I have tried to output the required ids in the security_groups.tf file with
output "sg_id" {
value = "${aws_security_group.server_sg.id}"
}
And then in the instances file add it as a module:
module "res" {
source = "../security-groups"
}
The problem with this approach is that when I do terraform apply in the instances folder, it tries to create the security groups as well (which I have already created by doing terraform apply in the security-groups folder) and it fails because the SGs already exist.
What would be the easiest way to reference the resources created in a different folder, without changing the structure of the code?
Thank you.
| To refer to an existing resource you need to use a data block. You won't refer directly to the resource block in the other folder, but instead specify a name or ID or whatever other unique identifier it has. So for a security group, you would add something like
data "aws_security_group" "sg" {
name = "the-security-group-name"
}
to your instances folder, and refer to that entity to associate your instances with that security group.
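For example, a sketch of the usage (the AMI and instance type are placeholders):

resource "aws_instance" "server" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  # Attach the pre-existing security group looked up by the data block above.
  vpc_security_group_ids = [data.aws_security_group.sg.id]
}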
You should also consider whether you actually want to be just applying terraform to the whole tree, instead of each folder separately. Then you can refer between all your managed resources directly like you are trying to do, and you don't have to call terraform apply as many times.
| Terraform | 64,094,992 | 15 |
I have maps of variables like this:
users.tfvars
users = {
"testterform" = {
path = "/"
force_destroy = true
email_address = "[email protected]"
group_memberships = [ "test1" ]
tags = { department : "test" }
ssh_public_key = "ssh-rsa AAAAB3NzaC1yc2EAAA4l7"
}
"testterform2" = {
path = "/"
force_destroy = true
email_address = "[email protected]"
group_memberships = [ "test1" ]
tags = { department : "test" }
ssh_public_key = ""
  }
}

I would like to upload the SSH key only if ssh_public_key is not empty for the user, but I don't understand how to check this.
#main.tf
resource "aws_iam_user" "this" {
for_each = var.users
name = each.key
path = each.value["path"]
force_destroy = each.value["force_destroy"]
tags = merge(each.value["tags"], { Provisioner : var.provisioner, EmailAddress : each.value["email_address"] })
}
resource "aws_iam_user_group_membership" "this" {
for_each = var.users
user = each.key
groups = each.value["group_memberships"]
depends_on = [ aws_iam_user.this ]
}
resource "aws_iam_user_ssh_key" "this" {
for_each = var.users
username = each.key
encoding = "SSH"
public_key = each.value["ssh_public_key"]
depends_on = [ aws_iam_user.this ]
}
| It sounds like what you need here is a derived "users that have non-empty SSH keys" map. You can use the if clause of a for expression to derive a new collection from an existing one while filtering out some of the elements:
resource "aws_iam_user_ssh_key" "this" {
for_each = {
for name, user in var.users : name => user
if user.ssh_public_key != ""
}
username = each.key
encoding = "SSH"
public_key = each.value.ssh_public_key
depends_on = [aws_iam_user.this]
}
The derived map here uses the same keys and values as the original var.users, but is just missing some of them. That means that the each.key results will correlate and so you'll still get the same username value you were expecting, and your instances will have addresses like aws_iam_user_ssh_key.this["testterform"].
| Terraform | 63,620,298 | 15 |
I am trying to run a Python function with an AWS Lambda layer, but I can't find any documentation on using an AWS-provided lambda layer with terraform. How do I use the AWS-provided lambda layer AWSLambda-Python27-SciPy1x and runtime Python 2.7?
#----compute/lambda.tf----
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "index.py"
output_path = "check_foo.zip"
}
resource "aws_lambda_function" "check_foo" {
filename = "check_foo.zip"
function_name = "checkFoo"
role = "${aws_iam_role.iam_for_lambda_tf.arn}"
handler = "index.handler"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
# i want to use lambda layer - AWSLambda-Python27-SciPy1x and run this function on it
runtime = "python2.7"
}
| You have to specify lambda layers as ARNs in terraform using layers parameter:
layers - (Optional) List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function.
Using the following syntax in terraform:
layers = ["layer-arn"]
For example, the ARN for AWSLambda-Python27-SciPy1x in us-east-1 region is:
arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python27-SciPy1x:24
If you are not sure what your ARN is, you can create a dummy Python 2.7 lambda function, add the AWS layer AWSLambda-Python27-SciPy1x, and the console will give you its ARN.
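Putting it together with the function from the question, a sketch (assuming us-east-1 and the layer version shown above) would be:

resource "aws_lambda_function" "check_foo" {
  filename      = "check_foo.zip"
  function_name = "checkFoo"
  role          = "${aws_iam_role.iam_for_lambda_tf.arn}"
  handler       = "index.handler"
  runtime       = "python2.7"
  # Attach the AWS-provided SciPy layer by ARN.
  layers        = ["arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python27-SciPy1x:24"]
}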
| Terraform | 61,927,400 | 15 |
I need to create a trigger for an S3 bucket. We use the following to create the trigger:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.aws_s3_bucket_id
lambda_function {
lambda_function_arn = var.lambda_function_arn
events = ["s3:ObjectCreated:Put"]
filter_prefix = var.filter_prefix
filter_suffix = var.filter_suffix
}
}
This works fine when the bucket does not already have a trigger which was the case for all environments apart from production.
When we deployed production we saw that the trigger which was already present on the bucket got deleted. We need both triggers.
I was able to add another trigger manually, for example a PUT event trigger, by just changing the prefix; however, when I do it from Terraform the previous one always gets deleted. Is there anything I am missing?
| The aws_s3_bucket_notification resource documentation mentions this at the top:
NOTE: S3 Buckets only support a single notification configuration. Declaring multiple aws_s3_bucket_notification resources to the same S3 Bucket will cause a perpetual difference in configuration. See the example "Trigger multiple Lambda functions" for an option.
Their example shows how this should be done by adding multiple lambda_function blocks in the aws_s3_bucket_notification resource:
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_lambda_permission" "allow_bucket1" {
statement_id = "AllowExecutionFromS3Bucket1"
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.func1.arn}"
principal = "s3.amazonaws.com"
source_arn = "${aws_s3_bucket.bucket.arn}"
}
resource "aws_lambda_function" "func1" {
filename = "your-function1.zip"
function_name = "example_lambda_name1"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.example"
runtime = "go1.x"
}
resource "aws_lambda_permission" "allow_bucket2" {
statement_id = "AllowExecutionFromS3Bucket2"
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.func2.arn}"
principal = "s3.amazonaws.com"
source_arn = "${aws_s3_bucket.bucket.arn}"
}
resource "aws_lambda_function" "func2" {
filename = "your-function2.zip"
function_name = "example_lambda_name2"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.example"
}
resource "aws_s3_bucket" "bucket" {
bucket = "your_bucket_name"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${aws_s3_bucket.bucket.id}"
lambda_function {
lambda_function_arn = "${aws_lambda_function.func1.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "AWSLogs/"
filter_suffix = ".log"
}
lambda_function {
lambda_function_arn = "${aws_lambda_function.func2.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "OtherLogs/"
filter_suffix = ".log"
}
}
| Terraform | 60,502,686 | 15 |
I am using, terraform & kubectl to deploy insfra-structure and application.
Since I changed my aws configure settings:
terraform init
terraform apply
I always get:
terraform apply
Error: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 5ba38c31-d39a-11e9-a642-21e0b5cf5c0e
on providers.tf line 1, in provider "aws":
1: provider "aws" {
Can you advise ? Appreciate !
| From here.
This is a general error that can be cause by a few reasons.
Some examples:
1) Invalid credentials passed as environment variables or in ~/.aws/credentials.
Solution: Remove old profiles / credentials and clean all your environment vars:
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN ; do eval unset $var ; done
2) When your aws_secret_access_key contains characters like the plus-sign + or multiple forward-slash /. See more in here.
Solution: Delete credentials and generate new ones.
3) When you try to execute Terraform inside a region which must be explicitly enabled (and wasn't).
(In my case it was me-south-1 (Bahrain) - See more in here).
Solution: Enable region or move to an enabled one.
4) In cases where you work with 3rd party tools like Vault and don't supply valid AWS credentials to communicate with - See more in here.
All of these will lead to a failure of the aws sts GetCallerIdentity API call.
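A quick way to check which identity (if any) your current credentials resolve to, before running Terraform again, is to make the same STS call directly:

aws sts get-caller-identity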
| Terraform | 57,865,825 | 15 |