question (string, 11–28.2k chars) | answer (string, 26–27.7k chars) | tag (130 classes) | question_id (int64, 935–78.4M) | score (int64, 10–5.49k) |
---|---|---|---|---|
Does Keycloak support basic Authentication (an Authorization header that contains the word Basic followed by a space and a base64-encoded username:password string), and if so, how can I configure the realm and client settings for it?
I want to secure my rest api with Keycloak and support also basic Authentication as an option.
| Yes, that's possible for clients with Access Type: confidential and Direct Access Grants Enabled. You can find more details on these settings in the documentation.
You also need to enable enable-basic-auth and supply your credentials in your application settings. Consult the documentation for more details.
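As a sketch, the relevant bits of an adapter keycloak.json might look like this (realm, client and secret values are placeholders for your own setup):
{
  "realm": "my-realm",
  "resource": "my-rest-api",
  "auth-server-url": "https://keycloak.example.com/auth",
  "ssl-required": "external",
  "enable-basic-auth": true,
  "credentials": {
    "secret": "my-client-secret"
  }
}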
| Keycloak | 57,808,046 | 11 |
I know how to deploy custom KeyCloak theme in Windows using both ways as stated here:
Copy-paste theme in themes directory
Using archive deploy
Can someone please suggest how to do this in docker?
| This is what I did:
Created Dockerfile like below
FROM jboss/keycloak
COPY ./themes/<yourThemeName>/ /opt/jboss/keycloak/themes/<yourThemeName>/
Built new docker image from this file
docker build -t <yourDockerHubUserName>/keycloak .
Run this docker image
docker container run --name <someContainerName> -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=password \
  <yourDockerHubUserName>/keycloak
Check if the new theme shows up by logging into the admin console at
http://localhost:8080/auth, going to your realm's theme settings, and clicking the drop-down list of themes; you should see <yourThemeName>
| Keycloak | 52,641,379 | 11 |
A 2-year old keycloak-user list question w/o an answer:
there’s a protected resource called Project
and an owner - a Project Manager
Each project manager has access to only their own projects (owner-only policy).
Project Managers in turn report to one or more Portfolio Managers. A Portfolio Manager should be able to access all his/her project managers' projects (portfolio-manager policy).
Let's assume the system design is flexible, and the information about who the Portfolio Managers are for a particular Project Manager
can be kept either inside Keycloak (but not as Keycloak groups) or in the client app itself. How can this be implemented as a JavaScript-based
authorization policy in Keycloak? I guess the request can be injected with this info somehow, but I can't figure it out from the docs.
| It turned out to be rather easy. I've decided to keep the info about managers in another database, and then the app (service-nodejs) needs to pass this info as a claim to keycloak. I've tested this on the service-nodejs keycloak quickstart. Here are the relevant pieces:
// app.js route:
app.get('/service/project/:id',
keycloak.enforcer(['Project'], {
response_mode: 'permissions',
claims: (request) => {
return { "portfolio.managers": ['alice', 'bob'] } //hard-coded
}
}), function(req, res) {
res.json({ message: `got project "operation "` });
});
The policy protecting the Project resource is an aggregated of OwnerOnly and PortfolioManagers:
// portfolio-managers-policy:
var context = $evaluation.getContext();
var identity = context.getIdentity();
var userid = identity.getAttributes().getValue('preferred_username').asString(0);
var managers = context.getAttributes().getValue('portfolio.managers')
if (!managers) {
print('managers not provided, cannot decide!');
$evaluation.deny();
} else {
// check if the user is one of the portfolio managers of this project:
for (var i = 0; i < managers.size(); i++) {
if (managers.asString(i) == userid) {
$evaluation.grant();
break;
}
}
}
Note that the service-nodejs keycloak client must be confidential in order to call the token endpoint.
| Keycloak | 52,166,711 | 11 |
How does the access token differ from user info token when using Keycloak?
From OAuth2/OpenIDConnect I have understood that the access token gives information that the user has been authenticated
and that you need to use the user info token to get more information about the user and its profile/roles etc.
When I look at the access token in something like https://jwt.io/ vs. the UserInfo token. I am able to get the same information about the users profile & roles.
Why is it like this, and how does the access token differ from user info token when using Keycloak?
| The access token is meant to provide you access to the resources of your application. In order to get an access token, you have to authenticate yourself with any of the flows defined by the spec. In keycloak, access token contains the username and roles, but you can also add custom claims using the admin panel. Adding some claims may be useful because the token is sent in every single request and you can decode it from your application.
There's no user info token at all; actually, it is an endpoint. This endpoint is accessed using the access token that you get in the first step and usually provides a JSON response with detailed information about the user (such as user data, roles...).
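For example, the userinfo endpoint is just an HTTP call authenticated with the access token (standard Keycloak path; host and realm are placeholders):
curl -H "Authorization: Bearer <access_token>" \
  "https://<host>/auth/realms/<realm>/protocol/openid-connect/userinfo"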
| Keycloak | 48,827,811 | 11 |
I'm trying to figure out what the import/export best practices are in K8S Keycloak (version 3.3.0.CR1). Here is the Keycloak official page's import/export explanation, and their example of exporting to a single JSON file. Going to the /keycloak/bin folder and then running this:
./standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json
I logged in to the pod, and I get errors after running this command:
12:23:32,045 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("core-service" => "management"),
("management-interface" => "http-interface")
]) - failure description: {
"WFLYCTL0080: Failed services" => {"org.wildfly.management.http.extensible" => "java.net.BindException: Address already in use /127.0.0.1:9990"},
"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
"Services that were unable to start:" => ["org.wildfly.management.http.extensible.shutdown"],
"Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
}
}
As I see, the Keycloak server runs on the same port where I ran the backup script. Here is the helm/keycloak values.yml:
Service:
Name: keycloak
Port: 8080
Type: ClusterIP
Deployment:
Image: jboss/keycloak
ImageTag: 2.5.1.Final
ImagePullPolicy: IfNotPresent
ContainerPort: 8080
KeycloakUser: Admin
KeycloakPassword: Admin
So, should the server be stopped before we run these scripts? I can't stop the Keycloak process inside the pod, because the ingress will close the pod and create a new one.
Any suggestions for any other way to export/import (backup/restore) data? Or am I missing something?
P.S.
I even tried the UI import/export. Export works well, and I see all the data. But import only worked halfway: it brought over all "Clients", but not my "Realm" and "User Federation".
| Basically, you just have to start the exporting Keycloak instance on ports that are different from your main instance. I used something like this just now:
bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json -Djboss.http.port=8888 -Djboss.https.port=9999 -Djboss.management.http.port=7777
The important part is all the ports. If you get more error messages, you might need to add more properties (grep port standalone/configuration/standalone.xml is your friend for finding property names), but in the end, all error messages stop and you see this message instead:
09:15:26,550 INFO [org.keycloak.exportimport.singlefile.SingleFileExportProvider] (ServerService Thread Pool -- 52) Exporting model into file /opt/jboss/keycloak/keycloak-export.json
[...]
09:15:29,565 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 3.2.0.Final (WildFly Core 2.0.10.Final) started in 12156ms - Started 444 of 818 services (558 services are lazy, passive or on-demand)
Now you can stop the server with Ctrl-C, exit the container and copy the export file away with kubectl cp.
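For example (namespace and pod name are placeholders; the path matches the log line above):
kubectl cp <namespace>/<keycloak-pod>:/opt/jboss/keycloak/keycloak-export.json ./keycloak-export.json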
| Keycloak | 46,281,416 | 11 |
I'm creating a Keycloak extension with dependencies. I added the entry on the pom.xml like this:
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20160810</version>
</dependency>
Then I deployed it to Keycloak:
mvn clean install wildfly:deploy
But when I run it, I got the error:
org.jboss.resteasy.spi.UnhandledException: java.lang.NoClassDefFoundError: org/json/JSONObject
Caused by: java.lang.NoClassDefFoundError: org/json/JSONObject
Caused by: java.lang.ClassNotFoundException: org.json.JSONObject from [Module "deployment.keycloak-authenticator.jar" from Service Module Loader]
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:198)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:412)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:400)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
... 66 more
How to add dependencies to extensions in Keycloak?
| You have to create your SPI dependencies as jboss modules.
Steps:
Add a jboss-deployment-structure.xml file in the src/main/resources/META-INF directory of your SPI with something like this (official documentation):
<jboss-deployment-structure>
<deployment>
<dependencies>
<module name="org.json.json" />
</dependencies>
</deployment>
</jboss-deployment-structure>
Make the $KEYCLOAK_HOME/modules/system/layers/base/org/json/json/main directory
Add json-20160810.jar to the created dir
Add a module.xml file in the same dir with this content:
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.5" name="org.json.json">
<properties>
<property name="jboss.api" value="private"/>
</properties>
<resources>
<resource-root path="json-20160810.jar"/>
</resources>
<dependencies>
</dependencies>
</module>
Compile your SPI
Restart keycloak
Redeploy your SPI
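As a possible shortcut for steps 2-4, the JBoss CLI also has an offline module add command that creates the module directory and module.xml for you; a sketch, assuming the server is stopped and the jar path is adjusted to your machine:
$ bin/jboss-cli.sh
module add --name=org.json.json --resources=/path/to/json-20160810.jar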
| Keycloak | 46,205,475 | 11 |
I am using a client to create a new keycloak user. Something like this:
keycloak.realm(realm)
.users()
.create(user);
The user variable is a UserRepresentation object, and I'm trying to add an Update Password required action:
user.setRequiredActions(singletonList("Update Password"))
The user gets created OK; the problem is that the required action is not set.
Not sure what I'm doing wrong, should I specify a different value in the required actions list?
Thanks
| Figured out what was up.
Keycloak has an enum to represent various user actions:
public static enum RequiredAction {
VERIFY_EMAIL, UPDATE_PROFILE, CONFIGURE_TOTP, UPDATE_PASSWORD, TERMS_AND_CONDITIONS
}
So the value should be "UPDATE_PASSWORD" not "Update password"
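So the call from the question becomes (the string must match the enum constant name, not a display label):
import java.util.Collections;

// "UPDATE_PASSWORD" matches RequiredAction.UPDATE_PASSWORD.name()
user.setRequiredActions(Collections.singletonList("UPDATE_PASSWORD"));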
| Keycloak | 45,328,003 | 11 |
When I access the Keycloak admin console (remotely) and create a client,
the Keycloak OIDC JSON doesn't have a public key.
I would expect the JSON to contain something like:
"realm-public-key": "MIIBIjANBg....
| keycloak.json in newest keycloak doesn't have any realm public key. Actually, it appears that you are using keycloak version 2.3.x. There have been some changes in it. Basically, you can rotate multiple public keys for a realm.
The document says:
In 2.3.0 release we added support for Public Key Rotation. When admin rotates the realm keys in Keycloak admin console, the Client Adapter will be able to recognize it and automatically download new public key from Keycloak. However this automatic download of new keys is done just if you don’t have realm-public-key option in your adapter with the hardcoded public key. For this reason, we don’t recommend to use realm-public-key option in adapter configuration anymore.
Note this option is still supported, but it may be useful just if you really want to have hardcoded public key in your adapter configuration and never download the public key from Keycloak. In theory, one reason for this can be to avoid man-in-the-middle attack if you have untrusted network between adapter and Keycloak, however in that case, it is much better option to use HTTPS, which will secure all the requests between adapter and Keycloak.
| Keycloak | 40,503,697 | 11 |
I know keycloak has exposed below api,
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-services</artifactId>
<version>2.0.0.Final</version>
</dependency>
With complete documentation here. I cannot find the required api here to fetch all users with specific role mapped to them.
Problem Statement - I need to pick all users from keycloak server who have a specific role. I need to send email to all users with role mapped to them.
| Based on the documentation it appears to be this API:
GET /{realm}/clients/{id}/roles/{role-name}/users
It has been there for a while. In this older version, however, it was not possible to get more than 100 users this way. This was fixed later and pagination support was added.
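As a sketch, with an admin access token and placeholder host/realm/client values (first and max are the pagination parameters):
curl -H "Authorization: Bearer $ADMIN_TOKEN" \
  "https://<host>/auth/admin/realms/<realm>/clients/<client-uuid>/roles/<role-name>/users?first=0&max=100"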
| Keycloak | 38,371,943 | 11 |
I'm just a beginner in Spring Security, but I would like to know is it possible to configure keycloak in a way that I can use @PreAuthorize, @PostAuthorize, @Secured and other annotations.
For example, I've configured the keycloak-spring-security-adapter and Spring Security in my simple Spring Rest webapp so that I have access to Principal object in my controller, like this:
@RestController
public class TMSRestController {
@RequestMapping("/greeting")
public Greeting greeting(Principal principal, @RequestParam(value="name") String name) {
return new Greeting(String.format(template, name));
}
...
}
But when I try this (just an example, actually I want to execute custom EL expression before authorization):
@RestController
public class TMSRestController {
@RequestMapping("/greeting")
@PreAuthorize("hasRole('ADMIN')")
public Greeting greeting(Principal principal, @RequestParam(value="name") String name) {
return new Greeting(String.format(template, name));
}
...
}
I get exception:
org.springframework.security.authentication.AuthenticationCredentialsNotFoundException: An Authentication object was not found in the SecurityContext
In my Spring Security config I enabled global method security.
What do I need to make these Spring Security annotations work? Is it possible to use these annotations in this context at all?
| here is example code:
@EnableWebSecurity
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true,
securedEnabled = true,
jsr250Enabled = true)
@ComponentScan(basePackageClasses = KeycloakSecurityComponents.class)
public class WebSecurityConfig extends KeycloakWebSecurityConfigurerAdapter {
}
and
@PreAuthorize("hasRole('ROLE_ADMIN')")
Apart from this code, you need to do the role mapping for realm roles and client (application) roles. The application roles are the ones you reference in @PreAuthorize.
| Keycloak | 34,552,125 | 11 |
I am creating a simple SpringBoot application and trying to integrate with OAuth 2.0 provider Keycloak. I have created a realm, client, roles (Member, PremiumMember) at realm level and finally created users and assigned roles (Member, PremiumMember).
If I use SpringBoot Adapter provided by Keycloak https://www.keycloak.org/docs/latest/securing_apps/index.html#_spring_boot_adapter then when I successfully login and check the Authorities of the loggedin user I am able to see the assigned roles such as Member, PremiumMember.
Collection<? extends GrantedAuthority> authorities =
SecurityContextHolder.getContext().getAuthentication().getAuthorities();
But if I use the generic Spring Boot OAuth2 client config I am able to log in, but when I check the Authorities it always shows only ROLE_USER, SCOPE_email, SCOPE_openid, SCOPE_profile and doesn't include the roles I mapped (Member, PremiumMember).
My SpringBoot OAuth2 config:
pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
application.properties
spring.security.oauth2.client.provider.spring-boot-thymeleaf-client.issuer-uri=http://localhost:8181/auth/realms/myrealm
spring.security.oauth2.client.registration.spring-boot-thymeleaf-client.authorization-grant-type=authorization_code
spring.security.oauth2.client.registration.spring-boot-thymeleaf-client.client-id=spring-boot-app
spring.security.oauth2.client.registration.spring-boot-thymeleaf-client.client-secret=XXXXXXXXXXXXXX
spring.security.oauth2.client.registration.spring-boot-thymeleaf-client.scope=openid,profile,roles
spring.security.oauth2.client.registration.spring-boot-thymeleaf-client.redirect-uri=http://localhost:8080/login/oauth2/code/spring-boot-app
I am using SpringBoot 2.5.5 and Keycloak 15.0.2.
Using this generic OAuth2.0 config approach (without using Keycloak SpringBootAdapter) is there a way to get the assigned roles?
| By default, Spring Security generates a list of GrantedAuthority using the values in the scope or scp claim and the SCOPE_ prefix.
Keycloak keeps the realm roles in a nested claim realm_access.roles. You have two options to extract the roles and map them to a list of GrantedAuthority.
OAuth2 Client
If your application is configured as an OAuth2 Client, then you can extract the roles from either the ID Token or the UserInfo endpoint. Keycloak includes the roles only in the Access Token, so you need to change the configuration to include them also in either the ID Token or the UserInfo endpoint (which is what I use in the following example). You can do so from the Keycloak Admin Console, going to Client Scopes > roles > Mappers > realm roles
Then, in your Spring Security configuration, define a GrantedAuthoritiesMapper which extracts the roles from the UserInfo endpoint and maps them to GrantedAuthoritys. Here, I'll include how the specific bean should look like. A full example is available on my GitHub: https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/login-user-authorities
@Bean
public GrantedAuthoritiesMapper userAuthoritiesMapperForKeycloak() {
return authorities -> {
Set<GrantedAuthority> mappedAuthorities = new HashSet<>();
var authority = authorities.iterator().next();
boolean isOidc = authority instanceof OidcUserAuthority;
if (isOidc) {
var oidcUserAuthority = (OidcUserAuthority) authority;
var userInfo = oidcUserAuthority.getUserInfo();
if (userInfo.hasClaim("realm_access")) {
var realmAccess = userInfo.getClaimAsMap("realm_access");
var roles = (Collection<String>) realmAccess.get("roles");
mappedAuthorities.addAll(generateAuthoritiesFromClaim(roles));
}
} else {
var oauth2UserAuthority = (OAuth2UserAuthority) authority;
Map<String, Object> userAttributes = oauth2UserAuthority.getAttributes();
if (userAttributes.containsKey("realm_access")) {
var realmAccess = (Map<String,Object>) userAttributes.get("realm_access");
var roles = (Collection<String>) realmAccess.get("roles");
mappedAuthorities.addAll(generateAuthoritiesFromClaim(roles));
}
}
return mappedAuthorities;
};
}
Collection<GrantedAuthority> generateAuthoritiesFromClaim(Collection<String> roles) {
return roles.stream()
.map(role -> new SimpleGrantedAuthority("ROLE_" + role))
.collect(Collectors.toList());
}
OAuth2 Resource Server
If your application is configured as an OAuth2 Resource Server, then you can extract the roles from the Access Token. In your Spring Security configuration, define a JwtAuthenticationConverter bean which extracts the roles from the Access Token and maps them to GrantedAuthoritys. Here, I'll include how the specific bean should look like. A full example is available on my GitHub: https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/resource-server-jwt-authorities
@Bean
public JwtAuthenticationConverter jwtAuthenticationConverterForKeycloak() {
Converter<Jwt, Collection<GrantedAuthority>> jwtGrantedAuthoritiesConverter = jwt -> {
Map<String, Collection<String>> realmAccess = jwt.getClaim("realm_access");
Collection<String> roles = realmAccess.get("roles");
return roles.stream()
.map(role -> new SimpleGrantedAuthority("ROLE_" + role))
.collect(Collectors.toList());
};
var jwtAuthenticationConverter = new JwtAuthenticationConverter();
jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(jwtGrantedAuthoritiesConverter);
return jwtAuthenticationConverter;
}
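A sketch of wiring this converter into the filter chain, assuming a WebSecurityConfigurerAdapter-style configuration (which matches the Spring Boot 2.5 era used here):
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .authorizeRequests(authz -> authz.anyRequest().authenticated())
        .oauth2ResourceServer(oauth2 -> oauth2
            .jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverterForKeycloak())));
}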
| Keycloak | 69,331,013 | 10 |
I have a local test installation of keycloak 12 and unfortunately I've lost the admin password, any idea on how to reset it or reset the keycloak configuration without losing the realms ?
I already used the add-user CLI command to add a user, but even with that one I can't get access.
| For me, I had to find the user in the user_entity table. Then delete rows in related tables. After this, I restarted the pod, and the admin user login became the one passed through the environment variables KEYCLOAK_USER and KEYCLOAK_PASSWORD.
Find the user id
select * from user_entity
Delete rows
delete from credential where user_id = '<user-id>';
delete from user_role_mapping where user_id = '<user-id>';
delete from user_entity where id = '<user-id>';
| Keycloak | 69,000,968 | 10 |
I'd like to add a new auth method in Keycloak. To be precise, I'd like Keycloak to ask an external API for some specific value. I have read about flows in Keycloak, but they seem to be poorly documented and I have a feeling that it is not very intuitive.
During login I would like Keycloak to send a request to an external API and allow the user to log in if and only if a specific value is returned. For example, I could override some login method and add a few lines of code doing what I want.
Which method in which class is responsible for login?
| There are multiple things you need to do to achieve that. I will go over them:
Implement Authenticator and AuthenticatorFactory interfaces.
Copy an existing Authentication Flow
Bind flow
I assume you know how to write and deploy a keycloak extension.
1. Implement Authenticator and AuthenticatorFactory interfaces.
The specific interfaces are those:
org.keycloak.authentication.AuthenticatorFactory
org.keycloak.authentication.Authenticator
A sample implementation:
org.keycloak.authentication.authenticators.browser.UsernamePasswordFormFactory
org.keycloak.authentication.authenticators.browser.UsernamePasswordForm
If you want to externalize your config (So you can add username/password etc. for external api), override getConfigProperties() method in AuthenticatorFactory
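A minimal sketch of the Authenticator side (the external API call is a hypothetical helper you would implement yourself):
import org.keycloak.authentication.AuthenticationFlowContext;
import org.keycloak.authentication.AuthenticationFlowError;
import org.keycloak.authentication.Authenticator;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;

public class ExternalApiAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        // succeed or fail the flow depending on the external API's answer
        if (checkExternalApi(context.getUser())) {
            context.success();
        } else {
            context.failure(AuthenticationFlowError.ACCESS_DENIED);
        }
    }

    @Override
    public void action(AuthenticationFlowContext context) {
        // no form interaction in this example
    }

    @Override
    public boolean requiresUser() {
        return true; // placed after the user has been identified in the flow
    }

    @Override
    public boolean configuredFor(KeycloakSession session, RealmModel realm, UserModel user) {
        return true;
    }

    @Override
    public void setRequiredActions(KeycloakSession session, RealmModel realm, UserModel user) {
    }

    @Override
    public void close() {
    }

    private boolean checkExternalApi(UserModel user) {
        // hypothetical helper: HTTP call to your API returning whether login is allowed
        return false;
    }
}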
2. Copy an existing Authentication Flow.
Login keycloak with admin credentials.
Create a new realm (or use if you have one)
Go to Authentication tab on left.
Copy browser login flow
Add your flows/executions (Your implementation of Authenticator/Factory will be listed under executions)
You can move them up or down. Make them required or alternative etc.
If you override config list it will be shown next to your execution
3. Bind flow.
Bind your flow in the second tab of Authentication page.
| Keycloak | 67,800,071 | 10 |
I have setup Keycloak as a SAML broker, and authentication is done by an external IdP provided by the authorities. Users logging in using this IdP are all accepted and all we need from Keycloak is an OAuth token to access our system.
I have tried both the default setup using H2 and running with an external MariaDB.
The external IdP provides us with a full name of the user and a personal ID. Both data are covered by GDPR and I really do not like the sound of storing that data in a database running in the DMZ. Opening up for Keycloak to access a database in the backend is also not a good solution, especially when I do not need users to be stored.
The benefit of running without a database is that I have a simpler DMZ setup as I really do not need to store anything about the users but on the backend.
Do I need a database, and if not how do I run Keycloak without it?
|
Do I need a database, and if not how do I run Keycloak without it?
Yes, however, out-of-the-box Keycloak runs without having to deploy any external DB. From the Keycloak official documentation section Relational Database Setup one can read:
Keycloak comes with its own embedded Java-based relational database
called H2. This is the default database that Keycloak will use to
persist data and really only exists so that you can run the
authentication server out of the box.
So out-of-the-box you cannot run Keycloak without a DB.
That being said from the same documentation on can read:
We highly recommend that you replace it with a more production ready external database. The H2 database is not very viable in high concurrency situations and should not be used in a cluster either.
So regarding this:
The benefit of running without a database is that I have a simpler DMZ
setup as I really do not need to store anything about the users but
on the backend.
You would still be better offer deploying another DB, because Keycloak stores more than just the users information in DB (e.g., realm information, groups, roles and so on).
The external IdP provides us with a full name of the user and a
personal ID. Both data are covered by GDPR and I really do not like
the sound of storing that data in a database running in the DMZ.
Opening up for Keycloak to access a database in the backend is also
not a good solution, especially when I do not need users to be stored.
You can configure that IdP and Keycloak in a manner such that the users are not imported into Keycloak whenever those users authenticate.
| Keycloak | 66,801,793 | 10 |
I am trying to integrate Keycloak for my client side application using javascript adapter keycloak-js.
However, I can't seem to make it work. This is my code
const keycloak = new Keycloak({
realm: 'my-realm',
url: 'http://localhost:8080/auth/',
clientId: 'my-client',
});
try {
const authenticated = await keycloak.init();
console.log(authenticated);
} catch (e) {
console.log(e);
}
It doesn't return anything, not even error or anything from the callback. I only have
GET http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/3p-cookies/step1.html 404 (Not Found)
Not sure what did I do wrong? I follow the documentation but I can't find anything about this behaviour
If I type the url above in a browser, I see the same 404.
Is there anything I can do?
EDIT: I managed to make it work using this code by matching keycloak server with keycloak-js version. Upgrading server and keycloak-js version to 11.0.2 does work for me as well as downgrading both version to 10.0.2
This is the client configuration that I'm using
In the code example above, I can see console.log(isAuthorised) print false in dev tools, and if I do const isAuthorised = await keycloak.init({ onLoad: 'login-required' });, it will redirect me to the login page and redirect me back to this page after a successful login. Hope this helps.
| It's probably a version mismatch between keycloak-js and your keycloak server. I was using the newest keycloak-js version 11.0.0 with a keycloak server version of 10.0.1, which led to this exact error. Downgrading keycloak-js on the client side to 10.0.2 did the trick for me. (Haven't tried to upgrade the keycloak server yet, but that most likely works as well.)
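If you hit this, pinning the client adapter to the server version is a one-liner (assuming npm):
npm install keycloak-js@10.0.2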
| Keycloak | 63,073,772 | 10 |
Step 1. Get Access Token:
curl --location --request POST 'https://localhost/auth/realms/master/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'username=*******' \
--data-urlencode 'password=*******' \
--data-urlencode 'grant_type=*******' \
--data-urlencode 'client_id=*******'
Step 2. Create user and assign a role:
curl --location --request POST 'https://localhost/auth/admin/realms/MyRealm/users' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJKZFVORmNDU19rWjdvZ3ZFSkI4VXZpMTNRb2hKbnh2VW9oeVpieXg2Vld3In0.eyJqdGkiOiI4OGQ4Njk4NC04OGNjLTQzNzAtYWExMC00MTBkYWY5OGY0ODciLCJleHAiOjE1ODQ5NDA2MTYsIm5iZiI6MCwiaWF0IjoxNTg0OTQwNTU2LCJpc3MiOiJodHRwczovL2lkLmRldi1wcm90b24uaXRlcjIwMDQubGFiLmVoZWFsdGguZXhjaGFuZ2UvYXV0aC9yZWFsbXMvbWFzdGVyIiwic3ViIjoiYzI5YjQzMGItMWZlNC00NzJhLWFjYTMtMzgzYTkxNTNmM2RjIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWRtaW4tY2xpIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiNzMyOGUyMDItNzQyZC00ZTdkLTgwMWUtY2UyNGQ1NWUyZDZjIiwiYWNyIjoiMSIsInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW4ifQ.brCZauRzLeoAHvxtgJy6PYwZhbInVfbLA6HF7YHmwuGzoDoexj97P1s03r2G5bzYUkL93sejEFT5AkPeoZ0gpzHY3IsG3UF7Q9Qvk3t5c08CcAqOn4czhYYV91fwwBWMgTv4sQh0D-_bSq0OtI5g9Ojo0sHzxleYEUW8UYdFsQ_JvpOnZEM87CzUhBqsDDnQ4kPslOaaG2q5PPY3ccNKHexE0UkxjtOeUoIn6tdf-0Yqwc55JCMzWOZmt3pFqWKfm3-VZX5lT0UTL9ktrrLfFTIMfZb-Lmyp2g3_s_juUpkbgPpBPHgh6IGS6XaOnxgseq1Vz4h6pZ_A0O60Z8R5-w' \
--data-raw '{
"username": "ayman",
"enabled": true,
"email": " [email protected]",
"firstName": "ayman",
"lastName": "ayman",
"emailVerified":true,
"credentials": [
{
"type": "password",
"value": "ayman"
}
],
"realmRoles": [
"test-role"
]
}'
Step 3. Get user details
curl --location --request GET 'https://localhost/auth/admin/realms/MyRealm/users/d3bbe900-c7b3-49c5-9414-28f9433d3fc1' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJKZFVORmNDU19rWjdvZ3ZFSkI4VXZpMTNRb2hKbnh2VW9oeVpieXg2Vld3In0.eyJqdGkiOiJkMjgzYzA2NS1hMmJjLTQwNDctOWQ0MC01NWI4Nzg5YmNkNGUiLCJleHAiOjE1ODQ5NTM2NDgsIm5iZiI6MCwiaWF0IjoxNTg0OTUzNTg4LCJpc3MiOiJodHRwczovL2lkLmRldi1wcm90b24uaXRlcjIwMDQubGFiLmVoZWFsdGguZXhjaGFuZ2UvYXV0aC9yZWFsbXMvbWFzdGVyIiwic3ViIjoiYzI5YjQzMGItMWZlNC00NzJhLWFjYTMtMzgzYTkxNTNmM2RjIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWRtaW4tY2xpIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiNjhmYmQ1YWQtMTkwMC00MzgyLThiMmYtYjhlYjExOTA4YmFhIiwiYWNyIjoiMSIsInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW4ifQ.KmWR31pAR4Tl3Mad7awvqeK8np3x5qaPL1tYWAPLDdYaT4nLzpGblmPOBNzYIaEdhs9iwGEmES5_VzrI4C7xUVsY-Zq4jl8iPYP7IawzqgXyrTVuvAO_DLdgdVRKidTT6I-Eh1F87AV14-pOf0GXQ4wnQl5qGl5S6XUTJkegx8eGCg5Qp-zAdHOkxvPL3KRtpgwJx5QCvce-1-wW5Fckk3a-61vXA-o9jUDnJGWTYUyAssVD8zRUs-hhAms1PoR4nW1tCd_9J7xiWmr2hN0-pHY-u5PjNlrxCyOx-3pkRzworZ9e2i0ff0x2dcivpzyDfqe__sdsLVQsiiD1S7ViHw'
Problem:
The user is successfully created but it is not assigned a role (realmRole). After some more research I found that this behaviour is due to a bug in keycloak API (stack overflow issue).
Is there any way to create a user and assign a realm role to it?
Update:
According to some answers, we can use role mappers API calls to map a role to a user. Documentation about those operations: https://www.keycloak.org/docs-api/6.0/rest-api/index.html#_role_mapper_resource
POST /{realm}/groups/{id}/role-mappings/realm
What are the groups in the above URL?
| This url: POST /{realm}/groups/{id}/role-mappings/realm is used to assign a realm role to a group where {id} is the group id.
To assign a realm role to a user, use:
# Get the role lists
GET /{realm}/roles
# Get the user lists
GET /{realm}/users
# Assign your role to user
POST /{realm}/users/{userId}/role-mappings/realm
body: [{id: roleId, name: roleName}]
your request could be:
curl --location --request POST 'https://localhost/auth/admin/realms/MyRealm/users/MyUserId/role-mappings/realm' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJKZFVORmNDU19rWjdvZ3ZFSkI4VXZpMTNRb2hKbnh2VW9oeVpieXg2Vld3In0.eyJqdGkiOiI4OGQ4Njk4NC04OGNjLTQzNzAtYWExMC00MTBkYWY5OGY0ODciLCJleHAiOjE1ODQ5NDA2MTYsIm5iZiI6MCwiaWF0IjoxNTg0OTQwNTU2LCJpc3MiOiJodHRwczovL2lkLmRldi1wcm90b24uaXRlcjIwMDQubGFiLmVoZWFsdGguZXhjaGFuZ2UvYXV0aC9yZWFsbXMvbWFzdGVyIiwic3ViIjoiYzI5YjQzMGItMWZlNC00NzJhLWFjYTMtMzgzYTkxNTNmM2RjIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWRtaW4tY2xpIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiNzMyOGUyMDItNzQyZC00ZTdkLTgwMWUtY2UyNGQ1NWUyZDZjIiwiYWNyIjoiMSIsInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW4ifQ.brCZauRzLeoAHvxtgJy6PYwZhbInVfbLA6HF7YHmwuGzoDoexj97P1s03r2G5bzYUkL93sejEFT5AkPeoZ0gpzHY3IsG3UF7Q9Qvk3t5c08CcAqOn4czhYYV91fwwBWMgTv4sQh0D-_bSq0OtI5g9Ojo0sHzxleYEUW8UYdFsQ_JvpOnZEM87CzUhBqsDDnQ4kPslOaaG2q5PPY3ccNKHexE0UkxjtOeUoIn6tdf-0Yqwc55JCMzWOZmt3pFqWKfm3-VZX5lT0UTL9ktrrLfFTIMfZb-Lmyp2g3_s_juUpkbgPpBPHgh6IGS6XaOnxgseq1Vz4h6pZ_A0O60Z8R5-w' \
-d '[
{
"id": "12345678-1234-5678-1234-567812345678",
"name": "admin"
}
]'
| Keycloak | 60,812,831 | 10 |
My application consists of:
backend/resource server
UI webapp
keycloak
The UI is talking with the backend server via RESTful API using the keycloak client with authorization code grant flow. This is working fine.
Now, I need the additional possibility to access resource of the backend using a system/service account (with usually more permissions than the user). How would you implement this requirement? I thought the client credentials flow would be useful here.
Is it possible to use the OAuth2 client credentials flow with the keycloak client for Spring Boot? I found examples that used the Spring Security OAuth2 client features to achieve a client credentials flow but that feels weird because I already use the keycloak client for the OAuth thing.
Edit: Solution
Thanks for your answers which helped me a lot. In my UI webapp, I am now able to communicate with the backend either by using the authenticated user OAuth2 token or by using the token from the client credentials flow of my UI service account. Each way has its own RestTemplate, the first is done via the keycloak integration and second is done by Spring Security OAuth2 as explained here.
| Yes, you can use OAuth 2.0 Client Credentials flow and Service Accounts.
Keycloak suggest 3 ways to secure SpringBoot REST services:
with Keycloak Spring Boot Adapter
with keycloak Spring Security Adapter
with OAuth2 / OpenID Connect
Here is a good explanation about this with an example in the OAuth2/OIDC way:
Tutorial by Arun B Chandrasekaran
Code sample by Arun B Chandrasekaran
If you follow this example, keep in mind:
Take care to configure your client as:
Access Type: Confidential
Authorization: Enabled
Service Account (OAuth Client Credentials Flow): Enabled
Take care to configure your target service as:
Access Type: Bearer-only
So, caller should be confidential and target service should be bearer-only.
Create your users, roles, mappers... and assign roles to your users.
Check that you have this dependencies in your spring project:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.security.oauth.boot</groupId>
<artifactId>spring-security-oauth2-autoconfigure</artifactId>
</dependency>
Configure authentication to be used in the REST client (application.properties)
e.g.:
security.oauth2.client.client-id=employee-service
security.oauth2.client.client-secret=68977d81-c59b-49aa-aada-58da9a43a850
security.oauth2.client.user-authorization-uri=${rest.security.issuer-uri}/protocol/openid-connect/auth
security.oauth2.client.access-token-uri=${rest.security.issuer-uri}/protocol/openid-connect/token
security.oauth2.client.scope=openid
security.oauth2.client.grant-type=client_credentials
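To sanity-check the client credentials flow outside Spring, you can also exercise the token endpoint directly (standard Keycloak path; host, realm and secret are placeholders):
curl -X POST \
  -d "grant_type=client_credentials" \
  -d "client_id=employee-service" \
  -d "client_secret=<secret>" \
  "https://<host>/auth/realms/<realm>/protocol/openid-connect/token"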
Implement your JwtAccessTokenCustomizer and SecurityConfigurer (ResourceServerConfigurerAdapter) like Arun's sample.
And finally implement your service Controller:
@RestController
@RequestMapping("/api/v1/employees")
public class EmployeeRestController {
@GetMapping(path = "/username")
@PreAuthorize("hasAnyAuthority('ROLE_USER')")
public ResponseEntity<String> getAuthorizedUserName() {
return ResponseEntity.ok(SecurityContextUtils.getUserName());
}
@GetMapping(path = "/roles")
@PreAuthorize("hasAnyAuthority('ROLE_USER')")
public ResponseEntity<Set<String>> getAuthorizedUserRoles() {
return ResponseEntity.ok(SecurityContextUtils.getUserRoles());
}
}
For a complete tutorial, please read the referred Arun's tutorial.
Hope it helps.
| Keycloak | 57,974,630 | 10 |
I have logged in to the virtual machine in Docker, but I can't find standalone.sh; it isn't in /bin. I also don't know how to write a Dockerfile which sets -Djboss.socket.binding.port-offset=100.
| You can pass port as -Djboss.http.port parameter, for example:
docker run --name keycloak -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin -p 11111:11111 jboss/keycloak -Djboss.http.port=11111
| Keycloak | 57,430,811 | 10 |
I need to configure Keycloak so that it creates a JWT with claim "sub" populated with the username, instead of the default userId in sub.
It means that instead of this token:
{
"jti": "b1384883-9b59-4788-b09f-98b40b7e3c3b",
...
"sub": "fbdb4e4a-6e93-4b08-a1e7-0b7bd08520a6",
"preferred_username": "m123456"
}
I need to receive:
{
"jti": "b1384883-9b59-4788-b09f-98b40b7e3c3b",
...
"sub": "m123456",
"preferred_username": "m123456"
}
Could you please suggest how to do that?
I tried username mapper, but it adds a second "sub" claim and with the jwt is not valid.
| Or this way, with the User Property Mapper type:
{
"id": "5d45fe41-83c6-4457-807b-5240ff7c09b9",
"name": "UsernameInSubject",
"protocol": "openid-connect",
"protocolMapper": "oidc-usermodel-property-mapper",
"consentRequired": false,
"config": {
"userinfo.token.claim": "true",
"user.attribute": "username",
"id.token.claim": "true",
"access.token.claim": "true",
"claim.name": "sub",
"jsonType.label": "String"
}
| Keycloak | 56,666,054 | 10 |
We need to export & import the configuration from an old version 3.4 to the new Keycloak version 5, but it shows an error on import:
{"errorMessage":"App doesn't exist in role definitions: realm-management"}
Is there any option to import realm to new version?
| For me, I needed to create the realm using the Add Realm button and use the export file as the import file on the realm creation screen. I think the realm just needs to be created alongside the import.
| Keycloak | 55,634,189 | 10 |
I started with Using OpenID/Keycloak with Superset and did everything as explained. However, it is an old post, and not everything worked. I'm also trying to implement a custom security manager by installing it as a FAB add-on, so as to implement it in my application without having to edit the existing superset code.
I'm running KeyCloak 4.8.1.Final and Apache SuperSet v 0.28.1
As explained in the post, SuperSet does not play nicely with KeyCloak out of the box because it uses OpenID 2.0 and not OpenID Connect, which is what KeyCloak provides.
The first difference is that after pull request 4565 was merged, you can no longer do:
from flask_appbuilder.security.sqla.manager import SecurityManager
Instead, you now have to use: (as per the UPDATING.md file)
from superset.security import SupersetSecurityManager
In the above-mentioned post, the poster shows how to create the manager and view files separately, but doesn't say where to put them. I placed both the manager and view classes in the same file, named manager.py, and placed it in the FAB add-on structure.
from flask_appbuilder.security.manager import AUTH_OID
from superset.security import SupersetSecurityManager
from flask_oidc import OpenIDConnect
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote
from flask_appbuilder.views import ModelView, SimpleFormView, expose
import logging
class OIDCSecurityManager(SupersetSecurityManager):
def __init__(self,appbuilder):
super(OIDCSecurityManager, self).__init__(appbuilder)
if self.auth_type == AUTH_OID:
self.oid = OpenIDConnect(self.appbuilder.get_app)
self.authoidview = AuthOIDCView
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
class AuthOIDCView(AuthOIDView):
@expose('/login/', methods=['GET', 'POST'])
def login(self, flag=True):
sm = self.appbuilder.sm
oidc = sm.oid
@self.appbuilder.sm.oid.require_login
def handle_login():
user = sm.auth_user_oid(oidc.user_getfield('email'))
if user is None:
info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email'])
user = sm.add_user(info.get('preferred_username'), info.get('given_name'), info.get('family_name'), info.get('email'), sm.find_role('Gamma'))
login_user(user, remember=False)
return redirect(self.appbuilder.get_url_for_index)
return handle_login()
@expose('/logout/', methods=['GET', 'POST'])
def logout(self):
oidc = self.appbuilder.sm.oid
oidc.logout()
super(AuthOIDCView, self).logout()
redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
return redirect(oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))
I have the CUSTOM_SECURITY_MANAGER variable set in this file and not in superset_config.py. This is because it didn't work when it was there; the custom security manager didn't load. I moved the variable there after reading Decorator for SecurityManager in flask appbuilder for superset.
My client_secret.json file looks as follows:
{
"web": {
"realm_public_key": "<PUBLIC_KEY>",
"issuer": "https://<DOMAIN>/auth/realms/demo",
"auth_uri": "https://<DOMAIN>/auth/realms/demo/protocol/openid-connect/auth",
"client_id": "local",
"client_secret": "<CLIENT_SECRET>",
"redirect_urls": [
"http://localhost:8001/*"
],
"userinfo_uri": "https://<DOMAIN>/auth/realms/demo/protocol/openid-connect/userinfo",
"token_uri": "https://<DOMAIN>/auth/realms/demo/protocol/openid-connect/token",
"token_introspection_uri": "https://<DOMAIN>/auth/realms/demo/protocol/openid-connect/token/introspect"
}
}
realm_public_key: I got this key at Realm Settings > Keys > Active and then in the table, in the "RS256" row.
client_id: local (the client I use for local testing)
client_secret: I got this at Clients > local (from the table) > Credentials > Secret
All the url/uri values are adjusted from the first mentioned post I used to set it all up. The <DOMAIN> is an AWS CloudFront default domain, since I'm running KeyCloak on EC2 and don't want to go through the trouble to setup a custom HTTPS domain for simply getting it up and running.
Then, finally, part of my superset_config.py file looks like this:
ADDON_MANAGERS = ['fab_addon_keycloak.manager.OIDCSecurityManager']
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = '/usr/local/lib/python3.6/site-packages/fab_addon_keycloak/fab_addon_keycloak/client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = False
OIDC_REQUIRE_VERIFIED_EMAIL = False
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
OPENID_PROVIDERS = [{
'name': 'KeyCloak',
'url': 'https://<DOMAIN>/auth/realms/demo/account'
}]
In the original post, the OPENID_PROVIDERS environment variable is not mentioned, so I'm not really sure what to put in here for the URL. I put that one since that's the URL you'll hit to login to the client console on KeyCloak.
When I run SuperSet I don't get any errors. I can see that the custom security manager loads. When I navigate to the login screen, I have to choose my provider, I don't get a login form. I choose KeyCloak, since there's obviously nothing else, and click Login. When I click Login I can see that something loads in the address bar of the browser, but nothing happens. It's my understanding that I'm supposed to be redirected to the KeyCloak login form, and then back to my application upon successful login, but nothing happens. Am I missing something somewhere?
Edit
So after some more digging, it seems like my custom view class loads, however the methods in the class do not override the default behavior. Not sure why this is happening or how to fix it.
| I ended up figuring it out myself.
The solution I ended up with does not make use of a FAB add-on, but you also don't have to edit existing code/files.
I've renamed the manager.py file to security.py, and it now looks like this:
from flask import redirect, request
from flask_appbuilder.security.manager import AUTH_OID
from superset.security import SupersetSecurityManager
from flask_oidc import OpenIDConnect
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote
from flask_appbuilder.views import ModelView, SimpleFormView, expose
import logging
class AuthOIDCView(AuthOIDView):
@expose('/login/', methods=['GET', 'POST'])
def login(self, flag=True):
sm = self.appbuilder.sm
oidc = sm.oid
@self.appbuilder.sm.oid.require_login
def handle_login():
user = sm.auth_user_oid(oidc.user_getfield('email'))
if user is None:
info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email'])
user = sm.add_user(info.get('preferred_username'), info.get('given_name'), info.get('family_name'), info.get('email'), sm.find_role('Gamma'))
login_user(user, remember=False)
return redirect(self.appbuilder.get_url_for_index)
return handle_login()
@expose('/logout/', methods=['GET', 'POST'])
def logout(self):
oidc = self.appbuilder.sm.oid
oidc.logout()
super(AuthOIDCView, self).logout()
redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
return redirect(oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))
class OIDCSecurityManager(SupersetSecurityManager):
authoidview = AuthOIDCView
def __init__(self,appbuilder):
super(OIDCSecurityManager, self).__init__(appbuilder)
if self.auth_type == AUTH_OID:
self.oid = OpenIDConnect(self.appbuilder.get_app)
I place the security.py file next to my superset_config.py file.
The JSON configuration file stays unchanged.
Then I've changed the superset_config.py file to include the following lines:
from security import OIDCSecurityManager
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = <path_to_configuration_file>
OIDC_ID_TOKEN_COOKIE_SECURE = False
OIDC_REQUIRE_VERIFIED_EMAIL = False
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
That's it.
Now when I navigate to my site, it automatically goes to the KeyCloak login screen, and upon successful sign in I am redirected back to my application.
| Keycloak | 54,010,314 | 10 |
How do you correctly configure NGINX as a proxy in front of Keycloak?
Asking & answering this as doc because I've had to do it repeatedly now and forget the details after a while.
This is specifically dealing with the case where Keycloak is behind a reverse proxy e.g. nginx and NGINX is terminating SSL and pushing to Keycloak. This is not the same issue as keycloak Invalid parameter: redirect_uri although it produces the same error message.
| The key to this is in the docs at
https://www.keycloak.org/docs/latest/server_installation/index.html#identifying-client-ip-addresses
The proxy-address-forwarding must be set as well as the various X-... headers.
If you're using the Docker image from https://hub.docker.com/r/jboss/keycloak/ then set the env. arg -e PROXY_ADDRESS_FORWARDING=true.
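For example (same image as the linked Docker Hub page; credentials and ports are illustrative):
docker run -e PROXY_ADDRESS_FORWARDING=true -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin -p 8080:8080 jboss/keycloak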
server {
server_name api.domain.com;
location /auth {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8081;
proxy_read_timeout 90;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = api.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name api.domain.com;
listen 80;
return 404; # managed by Certbot
}
If you're using another proxy the important parts of this is the headers that are being set:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
Apache, ISTIO and others have their own means of setting these.
| Keycloak | 53,564,499 | 10 |
I have a keycloak user with custom attributes like below.
I use Reactjs for front-end. I want to retrieve the custom attribute from the javascript side. Like this answer states.
https://stackoverflow.com/a/32890003/2940265
But I can't find how to do it on the javascript side.
I debugged in Chrome but I can't find a suitable result for custom attributes.
Please help
| I found the answer.
I will post it here, because someone may find it useful.
Well, you can add custom attributes to a user, but you need extra configuration to retrieve them from the JavaScript side. For beginners' ease, I will write the answer from adding the custom attribute through to retrieving it from JavaScript (in my case React.js).
Let's add custom attributes to a user.
Log into Keycloak and choose your realm (if you have multiple realms; otherwise you will be logged into your realm automatically)
After that select Users -> View all users
Select your user in my case it's Alice
Select Attributes and add custom attributes (in my case I added custom attribute call companyId like below)
Now click Save
Now we have to map the custom attribute in our Keycloak client.
For the front end to use Keycloak, you must have a client under Clients (left sidebar)
If you haven't, you have to configure a client for that. In my case my client is test-app
Therefore select Clients -> test-app -> Mappers
Now we have to create a Mapper; click Create
For Token Claim Name you should give your custom attribute's key (in my case it is companyId). For my ease, I use companyId for the Name, Realm Role prefix and Token Claim Name. You should choose User Attribute as the Mapper Type and String for the Claim JSON Type
After that click Save. Now you can get your custom attribute from JavaScript.
Let's say your Keycloak JavaScript object is keyCloak; you can get companyId like this:
let companyId = keyCloak.idTokenParsed.companyId;
sample code would be like below (my code in react.js)
keyCloak.init({
onLoad: 'login-required'
}).success(authenticated => {
if (authenticated) {
if (hasIn(keyCloak.tokenParsed, 'realm_access')) {
if (keyCloak.tokenParsed.realm_access.roles === []) {
console.log("Error: No roles found in token")
} else {
let companyId = keyCloak.idTokenParsed.companyId;
}
} else {
console.log("Error: Cannot parse token");
}
} else {
console.log("Error: Authentication failed");
}
}).error(e => {
console.log("Error: " + e);
console.log(keyCloak);
});
Hope somebody finds this answer useful, because I couldn't find an answer for JavaScript. Happy coding :)
| Keycloak | 53,224,680 | 10 |
How to view/configure access logs of HTTP server Keycloak uses?
I'm trying to investigate connection_refused_error to Keycloak admin UI.
| Try adding the following <access-log/> tag to your server configuration file, for example: standalone/configuration/standalone.xml.
<subsystem xmlns="urn:jboss:domain:undertow:4.0">
<buffer-cache name="default"/>
<server name="default-server">
...
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<!-- Add the following one line -->
<access-log prefix="access." />
<http-invoker security-realm="ApplicationRealm"/>
<filter-ref name="proxy-peer"/>
</host>
</server>
You can see access.log in your standalone/log/ directory after restarting your Keycloak server and the log file is rotated daily with a name like access.2019-07-26.log.
EDIT:
You can also use JBoss CLI as follows:
$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/host=default-host/setting=access-log:add
{"outcome" => "success"}
these commands adds the one line to standalone.xml:
<access-log/>
the next command shows the access log settings (default values):
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/host=default-host/setting=access-log:read-resource
{
"outcome" => "success",
"result" => {
"directory" => expression "${jboss.server.log.dir}",
"extended" => false,
"pattern" => "common",
"predicate" => undefined,
"prefix" => "access_log.",
"relative-to" => undefined,
"rotate" => true,
"suffix" => "log",
"use-server-log" => false,
"worker" => "default"
},
"response-headers" => {"process-state" => "reload-required"}
}
You can change an attribute (for example, prefix) by the command:
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=prefix,value=access.)
| Keycloak | 51,728,612 | 10 |
TL;DR
Objective: Java authorization server:
OAuth2.0 authorization code grant flow with fine-grained permissions (not a mere SSO server)
User management and authentication: custom database
Client management and authentication: Keycloak
Questions: What are the best practices for implementing a Java authorization server with applicative permissions handling backed on Keycloak?
What Keycloak adapter/API should I use in my development?
How should the users be managed/appear in Keycloak if they are to appear at all?
Forewarning
I am quite the beginner with Keycloak and, though I think I understand the main principles, it seems to be a rich tool and I fear I may still be mistaken about some aspects of the best ways to use it. Please do not hesitate to correct me.
Context
We are looking at implementing an API requiring our users (henceforth "users") to grant permissions to third party applications (henceforth "clients").
Our users are stored in a custom existing database-based user management system. As for our clients, we are thinking of using Keycloak.
The users consent will be given using an OAuth2.0 Authorization code grant flow. They will log in, specify which permissions they grant and which they deny, and the client then retrieves the access token it will use to access the API.
It is my understanding that Keycloak can handle the authorization token but should not know anything application-specific, which is what our permissions are. As a consequence, I thought of building a custom authorization server which will use Keycloak for all identity/authentication concerns but will handle the applicative permissions by itself.
Then, we will use Keycloak for client authentication and authorization code/access token management, and an applicative part will check the permissions.
Problem
Besides my first experimenting, I've been roaming the Internet for a week now and I'm surprised as I thought this would be quite a standard case. Yet I found next-to-nothing, so maybe I'm not searching correctly.
I've found many Spring/Spring Boot tutorials [1] on how to make a "simple authorization server". Those are mainly SSO servers though, and few manage permissions, with the exception of those mentioned in this SO answer [2]. That I think we can deal with.
The real problem I have, and that none of the tutorials I have found are treating, is the following:
How do I integrate Keycloak in this authorization server?
I've been having a look at the available Java Adapters. They look OK when it comes to authentication, but I did not see hints about how to manage clients from a custom authorization server (i.e. administer the realm).
I therefore suppose I should use the admin API. Am I correct and is it good practice? I saw no adapter for that, so I suppose I should then use the REST API.
I also wonder how we should integrate our users in design? Should they be duplicated inside Keycloak? In this case, should we use Keycloak's admin API to push the data from the authorization server or is there a better way?
Finally, am I missing some other obvious point?
Sorry for the long message and the many questions, but it all boils down to one question in the end:
What are the best practices when building an authorization server using Keycloak as a backbone?
[1] Some examples: Spring Boot OAuth2 tutorial, A blog post, Another blog post
[2] I've mainly focused on the sample app provided by Spring Security OAuth
| Building Java OAuth2.0 authorization server with Keycloak
This is possible but is bit tricky and there is lot of thing which needs to be customised.
You can derive some motivation from below repo.
keycloak-delegate-authn-consent
Building custom Java OAuth2.0 authorization server with MITREid
If you are open to using other implementations of OAuth and OIDC, I can suggest MITREid, which is the reference implementation of OIDC and can be customized to a great degree. Below is the link to its repo, and it's open source.
I myself used this for a requirement similar to yours, and it is highly customizable and easy to implement.
https://github.com/mitreid-connect/OpenID-Connect-Java-Spring-Server
MITREid Connect uses Spring Security for its authentication, so you can put whatever component you like into that space. There are lots of good resources on the web about how to write and configure Spring Security filters for custom authentication mechanisms.
You'll want to look at the user-context.xml file for where the user authentication is defined. In the core project this is a simple username/password field against a local database. In others like the LDAP overlay project, this connects to an LDAP server. In some systems, like MIT's "oidc.mit.edu" server, there are actually a handful of different authentication mechanisms that can be used in parallel: LDAP, kerberos, and certificates in that case.
Note that in all cases, you'll still need to have access to a UserInfo data store somewhere. This can be sourced from the database, from LDAP, or from something else, but it needs to be available for each logged in user.
The MITREid Connect server can function as an OpenID Connect Identity Provider (IdP) and an OAuth 2.0 Authorization Server (AS) simultaneously. The server is a Spring application and its configuration files are found in openid-connect-server-webapp/src/main/webapp/WEB-INF/ and end in .xml. The configuration has been split into multiple .xml files to facilitate overrides and custom configuration.
| Keycloak | 49,150,219 | 10 |
Situation: We use keycloak to authenticate users in our web application (A) through normal browser authentication flow using the JavaScript adapter. This works perfectly!
Goal: Now, a new group of users should be able to access A. But they log in with username and password in a trusted third-party application (B) without Keycloak. In B they have a link to A with a custom JWT (essentially containing username and roles) as a query parameter. So when the user clicks on the link, he lands on our application entry point where we are able to read the JWT out of the URL. What needs to happen now is some sort of token exchange. We want to send this custom JWT to Keycloak, which sends back an access token analog to the normal login process.
Question: Is there built-in support in Keycloak for such a usecase?
Attempts:
I tried to create a confidential client with "Signed JWT" as "Client Authenticator" as suggested in the docs. After some testing I don't think this is the right track, even if the name is promising.
Another track was "Client suggested identity provider" by implementing a custom identity provider. But I don't see how I can send the JWT within the request.
Currently I'm trying to use the Autentication SPI to extend the authentication flow with a custom authenticator.
Maybe it is much simpler than I think. Can anyone lead me in the right direction?
| So I was finally able to solve it with the Authentication SPI mentioned in the question.
In Keycloak, I made a copy of the "browser" authentication flow (since you cannot modify built-in flows) and introduced an additional step "Portal JWT" (see picture below). I then bound it to "Browser Flow" in the "Bindings" tab.
Behind "Portal JWT" is my custom authenticator which extracts the JWT from the query parameter in the redirect uri and parses it to get username and roles out of it. The user is then added to keycloak with a custom attribute "isExternal". Here is an extract of it:
public class JwtAuthenticator implements Authenticator {
private final JwtReader reader;
JwtAuthenticator(JwtReader reader) {
this.reader = reader;
}
@Override
public void authenticate(AuthenticationFlowContext context) {
Optional<String> externalCredential = hasExternalCredential(context);
if (externalCredential.isPresent()) {
ExternalUser externalUser = reader.read(context.getAuthenticatorConfig(), externalCredential.get());
String username = externalUser.getUsername();
UserModel user = context.getSession().users().getUserByUsername(username, context.getRealm());
if (user == null) {
user = context.getSession().users().addUser(context.getRealm(), username);
user.setEnabled(true);
user.setSingleAttribute("isExternal", "true");
}
for (String roleName : externalUser.getRoles()) {
RoleModel role = context.getRealm().getRole(roleName);
if (role == null) {
role = context.getRealm().addRole(roleName);
}
user.grantRole(role);
}
context.setUser(user);
context.success();
} else {
context.attempted();
}
}
private Optional<String> hasExternalCredential(AuthenticationFlowContext context) {
String redirectUri = context.getUriInfo().getQueryParameters().getFirst("redirect_uri");
try {
List<NameValuePair> queryParams = URLEncodedUtils.parse(new URI(redirectUri), "UTF-8");
Optional<NameValuePair> jwtParam = queryParams.stream()
.filter(nv -> "jwt".equalsIgnoreCase(nv.getName())).findAny();
if (jwtParam.isPresent()) {
String jwt = jwtParam.get().getValue();
if (LOG.isDebugEnabled()) {
LOG.debug("JWT found: " + jwt);
}
return Optional.of(jwt);
}
} catch (URISyntaxException e) {
LOG.error("Redirect URL not as expected: " + redirectUri);
}
return Optional.empty();
}
| Keycloak | 48,638,584 | 10 |
I'm using Identity Brokering feature and external IDP. So, user logs in into external IDP UI, then KeyCloak broker client receives JWT token from external IDP and KeyCloak provides JWT with which we access the resources. I've set up Default Identitiy Provider feature, so external IDP login screen is displayed to the user on login. That means that users and their passwords are stored on external IDP.
The problem occurs when I need to log in using "Direct Access Grant" (Resource Owner Password grant) programmatically in tests. As the password is not stored in KeyCloak, I always get a 401 Unauthorized error from KeyCloak on login. When I tried to change the user password it started to work, so the problem is that the user password is not provisioned in KeyCloak, and on a programmatic login via "Direct Access Grant" KeyCloak doesn't invoke the external IDP.
I use the following code to obtain an access token, but get a 401 error every time I pass a valid username/password.
org.keycloak.authorization.client.util.HttpResponseException: Unexpected response from server: 401 / Unauthorized
Direct access grant is enabled for that client.
public static String login(final Configuration configuration) {
final AuthzClient authzClient = AuthzClient.create(configuration);
final AccessTokenResponse accessTokenResponse = authzClient.obtainAccessToken(USERNAME, PASSWORD);
return accessTokenResponse.getToken();
}
Is there any way it can be fixed? For example to call identity broker on "Direct Access Grant", so that KeyCloak provides us it's valid token?
The problem was that KeyCloak has no information about passwords from the initial identity provider. It has a token exchange feature which should be used for programmatic token exchange.
An External Token to Internal Token Exchange should be used to achieve this.
Here is an example code in Python which does it (just place correct values in placeholders):
def login():
idp_access_token = idp_login()
return keycloak_token_exchange(idp_access_token)
def idp_login():
login_data = {
"client_id": <IDP-CLIENT-ID>,
"client_secret": <IDP-CLIENT-SECRET>,
"grant_type": <IDP-PASSWORD-GRANT-TYPE>,
"username": <USERNAME>,
"password": <PASSWORD>,
"scope": "openid",
"realm": "Username-Password-Authentication"
}
login_headers = {
"Content-Type": "application/json"
}
token_response = requests.post(<IDP-URL>, headers=login_headers, data=json.dumps(login_data))
return parse_response(token_response)['access_token']
def keycloak_token_exchange(idp_access_token):
token_exchange_url = <KEYCLOAK-SERVER-URL> + '/realms/master/protocol/openid-connect/token'
data = {
'grant_type': 'urn:ietf:params:oauth:grant-type:token-exchange',
'subject_token': idp_access_token,
'subject_issuer': <IDP-PROVIDER-ALIAS>,
'subject_token_type': 'urn:ietf:params:oauth:token-type:access_token',
'audience': <KEYCLOAK-CLIENT-ID>
}
response = requests.post(token_exchange_url, data=data,
auth=(<KEYCLOAK-CLIENT-ID>, <KEYCLOAK-CLIENT-SECRET>))
logger.info(response)
return parse_response(response)['access_token']
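A minimal usage sketch, following the same angle-bracket placeholder style as above (the protected API URL is hypothetical, not a real endpoint):
access_token = login()
headers = {"Authorization": "Bearer " + access_token}
response = requests.get(<PROTECTED-API-URL>, headers=headers)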
| Keycloak | 47,557,433 | 10 |
First of all I'm using
keycloak-authz-client-3.3.0.Final
spring boot 1.5.8.RELEASE
spring-boot-starter-security
I've been playing with Keycloak spring adapter exploring the examples since we want to adopt it to our project.
I was able to make it run for Roles easily using this tutorial:
https://dzone.com/articles/easily-secure-your-spring-boot-applications-with-k
After that I moved to permissions and that's when it gets trickier (that's also our main goal).
I want to achieve something like described in here (9.1.2):
http://www.keycloak.org/docs/2.4/authorization_services_guide/topics/enforcer/authorization-context.html#
To get permissions you need to set up Authorization and credentials in Keycloak, and then create Resources or Scopes and Policies to be able to create permissions (it took me a while but I got it working). Testing in the Evaluator, everything seems fine.
Next step was to get user permissions on the Spring side. In order to do that I had to enable:
keycloak.policy-enforcer-config.enforcement-mode=permissive
The moment I enable this I get everytime this exception
java.lang.RuntimeException: Could not find resource.
at org.keycloak.authorization.client.resource.ProtectedResource.findAll(ProtectedResource.java:88)
at org.keycloak.adapters.authorization.PolicyEnforcer.configureAllPathsForResourceServer...
...
Caused by: org.keycloak.authorization.client.util.HttpResponseException:
Unexpected response from server: 403 / Forbidden
No matter what address I hit in the server.
So I started to investigate the root of the problem. Looking at some examples of how to manually get the permissions, I actually got them in Postman with the following request:
http://localhost:8080/auth/realms/${myKeycloakRealm}/authz/entitlement/${MyKeycloakClient}
including the header Authorization : bearer ${accessToken}
The response was {"rpt": ${jwt token}}, which actually contains the permissions.
So, knowing this was working, it had to be something wrong with the Spring adapter. Investigating a bit further into the Keycloak exception, I found that the error was occurring the moment the adapter was fetching all the resources. For that it was using the following URL:
http://localhost:28080/auth/realms/license/authz/protection/resource_set
with a different token in the headers (that I copied when debugging)
So when I tried it in postman I also got a 403 error, but with a json body:
{
"error": "invalid_scope",
"error_description": "Requires uma_protection scope."
}
I've enabled and disabled all uma configuration within keycloak and I can't make it work. Can please someone point me into the right direction?
Update
I've now updated Keycloak adapter to 3.4.0.final and I'm getting the following error in the UI:
Mon Nov 20 10:09:21 GMT 2017
There was an unexpected error (type=Internal Server Error, status=500).
Could not find resource. Server message: {"error":"invalid_scope","error_description":"Requires uma_protection scope."}
(Pretty much the same I was getting in the postman request)
I've also printed all the user roles to make sure the uma_protection role is there, and it is.
Another thing I did was to disable spring security role prefix to make sure it wasn't a mismatch on the role.
Update 2
Was able to resolve the 403 issue (you can see it in the response below).
Still getting problems obtaining KeycloakSecurityContext from the HttpServletRequest
Update 3
Was able to get KeycloakSecurityContext like this:
Principal principal = servletRequest.getUserPrincipal();
KeycloakAuthenticationToken token = (KeycloakAuthenticationToken) principal;
OidcKeycloakAccount auth = token.getAccount();
KeycloakSecurityContext keycloakSecurityContext = auth.getKeycloakSecurityContext();
AuthorizationContext authzContext = keycloakSecurityContext.getAuthorizationContext();
The problem now is that the AuthorizationContext is always null.
| I've managed to get it working by adding uma_protection role to the Service Account Roles tab in Keycloak client configuration
More information about it here:
http://www.keycloak.org/docs/2.0/authorization_services_guide/topics/service/protection/whatis-obtain-pat.html
Second part of the solution:
It's mandatory to have the security constraints in place even if they don't mean much to you. Example:
keycloak.securityConstraints[0].authRoles[0] = ROLE1
keycloak.securityConstraints[0].securityCollections[0].name = protected
keycloak.securityConstraints[0].securityCollections[0].patterns[0] = /*
Useful demos:
https://github.com/keycloak/keycloak-quickstarts
| Keycloak | 47,199,243 | 10 |
I want to create a Keycloak client role programmatically and assign it to a dynamically created user. Below is my code for creating a user:
UserRepresentation user = new UserRepresentation();
user.setEmail("[email protected]");
user.setUsername("xxxx");
user.setFirstName("xxx");
user.setLastName("m");
user.setEnabled(true);
Response response = kc.realm("YYYYY").users().create(user);
| Here is a solution to your request (not very beautiful, but it works):
// Get keycloak client
Keycloak kc = Keycloak.getInstance("http://localhost:8080/auth",
"master", "admin", "admin", "admin-cli");
// Create the role
RoleRepresentation clientRoleRepresentation = new RoleRepresentation();
clientRoleRepresentation.setName("client_role");
clientRoleRepresentation.setClientRole(true);
kc.realm("RealmID").clients().findByClientId("ClientID").forEach(clientRepresentation ->
kc.realm("RealmID").clients().get(clientRepresentation.getId()).roles().create(clientRoleRepresentation)
);
// Create the user
UserRepresentation user = new UserRepresentation();
user.setUsername("test");
user.setEnabled(true);
Response response = kc.realm("RealmID").users().create(user);
String userId = getCreatedId(response);
// Assign role to the user
kc.realm("RealmID").clients().findByClientId("ClientID").forEach(clientRepresentation -> {
RoleRepresentation savedRoleRepresentation = kc.realm("RealmID").clients()
.get(clientRepresentation.getId()).roles().get("client_role").toRepresentation();
kc.realm("RealmID").users().get(userId).roles().clientLevel(clientRepresentation.getId())
.add(asList(savedRoleRepresentation));
});
// Update credentials to make sure, that the user can log in
UserResource userResource = kc.realm("RealmID").users().get(userId);
userResource.resetPassword(credential);
With the help method:
private String getCreatedId(Response response) {
URI location = response.getLocation();
if (!response.getStatusInfo().equals(Response.Status.CREATED)) {
Response.StatusType statusInfo = response.getStatusInfo();
throw new WebApplicationException("Create method returned status " +
statusInfo.getReasonPhrase() + " (Code: " + statusInfo.getStatusCode() + "); expected status: Created (201)", response);
}
if (location == null) {
return null;
}
String path = location.getPath();
return path.substring(path.lastIndexOf('/') + 1);
}
| Keycloak | 43,222,769 | 10 |
I'm using Node.JS (express) and an NPM called keycloak-connect to connect to a keycloak server.
When I'm implementing the default mechanism as described to protect a route:
app.get( '/about', keycloak.protect(), function(req,resp) {
resp.send( 'Page: ' + req.params.page + '<br><a href="/logout">logout</a>');
} );
I do get redirected to Keycloak, but with the following error: "Invalid parameter: redirect_uri"
My query string is: (xx for demonstration) https://xx.xx.xx.xx:8443/auth/realms/master/protocol/openid-connect/auth?client_id=account&state=aa11b27a-8a0b-4a3b-89dc-cb8a303dbde8&redirect_uri=http%3A%2F%2Flocalhost%3A3002%2Fabout%3Fauth_callback%3D1&response_type=code
My keycloak.json is: (xx for demonstration)
{
"realm": "master",
"realm-public-key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwS00kUaH6OoERNSkFUwxEBxx2SsqmHu9oVQiPs6nlP9fNQm0cK2lpNPphbLzooZL6kivaC4VzXg20F3zY7jRDc4U/XHgXjZVZUXxJ0NeCI5ESDo00EV9xh9XL3xvXslmG0YLWpywtQSYc+XcGDkz87edokbHQIIlQc2sgoVKIKpajZyrI5wnyMhL8JSk+Mdo2T9DeNnZxPkauiKBwWFJReBO51gsoZ49cbD39FRa8pLi8W0TtXoESIf/eGUSdc3revVFR7cjzHUzxF0p0WrLsTA1aBCLkt8yhnq88NqcKsW5mkxRmhLdw20ODTdsmRtm68rjtusMwifo/dZLJ9v5eQIDAQAB",
"auth-server-url": "https://xx.xx.xx.xx:8443/auth",
"ssl-required": "external",
"resource": "account",
"credentials": {
"secret": "9140d4e6-ed05-4899-a3c0-a9cf94ab407d"
},
"use-resource-role-mappings": true
}
Keycloak configuration: (screenshot omitted)
| I guess you added a port to your client URLs in your client settings tab.
e.g.
root url: https://demo.server.biz:443/cxf
just remove the port
root url: https://demo.server.biz/cxf
the same goes for Valid Redirect URIs and Web Origins
Update 1: (screenshot of the client settings omitted)
Update 2: (screenshot with your corrected URL omitted)
| Keycloak | 37,115,626 | 10 |
JBoss Keycloak offers an admin URL in the client settings, where you can react to logout push events and other events. Unfortunately I cannot find any documentation on how to use this URL. Can you give me a hint whether this is e.g. part of the OpenID spec, or whether an API doc exists for it?
Especially, I want to know how I can implement a client endpoint which reacts to logout or revocation requests from the Keycloak server.
Documentation in KC about the Admin URL: http://keycloak.github.io/docs/userguide/keycloak-server/html/applicationClustering.html#admin-url-configuration
Thanks
Christian
| AFAIK the use of the Admin URL is Keycloak specific, and not part of Open ID Connect or OAuth.
I suppose you'll need to take a look at the code, i.e. PreAuthActionsHandler#handleRequest handles URLs ending with k_logout and k_push_not_before.
The easiest way to handle these events is to use a Keycloak client adapter. The adapter (available for Jetty, Tomcat and others) will automatically handle this for you. Just specify any URL of your deployed application and the client adapter will do the rest.
| Keycloak | 35,704,546 | 10 |
I'm using the Keycloak auth mechanism for my angular/node/typescript application. I could not find a DefinitelyTyped d.ts file for Keycloak.js.
Is there a TypeScript equivalent or work in progress for this JBoss Keycloak.js adapter, or do I have to write one? Any inputs/pointers would be much appreciated.
| There is an official type definition at:
https://github.com/keycloak/keycloak/blob/master/adapters/oidc/js/src/main/resources/keycloak.d.ts and https://www.npmjs.com/package/@types/keycloak-js
Update:
https://www.npmjs.com/package/keycloak-js
From @types/keycloak-js:
keycloak-js provides its own type definitions, so you don't need
@types/keycloak-js installed!
| Keycloak | 34,430,049 | 10 |
I'd like to calculate a point on a quadratic curve. To use it with the canvas element of HTML5.
When I use the quadraticCurveTo() function in JavaScript, I have a source point, a target point and a control point.
How can I calculate a point on the created quadratic curve at let's say t=0.5 with "only" knowing this three points?
| Use the quadratic Bézier formula, found, for instance, on the Wikipedia page for Bézier Curves:
In pseudo-code, that's
t = 0.5; // given example value
x = (1 - t) * (1 - t) * p[0].x + 2 * (1 - t) * t * p[1].x + t * t * p[2].x;
y = (1 - t) * (1 - t) * p[0].y + 2 * (1 - t) * t * p[1].y + t * t * p[2].y;
p[0] is the start point, p[1] is the control point, and p[2] is the end point. t is the parameter, which goes from 0 to 1.
| Curve | 5,634,460 | 64 |
How can I find the point B(t) along a cubic Bezier curve that is closest to an arbitrary point P in the plane?
| I've written some quick-and-dirty code that estimates this for Bézier curves of any degree. (Note: this is pseudo-brute force, not a closed-form solution.)
Demo: http://phrogz.net/svg/closest-point-on-bezier.html
/** Find the ~closest point on a Bézier curve to a point you supply.
* out : A vector to modify to be the point on the curve
* curve : Array of vectors representing control points for a Bézier curve
* pt : The point (vector) you want to find out to be near
* tmps : Array of temporary vectors (reduces memory allocations)
* returns: The parameter t representing the location of `out`
*/
function closestPoint(out, curve, pt, tmps) {
let mindex, scans=25; // More scans -> better chance of being correct
const vec=vmath['w' in curve[0]?'vec4':'z' in curve[0]?'vec3':'vec2'];
for (let min=Infinity, i=scans+1;i--;) {
let d2 = vec.squaredDistance(pt, bézierPoint(out, curve, i/scans, tmps));
if (d2<min) { min=d2; mindex=i }
}
let t0 = Math.max((mindex-1)/scans,0);
let t1 = Math.min((mindex+1)/scans,1);
let d2ForT = t => vec.squaredDistance(pt, bézierPoint(out,curve,t,tmps));
return localMinimum(t0, t1, d2ForT, 1e-4);
}
/** Find a minimum point for a bounded function. May be a local minimum.
* minX : the smallest input value
* maxX : the largest input value
* ƒ : a function that returns a value `y` given an `x`
* ε : how close in `x` the bounds must be before returning
* returns: the `x` value that produces the smallest `y`
*/
function localMinimum(minX, maxX, ƒ, ε) {
if (ε===undefined) ε=1e-10;
let m=minX, n=maxX, k;
while ((n-m)>ε) {
k = (n+m)/2;
if (ƒ(k-ε)<ƒ(k+ε)) n=k;
else m=k;
}
return k;
}
/** Calculate a point along a Bézier segment for a given parameter.
* out : A vector to modify to be the point on the curve
* curve : Array of vectors representing control points for a Bézier curve
* t : Parameter [0,1] for how far along the curve the point should be
* tmps : Array of temporary vectors (reduces memory allocations)
* returns: out (the vector that was modified)
*/
function bézierPoint(out, curve, t, tmps) {
if (curve.length<2) console.error('At least 2 control points are required');
const vec=vmath['w' in curve[0]?'vec4':'z' in curve[0]?'vec3':'vec2'];
if (!tmps) tmps = curve.map( pt=>vec.clone(pt) );
else tmps.forEach( (pt,i)=>{ vec.copy(pt,curve[i]) } );
for (var degree=curve.length-1;degree--;) {
for (var i=0;i<=degree;++i) vec.lerp(tmps[i],tmps[i],tmps[i+1],t);
}
return vec.copy(out,tmps[0]);
}
The code above uses the vmath library to efficiently lerp between vectors (in 2D, 3D, or 4D), but it would be trivial to replace the lerp() call in bézierPoint() with your own code.
Tuning the Algorithm
The closestPoint() function works in two phases:
First, calculate points all along the curve (uniformly-spaced values of the t parameter). Record which value of t has the smallest distance to the point.
Then, use the localMinimum() function to hunt the region around the smallest distance, using a binary search to find the t and point that produces the true smallest distance.
The value of scans in closestPoint() determines how many samples to use in the first pass. Fewer scans is faster, but increases the chances of missing the true minimum point.
The ε limit passed to the localMinimum() function controls how long it continues to hunt for the best value. A value of 1e-2 quantizes the curve into ~100 points, and thus you can see the points returned from closestPoint() popping along the line. Each additional decimal point of precision—1e-3, 1e-4, …—costs about 6-8 additional calls to bézierPoint().
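For readers not working in JavaScript, here is a minimal Python 3 sketch of the same two-phase search (coarse scan, then binary refinement); the names bezier_point and closest_point and the (x, y) tuple representation are conventions of this sketch, not part of the library code above:
def bezier_point(curve, t):
    # de Casteljau evaluation; curve is a list of (x, y) control points
    pts = list(curve)
    while len(pts) > 1:
        pts = [((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
               for a, b in zip(pts, pts[1:])]
    return pts[0]

def closest_point(curve, pt, scans=25, eps=1e-4):
    def d2(t):
        x, y = bezier_point(curve, t)
        return (x - pt[0]) ** 2 + (y - pt[1]) ** 2
    # phase 1: coarse scan for the best starting interval
    best = min(range(scans + 1), key=lambda i: d2(i / scans))
    lo, hi = max((best - 1) / scans, 0.0), min((best + 1) / scans, 1.0)
    # phase 2: binary search around the coarse minimum
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if d2(mid - eps) < d2(mid + eps):
            hi = mid
        else:
            lo = mid
    t = (lo + hi) / 2
    return bezier_point(curve, t), t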
| Curve | 2,742,610 | 54 |
I want to create an animation with css that simulate a wave movement.
I need to change a line (or div) into a curve for this...
The CSS rules that I'm familiar with make the entire div semicircular or change the element's border.
For example:
border-radius, or perspective or border-top-radius...
This image show you what I want:
Is this possible? If so, how can it be done?
| You could use an asymmetrical border to make curves with CSS.
border-radius: 50%/100px 100px 0 0;
VIEW DEMO
.box {
width: 500px;
height: 100px;
border: solid 5px #000;
border-color: #000 transparent transparent transparent;
border-radius: 50%/100px 100px 0 0;
}
<div class="box"></div>
| Curve | 20,803,489 | 48 |
I have the following code to calculate points between four control points to generate a catmull-rom curve:
CGPoint interpolatedPosition(CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3, float t)
{
float t3 = t * t * t;
float t2 = t * t;
float f1 = -0.5 * t3 + t2 - 0.5 * t;
float f2 = 1.5 * t3 - 2.5 * t2 + 1.0;
float f3 = -1.5 * t3 + 2.0 * t2 + 0.5 * t;
float f4 = 0.5 * t3 - 0.5 * t2;
float x = p0.x * f1 + p1.x * f2 + p2.x * f3 + p3.x * f4;
float y = p0.y * f1 + p1.y * f2 + p2.y * f3 + p3.y * f4;
return CGPointMake(x, y);
}
This works fine, but I want to create something I think is called centripetal parameterization. This means that the curve will have no cusps and no self-intersections. If I move one control point really close to another one, the curve should become "smaller". I have Googled my eyes off trying to find a way to do this. Anyone know how to do this?
| I needed to implement this for work as well. The fundamental concept you need to start with is that the main difference between the regular Catmull-Rom implementation and the modified versions is how they treat time.
In the unparameterized version from your original Catmull-Rom implementation, t starts at 0 and ends with 1 and calculates the curve from P1 to P2. In the parameterized time implementation, t starts with 0 at P0, and keeps increasing across all four points. So in the uniform case, it would be 1 at P1 and 2 at P2, and you would pass in values ranging from 1 to 2 for your interpolation.
The chordal case shows |Pi+1 − Pi| as the time span change. This just means that you can use the straight-line distance between the points of each segment to calculate the actual length of time to use. The centripetal case just uses a slightly different method for calculating the optimal length of time to use for each segment.
So now we just need to know how to come up with equations that will let us plug in our new time values. The typical Catmull-Rom equation only has one t in it, the time you are trying to calculate a value for. I found the best article for describing how those parameters are calculated here: http://www.cemyuksel.com/research/catmullrom_param/catmullrom.pdf. They were focusing on a mathematical evaluation of the curves, but in it lies the crucial formula from Barry and Goldman.(1)
In the diagram above, the arrows mean "multiplied by" the ratio given in the arrow.
This then gives us what we need to actually perform a calculation to get the desired result. X and Y are calculated independently, although I used the "Distance" factor for modifying time based on the 2D distance, and not the 1D distance.
Test results:
(1) P. J. Barry and R. N. Goldman. A recursive evaluation algorithm for a class of catmull-rom splines. SIGGRAPH Computer Graphics, 22(4):199{204, 1988.
The source code for my final implementation in Java looks as follows:
/**
* This method will calculate the Catmull-Rom interpolation curve, returning
* it as a list of Coord coordinate objects. This method in particular
* adds the first and last control points which are not visible, but required
* for calculating the spline.
*
* @param coordinates The list of original straight line points to calculate
* an interpolation from.
* @param pointsPerSegment The integer number of equally spaced points to
* return along each curve. The actual distance between each
* point will depend on the spacing between the control points.
* @return The list of interpolated coordinates.
* @param curveType Chordal (stiff), Uniform(floppy), or Centripetal(medium)
* @throws gov.ca.water.shapelite.analysis.CatmullRomException if
* pointsPerSegment is less than 2.
*/
public static List<Coord> interpolate(List<Coord> coordinates, int pointsPerSegment, CatmullRomType curveType)
throws CatmullRomException {
List<Coord> vertices = new ArrayList<>();
for (Coord c : coordinates) {
vertices.add(c.copy());
}
if (pointsPerSegment < 2) {
throw new CatmullRomException("The pointsPerSegment parameter must be greater than 2, since 2 points is just the linear segment.");
}
// Cannot interpolate curves given only two points. Two points
// is best represented as a simple line segment.
if (vertices.size() < 3) {
return vertices;
}
// Test whether the shape is open or closed by checking to see if
// the first point intersects with the last point. M and Z are ignored.
boolean isClosed = vertices.get(0).intersects2D(vertices.get(vertices.size() - 1));
if (isClosed) {
// Use the second and second from last points as control points.
// get the second point.
Coord p2 = vertices.get(1).copy();
// get the point before the last point
Coord pn1 = vertices.get(vertices.size() - 2).copy();
// insert the second from the last point as the first point in the list
// because when the shape is closed it keeps wrapping around to
// the second point.
vertices.add(0, pn1);
// add the second point to the end.
vertices.add(p2);
} else {
// The shape is open, so use control points that simply extend
// the first and last segments
// Get the change in x and y between the first and second coordinates.
double dx = vertices.get(1).X - vertices.get(0).X;
double dy = vertices.get(1).Y - vertices.get(0).Y;
// Then using the change, extrapolate backwards to find a control point.
double x1 = vertices.get(0).X - dx;
double y1 = vertices.get(0).Y - dy;
// Actaully create the start point from the extrapolated values.
Coord start = new Coord(x1, y1, vertices.get(0).Z);
// Repeat for the end control point.
int n = vertices.size() - 1;
dx = vertices.get(n).X - vertices.get(n - 1).X;
dy = vertices.get(n).Y - vertices.get(n - 1).Y;
double xn = vertices.get(n).X + dx;
double yn = vertices.get(n).Y + dy;
Coord end = new Coord(xn, yn, vertices.get(n).Z);
// insert the start control point at the start of the vertices list.
vertices.add(0, start);
// append the end control ponit to the end of the vertices list.
vertices.add(end);
}
// Dimension a result list of coordinates.
List<Coord> result = new ArrayList<>();
// When looping, remember that each cycle requires 4 points, starting
// with i and ending with i+3. So we don't loop through all the points.
for (int i = 0; i < vertices.size() - 3; i++) {
// Actually calculate the Catmull-Rom curve for one segment.
List<Coord> points = interpolate(vertices, i, pointsPerSegment, curveType);
// Since the middle points are added twice, once for each bordering
// segment, we only add the 0 index result point for the first
// segment. Otherwise we will have duplicate points.
if (result.size() > 0) {
points.remove(0);
}
// Add the coordinates for the segment to the result list.
result.addAll(points);
}
return result;
}
/**
* Given a list of control points, this will create a list of pointsPerSegment
* points spaced uniformly along the resulting Catmull-Rom curve.
*
* @param points The list of control points, leading and ending with a
* coordinate that is only used for controling the spline and is not visualized.
* @param index The index of control point p0, where p0, p1, p2, and p3 are
* used in order to create a curve between p1 and p2.
* @param pointsPerSegment The total number of uniformly spaced interpolated
* points to calculate for each segment. The larger this number, the
* smoother the resulting curve.
* @param curveType Clarifies whether the curve should use uniform, chordal
* or centripetal curve types. Uniform can produce loops, chordal can
* produce large distortions from the original lines, and centripetal is an
* optimal balance without spaces.
* @return the list of coordinates that define the CatmullRom curve
* between the points defined by index+1 and index+2.
*/
public static List<Coord> interpolate(List<Coord> points, int index, int pointsPerSegment, CatmullRomType curveType) {
List<Coord> result = new ArrayList<>();
double[] x = new double[4];
double[] y = new double[4];
double[] time = new double[4];
for (int i = 0; i < 4; i++) {
x[i] = points.get(index + i).X;
y[i] = points.get(index + i).Y;
time[i] = i;
}
double tstart = 1;
double tend = 2;
if (!curveType.equals(CatmullRomType.Uniform)) {
double total = 0;
for (int i = 1; i < 4; i++) {
double dx = x[i] - x[i - 1];
double dy = y[i] - y[i - 1];
if (curveType.equals(CatmullRomType.Centripetal)) {
total += Math.pow(dx * dx + dy * dy, .25);
} else {
total += Math.pow(dx * dx + dy * dy, .5);
}
time[i] = total;
}
tstart = time[1];
tend = time[2];
}
double z1 = 0.0;
double z2 = 0.0;
if (!Double.isNaN(points.get(index + 1).Z)) {
z1 = points.get(index + 1).Z;
}
if (!Double.isNaN(points.get(index + 2).Z)) {
z2 = points.get(index + 2).Z;
}
double dz = z2 - z1;
int segments = pointsPerSegment - 1;
result.add(points.get(index + 1));
for (int i = 1; i < segments; i++) {
double xi = interpolate(x, time, tstart + (i * (tend - tstart)) / segments);
double yi = interpolate(y, time, tstart + (i * (tend - tstart)) / segments);
double zi = z1 + (dz * i) / segments;
result.add(new Coord(xi, yi, zi));
}
result.add(points.get(index + 2));
return result;
}
/**
* Unlike the other implementation here, which uses the default "uniform"
* treatment of t, this computation is used to calculate the same values but
* introduces the ability to "parameterize" the t values used in the
* calculation. This is based on Figure 3 from
* http://www.cemyuksel.com/research/catmullrom_param/catmullrom.pdf
*
* @param p An array of double values of length 4, where interpolation
* occurs from p1 to p2.
* @param time An array of time measures of length 4, corresponding to each
* p value.
* @param t the actual interpolation ratio from 0 to 1 representing the
* position between p1 and p2 to interpolate the value.
* @return
*/
public static double interpolate(double[] p, double[] time, double t) {
double L01 = p[0] * (time[1] - t) / (time[1] - time[0]) + p[1] * (t - time[0]) / (time[1] - time[0]);
double L12 = p[1] * (time[2] - t) / (time[2] - time[1]) + p[2] * (t - time[1]) / (time[2] - time[1]);
double L23 = p[2] * (time[3] - t) / (time[3] - time[2]) + p[3] * (t - time[2]) / (time[3] - time[2]);
double L012 = L01 * (time[2] - t) / (time[2] - time[0]) + L12 * (t - time[0]) / (time[2] - time[0]);
double L123 = L12 * (time[3] - t) / (time[3] - time[1]) + L23 * (t - time[1]) / (time[3] - time[1]);
double C12 = L012 * (time[2] - t) / (time[2] - time[1]) + L123 * (t - time[1]) / (time[2] - time[1]);
return C12;
}
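If you want a compact cross-check outside of Java, here is a minimal Python sketch of the same Barry–Goldman pyramid for a single segment. The function name, the (x, y) tuple representation and the alpha parameter (0 = uniform, 0.5 = centripetal, 1 = chordal) are conventions of this sketch, and the four control points are assumed to be distinct (coincident points would make a knot interval zero):
def catmull_rom_point(p0, p1, p2, p3, t, alpha=0.5):
    # p0..p3 are (x, y) tuples; t in [0, 1] interpolates between p1 and p2
    def knot(ti, pa, pb):
        # t_next = t_i + |pb - pa| ** alpha
        return ti + ((pb[0] - pa[0]) ** 2 + (pb[1] - pa[1]) ** 2) ** (alpha / 2.0)
    t0 = 0.0
    t1 = knot(t0, p0, p1)
    t2 = knot(t1, p1, p2)
    t3 = knot(t2, p2, p3)
    t = t1 + t * (t2 - t1)  # remap t from [0, 1] onto [t1, t2]
    def lerp(pa, pb, ta, tb):
        # same linear interpolation as the Java interpolate() helper above
        return tuple((tb - t) / (tb - ta) * a + (t - ta) / (tb - ta) * b
                     for a, b in zip(pa, pb))
    a1 = lerp(p0, p1, t0, t1)
    a2 = lerp(p1, p2, t1, t2)
    a3 = lerp(p2, p3, t2, t3)
    b1 = lerp(a1, a2, t0, t2)
    b2 = lerp(a2, a3, t1, t3)
    return lerp(b1, b2, t1, t2)

For example, catmull_rom_point((0, 0), (1, 0), (2, 1), (3, 1), 0.5) evaluates the centripetal segment between (1, 0) and (2, 1) at its parametric midpoint.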
| Curve | 9,489,736 | 45 |
I have been playing around with canvas recently and have been drawing several shapes (tear drops, flower petals, clouds, rocks) using methods associated with these curves. With that said, I can't seem to figure out the difference between the use cases of these different curves.
I know the cubic Bezier has 2 control points, a start point, and an end point, whereas the quadratic Bezier has a single control point, a start point, and an end point. However, when drawing shapes, I can't seem to easily decide which one to use or when to use them in conjunction.
How do I know which type of curve to use at different points of drawing a shape?
| As you've discovered, both Quadratic curves and Cubic Bezier curves just connect 2 points with a curve.
Since the Cubic curve has more control points, it is more flexible in the path it takes between those 2 points.
For example, let’s say you want to draw this letter “R”:
Start drawing with the “non-curvey” parts of the R:
Now try drawing the curve with a quadratic curve.
Notice the quadratic curve is more “pointy” than what we desire.
That’s because we only have 1 control point to define quadratic curviness.
Now try drawing the curve with a cubic bezier curve.
The cubic bezier curve is more nicely rounded than the quadratic curve.
That’s because we have 2 control points to define cubic curviness.
So...more control points gives more control over "curviness"
Here is code and a Fiddle: http://jsfiddle.net/m1erickson/JpXZW/
<!doctype html>
<html>
<head>
<link rel="stylesheet" type="text/css" media="all" href="css/reset.css" /> <!-- reset css -->
<script type="text/javascript" src="http://code.jquery.com/jquery.min.js"></script>
<style>
body{ background-color: ivory; padding:20px; }
#canvas{border:1px solid red;}
</style>
<script>
$(function(){
var canvas=document.getElementById("canvas");
var ctx=canvas.getContext("2d");
ctx.lineWidth=8;
ctx.lineCap="round";
function baseR(){
ctx.clearRect(0,0,canvas.width,canvas.height);
ctx.beginPath();
ctx.moveTo(30,200);
ctx.lineTo(30,50);
ctx.lineTo(65,50);
ctx.moveTo(30,120);
ctx.lineTo(65,120);
ctx.lineTo(100,200);
ctx.strokeStyle="black";
ctx.stroke()
}
function quadR(){
ctx.beginPath();
ctx.moveTo(65,50);
ctx.quadraticCurveTo(130,85,65,120);
ctx.strokeStyle="red";
ctx.stroke();
}
function cubicR(){
ctx.beginPath();
ctx.moveTo(65,50);
ctx.bezierCurveTo(120,50,120,120,65,120);
ctx.strokeStyle="red";
ctx.stroke();
}
$("#quad").click(function(){
baseR();
quadR();
//cubicR();
});
$("#cubic").click(function(){
baseR();
cubicR();
});
}); // end $(function(){});
</script>
</head>
<body>
<button id="quad">Use Quadratic curve</button>
<button id="cubic">Use Cubic Bezier curve</button><br><br>
<canvas id="canvas" width=150 height=225></canvas>
</body>
</html>
| Curve | 18,814,022 | 44 |
I have a file that contains 4 numbers (min, max, mean, standard derivation) and I would like to plot it with gnuplot.
Sample:
24 31 29.0909 2.57451
12 31 27.2727 5.24129
14 31 26.1818 5.04197
22 31 27.7273 3.13603
22 31 28.1818 2.88627
If I have 4 files with one column, then I can do:
plot "file1.txt" with lines, "file2.txt" with lines, "file3.txt" with lines, "file4.txt" with lines
And it will plot 4 curves. I do not care about the x-axis, it should just be a constant increment.
How do I plot them? I can't seem to find a way to have 4 curves with 1 file with 4 columns, just having a constantly incrementing x value.
| You can plot different columns of the same file like this:
plot 'file' using 0:1 with lines, '' using 0:2 with lines ...
(... means continuation). A couple of notes on this notation: using specifies which column to plot i.e. column 0 and 1 in the first using statement, the 0th column is a pseudo column that translates to the current line number in the data file. Note that if only one argument is used with using (e.g. using n) it corresponds to saying using 0:n (thanks for pointing that out mgilson).
If your Gnuplot version is recent enough, you would be able to plot all 4 columns with a for-loop:
set key outside
plot for [col=1:4] 'file' using 0:col with lines
Result:
Gnuplot can use column headings for the title if they are in the data file, e.g.:
min max mean std
24 31 29.0909 2.57451
12 31 27.2727 5.24129
14 31 26.1818 5.04197
22 31 27.7273 3.13603
22 31 28.1818 2.88627
and
set key outside
plot for [col=1:4] 'file' using 0:col with lines title columnheader
Results in:
| Curve | 16,073,232 | 40 |
I have three points in 2D and I want to draw a quadratic Bézier curve passing through them. How do I calculate the middle control point (x1 and y1 as in quadTo)? I know linear algebra from college but need some simple help on this.
How can I calculate the middle control point so that the curve passes through it as well?
| Let P0, P1, P2 be the control points, and Pc be your fixed point you want the curve to pass through.
Then the Bezier curve is defined by
P(t) = P0*(1-t)^2 + P1*2*t*(1-t) + P2*t^2
...where t goes from zero to 1.
There are an infinite number of answers to your question, since it might pass through your point for any value of t... So just pick one, like t=0.5, and solve for P1:
Pc = P0*.25 + P1*2*.25 + P2*.25
P1 = (Pc - P0*.25 - P2*.25)/.5
= 2*Pc - P0/2 - P2/2
There the "P" values are (x,y) pairs, so just apply the equation once for x and once for y:
x1 = 2*xc - x0/2 - x2/2
y1 = 2*yc - y0/2 - y2/2
...where (xc,yc) is the point you want it to pass through, (x0,y0) is the start point, and (x2,y2) is the end point. This will give you a Bezier that passes through (xc,yc) at t=0.5.
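As a small sanity check, here are the same equations in Python (points as (x, y) tuples); this assumes the fixed point should be hit at t = 0.5, as derived above:
def control_point_through(p0, pc, p2):
    # solve for the control point p1 so that B(0.5) == pc
    x1 = 2 * pc[0] - p0[0] / 2.0 - p2[0] / 2.0
    y1 = 2 * pc[1] - p0[1] / 2.0 - p2[1] / 2.0
    return (x1, y1)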
| Curve | 6,711,707 | 39 |
I have been using scipy.optimize.leastsq to fit some data. I would like to get some confidence intervals on these estimates so I look into the cov_x output but the documentation is very unclear as to what this is and how to get the covariance matrix for my parameters from this.
First of all it says that it is a Jacobian, but in the notes it also says that "cov_x is a Jacobian approximation to the Hessian" so that it is not actually a Jacobian but a Hessian using some approximation from the Jacobian. Which of these statements is correct?
Secondly this sentence to me is confusing:
This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates – see curve_fit.
I indeed go look at the source code for curve_fit where they do:
s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
pcov = pcov * s_sq
which corresponds to multiplying cov_x by s_sq but I cannot find this equation in any reference. Can someone explain why this equation is correct?
My intuition tells me that it should be the other way around since cov_x is supposed to be a derivative (Jacobian or Hessian) so I was thinking:
cov_x * covariance(parameters) = sum of errors(residuals) where sigma(parameters) is the thing I want.
How do I connect the thing curve_fit is doing with what I see at eg. wikipedia:
http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Non-linear_combinations
| OK, I think I found the answer. First the solution:
cov_x*s_sq is simply the covariance of the parameters which is what you want. Taking sqrt of the diagonal elements will give you standard deviation (but be careful about covariances!).
Residual variance = reduced chi square = s_sq = sum[(f(x)-y)^2]/(N-n), where N is number of data points and n is the number of fitting parameters. Reduced chi square.
The reason for my confusion is that cov_x as given by leastsq is not actually what is called cov(x) in other places; rather, it is the reduced cov(x) or fractional cov(x). The reason it does not show up in any of the other references is that it is a simple rescaling which is useful in numerical computations, but is not relevant for a textbook.
About Hessian versus Jacobian, the documentation is poorly worded. It is the Hessian that is calculated in both cases as is obvious since the Jacobian is zero at a minimum. What they mean is that they are using an approximation to the Jacobian to find the Hessian.
A further note: it seems that the curve_fit result does not actually account for the absolute size of the errors, but only takes into account the relative size of the sigmas provided. This means that the pcov returned doesn't change even if the error bars change by a factor of a million. This is of course not right, but seems to be standard practice; e.g. Matlab does the same thing when using their Curve Fitting Toolbox. The correct procedure is described here: https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Parameter_errors_and_correlation
It seems fairly straightforward to do this once the optimum has been found, at least for Linear Least squares.
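As a concrete illustration, here is a minimal, self-contained sketch using curve_fit with absolute_sigma=True (available if your SciPy is 0.14 or newer), which skips the residual-variance rescaling described above so pcov keeps the absolute error scale; the exponential model and the noise level are made up for the example:
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    return a * np.exp(-b * x)

xdata = np.linspace(0, 4, 50)
ydata = f(xdata, 2.5, 1.3) + 0.05 * np.random.randn(xdata.size)
sigma = np.full(xdata.size, 0.05)  # known absolute errors

# absolute_sigma=True stops curve_fit from multiplying pcov by the
# reduced chi square, so the covariance keeps its absolute meaning
popt, pcov = curve_fit(f, xdata, ydata, sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # one-sigma parameter uncertainties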
| Curve | 14,854,339 | 37 |
One example curve is shown below. The elbow point might be at x=3 or x=4.
How to compute the elbow for a curve automatically and mathematically?
| I created a Python package that attempts to implement the Kneedle algorithm.
To recreate the function above and detect the point of maximum curvature:
x = range(1,21)
y = [0.065, 0.039, 0.030, 0.024, 0.023, 0.022, 0.019, 0.0185, 0.0187,
0.016, 0.015, 0.016, 0.0135, 0.0130, 0.0125, 0.0120, 0.0117, 0.0115, 0.0112, 0.013]
kn = KneeLocator(
x,
y,
curve='convex',
direction='decreasing',
interp_method='interp1d',
)
print(kn.knee)
7
import matplotlib.pyplot as plt
plt.xlabel('x')
plt.ylabel('f(x)')
plt.xticks(range(1,21))
plt.plot(x, y, 'bx-')
plt.vlines(kn.knee, plt.ylim()[0], plt.ylim()[1], linestyles='dashed')
Update:
Kneed has an improved spline fitting method for handling local minima, use interp_method='polynomial'.
kn = KneeLocator(
x,
y,
curve='convex',
direction='decreasing',
interp_method='polynomial',
)
print(kn.knee)
4
And the new plot:
plt.xlabel('x')
plt.ylabel('f(x)')
plt.xticks(range(1,21))
plt.plot(x, y, 'bx-')
plt.vlines(kn.knee, plt.ylim()[0], plt.ylim()[1], linestyles='dashed')
| Curve | 4,471,993 | 32 |
I want to create a rainbow circle, like the picture below:
How can I draw the curved and multiple color stop gradient?
Here's my current code:
<svg width="500" height="500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<defs>
<linearGradient id="test">
<stop offset="0%" stop-color="#f00"/>
<stop offset="100%" stop-color="#0ff"/>
</linearGradient>
</defs>
<circle cx="50" cy="50" r="40" fill="none" stroke="url(#test)" stroke-width="6"/>
</svg>
| This approach won't work. SVG doesn't have conical gradients. To simulate the effect, you would have to fake it with a large number of small line segments. Or some similar technique.
Update:
Here is an example. I approximate the 360deg of hue with six paths. Each path contains an arc which covers 60deg of the circle. I use a linear gradient to interpolate the colour from the start to the end of each path. It's not perfect (you can see some discontinuities where the colours meet) but it would possibly fool most people. You could increase the accuracy by using more than six segments.
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="100%" height="100%" viewBox="-10 -10 220 220">
<defs>
<linearGradient id="redyel" gradientUnits="objectBoundingBox" x1="0" y1="0" x2="1" y2="1">
<stop offset="0%" stop-color="#ff0000"/>
<stop offset="100%" stop-color="#ffff00"/>
</linearGradient>
<linearGradient id="yelgre" gradientUnits="objectBoundingBox" x1="0" y1="0" x2="0" y2="1">
<stop offset="0%" stop-color="#ffff00"/>
<stop offset="100%" stop-color="#00ff00"/>
</linearGradient>
<linearGradient id="grecya" gradientUnits="objectBoundingBox" x1="1" y1="0" x2="0" y2="1">
<stop offset="0%" stop-color="#00ff00"/>
<stop offset="100%" stop-color="#00ffff"/>
</linearGradient>
<linearGradient id="cyablu" gradientUnits="objectBoundingBox" x1="1" y1="1" x2="0" y2="0">
<stop offset="0%" stop-color="#00ffff"/>
<stop offset="100%" stop-color="#0000ff"/>
</linearGradient>
<linearGradient id="blumag" gradientUnits="objectBoundingBox" x1="0" y1="1" x2="0" y2="0">
<stop offset="0%" stop-color="#0000ff"/>
<stop offset="100%" stop-color="#ff00ff"/>
</linearGradient>
<linearGradient id="magred" gradientUnits="objectBoundingBox" x1="0" y1="1" x2="1" y2="0">
<stop offset="0%" stop-color="#ff00ff"/>
<stop offset="100%" stop-color="#ff0000"/>
</linearGradient>
</defs>
<g fill="none" stroke-width="15" transform="translate(100,100)">
<path d="M 0,-100 A 100,100 0 0,1 86.6,-50" stroke="url(#redyel)"/>
<path d="M 86.6,-50 A 100,100 0 0,1 86.6,50" stroke="url(#yelgre)"/>
<path d="M 86.6,50 A 100,100 0 0,1 0,100" stroke="url(#grecya)"/>
<path d="M 0,100 A 100,100 0 0,1 -86.6,50" stroke="url(#cyablu)"/>
<path d="M -86.6,50 A 100,100 0 0,1 -86.6,-50" stroke="url(#blumag)"/>
<path d="M -86.6,-50 A 100,100 0 0,1 0,-100" stroke="url(#magred)"/>
</g>
</svg>
Fiddle here: http://jsfiddle.net/Weytu/
Update 2:
For those that want more than six segments, here is some javascript that will produce a wheel with any number of segments that you wish.
function makeColourWheel(numSegments)
{
if (numSegments <= 0)
numSegments = 6;
if (numSegments > 360)
numSegments = 360;
var svgns = xmlns="http://www.w3.org/2000/svg";
var svg = document.getElementById("colourwheel");
var defs = svg.getElementById("defs");
var paths = svg.getElementById("paths");
var radius = 100;
var stepAngle = 2 * Math.PI / numSegments;
var lastX = 0;
var lastY = -radius;
var lastAngle = 0;
for (var i=1; i<=numSegments; i++)
{
var angle = i * stepAngle;
// Calculate this arc end point
var x = radius * Math.sin(angle);
var y = -radius * Math.cos(angle);
// Create a path element
var arc = document.createElementNS(svgns, "path");
arc.setAttribute("d", "M " + lastX.toFixed(3) + "," + lastY.toFixed(3)
+ " A 100,100 0 0,1 " + x.toFixed(3) + "," + y.toFixed(3));
arc.setAttribute("stroke", "url(#wheelseg" + i + ")");
// Append it to our SVG
paths.appendChild(arc);
// Create a gradient for this segment
var grad = document.createElementNS(svgns, "linearGradient");
grad.setAttribute("id", "wheelseg"+i);
grad.setAttribute("gradientUnits", "userSpaceOnUse");
grad.setAttribute("x1", lastX.toFixed(3));
grad.setAttribute("y1", lastY.toFixed(3));
grad.setAttribute("x2", x.toFixed(3));
grad.setAttribute("y2", y.toFixed(3));
// Make the 0% stop for this gradient
var stop = document.createElementNS(svgns, "stop");
stop.setAttribute("offset", "0%");
hue = Math.round(lastAngle * 360 / Math.PI / 2);
stop.setAttribute("stop-color", "hsl(" + hue + ",100%,50%)");
grad.appendChild(stop);
// Make the 100% stop for this gradient
stop = document.createElementNS(svgns, "stop");
stop.setAttribute("offset", "100%");
hue = Math.round(angle * 360 / Math.PI / 2);
stop.setAttribute("stop-color", "hsl(" + hue + ",100%,50%)");
grad.appendChild(stop);
// Add the gradient to the SVG
defs.appendChild(grad);
// Update lastx/y
lastX = x;
lastY = y;
lastAngle = angle;
}
}
makeColourWheel(60);
<svg id="colourwheel" xmlns="http://www.w3.org/2000/svg" version="1.1" width="100%" height="100%" viewBox="-10 -10 220 220">
<defs id="defs">
</defs>
<g id="paths" fill="none" stroke-width="15" transform="translate(100,100)">
</g>
</svg>
| Curve | 18,206,361 | 29 |
I have a data frame created with this code:
require(reshape2)
foo <- data.frame( abs( cbind(rnorm(3),rnorm(3, mean=.8),rnorm(3, mean=.9),rnorm(3, mean=1))))
qux <- data.frame( abs( cbind(rnorm(3),rnorm(3, mean=.3),rnorm(3, mean=.4),rnorm(1, mean=2))))
bar <- data.frame( abs( cbind(rnorm(3,mean=.4),rnorm(3, mean=.3),rnorm(3, mean=.9),rnorm(3, mean=1))))
colnames(foo) <- c("w","x","y","z")
colnames(qux) <- c("w","x","y","z")
colnames(bar) <- c("w","x","y","z")
rownames(foo) <- c("n","q","r")
rownames(qux) <- c("n","q","r")
rownames(bar) <- c("n","q","r")
foo <- cbind(ID=rownames(foo),foo)
bar <- cbind(ID=rownames(bar),bar)
qux <- cbind(ID=rownames(qux),qux)
foo$fn <- "foo"
qux$fn <- "qux"
bar$fn <- "bar"
alldf<-rbind(foo,qux,bar)
alldf.m <- melt(alldf)
What I want to do is to create a ggplot line curve in facet format, so it creates a graph like this:
The actual graph does not contain upward lines - this is just a sketch so that the line separation is clear.
My current code doesn't work:
library(ggplot2)
p <- ggplot(data=alldf.m, aes(x=variable)) +
geom_line(aes(colour=ID),alpha=0.4)
p <- p + facet_wrap( ~ fn)
p
What's the best way to do it?
| Try this:
ggplot(data=alldf.m, aes(x=variable, y = value, colour = ID, group = ID)) +
geom_line() + facet_wrap(~fn)
| Curve | 14,640,872 | 26 |
I have a list of points that make a curve, and I would like to reduce the number of points, but still keep the overall shape of the curve.
Basically, I want to go from this:
To this:
So the algorithm would remove the points that are redundant but preserve those that really define the shape (like the points at the bottom of the curve). Is there any known algorithm to do that? I expect there is but I'm not sure what to search for on Google. Any help would be appreciated.
Consider the Douglas–Peucker algorithm.
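For completeness, here is a minimal, unoptimized Python sketch of that algorithm; the function names and the (x, y) tuple representation are choices of this sketch:
import math

def point_line_distance(pt, start, end):
    # perpendicular distance from pt to the line through start and end
    if start == end:
        return math.hypot(pt[0] - start[0], pt[1] - start[1])
    (x1, y1), (x2, y2) = start, end
    num = abs((y2 - y1) * pt[0] - (x2 - x1) * pt[1] + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def douglas_peucker(points, epsilon):
    # find the point farthest from the chord between the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # keep that point and recurse on both halves
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# e.g. simplified = douglas_peucker(list_of_xy_tuples, epsilon=1.0)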
| Curve | 7,980,586 | 24 |
I want to plot individual data points with error bars on a plot, but I don't want to have the curve. How can I do this? Is there some 'invisible' line style, or can I set the line style to colourless (but the marker still has to be visible)?
So this is the graph I have right now:
plt.errorbar(x5,y5,yerr=error5, fmt='o')
plt.errorbar(x3,y3,yerr=error3, fmt='o')
plt.plot(x3_true,y3_true, 'r--', label=(r'$\lambda = 0.3$'))
plt.plot(x5_true, y5_true, 'b--', label=(r'$\lambda = 0.5$'))
plt.plot(x5,y5, linestyle=':', marker='o', color='red') #this is the 'ideal' curve that I want to add
plt.plot(x3,y3, linestyle=':', marker='o', color='red')
I want to keep the two dashed curve but I don't want the two dotted curve. How can I do this? And how can I change the color of the markers so I can have red points for the red curve, blue points for the blue curve?
| You can use scatter:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 2*np.pi, 10)
y = np.sin(x)
plt.scatter(x, y)
plt.show()
Alternatively:
plt.plot(x, y, 's')
EDIT: If you want error bars you can do:
plt.errorbar(x, y, yerr=err, fmt='o')
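The question also asks about matching the marker colours to the curves; errorbar accepts a color argument for that (a minimal sketch reusing the variable names from the question):
plt.errorbar(x5, y5, yerr=error5, fmt='o', color='blue')
plt.errorbar(x3, y3, yerr=error3, fmt='o', color='red')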
| Curve | 27,773,057 | 22 |
Suppose I want to plot x^2. I can use curve() as follows.
curve(x^2, -5, 5)
However, I would like the axes to go through (0, 0). I could do something as follows:
curve(x^2, -5, 5, axes=FALSE)
axis(1, pos=0)
axis(2, pos=0)
abline(h=0)
abline(v=0)
And I end up getting something like below, which looks OK. But the only gripe I have is that this way of plotting axes makes the actual axes - for example the segment between -4 and 4 of the x-axis - thicker than the segments to the right side and the left side. The same goes with the y axis. I wonder if there is a better way of plotting the axes. Thank you!
| By default, axis() computes automatically the tick marks position, but you can define them manually with the at argument. So a workaround could be something like :
curve(x^2, -5, 5, axes=FALSE)
axis(1, pos=0, at=-5:5)
axis(2, pos=0)
Which gives :
The problem is that you have to manually determine the position of each tick mark. A slightly better solution would be to compute them with the axTicks function (the one used by default) but calling this one with a custom axp argument which allows you to specify respectively the minimum, maximum and number of intervals for the ticks in the axis :
curve(x^2, -5, 5, axes=FALSE)
axis(1, pos=0, at=axTicks(1,axp=c(-10,10,10)))
axis(2, pos=0)
Which gives :
| Curve | 14,539,785 | 18 |
I'm trying to draw a curve in canvas with a linear gradient stroke style along the curve, as in this image. On that page there is a linked svg file that gives instructions on how to accomplish the effect in svg. Maybe a similar method would be possible in canvas?
| A Demo: http://jsfiddle.net/m1erickson/4fX5D/
It's fairly easy to create a gradient that changes along the path:
It's more difficult to create a gradient that changes across the path:
To create a gradient across the path you draw many gradient lines tangent to the path:
If you draw enough tangent lines then the eye sees the curve as a gradient across the path.
Note: Jaggies can occur on the outsides of the path-gradient. That's because the gradient is really made up of hundreds of tangent lines. But you can smooth out the jaggies by drawing a line on either side of the gradient using the appropriate colors (here the anti-jaggy lines are red on the top side and purple on the bottom side).
Here are the steps to creating a gradient across the path:
Plot hundreds of points along the path.
Calculate the angle of the path at those points.
At each point, create a linear gradient and draw a gradient stroked line across the tangent of that point. Yes, you will have to create a new gradient for each point because the linear gradient must match the angle of the line tangent to that point.
To reduce the jaggy effect caused by drawing many individual lines, you can draw a smooth path along the top and bottom side of the gradient path to overwrite the jaggies.
Here is annotated code:
<!doctype html>
<html>
<head>
<link rel="stylesheet" type="text/css" media="all" href="css/reset.css" /> <!-- reset css -->
<script type="text/javascript" src="http://code.jquery.com/jquery.min.js"></script>
<style>
body{ background-color: ivory; }
#canvas{border:1px solid red;}
</style>
<script>
$(function(){
// canvas related variables
var canvas=document.getElementById("canvas");
var ctx=canvas.getContext("2d");
// variables defining a cubic bezier curve
var PI2=Math.PI*2;
var s={x:20,y:30};
var c1={x:200,y:40};
var c2={x:40,y:200};
var e={x:270,y:220};
// an array of points plotted along the bezier curve
var points=[];
// we use PI often so put it in a variable
var PI=Math.PI;
// plot 400 points along the curve
// and also calculate the angle of the curve at that point
for(var t=0;t<=100;t+=0.25){
var T=t/100;
// plot a point on the curve
var pos=getCubicBezierXYatT(s,c1,c2,e,T);
// calculate the tangent angle of the curve at that point
var tx = bezierTangent(s.x,c1.x,c2.x,e.x,T);
var ty = bezierTangent(s.y,c1.y,c2.y,e.y,T);
var a = Math.atan2(ty, tx)-PI/2;
// save the x/y position of the point and the tangent angle
// in the points array
points.push({
x:pos.x,
y:pos.y,
angle:a
});
}
// Note: increase the lineWidth if
// the gradient has noticable gaps
ctx.lineWidth=2;
// draw a gradient-stroked line tangent to each point on the curve
for(var i=0;i<points.length;i++){
// calc the topside and bottomside points of the tangent line
var offX1=points[i].x+20*Math.cos(points[i].angle);
var offY1=points[i].y+20*Math.sin(points[i].angle);
var offX2=points[i].x+20*Math.cos(points[i].angle-PI);
var offY2=points[i].y+20*Math.sin(points[i].angle-PI);
// create a gradient stretching between
// the calculated top & bottom points
var gradient=ctx.createLinearGradient(offX1,offY1,offX2,offY2);
gradient.addColorStop(0.00, 'red');
gradient.addColorStop(1/6, 'orange');
gradient.addColorStop(2/6, 'yellow');
gradient.addColorStop(3/6, 'green')
gradient.addColorStop(4/6, 'aqua');
gradient.addColorStop(5/6, 'blue');
gradient.addColorStop(1.00, 'purple');
// draw the gradient-stroked line at this point
ctx.strokeStyle=gradient;
ctx.beginPath();
ctx.moveTo(offX1,offY1);
ctx.lineTo(offX2,offY2);
ctx.stroke();
}
// draw a top stroke to cover jaggies
// on the top of the gradient curve
var offX1=points[0].x+20*Math.cos(points[0].angle);
var offY1=points[0].y+20*Math.sin(points[0].angle);
ctx.strokeStyle="red";
// Note: increase the lineWidth if this outside of the
// gradient still has jaggies
ctx.lineWidth=1.5;
ctx.beginPath();
ctx.moveTo(offX1,offY1);
for(var i=1;i<points.length;i++){
var offX1=points[i].x+20*Math.cos(points[i].angle);
var offY1=points[i].y+20*Math.sin(points[i].angle);
ctx.lineTo(offX1,offY1);
}
ctx.stroke();
// draw a bottom stroke to cover jaggies
// on the bottom of the gradient
var offX2=points[0].x+20*Math.cos(points[0].angle+PI);
var offY2=points[0].y+20*Math.sin(points[0].angle+PI);
ctx.strokeStyle="purple";
// Note: increase the lineWidth if this outside of the
// gradient still has jaggies
ctx.lineWidth=1.5;
ctx.beginPath();
ctx.moveTo(offX2,offY2);
for(var i=0;i<points.length;i++){
var offX2=points[i].x+20*Math.cos(points[i].angle+PI);
var offY2=points[i].y+20*Math.sin(points[i].angle+PI);
ctx.lineTo(offX2,offY2);
}
ctx.stroke();
//////////////////////////////////////////
// helper functions
//////////////////////////////////////////
// calculate one XY point along Cubic Bezier at interval T
// (where T==0.00 at the start of the curve and T==1.00 at the end)
function getCubicBezierXYatT(startPt,controlPt1,controlPt2,endPt,T){
var x=CubicN(T,startPt.x,controlPt1.x,controlPt2.x,endPt.x);
var y=CubicN(T,startPt.y,controlPt1.y,controlPt2.y,endPt.y);
return({x:x,y:y});
}
// cubic helper formula at T distance
function CubicN(T, a,b,c,d) {
var t2 = T * T;
var t3 = t2 * T;
return a + (-a * 3 + T * (3 * a - a * T)) * T
+ (3 * b + T * (-6 * b + b * 3 * T)) * T
+ (c * 3 - c * 3 * T) * t2
+ d * t3;
}
// calculate the tangent angle at interval T on the curve
function bezierTangent(a, b, c, d, t) {
return (3 * t * t * (-a + 3 * b - 3 * c + d) + 6 * t * (a - 2 * b + c) + 3 * (-a + b));
};
}); // end $(function(){});
</script>
</head>
<body>
<canvas id="canvas" width=300 height=300></canvas>
</body>
</html>
| Curve | 24,027,087 | 16 |
Ok pretty self explanatory. I'm using google maps and I'm trying to find out if a lat,long point is within a circle of radius say x (x is chosen by the user).
Bounding box will not work for this. I have already tried using the following code:
distlatLng = new google.maps.LatLng(dist.latlng[0],dist.latlng[1]);
var latLngBounds = circle.getBounds();
if(latLngBounds.contains(distlatLng)){
dropPins(distlatLng,dist.f_addr);
}
This still results in markers being placed outside the circle.
I'm guessing this is some simple maths requiring the calculation of the curvature or an area, but I'm not sure where to begin. Any suggestions?
| Unfortunately Pythagoras is no help on a sphere. Thus Stuart Beard's answer is incorrect; longitude differences don't have a fixed ratio to metres but depend on the latitude.
The correct way is to use the formula for great circle distances. A good approximation, assuming a spherical earth, is this (in C++):
/** Find the great-circle distance in metres, assuming a spherical earth, between two lat-long points in degrees. */
inline double GreatCircleDistanceInMeters(double aLong1,double aLat1,double aLong2,double aLat2)
{
aLong1 *= KDegreesToRadiansDouble;
aLat1 *= KDegreesToRadiansDouble;
aLong2 *= KDegreesToRadiansDouble;
aLat2 *= KDegreesToRadiansDouble;
double cos_angle = sin(aLat1) * sin(aLat2) + cos(aLat1) * cos(aLat2) * cos(aLong2 - aLong1);
/*
Inaccurate trig functions can cause cos_angle to be a tiny amount
greater than 1 if the two positions are very close. That in turn causes
acos to give a domain error and return the special floating point value
-1.#IND000000000000, meaning 'indefinite'. Observed on VS2008 on 64-bit Windows.
*/
if (cos_angle >= 1)
return 0;
double angle = acos(cos_angle);
return angle * KEquatorialRadiusInMetres;
}
where
const double KPiDouble = 3.141592654;
const double KDegreesToRadiansDouble = KPiDouble / 180.0;
and
/**
A constant to convert radians to metres for the Mercator and other projections.
It is the semi-major axis (equatorial radius) used by the WGS 84 datum (see http://en.wikipedia.org/wiki/WGS84).
*/
const int32 KEquatorialRadiusInMetres = 6378137;
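For the original question (testing whether a marker lies inside a circle of radius x), the same great-circle formula ports easily; a minimal sketch in Python, where the coordinates and radius are just placeholders:
import math

EARTH_RADIUS_M = 6378137  # WGS 84 equatorial radius, as above

def great_circle_m(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    cos_angle = (math.sin(lat1) * math.sin(lat2)
                 + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    cos_angle = min(cos_angle, 1.0)  # guard against rounding pushing it past 1
    return math.acos(cos_angle) * EARTH_RADIUS_M

def in_circle(marker, center, radius_m):
    # marker and center are (lon, lat) pairs in degrees
    return great_circle_m(marker[0], marker[1], center[0], center[1]) <= radius_m

print(in_circle((-0.1278, 51.5074), (-0.1426, 51.5007), 2000))  # points ~1.3 km apart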
| Curve | 4,463,907 | 15 |
I've managed to implement quadratic and cubic Bezier curves. They are pretty straightforward since we have a formula. Now I want to represent an n-th order Bezier curve using the generalization:
B(t) = sum for j = 0..n of binomialCoef(n, j) * (1 - t)^(n - j) * t^j * P_j
where binomialCoef(n, j) = n! / (j! * (n - j)!) and the P_j are the control points.
I'm using a bitmap library to render the output, so here is my code:
// binomialCoef(n, k) = (factorial(n) / (factorial(k) * factorial(n- k)))
unsigned int binomialCoef(unsigned int n, const unsigned int k)
{
unsigned int r = 1;
if(k > n)
return 0;
for(unsigned int d = 1; d <= k; d++)
{
r *= n--;
r /= d;
}
return r;
}
void nBezierCurve(Bitmap* obj, const Point* p, const unsigned int nbPoint, float steps, const unsigned char red, const unsigned char green, const unsigned char blue)
{
int bx1 = p[0].x;
int by1 = p[0].y;
int bx2;
int by2;
steps = 1 / steps;
for(float i = 0; i < 1; i += steps)
{
bx2 = by2 = 0;
for(int j = 0; (unsigned int)j < nbPoint; j++)
{
bx2 += (int)(binomialCoef(nbPoint, j) * pow(1 - i, (float)nbPoint - j) * pow(i, j) * p[j].x);
by2 += (int)(binomialCoef(nbPoint, j) * pow(1 - i, (float)nbPoint - j) * pow(i, j) * p[j].y);
}
bresenhamLine(obj, bx1, by1, bx2, by2, red, green, blue);
bx1 = bx2;
by1 = by2;
}
// curve must end on the last anchor point
bresenhamLine(obj, bx1, by1, p[nbPoint - 1].x, p[nbPoint - 1].y, red, green, blue);
}
Here's the set of points to render:
Point ncurv[] = {
20, 200,
70, 300,
200, 400,
250, 200
};
and here's the output:
The red curve is a cubic Bezier. The blue one is supposed to be the 4th order Bezier, which is the same as cubic Bezier, but in this case, they are not the same ?!
EDIT :
I forgot to note that the bottom left point is (0, 0)
| The sum in your formula runs from 0 to n, i.e. for an n-th order bezier you need n+1 points.
You have 4 points, so you're drawing a 3rd-order bezier.
The error in your code is here:
for(int j = 0; (unsigned int)j < nbPoint; j++)
it should be:
for(int j = 0; (unsigned int)j <= nbPoint; j++)
otherwise you're only iterating from 0 to n-1.
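To make the inclusive bound concrete, here is a minimal sketch of the general evaluation in Python (3.8+ for math.comb), with points as (x, y) tuples:
from math import comb

def bezier_point(points, t):
    n = len(points) - 1               # degree; n+1 control points
    x = y = 0.0
    for j in range(n + 1):            # note: inclusive of n, unlike the buggy loop
        w = comb(n, j) * (1 - t) ** (n - j) * t ** j
        x += w * points[j][0]
        y += w * points[j][1]
    return x, y

ncurv = [(20, 200), (70, 300), (200, 400), (250, 200)]
print(bezier_point(ncurv, 0.5))       # a point on the cubic defined by the 4 points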
EDIT:
Out of interest, the shape you were getting is the same as if the missing (5th) point was at (0,0), since that's the only point that would contribute nothing to your sum...
| Curve | 15,599,766 | 14 |
I need to draw a symmetrically curved line between the centers of two circles.
<svg>
<circle class="spot" id="au" cx="1680" cy="700" r="0"></circle>
<circle class="spot" id="sl" cx="1425" cy="525" r="0"></circle>
<line id="line1" stroke-width="2" stroke="red"/>
</svg>
This is the code I wrote so far. < line > element should be replaced with a curved path.
function drawNow() {
let point1X = document.getElementById("au").getAttribute("cx");
let point1Y = document.getElementById("au").getAttribute("cy");
let point2X = document.getElementById("sl").getAttribute("cx");
let point2Y = document.getElementById("sl").getAttribute("cy");
let line1 = document.getElementById("line1");
line1.setAttribute("x1", point1X);
line1.setAttribute("y1", point1Y);
line1.setAttribute("x2", point2X);
line1.setAttribute("y2", point2Y);
}
| An SVG quadratic curve will probably suffice. To draw it, you need the end points (which you have) and a control point which will determine the curve.
To make a symmetrical curve, the control point needs to be on the perpendicular bisector of the line between the end points. A little maths will find it.
So, starting from the two end points, you can get to a symmetric curve between them with the code in the snippet below.
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title></title>
<style>
svg { background-color: bisque; }
.spot { fill: blue; }
.spot2 { fill: red; }
</style>
<script>
function x() {
var p1x = parseFloat(document.getElementById("au").getAttribute("cx"));
var p1y = parseFloat(document.getElementById("au").getAttribute("cy"));
var p2x = parseFloat(document.getElementById("sl").getAttribute("cx"));
var p2y = parseFloat(document.getElementById("sl").getAttribute("cy"));
// mid-point of line:
var mpx = (p2x + p1x) * 0.5;
var mpy = (p2y + p1y) * 0.5;
// angle of perpendicular to line:
var theta = Math.atan2(p2y - p1y, p2x - p1x) - Math.PI / 2;
// distance of control point from mid-point of line:
var offset = 30;
// location of control point:
var c1x = mpx + offset * Math.cos(theta);
var c1y = mpy + offset * Math.sin(theta);
// show where the control point is:
var c1 = document.getElementById("cp");
c1.setAttribute("cx", c1x);
c1.setAttribute("cy", c1y);
// construct the command to draw a quadratic curve
var curve = "M" + p1x + " " + p1y + " Q " + c1x + " " + c1y + " " + p2x + " " + p2y;
var curveElement = document.getElementById("curve");
curveElement.setAttribute("d", curve);
}
</script>
</head>
<body>
<svg width="240" height="160">
<circle id="au" class="spot" cx="200" cy="50" r="4"></circle>
<circle id="sl" class="spot" cx="100" cy="100" r="4"></circle>
<circle id="cp" class="spot2" cx="0" cy="0" r="4"></circle>
<path id="curve" d="M0 0" stroke="green" stroke-width="4" stroke-linecap="round" fill="transparent"></path>
</svg>
<button type="button" onclick="x();">Click</button>
</body>
</html>
If you want the curve to go the other way, change the sign of offset.
If you are using ES6-compliant browsers, you can use string interpolation for slightly tidier code:
var curve = `M${p1x} ${p1y} Q${c1x} ${c1y} ${p2x} ${p2y}`;
There is no requirement for the control point to be shown - that's just so you can see where it is and illustrate that the curve doesn't go through it.
Note: an alternative to using atan2 is to calculate the negative reciprocal of the gradient of the line between the points, but that is fiddly for the case where the gradient is zero and may produce wildly inaccurate results when the gradient is close to zero.
| Curve | 49,274,176 | 14 |
I'm using OpenCV (Canny + findCountours) to find external contours of objects. The curve drawn is typically almost, but not entirely, closed. I'd like to close it - to find the region it bounds.
How do I do this?
Things considered:
Dilation - the examples I've seen show this after Canny, although it would seem to me it makes more sense to do this after findContours
Convex hull - might work, though I'm really trying to complete a curve
Shape simplification - related, but not exactly what I want
| Use the polylines method to draw the contours closed:
cv2.polylines(img, [points], isClosed=True, color=255, thickness=1, lineType=8, shift=0)
Read the docs for further details: http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html
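For context, a minimal sketch of the whole flow in Python (assuming OpenCV 4, where findContours returns two values; the file name is a placeholder):
import cv2

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # your Canny output
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

closed = edges.copy()
cv2.polylines(closed, contours, isClosed=True, color=255, thickness=1)
# if you want the bounded region itself rather than just a closed outline:
region = edges.copy()
cv2.fillPoly(region, contours, color=255)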
Mark answered if it resolved your problem. If not then let me know.
| Curve | 21,469,409 | 13 |
I'm creating a graph in JavaFX which is supposed to be connected by directed edges. Best would be a bicubic curve. Does anyone know how to do add the arrow heads?
The arrow heads should of course be rotated depending on the end of the curve.
Here's a simple example without the arrows:
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.shape.CubicCurve;
import javafx.scene.shape.Rectangle;
import javafx.stage.Stage;
public class BasicConnection extends Application {
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage primaryStage) {
Group root = new Group();
// bending curve
Rectangle srcRect1 = new Rectangle(100,100,50,50);
Rectangle dstRect1 = new Rectangle(300,300,50,50);
CubicCurve curve1 = new CubicCurve( 125, 150, 125, 200, 325, 200, 325, 300);
curve1.setStroke(Color.BLACK);
curve1.setStrokeWidth(1);
curve1.setFill( null);
root.getChildren().addAll( srcRect1, dstRect1, curve1);
// steep curve
Rectangle srcRect2 = new Rectangle(100,400,50,50);
Rectangle dstRect2 = new Rectangle(200,500,50,50);
CubicCurve curve2 = new CubicCurve( 125, 450, 125, 450, 225, 500, 225, 500);
curve2.setStroke(Color.BLACK);
curve2.setStrokeWidth(1);
curve2.setFill( null);
root.getChildren().addAll( srcRect2, dstRect2, curve2);
primaryStage.setScene(new Scene(root, 800, 600));
primaryStage.show();
}
}
What's the best practice? Should I create a custom control or add 2 arrow controls per curve and rotate them (seems overkill to me)? Or is there a better solution?
Or does anyone know how to calculate the angle at which the cubic curve ends? I tried creating a simple small arrow and put it at the end of the curve, but it doesn't look nice if you don't rotate it slightly.
Thank you very much!
edit: Here's a solution in which I applied José's mechanism to jewelsea's cubic curve manipulator (CubicCurve JavaFX) in case someone nees it:
import java.util.ArrayList;
import java.util.List;
import javafx.application.Application;
import javafx.beans.property.DoubleProperty;
import javafx.event.EventHandler;
import javafx.geometry.Point2D;
import javafx.scene.Cursor;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.input.MouseEvent;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.scene.shape.CubicCurve;
import javafx.scene.shape.Line;
import javafx.scene.shape.Polygon;
import javafx.scene.shape.StrokeLineCap;
import javafx.scene.shape.StrokeType;
import javafx.scene.transform.Rotate;
import javafx.stage.Stage;
/**
* Example of how a cubic curve works, drag the anchors around to change the curve.
* Extended with arrows with the help of José Pereda: https://stackoverflow.com/questions/26702519/javafx-line-curve-with-arrow-head
* Original code by jewelsea: https://stackoverflow.com/questions/13056795/cubiccurve-javafx
*/
public class CubicCurveManipulatorWithArrows extends Application {
List<Arrow> arrows = new ArrayList<Arrow>();
public static class Arrow extends Polygon {
public double rotate;
public float t;
CubicCurve curve;
Rotate rz;
public Arrow( CubicCurve curve, float t) {
super();
this.curve = curve;
this.t = t;
init();
}
public Arrow( CubicCurve curve, float t, double... arg0) {
super(arg0);
this.curve = curve;
this.t = t;
init();
}
private void init() {
setFill(Color.web("#ff0900"));
rz = new Rotate();
{
rz.setAxis(Rotate.Z_AXIS);
}
getTransforms().addAll(rz);
update();
}
public void update() {
double size = Math.max(curve.getBoundsInLocal().getWidth(), curve.getBoundsInLocal().getHeight());
double scale = size / 4d;
Point2D ori = eval(curve, t);
Point2D tan = evalDt(curve, t).normalize().multiply(scale);
setTranslateX(ori.getX());
setTranslateY(ori.getY());
double angle = Math.atan2( tan.getY(), tan.getX());
angle = Math.toDegrees(angle);
// arrow origin is top => apply offset
double offset = -90;
if( t > 0.5)
offset = +90;
rz.setAngle(angle + offset);
}
/**
* Evaluate the cubic curve at a parameter 0<=t<=1, returns a Point2D
* @param c the CubicCurve
* @param t param between 0 and 1
* @return a Point2D
*/
private Point2D eval(CubicCurve c, float t){
Point2D p=new Point2D(Math.pow(1-t,3)*c.getStartX()+
3*t*Math.pow(1-t,2)*c.getControlX1()+
3*(1-t)*t*t*c.getControlX2()+
Math.pow(t, 3)*c.getEndX(),
Math.pow(1-t,3)*c.getStartY()+
3*t*Math.pow(1-t, 2)*c.getControlY1()+
3*(1-t)*t*t*c.getControlY2()+
Math.pow(t, 3)*c.getEndY());
return p;
}
/**
* Evaluate the tangent of the cubic curve at a parameter 0<=t<=1, returns a Point2D
* @param c the CubicCurve
* @param t param between 0 and 1
* @return a Point2D
*/
private Point2D evalDt(CubicCurve c, float t){
Point2D p=new Point2D(-3*Math.pow(1-t,2)*c.getStartX()+
3*(Math.pow(1-t, 2)-2*t*(1-t))*c.getControlX1()+
3*((1-t)*2*t-t*t)*c.getControlX2()+
3*Math.pow(t, 2)*c.getEndX(),
-3*Math.pow(1-t,2)*c.getStartY()+
3*(Math.pow(1-t, 2)-2*t*(1-t))*c.getControlY1()+
3*((1-t)*2*t-t*t)*c.getControlY2()+
3*Math.pow(t, 2)*c.getEndY());
return p;
}
}
public static void main(String[] args) throws Exception { launch(args); }
@Override public void start(final Stage stage) throws Exception {
CubicCurve curve = createStartingCurve();
Line controlLine1 = new BoundLine(curve.controlX1Property(), curve.controlY1Property(), curve.startXProperty(), curve.startYProperty());
Line controlLine2 = new BoundLine(curve.controlX2Property(), curve.controlY2Property(), curve.endXProperty(), curve.endYProperty());
Anchor start = new Anchor(Color.PALEGREEN, curve.startXProperty(), curve.startYProperty());
Anchor control1 = new Anchor(Color.GOLD, curve.controlX1Property(), curve.controlY1Property());
Anchor control2 = new Anchor(Color.GOLDENROD, curve.controlX2Property(), curve.controlY2Property());
Anchor end = new Anchor(Color.TOMATO, curve.endXProperty(), curve.endYProperty());
Group root = new Group();
root.getChildren().addAll( controlLine1, controlLine2, curve, start, control1, control2, end);
double[] arrowShape = new double[] { 0,0,10,20,-10,20 };
arrows.add( new Arrow( curve, 0f, arrowShape));
arrows.add( new Arrow( curve, 0.2f, arrowShape));
arrows.add( new Arrow( curve, 0.4f, arrowShape));
arrows.add( new Arrow( curve, 0.6f, arrowShape));
arrows.add( new Arrow( curve, 0.8f, arrowShape));
arrows.add( new Arrow( curve, 1f, arrowShape));
root.getChildren().addAll( arrows);
stage.setTitle("Cubic Curve Manipulation Sample");
stage.setScene(new Scene( root, 400, 400, Color.ALICEBLUE));
stage.show();
}
private CubicCurve createStartingCurve() {
CubicCurve curve = new CubicCurve();
curve.setStartX(100);
curve.setStartY(100);
curve.setControlX1(150);
curve.setControlY1(50);
curve.setControlX2(250);
curve.setControlY2(150);
curve.setEndX(300);
curve.setEndY(100);
curve.setStroke(Color.FORESTGREEN);
curve.setStrokeWidth(4);
curve.setStrokeLineCap(StrokeLineCap.ROUND);
curve.setFill(Color.CORNSILK.deriveColor(0, 1.2, 1, 0.6));
return curve;
}
class BoundLine extends Line {
BoundLine(DoubleProperty startX, DoubleProperty startY, DoubleProperty endX, DoubleProperty endY) {
startXProperty().bind(startX);
startYProperty().bind(startY);
endXProperty().bind(endX);
endYProperty().bind(endY);
setStrokeWidth(2);
setStroke(Color.GRAY.deriveColor(0, 1, 1, 0.5));
setStrokeLineCap(StrokeLineCap.BUTT);
getStrokeDashArray().setAll(10.0, 5.0);
}
}
// a draggable anchor displayed around a point.
class Anchor extends Circle {
Anchor(Color color, DoubleProperty x, DoubleProperty y) {
super(x.get(), y.get(), 10);
setFill(color.deriveColor(1, 1, 1, 0.5));
setStroke(color);
setStrokeWidth(2);
setStrokeType(StrokeType.OUTSIDE);
x.bind(centerXProperty());
y.bind(centerYProperty());
enableDrag();
}
// make a node movable by dragging it around with the mouse.
private void enableDrag() {
final Delta dragDelta = new Delta();
setOnMousePressed(new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent mouseEvent) {
// record a delta distance for the drag and drop operation.
dragDelta.x = getCenterX() - mouseEvent.getX();
dragDelta.y = getCenterY() - mouseEvent.getY();
getScene().setCursor(Cursor.MOVE);
}
});
setOnMouseReleased(new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent mouseEvent) {
getScene().setCursor(Cursor.HAND);
}
});
setOnMouseDragged(new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent mouseEvent) {
double newX = mouseEvent.getX() + dragDelta.x;
if (newX > 0 && newX < getScene().getWidth()) {
setCenterX(newX);
}
double newY = mouseEvent.getY() + dragDelta.y;
if (newY > 0 && newY < getScene().getHeight()) {
setCenterY(newY);
}
// update arrow positions
for( Arrow arrow: arrows) {
arrow.update();
}
}
});
setOnMouseEntered(new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent mouseEvent) {
if (!mouseEvent.isPrimaryButtonDown()) {
getScene().setCursor(Cursor.HAND);
}
}
});
setOnMouseExited(new EventHandler<MouseEvent>() {
@Override public void handle(MouseEvent mouseEvent) {
if (!mouseEvent.isPrimaryButtonDown()) {
getScene().setCursor(Cursor.DEFAULT);
}
}
});
}
// records relative x and y co-ordinates.
private class Delta { double x, y; }
}
}
| Since you're already dealing with shapes (curves), the best approach for the arrows is just keep adding more shapes to the group, using Path.
Based on this answer, I've added two methods: one for getting any point of the curve at a given parameter between 0 (start) and 1 (end), one for getting the tangent to the curve at that point.
With these methods now you can draw an arrow tangent to the curve at any point. And we use them to create two at the start (0) and at the end (1):
@Override
public void start(Stage primaryStage) {
Group root = new Group();
// bending curve
Rectangle srcRect1 = new Rectangle(100,100,50,50);
Rectangle dstRect1 = new Rectangle(300,300,50,50);
CubicCurve curve1 = new CubicCurve( 125, 150, 125, 225, 325, 225, 325, 300);
curve1.setStroke(Color.BLACK);
curve1.setStrokeWidth(1);
curve1.setFill( null);
double size=Math.max(curve1.getBoundsInLocal().getWidth(),
curve1.getBoundsInLocal().getHeight());
double scale=size/4d;
Point2D ori=eval(curve1,0);
Point2D tan=evalDt(curve1,0).normalize().multiply(scale);
Path arrowIni=new Path();
arrowIni.getElements().add(new MoveTo(ori.getX()+0.2*tan.getX()-0.2*tan.getY(),
ori.getY()+0.2*tan.getY()+0.2*tan.getX()));
arrowIni.getElements().add(new LineTo(ori.getX(), ori.getY()));
arrowIni.getElements().add(new LineTo(ori.getX()+0.2*tan.getX()+0.2*tan.getY(),
ori.getY()+0.2*tan.getY()-0.2*tan.getX()));
ori=eval(curve1,1);
tan=evalDt(curve1,1).normalize().multiply(scale);
Path arrowEnd=new Path();
arrowEnd.getElements().add(new MoveTo(ori.getX()-0.2*tan.getX()-0.2*tan.getY(),
ori.getY()-0.2*tan.getY()+0.2*tan.getX()));
arrowEnd.getElements().add(new LineTo(ori.getX(), ori.getY()));
arrowEnd.getElements().add(new LineTo(ori.getX()-0.2*tan.getX()+0.2*tan.getY(),
ori.getY()-0.2*tan.getY()-0.2*tan.getX()));
root.getChildren().addAll(srcRect1, dstRect1, curve1, arrowIni, arrowEnd);
primaryStage.setScene(new Scene(root, 800, 600));
primaryStage.show();
}
/**
* Evaluate the cubic curve at a parameter 0<=t<=1, returns a Point2D
* @param c the CubicCurve
* @param t param between 0 and 1
* @return a Point2D
*/
private Point2D eval(CubicCurve c, float t){
Point2D p=new Point2D(Math.pow(1-t,3)*c.getStartX()+
3*t*Math.pow(1-t,2)*c.getControlX1()+
3*(1-t)*t*t*c.getControlX2()+
Math.pow(t, 3)*c.getEndX(),
Math.pow(1-t,3)*c.getStartY()+
3*t*Math.pow(1-t, 2)*c.getControlY1()+
3*(1-t)*t*t*c.getControlY2()+
Math.pow(t, 3)*c.getEndY());
return p;
}
/**
* Evaluate the tangent of the cubic curve at a parameter 0<=t<=1, returns a Point2D
* @param c the CubicCurve
* @param t param between 0 and 1
* @return a Point2D
*/
private Point2D evalDt(CubicCurve c, float t){
Point2D p=new Point2D(-3*Math.pow(1-t,2)*c.getStartX()+
3*(Math.pow(1-t, 2)-2*t*(1-t))*c.getControlX1()+
3*((1-t)*2*t-t*t)*c.getControlX2()+
3*Math.pow(t, 2)*c.getEndX(),
-3*Math.pow(1-t,2)*c.getStartY()+
3*(Math.pow(1-t, 2)-2*t*(1-t))*c.getControlY1()+
3*((1-t)*2*t-t*t)*c.getControlY2()+
3*Math.pow(t, 2)*c.getEndY());
return p;
}
And this is what it looks like:
If you move the control points, you'll see that the arrows are already well oriented:
CubicCurve curve1 = new CubicCurve( 125, 150, 55, 285, 375, 155, 325, 300);
| Curve | 26,702,519 | 13 |
Within R, I want to interpolate an arbitrary path with constant distance
between interpolated points.
The test-data looks like that:
require("rgdal", quietly = TRUE)
require("ggplot2", quietly = TRUE)
r <- readOGR(".", "line", verbose = FALSE)
coords <- as.data.frame(r@lines[[1]]@Lines[[1]]@coords)
names(coords) <- c("x", "y")
print(coords)
x y
-0.44409 0.551159
-1.06217 0.563326
-1.09867 0.310255
-1.09623 -0.273754
-0.67283 -0.392990
-0.03772 -0.273754
0.63633 -0.015817
0.86506 0.473291
1.31037 0.998899
1.43934 0.933198
1.46854 0.461124
1.39311 0.006083
1.40284 -0.278621
1.54397 -0.271321
p.orig <- ggplot(coords, aes(x = x, y = y)) + geom_path(colour = "red") +
geom_point(colour = "yellow")
print(p.orig)
I tried different methods, none of them were really satisfying:
aspline (akima-package)
approx
bezierCurve
with the tourr-package I couldn't get started
aspline
aspline from the akima-package does some weird stuff when dealing with arbitrary paths:
plotInt <- function(coords) print(p.orig + geom_path(aes(x = x, y = y),
data = coords) + geom_point(aes(x = x, y = y), data = coords))
N <- 50 # 50 points to interpolate
require("akima", quietly = TRUE)
xy.int.ak <- as.data.frame(with(coords, aspline(x = x, y = y, n = N)))
plotInt(xy.int.ak)
approx
xy.int.ax <- as.data.frame(with(coords, list(x = approx(x, n = N)$y,
y = approx(y, n = N)$y)))
plotInt(xy.int.ax)
At first sight, approx looks pretty fine; however, testing it with real data gives me
problems with the distances between the interpolated points. Also a smooth, cubic interpolation would be a nice thing.
bezier
Another approach is to use bezier-curves; I used the following
implementation
source("bez.R")
xy.int.bz <- as.data.frame(with(coords, bezierCurve(x, y, N)))
plotInt(xy.int.bz)
| How about regular splines using the same method you used for approx? Will that work on the larger data?
xy.int.sp <- as.data.frame(with(coords, list(x = spline(x)$y,
y = spline(y)$y)))
| Curve | 11,356,997 | 12 |
I have
plot(rnorm(120), rnorm(120), col="darkblue", pch=16, xlim=c(-3,3), ylim=c(-4,4))
points(rnorm(120,-1,1), rnorm(120,2,1), col="darkred", pch=16)
points(c(-1,-1.5,-3), c(4,2,0), pch=3, cex=3)
I want to delineate a part of a graph, by drawing a smooth curve passing through a set of points.I can define 3-4 set of points but i cannot define a function. I would like to do this in R (as opposed to GIMP) as I would like to submit as SVG. What I would like to achieve is the following
Is this possible? I know this is not a sophisticated graphing question but any base R solution will do.
| If I understood the question right, drawing a spline through control points should do the job:
xspline(c(-1,-1.5,-3), c(4,2,0), shape = -1)
| Curve | 14,303,251 | 12 |
I am working on a black&white image just like the first one from the link :
http://imageshack.us/g/33/firstwm.png/
It has a lot of "noise" so I applied a Median filter over it to smooth it, thus getting the second picture.
cvSmooth(TempImage, TempImage, CV_MEDIAN, 5, 0);
After this i get the contours and draw them on another image like the 3rd picture from the link.
My problem is that the contours are still a little pixelated(edgy). Is there a way to smooth the B&W image even more so to obtain better contours? Or maybe do something with the contours.
I have also tried Dilate and Erode with different kernels but the problem remains the same.
Thank you for anything that helps.
EDIT:
Also tried:
cvSmooth(TempImage, TempImage, CV_GAUSSIAN, 9, 9, 3);
cvThreshold(TempImage, TempImage, 127, 255, CV_THRESH_BINARY);
Same results as median filter, ok, but still leaves some pixelated contours.
|
If this is the smoothing result you're after, it can be obtained by doing a Gaussian blur, followed by a thresholding. I.e. using cvSmooth with CV_GAUSSIAN as the paramater. Followed by a cvThreshold.
If you want a smoother transition than thresholding (like this), you can get that with adjusting levels (remapping the color range so that you preserve some of the edge transition).
update To explain how to get the smooth (anti-aliased) edge on the thresholding, consider what the thresholding does. It basically processes each pixel in the image, one at a time. If the pixel value is lower than the threshold, it is set to black (0), if not it is set to white (255).
The threshold operator is thus very simple, however, any other general mapping function can be used. Basically it's a function f(i), where i is the intensity pixel value (ranged 0-255) and f(i) is the mapped value. For threshold this function is simple
f(i) = { 0, for i < threshold
255, for i >= threshold
What you have is a smoothed image (by cvSmooth using a Gaussian kernel, which gives you the "smoothest" smoothing, if that makes sense). Thus you have a soft transition of values on the edges, ranging from 0 to 255. What you want to do is make this transition much smaller, so that you get a good edge. If you go ballistic on it, you go directly from 0 to 255, which is the same as the binary thresholding you've already done.
Now, consider a function that maps, maybe a range of 4 intensity values (127 +- 4) to the full range of 0-255. I.e.
f(i) = { 0, for i < 123
255, for i >= 131
linear mapping, for 123 <= i < 131
And you get the desired output. I'll take a quick look and see if it is implemented in openCV already. Shouldn't be too hard to code it yourself though.
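For instance, a minimal sketch of that mapping in Python/OpenCV via a lookup table (assuming a single-channel uint8 image; the file name is a placeholder):
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (9, 9), 3)

# 256-entry lookup table: 0 below 123, a linear ramp over 123..131, 255 above
lut = np.clip((np.arange(256, dtype=np.float32) - 123.0) * (255.0 / 8.0),
              0, 255).astype(np.uint8)
smooth = cv2.LUT(blurred, lut)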
update 2
The contour version would be something like this:
f(i) = { 255, for i < 122
linear mapping (255->0), for 122 <= i < 126
0, for 126 <= i < 127
linear mapping (0->255), for 127 <= i < 131
255, for 131 <= i
| Curve | 7,416,025 | 11 |
Given the points of a line and a quadratic bezier curve, how do you calculate their nearest point?
| There exists a scientific paper regarding this question from INRIA: Computing the minimum distance between two Bézier curves (PDF here)
| Curve | 8,473,572 | 11 |
I am trying to find an algorithm to calculate the bounding box of a given cubic bezier curve. The curve is in 3D space.
Is there a mathematic way to do this except of sampling points on the curve and calculating the bounding box of these points?
| Most of this is addressed in An algorithm to find bounding box of closed bezier curves? except here we have cubic Beziers and there they were dealing with quadratic Bezier curves.
Essentially you need to take the derivatives of each of the coordinate functions. If the x-coord is given by
x = A (1-t)^3 +3 B t (1-t)^2 + 3 C t^2 (1-t) + D t^3
differentiating with respect to t.
dx/dt = 3 (B - A) (1-t)^2 + 6 (C - B) (1-t) t + 3 (D - C) t^2
= [3 (D - C) - 6 (C - B) + 3 (B - A)] t^2
+ [ -6 (B - A) + 6 (C - B)] t
+ 3 (B - A)
= (3 D - 9 C + 9 B - 3 A) t^2 + (6 A - 12 B + 6 C) t + 3 (B - A)
this is a quadratic which we can write at a t^2 + b t + c. We want to solve dx/dt = 0 which you can do using the quadratic formula
- b +/- sqrt(b^2-4 a c)
-----------------------
2 a
Solving this will either gives two solutions t0, t1 say, no solutions or in rare case just one solution. We are only interest in solutions with 0 <= t <= 1. You will have a maximum of four candidate points, the two end points and the two solutions. Its a simple matter to find which of these give the extreme points.
You can repeat the same process for each coordinate and then get the bounding box.
I've put this for the 2D case in a js fiddle http://jsfiddle.net/SalixAlba/QQnvm/4/
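Putting the derivation together, a minimal sketch in Python (the control point values are just examples):
import math

def cubic_extrema(a, b, c, d):
    # coefficients of dx/dt = A t^2 + B t + C, from the derivation above
    A = 3*d - 9*c + 9*b - 3*a
    B = 6*a - 12*b + 6*c
    C = 3*(b - a)
    ts = [0.0, 1.0]                      # always include the end points
    if abs(A) < 1e-12:                   # derivative degenerates to a line
        if abs(B) > 1e-12:
            ts.append(-C / B)
    else:
        disc = B*B - 4*A*C
        if disc >= 0:
            r = math.sqrt(disc)
            ts += [(-B + r) / (2*A), (-B - r) / (2*A)]
    def at(t):
        return (1-t)**3*a + 3*t*(1-t)**2*b + 3*(1-t)*t*t*c + t**3*d
    vals = [at(t) for t in ts if 0.0 <= t <= 1.0]
    return min(vals), max(vals)

xs = (100.0, 150.0, 250.0, 300.0)        # example control points A, B, C, D
ys = (100.0, 50.0, 150.0, 100.0)
xmin, xmax = cubic_extrema(*xs)
ymin, ymax = cubic_extrema(*ys)
print((xmin, ymin), (xmax, ymax))        # corners of the 2D bounding box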
| Curve | 24,809,978 | 11 |
I've been working on this for several weeks but have been unable to get my algorithm working properly, and I'm at my wits' end. Here's an illustration of what I have achieved:
If everything was working I would expect a perfect circle/oval at the end.
My sample points (in white) are recalculated every time a new control point (in yellow) is added. At 4 control points everything looks perfect; as I add a 5th on top of the 1st things still look alright, but then on the 6th it starts to go off to the side, and on the 7th it jumps up to the origin!
Below I'll post my code, where calculateWeightForPointI contains the actual algorithm. And for reference, here is the information I'm trying to follow. I'd be so grateful if someone could take a look for me.
void updateCurve(const std::vector<glm::vec3>& controls, std::vector<glm::vec3>& samples)
{
int subCurveOrder = 4; // = k = I want to break my curve into to cubics
// De boor 1st attempt
if(controls.size() >= subCurveOrder)
{
createKnotVector(subCurveOrder, controls.size());
samples.clear();
for(int steps=0; steps<=20; steps++)
{
// use steps to get a 0-1 range value for progression along the curve
// then get that value into the range [k-1, n+1]
// k-1 = subCurveOrder-1
// n+1 = always the number of total control points
float t = ( steps / 20.0f ) * ( controls.size() - (subCurveOrder-1) ) + subCurveOrder-1;
glm::vec3 newPoint(0,0,0);
for(int i=1; i <= controls.size(); i++)
{
float weightForControl = calculateWeightForPointI(i, subCurveOrder, controls.size(), t);
newPoint += weightForControl * controls.at(i-1);
}
samples.push_back(newPoint);
}
}
}
//i = the weight we're looking for, i should go from 1 to n+1, where n+1 is equal to the total number of control points.
//k = curve order = power/degree +1. eg, to break whole curve into cubics use a curve order of 4
//cps = number of total control points
//t = current step/interp value
float calculateWeightForPointI( int i, int k, int cps, float t )
{
//test if we've reached the bottom of the recursive call
if( k == 1 )
{
if( t >= knot(i) && t < knot(i+1) )
return 1;
else
return 0;
}
float numeratorA = ( t - knot(i) );
float denominatorA = ( knot(i + k-1) - knot(i) );
float numeratorB = ( knot(i + k) - t );
float denominatorB = ( knot(i + k) - knot(i + 1) );
float subweightA = 0;
float subweightB = 0;
if( denominatorA != 0 )
subweightA = numeratorA / denominatorA * calculateWeightForPointI(i, k-1, cps, t);
if( denominatorB != 0 )
subweightB = numeratorB / denominatorB * calculateWeightForPointI(i+1, k-1, cps, t);
return subweightA + subweightB;
}
//returns the knot value at the passed in index
//if i = 1 and we want Xi then we have to remember to index with i-1
float knot(int indexForKnot)
{
// When getting the index for the knot function i remember to subtract 1 from i because of the difference caused by us counting from i=1 to n+1 and indexing a vector from 0
return knotVector.at(indexForKnot-1);
}
//calculate the whole knot vector
void createKnotVector(int curveOrderK, int numControlPoints)
{
knotVector.clear(); // rebuild from scratch: updateCurve calls this again every time a control point is added
int knotSize = curveOrderK + numControlPoints;
for(int count = 0; count < knotSize; count++)
{
knotVector.push_back(count);
}
}
| Your algorithm seems to work for any inputs I tried it on. Your problem might be that a control point is not where it is supposed to be, or that they haven't been initialized properly. It looks like there are two control points, half the height below the bottom left corner.
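For anyone who wants to sanity-check the recursion numerically, here is a direct Python port of calculateWeightForPointI (a sketch; knots is the same uniform 0, 1, ..., k+n-1 sequence, held in a 0-indexed list so knot(i) becomes knots[i-1]):
def basis(i, k, t, knots):
    if k == 1:
        return 1.0 if knots[i-1] <= t < knots[i] else 0.0
    a = b = 0.0
    den_a = knots[i+k-2] - knots[i-1]    # knot(i+k-1) - knot(i)
    den_b = knots[i+k-1] - knots[i]      # knot(i+k) - knot(i+1)
    if den_a != 0:
        a = (t - knots[i-1]) / den_a * basis(i, k-1, t, knots)
    if den_b != 0:
        b = (knots[i+k-1] - t) / den_b * basis(i+1, k-1, t, knots)
    return a + b

k, n_points = 4, 5
knots = list(range(k + n_points))        # 0..8, rebuilt fresh each time
t = 0.5 * (n_points - (k-1)) + (k-1)     # a mid-parameter in [k-1, n+1], matching the question
print([basis(i, k, t, knots) for i in range(1, n_points + 1)])  # weights should sum to 1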
| Curve | 15,944,532 | 10 |
Minio has policies for each bucket. Which contains:
ReadOnly
WriteOnly
Read+Write
None
How are these related to the anonymous/authorized access to the folders?
Like say I want to make a bunch of files available as read-only to users without credentials (access key and secret key). How can I do it?
| Bucket policies provided by Minio client side are an abstracted version of the same bucket policies AWS S3 provides.
Client constructs a policy JSON based on the input string of bucket and prefix.
ReadOnly means - anonymous download access is allowed includes being
able to list objects on the desired prefix
WriteOnly means - anonymous uploads are allowed includes being able
to list incomplete uploads on the desired prefix
Read-Write - anonymous access to upload and download all objects.
This also means full public access.
None - is default (no policy) it means that all operations need to be
authenticated towards desired bucket and prefix.
A bunch of files residing under a particular prefix can be made available for read-only access. Let's say your prefix is 'my-prefix/read-only/downloads'; then, using the Java SDK, you can do the following:
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import org.xmlpull.v1.XmlPullParserException;
import io.minio.MinioClient;
import io.minio.policy.PolicyType;
import io.minio.errors.MinioException;
public class SetBucketPolicy {
/**
* MinioClient.setBucketPolicy() example.
*/
public static void main(String[] args)
throws IOException, NoSuchAlgorithmException, InvalidKeyException, XmlPullParserException {
try {
/* play.minio.io for test and development. */
MinioClient minioClient = new MinioClient("https://play.minio.io:9000", "Q3AM3UQ867SPQQA43P2F",
"zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG");
/* Amazon S3: */
// MinioClient minioClient = new MinioClient("https://s3.amazonaws.com", "YOUR-ACCESSKEYID",
// "YOUR-SECRETACCESSKEY");
minioClient.setBucketPolicy("my-bucketname", "my-prefix/read-only/downloads", PolicyType.READ_ONLY);
} catch (MinioException e) {
System.out.println("Error occurred: " + e);
}
}
}
Once your call is successful, all the objects inside 'my-prefix/read-only/downloads' are publicly readable i.e without access/secret key.
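The same thing works from Python; note that newer MinIO SDKs drop the PolicyType enum in favour of a raw JSON policy document, so a sketch (bucket, prefix and credentials are placeholders, reusing the public play server from the Java sample) looks like this:
import json
from minio import Minio

client = Minio("play.minio.io:9000",
               access_key="Q3AM3UQ867SPQQA43P2F",
               secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucketname/my-prefix/read-only/downloads/*"],
    }],
}
client.set_bucket_policy("my-bucketname", json.dumps(policy))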
| MinIO | 42,616,518 | 27 |
I'm new to MinIO and I want to use it in a Django app. I read the documentation of the MinIO Python library, and there are fields for MINIO_ENDPOINT, MINIO_ACCESS_KEY and MINIO_SECRET_KEY. I read the Quickstart documentation of MinIO but didn't figure out where to find these parameters.
| If you use docker:
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
Note that these two equivalent variables are now deprecated:
MINIO_ACCESS_KEY[=MINIO_ROOT_USER]
MINIO_SECRET_KEY[=MINIO_ROOT_PASSWORD]
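Those two values are exactly what the Python SDK wants as access/secret key, and MINIO_ENDPOINT is simply the host:port the server listens on; a minimal sketch (endpoint and credentials are placeholders):
from minio import Minio

client = Minio(
    "localhost:9000",                  # MINIO_ENDPOINT: host:port, no scheme
    access_key="my-root-user",         # value of MINIO_ROOT_USER
    secret_key="my-root-password",     # value of MINIO_ROOT_PASSWORD
    secure=False,                      # True if the server is behind TLS
)
print(client.list_buckets())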
| MinIO | 67,285,745 | 24 |
I have installed MinIO in Docker. It installed successfully, and the server logs only showed a warning about the console being served on a dynamic port.
I think all is well, but when I open the localhost:9000 URL in a browser it redirects to localhost:40793 with the error message "site can't be reached".
I don't know the issue. Can anyone help? Thanks in advance.
| Addressing the warning about the dynamic port worked for me. I think the issue is that MinIO serves the API on port 9000, but tries to redirect you to the console when that address is visited in the browser (e.g. localhost:9000). The console is on a dynamic port that isn't exposed by Docker.
Instead, we can specify the console port using the --console-address flag and expose it in the docker command. This worked for me:
docker run -p 9000:9000 -p 9001:9001 --name minio -d -v ~/minio/data:/data -e "MINIO_ROOT_USER={ACCESS_KEY}" -e "MINIO_ROOT_PASSWORD={ACCESS_SECRET}" minio/minio server --console-address :9001 /data
I was then able to visit the console at localhost:9001 (although visiting localhost:9000 also redirected me there).
| MinIO | 68,317,358 | 20 |
I am running a Minio server using its docker image.
docker run -p 9000:9000 --name minio1 \
-e "MINIO_ACCESS_KEY=user" \
-e "MINIO_SECRET_KEY=pass" \
-v /home/me/data:/data \
minio/minio server /data
I have a couple of folders with files in the mount point. How do I make them available in MinIO? Do I need to upload them?
Can I put them in a folder and have it added as a bucket when I initialize the server?
EDIT:
When I open the minio web UI on localhost:9000 I don't see the files and folders that were already at the mount point.
What is the most efficient way to add all these folders to the minio server, such that a bucket is created for the first folder in the tree and then all the files inside each folder are added to their 'folder' bucket as objects? I could achieve this using Minio Python SDK, for example, by recursively walking down the folder tree and upload the files, but is that necessary?
| For what it's worth, it appears you have to use the minio command line client to accomplish this: the maintainers explicitly declined to add an option to do this internally in Minio (see https://github.com/minio/minio/issues/4769). The easiest option I'd see is basically to do something like this:
docker run -d -p 9000:9000 --name minio1 -e "MINIO_ACCESS_KEY=user" \
-e "MINIO_SECRET_KEY=pass" -v /home/me/data:/data \
minio/minio server /data && docker exec minio1 \
/bin/sh -c "/usr/bin/mc config host add srv http://localhost:9000 \
user pass && /usr/bin/mc mb -p srv/bucket"
Which SHOULD launch the docker container in the background and then exec the mc client to create the bucket named bucket (change the name if there is a different folder inside data you'd like to expose).
If you're a Docker Compose fan you can try something like what's documented at https://gist.github.com/harshavardhana/cb6c0d4d220a9334a66d6259c7d54c95 or build your own image with a custom entrypoint.
| MinIO | 55,496,594 | 14 |
If we want to copy a bucket to another MinIO cluster, should we use "mc cp" or "mc mirror"? I have done some simple experiments and it seems that they are the same.
Thanks!
| Short answer
Yes, mc cp --recursive SOURCE TARGET and mc mirror --overwrite SOURCE TARGET will have the same effect (to the best of my experience as of 2022-01).
mc cp allows for fine-tuned options for single files (but can bulk copy using --recursive)
mc mirror is focussed on bulk copying and can create buckets
Looking at the Minio client guide, there are several differences between the mc mirror and the mc cp commands, although the result of running them can be the same.
The answer to which one you should use depends on your requirements, and both options may be acceptable for you.
Details
The command signatures differ: mc cp allows for multiple sources while mc mirror only allows for a single source.
In addition, the available flags are somewhat different (see below).
Flags mc cp offers not offered by mc mirror
--rewind value: roll back object(s) to current version at specified time
--version-id value, --vid value: select an object version to copy
--attr: add custom metadata for the object (format: KeyName1=string;KeyName2=string)
--continue, -c: create or resume copy session
--tags: apply tags to the uploaded objects (eg. key=value&key2=value2, etc)
--rewind value: roll back object(s) to current version at specified time
(The --recursive, -r flag, but that's always true for mirror)
Flags offered by mc mirror not offered by mc clone:
Flags mc mirror offers not offered by mc cp
--exclude value: exclude object(s) that match specified object name pattern
--fake: perform a fake mirror operation
--overwrite: overwrite object(s) on target if it differs from source
--region value: specify region when creating new bucket(s) on target (default: "us-east-1")
--watch, -w: watch and synchronize changes (This may be a big deal)
Consider using rclone as an alternative with additional flexibility. The Minio project is focussed on performance and being an excellent, simple S3 backend, not implementing every feature you could ask for (e.g., chunk size, throttling).
| MinIO | 59,558,166 | 13 |
I have Minio server hosted locally.
I need to read file from minio s3 bucket using pandas using S3 URL like "s3://dataset/wine-quality.csv" in Jupyter notebook.
I tried using the boto3 S3 library and am able to download the file.
import boto3
s3 = boto3.resource('s3',
endpoint_url='http://localhost:9000',
aws_access_key_id='id',
aws_secret_access_key='password')
s3.Bucket('dataset').download_file('wine-quality.csv', '/tmp/wine-quality.csv')
But when I try using pandas,
data = pd.read_csv("s3://dataset/wine-quality.csv")
I'm getting client Error, Forbidden 403.
I know that pandas internally uses the boto3 library (correct me if I am wrong).
PS: Pandas read_csv has one more param, " storage_options={
"key": AWS_ACCESS_KEY_ID,
"secret": AWS_SECRET_ACCESS_KEY,
"token": AWS_SESSION_TOKEN,
}". But I couldn't find any configuration for passing custom Minio host URL for pandas to read.
| Pandas v1.2 onwards allows you to pass storage options which gets passed down to fsspec, see the docs here: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html?highlight=s3fs#reading-writing-remote-files.
To pass in a custom url, you need to specify it through client_kwargs in storage_options:
df = pd.read_csv(
"s3://dataset/wine-quality.csv",
storage_options={
"key": AWS_ACCESS_KEY_ID,
"secret": AWS_SECRET_ACCESS_KEY,
"token": AWS_SESSION_TOKEN,
"client_kwargs": {"endpoint_url": "localhost:9000"}
}
)
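Note that pandas hands these storage_options down to fsspec, so the s3fs package must be installed for s3:// URLs to work.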
| MinIO | 67,093,837 | 10 |
I am trying to redirect an example.com/minio location to the MinIO console, which runs behind an nginx proxy, both run by a docker compose file. My problem is that when I try to reverse proxy the MinIO endpoint to a path like /minio, it does not work, but when I serve the MinIO reverse proxy on the root path in nginx, it works. I seriously cannot find out what the problem might be.
This is my compose file:
services:
nginx:
container_name: nginx
image: nginx
restart: unless-stopped
ports:
- 80:80
- 443:443
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf
- ./log/nginx:/var/log/nginx/
minio:
image: minio/minio
container_name: minio
volumes:
- ./data/minio/:/data
command: server /data --address ':9000' --console-address ':9001'
environment:
MINIO_ROOT_USER: minio_admin
MINIO_ROOT_PASSWORD: minio_123456
ports:
- 9000
- 9001
restart: always
logging:
driver: "json-file"
options:
max-file: "10"
max-size: 20m
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
My nginx configuration is like this:
server {
listen 80;
server_name example.com;
# To allow special characters in headers
ignore_invalid_headers off;
# Allow any size file to be uploaded.
# Set to a value such as 1000m; to restrict file size to a specific value
client_max_body_size 0;
# To disable buffering
proxy_buffering off;
access_log /var/log/nginx/service-access.log;
error_log /var/log/nginx/service-error.log debug;
location / {
return 200 "salam";
default_type text/plain;
}
location /minio {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_connect_timeout 300;
# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio:9001;
}
}
The picture I'm seeing of minio console at the domain is this:
And the response of curling the endpoint ($ curl -k http://example.com/minio):
<null>
<html lang="en">
<head>
<meta charset="utf-8" />
<base href="/" />
<meta content="width=device-width,initial-scale=1" name="viewport" />
<meta content="#081C42" media="(prefers-color-scheme: light)" name="theme-color" />
<meta content="#081C42" media="(prefers-color-scheme: dark)" name="theme-color" />
<meta content="MinIO Console" name="description" />
<link href="./styles/root-styles.css" rel="stylesheet" />
<link href="./apple-icon-180x180.png" rel="apple-touch-icon" sizes="180x180" />
<link href="./favicon-32x32.png" rel="icon" sizes="32x32" type="image/png" />
<link href="./favicon-96x96.png" rel="icon" sizes="96x96" type="image/png" />
<link href="./favicon-16x16.png" rel="icon" sizes="16x16" type="image/png" />
<link href="./manifest.json" rel="manifest" />
<link color="#3a4e54" href="./safari-pinned-tab.svg" rel="mask-icon" />
<title>MinIO Console</title>
<script defer="defer" src="./static/js/main.eec275cb.js"></script>
<link href="./static/css/main.90d417ae.css" rel="stylesheet">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root">
<div id="preload">
<img src="./images/background.svg" />
<img src="./images/background-wave-orig2.svg" />
</div>
<div id="loader-block">
<img src="./Loader.svg" />
</div>
</div>
</body>
</html>
%
| MinIO's console doesn't work under a non-default path like location /minio.
You need to use
location / {
....
proxy_pass http://localhost:9001;
}
or add another server block to nginx with subdomain like this
server{
listen 80;
server_name minio.example.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass http://localhost:9001;
}
}
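If you go the subdomain route, it is also worth setting the MINIO_BROWSER_REDIRECT_URL environment variable on the MinIO container to that subdomain, so that browser requests hitting the API port get redirected to the right console address.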
| MinIO | 72,020,904 | 10 |
I would like to know if there is a difference between gVisor and Weave Ignite in terms of their use-cases (if there is any). To me, both of them seem to try a similar thing: make the execution of code in virtualized environments more secure.
gVisor is doing this by introducing runsc, a runtime that enables sandboxed containers and Ignite is doing it by using Firecracker, which in their context also seems to be used as a sandbox.
| Both Firecracker and gVisor are technologies which provide sandboxing / isolation but in a different way.
Firecracker is a Virtual Machine Manager.
gVisor has an architecture which controls/filters the system calls that reach the actual host.
Weave Ignite is a tool that helps you use Firecracker in order to run containers inside lightweight VMs and also do that with a nice UX, similar to using Docker.
This is also mentioned in the Scope section of github.com/weaveworks/ignite
Scope
Ignite is different from Kata Containers or gVisor. They don't let you run real VMs, but only wrap a container in new layer providing some kind of security boundary (or sandbox).
Ignite on the other hand lets you run a full-blown VM, easily and super-fast, but with the familiar container UX. This means you can "move down one layer" and start managing your fleet of VMs powering e.g. a Kubernetes cluster, but still package your VMs like containers.
Regarding the use-case part of your question, it's my feeling that because of the stronger isolation VMs offer, Ignite can be more production-ready. Also, the approach of gVisor seems to have a significant performance cost, as it is mentioned at The True Cost of Containing: A gVisor Case Study:
Conclusion
gVisor is arguably more secure than runc
Unfortunately, our analysis shows that the true costs of effectively containing are high: system calls are 2.2× slower, memory allocations are 2.5× slower, large downloads are 2.8× slower, and file opens are 216× slower
Current Sandboxing Methods
Sandboxing with gVisor
Do I Need gVisor?
No. If you're running production workloads, don't even think about it! Right now, this is a metaphorical science experiment. That's not to say you may not want to use it as it matures. I don't have any problem with the way it's trying to solve process isolation and I think it's a good idea. There are also alternatives you should take the time to explore before adopting this technology in the future.
Where might I want to use it?
As an operator, you'll want to use gVisor to isolate application containers that aren't entirely trusted. This could be a new version of an open source project your organization has trusted in the past. It could be a new project your team has yet to completely vet or anything else you aren't entirely sure can be trusted in your cluster. After all, if you're running an open source project you didn't write (all of us), your team certainly didn't write it so it would be good security and good engineering to properly isolate and protect your environment in case there may be a yet unknown vulnerability.
Further reading
My answer has information from the following sources which are in quote sections when taken "as-is" and I recommend them for further reading:
What is gVisor? from Rancher Blog
Making Containers More Isolated: An Overview of Sandboxed Container Technologies
Containers, Security, and Echo Chambers blog by Jessie Frazelle
The True Cost of Containing: A gVisor Case Study
Kata Containers vs gVisor?
Firecracker: Lightweight Virtualization for Serverless Applications paper from AWS
gVisor Security Basics - Part 1 from gVisor blog
| Firecracker | 56,996,602 | 14 |
I'd like to comprehensively understand the run-time performance cost of a Docker container. I've found references to networking anecdotally being ~100µs slower.
I've also found references to the run-time cost being "negligible" and "close to zero" but I'd like to know more precisely what those costs are. Ideally I'd like to know what Docker is abstracting with a performance cost and things that are abstracted without a performance cost. Networking, CPU, memory, etc.
Furthermore, if there are abstraction costs, are there ways to get around the abstraction cost. For example, perhaps I can mount a disk directly vs. virtually in Docker.
| An excellent 2014 IBM research paper “An Updated Performance Comparison of Virtual Machines and Linux Containers” by Felter et al. provides a comparison between bare metal, KVM, and Docker containers. The general result is: Docker is nearly identical to native performance and faster than KVM in every category.
The exception to this is Docker's NAT: if you use port mapping (e.g., docker run -p 8080:8080), then you can expect a minor hit in latency, as the paper's network benchmarks show. However, you can now use the host network stack (e.g., docker run --net=host) when launching a Docker container, which performs identically to native.
They also ran latency tests on a few specific services, such as Redis. Their results show that above 20 client threads, the highest latency overhead goes to Docker NAT, then KVM, with a rough tie between Docker host and native.
Just because it’s a really useful paper, here are some other figures. Please download it for full access.
Taking a look at Disk I/O:
Now looking at CPU overhead:
Now some examples of memory (read the paper for details, memory can be extra tricky):
| containerd | 21,889,053 | 783 |
Am exploring on how to use containerd in place of dockerd. This is for learning only and as a cli tool rather than with any pipelines or automation.
So far, documentation in regards to using containerd in cli (via ctr) is very limited. Even the official docs are using Go lang to utilize containerd directly.
What I have learnt is ctr command plays the role of docker command to control containerd. I have thus far created a docker image and exported it to .tar format. Now using ctr i import hello.tar I have imported it as an image.
Now ctr i ls gives me the following output:
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/hello-java-app:latest application/vnd.oci.image.manifest.v1+json sha256:ef4acfd85c856ea020328959ff3cac23f37fa639b7edfb1691619d9bfe1e06c7 628.7 MiB linux/amd64 -
Trying to run a container asks me for the image id:
root@slave-node:~/images/sample# ctr run
ctr: image ref must be provided
root@slave-node:~/images/sample# ctr run docker.io/library/hello-java-app:latest
ctr: container id must be provided
I am not sure on where to get the image id from. Are there any docs related to ctr or containerd that could be helpful for a beginner?
Just running the image as a container would be sufficient for me.
| The ctr run command creates a container and executes it
ctr run <imageName> <uniqueValue>
e.g., ctr run --rm docker.io/library/hello-java-app:latest mypod
This executes my basic docker java image with a print statement:
~~~~
HelloWorld from Java Application running in Docker.
~~~~
Steps followed:
1 - A java file:
public class HelloWorld {
public static void main(String[] args) {
System.out.println("~~~~\nHelloWorld from Java Application running in Docker.\n~~~~");
}
}
2 - An image:
FROM java:8
COPY HelloWorld.java .
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
3 - Build image and export as .tar
docker build -t hello-java-app .
docker save -o ~/images/sample/hello-java-app.tar hello-java-app
4 - Import image (.tar) into containerd:
ctr i import hello-java-app.tar
unpacking docker.io/library/hello-java-app:latest (sha256:ef4acfd85c856ea020328959ff3cac23f37fa639b7edfb1691619d9bfe1e06c7)...done
ctr i ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/hello-java-app:latest application/vnd.oci.image.manifest.v1+json sha256:ef4acfd85c856ea020328959ff3cac23f37fa639b7edfb1691619d9bfe1e06c7 628.7 MiB linux/amd64 -
5 - Run the image:
ctr run --rm docker.io/library/hello-java-app:latest mypod
~~~~
HelloWorld from Java Application running in Docker.
~~~~
I am still unsure of the use of creating a container. The run command creates a container and executes it once. ctr c create just creates a container which can then be listed with ctr c ls but I am not able to utilize them in any meaningful way. Can anyone clarify its purpose?
PS:
Without the --rm flag, a new unique value is needed to be entered for every run as the old container is retained and we get an error: ctr: snapshot "mypod": already exists
| containerd | 59,393,496 | 23 |
Kubernetes documentation describes a pod as a wrapper around one or more containers. Containers running inside of a pod share a set of namespaces (e.g. network), which makes me think namespaces are nested (I kind of doubt that). What is the wrapper here from the container runtime's perspective?
Since containers are just processes constrained by namespaces and cgroups, perhaps a pod is just the first container launched by the kubelet, with the rest of the containers started and grouped by its namespaces.
| The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same localhost network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.
The idea is that Pods are groups of related containers, not really a wrapper per se but a set of containers that should always deploy together for whatever reason. Usually that's a primary container and then some sidecars providing support services (mesh routing, log collection, etc).
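To make that concrete, a minimal sketch of a pod manifest (image names are just examples); the sidecar can reach the app on localhost because the network namespace is shared, and shareProcessNamespace opts in to sharing pids:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  shareProcessNamespace: true      # optional; the network namespace is always shared
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "wget -qO- http://localhost:80 && sleep 3600"]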
| containerd | 67,966,607 | 13 |
I have read many links similar to my issue, but none of them were helping me to resolve the issue.
Similar Links:
Failed to exec into the container due to permission issue after executing 'systemctl daemon-reload'
OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown
CI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown
OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown
Fail to execute docker exec
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "open /proc/self/fd: no such file or directory": unknown
Problem Description:
I have created a new Kubernetes cluster using Kubespray. When I wanted to execute some commands in one of containers I faced to the following error:
Executed Command
kubectl exec -it -n rook-ceph rook-ceph-tools-68d847b88d-7kw2v -- sh
Error:
OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/1: operation not permitted: unknown
command terminated with exit code 126
I have also logged in to the node, which runs the pod, and try executing the container using docker exec command, but the error was not changed.
Workarounds:
As I have found, the error code (126) implies that the permissions are insufficient, but I have never faced this kind of error before (e.g. when executing sh) in Docker or Kubernetes.
I have also checked whether SELinux is enabled or not (as it has been said in the 3rd link).
apt install policycoreutils
sestatus
# Output
SELinux status: disabled
In the 5th link, it was said to check whether you have updated the kernel, and I didn't upgrade anything on the nodes.
id; stat /dev/pts/0
# output
uid=0(root) gid=0(root) groups=0(root)
File: /dev/pts/0
Size: 0 Blocks: 0 IO Block: 1024 character special file
Device: 18h/24d Inode: 3 Links: 1 Device type: 88,0
Access: (0600/crw-------) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2022-08-21 12:01:25.409456443 +0000
Modify: 2022-08-21 12:01:25.409456443 +0000
Change: 2022-08-21 11:54:47.474457646 +0000
Birth: -
Also tried /bin/sh instead of sh or /bin/bash, but not worked and the same error occurred.
Can anyone help me to find the root cause of this problem and then solve it?
| This issue may be related to Docker. First, drain your node:
kubectl drain <node-name>
Second, SSH into the node and restart the Docker service:
systemctl restart docker.service
Finally, try executing your command again.
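For reference, a sketch of the full sequence (uncordon is my addition: drain cordons the node, so it must be made schedulable again; --ignore-daemonsets is usually needed for drain to proceed):
kubectl drain <node-name> --ignore-daemonsets
ssh <node-name> 'sudo systemctl restart docker.service'
kubectl uncordon <node-name>
kubectl exec -it -n rook-ceph rook-ceph-tools-68d847b88d-7kw2v -- sh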
| containerd | 73,434,226 | 10 |
As I understand,
Kata Containers
Kata Container build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers but provide the workload isolation and security advantages of VMs
On the other hand, gvisor
gVisor is a user-space kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects.
As I understand it, both of these technologies try to put an extra Linux kernel layer between the container and the host in order to enhance security.
My question is How do they differ from each other ? Is there overlapping in functionalities?
| From what I gather:
Kata Containers
Full Kernel on top of a lightweight QEMU/KVM VM
Kernel has been optimized in newer releases.
Lets system calls go through freely
Performance penalty due to the VM layer. Not clear yet how much slower or faster it is than gVisor.
On paper, slower startup time.
Can run any application.
Can run in nested virtualized environments if the hypervisor and hardware support it.
gVisor
Partial Kernel in userspace.
Intercepts syscalls
Performance penalty at runtime due to syscall filtering. Not clear yet how much slower or faster it is than Kata.
On paper, faster startup time.
Can run only applications that use supported system calls.
On paper, you may not need nested virtualization.
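To make the comparison concrete: both are consumed the same way, as OCI runtimes selected per container. A sketch, assuming both runtimes are installed and registered with Docker under their conventional names (runsc for gVisor, kata-runtime for Kata):
docker run --rm --runtime=runsc hello-world          # gVisor
docker run --rm --runtime=kata-runtime hello-world   # Kata Containers
The workload is unchanged; only the isolation mechanism underneath differs.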
| gVisor | 50,143,367 | 25 |
Questions
How does lxd provide Full operating system functionality within containers, not just single processes?
How is it different from lxc/docker + wrappers?
Is it similar to a container that is launched with docker + supervisor/wrapper script to contain multiple processes in one container?
In other words:
What can I do with lxd that I cannot do with some wrappers over lxc and docker ?
Why is it available only in ubuntu if they are making use of mainline kernel features (namespaces and cgroup )?
|
How does lxd provide Full operating system functionality within containers, not just single processes?
Containers are isolated Linux systems that use the kernel's cgroups capabilities to limit CPU/memory/network/etc., without the need to start a full virtual machine.
LXD uses the capabilities provided by liblxc (which comes from LXC), and from this come the capabilities for full OS functionality.
How is it different from lxc/docker + wrappers?
LXD uses liblxc from LXC. Docker is more application-focused, running only the principal process for your app inside the container (it now uses libcontainer by default; Docker originally used liblxc for this).
Is it similar to a container that is launched with docker + supervisor/wrapper
script to contain multiple processes in one container?
Something similar. The difference between LXD and Docker is that Docker is an application container while LXD is a system container. LXD uses upstart/systemd as the principal process inside the container and by default is ready to be a full VM-like environment with very light memory/CPU usage. Yes, you can build your Docker image with supervisorctl/runit, but you need to do this process manually. You can see how it is done in http://phusion.github.io/baseimage-docker/ which does something similar inside a container.
What can I do with lxd that I cannot do with some wrappers over lxc and docker ?
Live migration of containers; using your containers like full virtual machines; precise configuration to dedicate CPU cores/memory/network I/O to a container; and running your container processes in unprivileged mode (root inside your container != root on your host). By default Docker works in privileged mode; only now in Docker 1.10 has unprivileged mode been implemented, and you need to review (and maybe rewrite) your Dockerfiles because many things will not work in unprivileged mode.
LXD and Docker are different things. LXD gives you a "full OS" in a container and you can use any deployment tool that works in a VM for deploying applications in LXD. With Docker your application is inside the container and you need different tools for deploying applications in Docker and collecting performance metrics. Docker is designed to run on various OS platforms, like Windows. LXD/LXC can only run on Linux: this is the reason Docker no longer uses LXC as part of its stack.
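For illustration, a minimal sketch of the LXD workflow described above (the image alias and limits are arbitrary examples):
lxc launch ubuntu:16.04 mycontainer        # a full system container with an init process
lxc config set mycontainer limits.cpu 2    # dedicate CPU
lxc config set mycontainer limits.memory 1GB
lxc exec mycontainer -- bash               # log in as if it were a VM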
Why is it available only in ubuntu if they are making use of mainline kernel features (namespaces and cgroup )?
LXD has commercial support from Canonical if needed, but you can build LXD on CentOS 7 or Arch Linux (with a patched kernel); check https://github.com/lxc/lxd. Gentoo supports LXD now: https://wiki.gentoo.org/wiki/LXD.
| lxd | 30,430,526 | 25 |
I am setting up LXD to play around with conjure-up. I would like to the storage to be mounted only on my RAID device, so it would be good to remove the default storage or replace/redirect it.
I cannot remove the default storage because the default profile uses it.
How can I use the RAID storage with conjure-up and be sure it isn't using my default storage?
| The default storage cannot be deleted because it is part of the default profile. The default profile cannot be removed. So the way around this is to push a blank profile to the default profile with:
printf 'config: {}\ndevices: {}' | lxc profile edit default
Then the default storage will be removed from the default profile, so you will now be able to remove the default storage with:
lxc storage delete default
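From there you can create a new pool on the RAID device and point the default profile's root disk at it. A sketch, where the pool name and mount point are assumptions:
lxc storage create raidpool dir source=/mnt/raid
lxc profile device add default root disk path=/ pool=raidpool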
| lxd | 42,678,979 | 12 |
I want to enable a Fedora Copr repository with Ansible. More specifically I want to convert this command:
dnf copr enable ganto/lxd
Using an Ansible command module, I can overcome this problem, but it breaks the task's idempotence (if run again, the role should not make any changes; changed_when: false is not an option).
- name: Enable Fedora Copr for LXD
command: "dnf copr enable -y ganto/lxd"
Also, I tried this:
- name: Install LXD
dnf:
name: "{{ item }}"
state: latest
enablerepo: "xxx"
with_items:
- lxd
- lxd-client
Where I tested many variations of the enablerepo option, without any success.
Is that possible using the dnf Ansible module (or something else)?
| You can use creates to make your command idempotent; if the .repo file already exists then the task won't run:
- name: Enable Fedora Copr for LXD
command:
cmd: dnf copr enable -y ganto/lxd
creates: /etc/yum.repos.d/_copr_ganto-lxd.repo
(You'd have to check that enabled=1 manually)
$ cat /etc/yum.repos.d/_copr_ganto-lxd.repo
[ganto-lxd]
name=Copr repo for lxd owned by ganto
baseurl=https://copr-be.cloud.fedoraproject.org/results/ganto/lxd/fedora-$releasever-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/ganto/lxd/pubkey.gpg
repo_gpgcheck=0
enabled=1
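As an alternative, newer Ansible versions with the community.general collection ship a dedicated copr module that is idempotent on its own (a sketch; assumes the collection is installed):
- name: Enable Fedora Copr for LXD
  community.general.copr:
    name: ganto/lxd
    state: enabled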
| lxd | 42,651,026 | 10 |
How do these two compare?
As far as I understand, runc is a runtime environment for containers. That means that this component provides the necessary environment to run containers. What is the role of containerd then?
If it does the rest (networking, volume management, etc) then what is the role of the Docker Engine? And what about containerd-shim? Basically, I'm trying to understand what each of these components do.
| I will give a high level overview to get you started:
containerd is a container runtime which can manage a complete container lifecycle - from image transfer/storage to container execution, supervision and networking.
containerd-shim handles headless containers: once runc initializes the containers, it exits, handing the containers over to containerd-shim, which acts as a middleman.
runc is a lightweight, universal container runtime that abides by the OCI specification. runc is used by containerd for spawning and running containers according to the OCI spec. It is also a repackaging of libcontainer.
gRPC is used for communication between containerd and the Docker engine.
OCI maintains the OCI specifications for the runtime and images. Current Docker versions support the OCI image and runtime specs.
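To see how low-level runc is, here is a minimal sketch of running a container with it directly, bypassing Docker/containerd (bundle layout per the OCI runtime spec; exporting a busybox rootfs is just one convenient way to get a filesystem):
mkdir -p mycontainer/rootfs && cd mycontainer
docker export $(docker create busybox) | tar -C rootfs -xf -   # populate a rootfs
runc spec                    # generates a default OCI config.json
sudo runc run mycontainer-id # run the container from the bundle directory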
More Links:
Open Container Specification
A nice dockercon 2016 presentation
| runc | 41,645,665 | 82 |
I'm trying to understand the Docker world a little better, and can't quite seem to wrap my brain around the differences between these. I believe that OCF is an emerging container standard being endorsed by OpenContainers, and I believe that Docker is set to be the first reference implementation of that standard. But even then, I have concerns that the Google Gods don't seem to be providing answers for:
What exactly is the OCF "standard"? Just a written document? A written API? A compiled C lib?
What are some examples of specific items governed by this standard? I guess without really understanding what a "container" is, its hard for me to understand what a governing standard for containers even addresses.
How/where do runc and libcontainer fit into the Docker/OCF equation?
| The Open Container Format (OCF) specification is a written document (or set of documents) defining what a "standard container" is, in terms of filesystem, available operations and execution environment. The document seems to be backed up with Go code. This specification is currently (July 2015) a work-in-progress.
Runc is an implementation of the standard. At the time of writing, it is basically a repackaging of libcontainer.
Docker uses libcontainer/runc, but adds a lot of tooling and features on top, such as volumes, networking and management of containers.
There is more information on the Docker blog and Open Containers site.
If you're just getting started with containers, I would start with Docker and look into the other projects later once you understand how containers work.
| runc | 31,213,126 | 17 |
Is there way to specify a custom NodePort port in a kubernetes service YAML definition?
I need to be able to define the port explicitly in my configuration file.
| You can set the type NodePort in your Service Deployment. Note that there is a Node Port Range configured for your API server with the option --service-node-port-range (by default 30000-32767). You can specify a port in that range explicitly by setting the nodePort attribute under the Port object, or the system will choose a port in that range for you.
So a Service example with specified NodePort would look like this:
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
name: nginx
spec:
type: NodePort
ports:
- port: 80
nodePort: 30080
name: http
- port: 443
nodePort: 30443
name: https
selector:
name: nginx
For more information on NodePort, see this doc. For configuring API Server Node Port range please see this.
| Flannel | 43,935,502 | 37 |
Issue Redis POD creation on k8s(v1.10) cluster and POD creation stuck at "ContainerCreating"
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned redis to k8snode02
Normal SuccessfulMountVolume 30m kubelet, k8snode02 MountVolume.SetUp succeeded for volume "default-token-f8tcg"
Warning FailedCreatePodSandBox 5m (x1202 over 30m) kubelet, k8snode02 Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "redis_default" network: failed to find plugin "loopback" in path [/opt/loopback/bin /opt/cni/bin]
Normal SandboxChanged 47s (x1459 over 30m) kubelet, k8snode02 Pod sandbox changed, it will be killed and re-created.
| Ensure that /etc/cni/net.d and its /opt/cni/bin friend both exist and are correctly populated with the CNI configuration files and binaries on all Nodes. For flannel specifically, one might make use of the flannel cni repo
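A sketch of how to verify and fix this (the plugin release version and architecture are assumptions; pick a current one):
ls /etc/cni/net.d/                 # expect a flannel config, e.g. 10-flannel.conflist
ls /opt/cni/bin/ | grep loopback   # the binary the error says is missing
# if loopback is absent, install the reference CNI plugins on every node:
curl -L https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz \
  | sudo tar -C /opt/cni/bin -xz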
| Flannel | 51,169,728 | 34 |
First I start Kubernetes using Flannel with 10.244.0.0.
Then I reset all and restart with 10.84.0.0.
However, the interface flannel.1 still is 10.244.1.0
That's how I clean up:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /run/flannel
rm -rf /etc/cni/
ifconfig cni0 down
brctl delbr cni0
ifconfig flannel.1 down
systemctl start docker
Am I missing something in the reset?
| Because ip link still has the old records.
Look with:
ip link
You can see the stale records. If you want to clean up the old flannel and cni interfaces,
try:
ip link delete cni0
ip link delete flannel.1
| Flannel | 46,276,796 | 17 |
Is there a way to define in which interface Flannel should be listening? According to his documentation adding FLANNEL_OPTIONS="--iface=enp0s8" in /etc/sysconfig/flanneld should work, but it isn't.
My master node configuration is running in a xenial(ubuntu 16.04) vagrant:
$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 10.0.0.10
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
ubuntu@master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
clusterrole "flannel" configured
clusterrolebinding "flannel" configured
Host ip addresses:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 02:63:8e:2c:ef:cd brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::63:8eff:fe2c:efcd/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:fb:ad:bb brd ff:ff:ff:ff:ff:ff
inet 10.0.0.10/24 brd 10.0.0.255 scope global enp0s8
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:da:aa:6e:13 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 5e:07:a1:7f:97:53 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::5c07:a1ff:fe7f:9753/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::7037:fcff:fe41:b1fb/64 scope link
valid_lft forever preferred_lft forever
Pods names:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 2m
kube-system kube-apiserver-master 1/1 Running 0 2m
kube-system kube-controller-manager-master 1/1 Running 0 2m
kube-system kube-dns-545bc4bfd4-gjjth 0/3 ContainerCreating 0 3m
kube-system kube-flannel-ds-gdz8f 1/1 Running 0 1m
kube-system kube-flannel-ds-h4fd2 1/1 Running 0 33s
kube-system kube-flannel-ds-rnlsz 1/1 Running 1 33s
kube-system kube-proxy-d4wv9 1/1 Running 0 33s
kube-system kube-proxy-fdkqn 1/1 Running 0 3m
kube-system kube-proxy-kj7tn 1/1 Running 0 33s
kube-system kube-scheduler-master 1/1 Running 0 2m
Flannel Logs:
$ kubectl logs -n kube-system kube-flannel-ds-gdz8f -c kube-flannel
I1216 12:00:35.817207 1 main.go:474] Determining IP address of default interface
I1216 12:00:35.822082 1 main.go:487] Using interface with name enp0s3 and address 10.0.2.15
I1216 12:00:35.822335 1 main.go:504] Defaulting external address to interface address (10.0.2.15)
I1216 12:00:35.909906 1 kube.go:130] Waiting 10m0s for node controller to sync
I1216 12:00:35.909950 1 kube.go:283] Starting kube subnet manager
I1216 12:00:36.987719 1 kube.go:137] Node controller sync successful
I1216 12:00:37.087300 1 main.go:234] Created subnet manager: Kubernetes Subnet Manager - master
I1216 12:00:37.087433 1 main.go:237] Installing signal handlers
I1216 12:00:37.088836 1 main.go:352] Found network config - Backend type: vxlan
I1216 12:00:37.089018 1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I1216 12:00:37.295988 1 main.go:299] Wrote subnet file to /run/flannel/subnet.env
I1216 12:00:37.296025 1 main.go:303] Running backend.
I1216 12:00:37.296048 1 main.go:321] Waiting for all goroutines to exit
I1216 12:00:37.296084 1 vxlan_network.go:56] watching for new subnet leases
How do I do to configure flannel in kubernetes to listen in enp0s8 instead of enp0s3?
| I have the same problem, trying to use k8s with Vagrant.
I've found this note in the documentation of flannel:
Vagrant typically assigns two interfaces to all VMs. The first, for
which all hosts are assigned the IP address 10.0.2.15, is for external
traffic that gets NATed.
This may lead to problems with flannel. By default, flannel selects
the first interface on a host. This leads to all hosts thinking they
have the same public IP address. To prevent this issue, pass the
--iface eth1 flag to flannel so that the second interface is chosen.
So I looked for it in flannel's pod configuration.
If you download the kube-flannel.yml file, you should look at the DaemonSet spec, specifically at the "kube-flannel" container. There, you should add the required "--iface=enp0s8" argument (don't forget the "="). Here is part of the code I used:
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=enp0s8
Then run kubectl apply -f kube-flannel.yml
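If flannel is already deployed, you can alternatively edit the running DaemonSet in place and let its pods be recreated (a variation on the same fix):
kubectl -n kube-system edit daemonset kube-flannel-ds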
Hope this helps.
| Flannel | 47,845,739 | 16 |
I used kubeadm to initialize my K8 master. However, I missed the --pod-network-cidr=10.244.0.0/16 flag to be used with flannel. Is there a way (or a config file) I can modify to reflect this subnet without carrying out the re-init process again?
| Override the PodCIDR parameter on all the k8s Node resources with the IP source range 10.244.0.0/16:
$ kubectl edit nodes nodename
Replace "Network" field under net-conf.json header in the relevant Flannel ConfigMap with a new network IP range:
$ kubectl edit cm kube-flannel-cfg -n kube-system
net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
Wipe the current CNI network interfaces that still carry the old network pool:
$ sudo ip link del cni0; sudo ip link del flannel.1
Re-spawn Flannel and CoreDNS pods respectively:
$ kubectl delete pod --selector=app=flannel -n kube-system
$ kubectl delete pod --selector=k8s-app=kube-dns -n kube-system
Wait until the CoreDNS pods obtain IP addresses from the new network pool. Keep in mind that your custom Pods will still retain their old IP addresses inside containers unless you re-create them manually as well.
| Flannel | 60,940,447 | 16 |
To install kubernetes using flannel, one initially needs to run:
kubeadm init --pod-network-cidr 10.244.0.0/16
Questions are:
What is the purpose of "pod-network-cidr"?
What's the meaning of such IP "10.244.0.0/16"?
How flannel uses this afterwards?
| pod-network-cidr is the virtual network that pods will use. That is, any created pod will get an IP inside that range.
The reason of setting this parameter in flannel is because of the following: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
Let us take a look at the configuration:
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
kube-flannel yml file has 10.244.0.0/16 hardcoded as the network value. If you wanted to use another network (for example, the default that kubeadm uses), you would have to modify the yml to match that networking. In this sense, it is easier to simply start kubeadm with 10.244.0.0/16 so the yml works out of the box.
With that configuration, flannel will configure the overlay in the different nodes accordingly. More details here: https://blog.laputa.io/kubernetes-flannel-networking-6a1cb1f8ec7c
| Flannel | 48,984,659 | 15 |
Overview
kube-dns can't start (SetupNetworkError) after kubeadm init and network setup:
Error syncing pod, skipping: failed to "SetupNetwork" for
"kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError:
"Failed to setup network for pod
\"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\"
using network plugins \"cni\": open /run/flannel/subnet.env:
no such file or directory; Skipping pod"
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Environment
VMWare Fusion for Mac
OS
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Kernel (e.g. uname -a)
Linux ubuntu-master 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
What is the problem
kube-system kube-dns-654381707-w4mpg 0/3 ContainerCreating 0 2m
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 {default-scheduler } Normal Scheduled Successfully assigned kube-dns-654381707-w4mpg to ubuntu-master
2m 1s 177 {kubelet ubuntu-master} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
What I expected to happen
kube-dns Running
How to reproduce it
root@ubuntu-master:~# kubeadm init
Running pre-flight checks
<master/tokens> generated token: "247a8e.b7c8c1a7685bf204"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2026-11-08 11:40:21 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2017-11-10 11:40:21 +0000 UTC
Alternate Names: [172.20.10.4 10.96.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 14.053453 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 0.508561 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 1.503838 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=247a8e.b7c8c1a7685bf204 172.20.10.4
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-eo1ua 1/1 Running 0 47s
kube-system etcd-ubuntu-master 1/1 Running 3 51s
kube-system kube-apiserver-ubuntu-master 1/1 Running 0 49s
kube-system kube-controller-manager-ubuntu-master 1/1 Running 3 51s
kube-system kube-discovery-1150918428-qmu0b 1/1 Running 0 46s
kube-system kube-dns-654381707-mv47d 0/3 ContainerCreating 0 44s
kube-system kube-proxy-k0k9q 1/1 Running 0 44s
kube-system kube-scheduler-ubuntu-master 1/1 Running 3 51s
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
root@ubuntu-master:~#
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-eo1ua 1/1 Running 0 47s
kube-system etcd-ubuntu-master 1/1 Running 3 51s
kube-system kube-apiserver-ubuntu-master 1/1 Running 0 49s
kube-system kube-controller-manager-ubuntu-master 1/1 Running 3 51s
kube-system kube-discovery-1150918428-qmu0b 1/1 Running 0 46s
kube-system kube-dns-654381707-mv47d 0/3 ContainerCreating 0 44s
kube-system kube-proxy-k0k9q 1/1 Running 0 44s
kube-system kube-scheduler-ubuntu-master 1/1 Running 3 51s
kube-system weave-net-ja736 2/2 Running 0 1h
| It looks like you have configured flannel before running kubeadm init. You can try to fix this by removing flannel (it may be sufficient to remove the config file: rm -f /etc/cni/net.d/*flannel*), but it's best to start fresh.
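A sketch of the start-fresh path (the pod CIDR flag applies if you go with flannel; substitute the manifest of whichever network add-on you actually use):
kubeadm reset
rm -f /etc/cni/net.d/*
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f <your-network-addon-manifest>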
| Flannel | 40,534,837 | 12 |
I have a problem trying exec'ing into a container:
kubectl exec -it busybox-68654f944b-hj672 -- nslookup kubernetes
Error from server: error dialing backend: dial tcp: lookup worker2 on 127.0.0.53:53: server misbehaving
Or getting logs from a container:
kubectl -n kube-system logs kube-dns-598d7bf7d4-p99qr kubedns
Error from server: Get https://worker3:10250/containerLogs/kube-system/kube-dns-598d7bf7d4-p99qr/kubedns: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
I'm running out of ideas...
I have followed mostly kubernetes-the-hard-way, but have installed it on DigitalOcean and using Flannel for pod networking (I'm also using digitalocean-cloud-manager that seems to be working well).
Also, it seems kube-proxy works, everything looks good in the logs, and the iptable config looks good (to me/a noob)
Networks:
10.244.0.0/16 Flannel / Pod network
10.32.0.0/24 kube-proxy(?) / Service cluster
kube3 206.x.x.211 / 10.133.55.62
kube1 206.x.x.80 / 10.133.52.77
kube2 206.x.x.213 / 10.133.55.73
worker1 167.x.x.148 / 10.133.56.88
worker3 206.x.x.121 / 10.133.55.220
worker2 206.x.x.113 / 10.133.56.89
So, my logs:
kube-dns:
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
I0522 12:22:32 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0522 12:22:34 dns.go:167] Timeout waiting for initialization
Kube-proxy:
I0522 12:36:37 flags.go:27] FLAG: --alsologtostderr="false"
I0522 12:36:37 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0522 12:36:37 flags.go:27] FLAG: --cleanup="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-iptables="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-ipvs="true"
I0522 12:36:37 flags.go:27] FLAG: --cluster-cidr=""
I0522 12:36:37 flags.go:27] FLAG: --config="/var/lib/kube-proxy/kube-proxy-config.yaml"
I0522 12:36:37 flags.go:27] FLAG: --config-sync-period="15m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max="0"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max-per-core="32768"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-min="131072"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --feature-gates=""
I0522 12:36:37 flags.go:27] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0522 12:36:37 flags.go:27] FLAG: --healthz-port="10256"
I0522 12:36:37 flags.go:27] FLAG: --help="false"
I0522 12:36:37 flags.go:27] FLAG: --hostname-override=""
I0522 12:36:37 flags.go:27] FLAG: --iptables-masquerade-bit="14"
I0522 12:36:37 flags.go:27] FLAG: --iptables-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --iptables-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-scheduler=""
I0522 12:36:37 flags.go:27] FLAG: --ipvs-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-burst="10"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-qps="5"
I0522 12:36:37 flags.go:27] FLAG: --kubeconfig=""
I0522 12:36:37 flags.go:27] FLAG: --log-backtrace-at=":0"
I0522 12:36:37 flags.go:27] FLAG: --log-dir=""
I0522 12:36:37 flags.go:27] FLAG: --log-flush-frequency="5s"
I0522 12:36:37 flags.go:27] FLAG: --logtostderr="true"
I0522 12:36:37 flags.go:27] FLAG: --masquerade-all="false"
I0522 12:36:37 flags.go:27] FLAG: --master=""
I0522 12:36:37 flags.go:27] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0522 12:36:37 flags.go:27] FLAG: --nodeport-addresses="[]"
I0522 12:36:37 flags.go:27] FLAG: --oom-score-adj="-999"
I0522 12:36:37 flags.go:27] FLAG: --profiling="false"
I0522 12:36:37 flags.go:27] FLAG: --proxy-mode=""
I0522 12:36:37 flags.go:27] FLAG: --proxy-port-range=""
I0522 12:36:37 flags.go:27] FLAG: --resource-container="/kube-proxy"
I0522 12:36:37 flags.go:27] FLAG: --stderrthreshold="2"
I0522 12:36:37 flags.go:27] FLAG: --udp-timeout="250ms"
I0522 12:36:37 flags.go:27] FLAG: --v="4"
I0522 12:36:37 flags.go:27] FLAG: --version="false"
I0522 12:36:37 flags.go:27] FLAG: --vmodule=""
I0522 12:36:37 flags.go:27] FLAG: --write-config-to=""
I0522 12:36:37 feature_gate.go:226] feature gates: &{{} map[]}
I0522 12:36:37 iptables.go:589] couldn't get iptables-restore version; assuming it doesn't support --wait
I0522 12:36:37 server_others.go:140] Using iptables Proxier.
I0522 12:36:37 proxier.go:346] minSyncPeriod: 0s, syncPeriod: 30s, burstSyncs: 2
I0522 12:36:37 server_others.go:174] Tearing down inactive rules.
I0522 12:36:37 server.go:444] Version: v1.10.2
I0522 12:36:37 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0522 12:36:37 server.go:470] Running in resource-only container "/kube-proxy"
I0522 12:36:37 healthcheck.go:309] Starting goroutine for healthz on 0.0.0.0:10256
I0522 12:36:37 server.go:591] getConntrackMax: using conntrack-min
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0522 12:36:37 conntrack.go:52] Setting nf_conntrack_max to 131072
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0522 12:36:37 bounded_frequency_runner.go:170] sync-runner Loop running
I0522 12:36:37 config.go:102] Starting endpoints config controller
I0522 12:36:37 config.go:202] Starting service config controller
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0522 12:36:37 reflector.go:202] Starting reflector *core.Endpoints (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Endpoints from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:202] Starting reflector *core.Service (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Service from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kubernetes-dashboard:" to [10.244.0.2:8443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/hostnames:" to [10.244.0.3:9376 10.244.0.4:9376 10.244.0.4:9376]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/kubernetes:https" to [10.133.52.77:6443 10.133.55.62:6443 10.133.55.73:6443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns" to []
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns-tcp" to []
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for service config controller
I0522 12:36:37 config.go:210] Calling handler.OnServiceSynced()
I0522 12:36:37 proxier.go:623] Not syncing iptables until Services and Endpoints have been received from master
I0522 12:36:37 proxier.go:619] syncProxyRules took 38.306µs
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for endpoints config controller
I0522 12:36:37 config.go:110] Calling handler.OnEndpointsSynced()
I0522 12:36:37 service.go:310] Adding new service port "default/kubernetes:https" at 10.32.0.1:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns" at 10.32.0.10:53/UDP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.32.0.10:53/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kubernetes-dashboard:" at 10.32.0.175:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "default/hostnames:" at 10.32.0.16:80/TCP
I0522 12:36:37 proxier.go:642] Syncing iptables rules
I0522 12:36:37 iptables.go:321] running iptables-save [-t filter]
I0522 12:36:37 iptables.go:321] running iptables-save [-t nat]
I0522 12:36:37 iptables.go:381] running iptables-restore [--noflush --counters]
I0522 12:36:37 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/hostnames"
I0522 12:36:37 proxier.go:619] syncProxyRules took 62.713913ms
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 anywhere
RETURN all -- 10.244.0.0/16 10.244.0.0/16
MASQUERADE all -- 10.244.0.0/16 !base-address.mcast.net/4
RETURN all -- !10.244.0.0/16 worker3/24
MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16
CNI-9f557b5f70a3ef9b57012dc9 all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
CNI-3f77e9111033967f6fe3038c all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-3f77e9111033967f6fe3038c (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-9f557b5f70a3ef9b57012dc9 (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x8000
Chain KUBE-MARK-MASQ (10 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-372W2QPHULAJK7KN (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.52.77 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255 tcp to:10.133.52.77:6443
Chain KUBE-SEP-F5C5FPCVD73UOO2K (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.73 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255 tcp to:10.133.55.73:6443
Chain KUBE-SEP-LFOBDGSNKNVH4XYX (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.62 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255 tcp to:10.133.55.62:6443
Chain KUBE-SEP-NBPTKIZVPOJSUO47 (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
Chain KUBE-SEP-OT5RYZRAA2AMYTNV (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.2 anywhere /* kube-system/kubernetes-dashboard: */
DNAT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: */ tcp to:10.244.0.2:8443
Chain KUBE-SEP-XDZOTYYMKVEAAZHH (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.3 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.3:9376
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-SVC-XGLOHA7QRQ3V22RZ tcp -- anywhere 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-SVC-NWV5X2332I4OT4T3 tcp -- anywhere 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.33332999982
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */
Chain KUBE-SVC-NWV5X2332I4OT4T3 (1 references)
target prot opt source destination
KUBE-SEP-XDZOTYYMKVEAAZHH all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.33332999982
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.50000000000
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */
Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (1 references)
target prot opt source destination
KUBE-SEP-OT5RYZRAA2AMYTNV all -- anywhere anywhere /* kube-system/kubernetes-dashboard: */
kubelet
W12:43:36 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:36 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:46 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:46 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:56 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:56 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:44:06 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:44:06 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Config:
Worker:
kubelet:
systemd service:
/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--register-node=true \
--v=2 \
--cloud-provider=external \
--allow-privileged=true
kubelet-config.yaml:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "10.244.0.0/16"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/worker3.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/worker3-key.pem"
kube-proxy:
systemd service:
ExecStart=/usr/local/bin/kube-proxy \
--config=/var/lib/kube-proxy/kube-proxy-config.yaml -v 4
kube-proxy-config.yaml:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"
kubeconfig:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ASLDJL...ALKJDS=
server: https://206.x.x.7:6443
name: kubernetes-the-hard-way
contexts:
- context:
cluster: kubernetes-the-hard-way
user: system:kube-proxy
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:kube-proxy
user:
client-certificate-data: ASDLJAL ... ALDJS
client-key-data: LS0tLS1CRUdJ...ASDJ
Controller:
kube-apiserver:
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=10.133.55.62 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--enable-swagger-ui=true \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://10.133.55.73:2379,https://10.133.52.77:2379,https://10.133.55.62:2379 \
--event-ttl=1h \
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/service-account.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2
kube-controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.244.0.0/16 \
--allocate-node-cidrs=true \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
Flannel config/Log:
https://pastebin.com/hah0uSFX
(since the post is too long!)
Edit:
route:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 0 0 0 eth0
10.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.133.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cnio0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
206.189.96.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0
ip route get 10.32.0.1: 10.32.0.1 via 206.189.96.1 dev eth0 src 206.189.96.121 uid 0
curl -k https://10.32.0.1:443/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.2",
"gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
"gitTreeState": "clean",
"buildDate": "2018-04-27T09:10:24Z",
"goVersion": "go1.9.3",
"compiler": "gc",
"platform": "linux/amd64"
}
A reboot brought up all workers and pods, including kube-dns, so they are not crashing anymore, but when trying to do exec or run, I still have some issues:
kubectl run test --image=ubuntu -it --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: error dialing backend: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
Error from server: Get https://worker3:10250/containerLogs/default/test-6954947c4f-6gkdl/test: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehavin
|
I have a problem trying exec'ing into a container
As you see, Kubernetes is trying to connect to your nodes using names like worker1, which cannot be resolved in your network.
You have 2 ways to fix it:
Use real FQDNs for all your nodes that can be resolved. Usually, VMs in clouds have resolvable DNS names, but it looks like in DO they do not. So you need to create domain names manually and point them to your servers. Try checking the DO DNS service. I recommend this way; it is always good to have your own DNS names for your infrastructure.
Make names like worker1 resolvable for the Kubernetes components, e.g. via a custom DNS server or records in /etc/hosts, for example:
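A sketch using the private IPs from the question (apply on whichever machine runs the failing kubectl/apiserver lookups):
cat >> /etc/hosts <<EOF
10.133.56.88   worker1
10.133.56.89   worker2
10.133.55.220  worker3
EOF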
UPD:
From @Richard87, for future reference: the third way is to use option --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname.
| Flannel | 50,468,354 | 12 |
I have been trying to set up k8s on a single node; everything was installed fine. But when I check the status of my kube-system pods,
the CNI (flannel) pod has crashed, reason: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x
The CoreDNS pods' status is ContainerCreating.
In my office, the current server has been configured with a static IP, and when I checked /etc/resolv.conf,
this is the output:
# Generated by NetworkManager
search ORGDOMAIN.BIZ
nameserver 192.168.1.12
nameserver 192.168.2.137
nameserver 192.168.2.136
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 192.168.1.10
nameserver 192.168.1.11
I'm unable to find the root cause. What should I be looking at?
| In short, you have too many entries in /etc/resolv.conf.
This is a known issue:
Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
Also
Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq which will provide more nameserver entries, but not more search entries. You can also use kubelet’s --resolv-conf flag.
If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.
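As a sketch of the --resolv-conf workaround mentioned above (the file paths are typical for kubeadm installs, but verify them on your distro):
# write a trimmed resolv.conf with at most 3 nameservers
cat <<'RESOLV' | sudo tee /etc/kubernetes/kubelet-resolv.conf
nameserver 192.168.1.12
nameserver 192.168.2.137
nameserver 192.168.2.136
RESOLV
# point kubelet at it, e.g. by appending --resolv-conf=/etc/kubernetes/kubelet-resolv.conf
# to KUBELET_KUBEADM_ARGS in /var/lib/kubelet/kubeadm-flags.env, then restart:
sudo systemctl daemon-reload && sudo systemctl restart kubelet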
You could possibly change that in the Kubernetes code, but I'm not sure about the consequences, as it's set to that value on purpose.
The code is located here:
const (
// Limits on various DNS parameters. These are derived from
// restrictions in Linux libc name resolution handling.
// Max number of DNS name servers.
MaxDNSNameservers = 3
// Max number of domains in search path.
MaxDNSSearchPaths = 6
// Max number of characters in search path.
MaxDNSSearchListChars = 256
)
| Flannel | 59,890,834 | 11 |
I updated my system with:
$ apt-get upgrade
Then bad things happened: when I reboot the system, I get a timeout waiting for the network connection.
I am pretty sure my network connection is fine (it was unchanged during the update); I can get an IP allocated (on both ethernet and wlan).
I consulted Google:
# anyway, I was told to run
$ sudo netplan apply
# and I get
WARNING:root:Cannot call Open vSwitch: ovsdb-server.service is not running.
I have never installed this ovsdb stuff on my server, but this warning is really annoying.
It may be related to the network timeout, or not.
How can I fix this (either silence the warning, or just help me solve the network connection problem)?
I tried:
$ systemctl status systemd-networkd-wait-online.service
and I get:
× systemd-networkd-wait-online.service - Wait for Network to be Configured
Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
Active: failed (Result: timeout) since Tue 2023-08-22 05:12:01 CST; 2 months 3 days ago
Docs: man:systemd-networkd-wait-online.service(8)
Process: 702 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=0/SUCCESS)
Main PID: 702 (code=exited, status=0/SUCCESS)
CPU: 22ms
Aug 22 05:11:59 ubuntu systemd[1]: Starting Wait for Network to be Configured...
Aug 22 05:12:01 ubuntu systemd[1]: systemd-networkd-wait-online.service: start operation timed out. Terminating.
Aug 22 05:12:01 ubuntu systemd[1]: systemd-networkd-wait-online.service: Failed with result 'timeout'.
Aug 22 05:12:01 ubuntu systemd[1]: Failed to start Wait for Network to be Configured.
| I have solved this problem.
netplan apply says ovsdb-server.service is not running, so I just installed Open vSwitch.
Since I run Ubuntu Server on a Raspberry Pi, I needed to install an extra package first:
# run this first
$ sudo apt-get install linux-modules-extra-raspi
# run this then
$ sudo apt-get install openvswitch-switch-dpdk
You may want to verify the installation by running these commands again.
After the installation completes, the annoying WARNING no longer shows:
$ sudo netplan try
However, systemd-networkd-wait-online.service still times out, no matter how many times you restart it.
I consulted the man page for systemd-networkd-wait-online.service:
this service just waits until all interfaces managed by systemd-networkd are ready.
In fact, I only use the ethernet and wlan interfaces, and these work fine:
$ ip a
# status of my interfaces
So I asked ChatGPT how to make systemd-networkd-wait-online.service wait only for specific interfaces.
It told me to add arguments in /lib/systemd/system/systemd-networkd-wait-online.service:
$ vim /lib/systemd/system/systemd-networkd-wait-online.service
[Service]
Type=oneshot
# the --interface flag makes the service wait only for the given interfaces
# in this case, I need to wait for the wlan and ethernet interfaces
ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=wlan0 --interface=eth0
RemainAfterExit=yes
# this parameter sets the start timeout; 30s is enough for my Pi
TimeoutStartSec=30sec
After editing, you need to reload the unit file and restart the service:
$ systemctl daemon-reload
$ systemctl restart systemd-networkd-wait-online.service
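A cleaner variant (a sketch, not part of my original fix) is a drop-in override via systemctl edit, so the change survives package upgrades that overwrite /lib/systemd/system:
$ sudo systemctl edit systemd-networkd-wait-online.service
# in the editor, add:
[Service]
# the empty ExecStart= first clears the unit's original command
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=eth0 --interface=wlan0
TimeoutStartSec=30sec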
That is all; everything should be fine (maybe).
| Open vSwitch | 77,352,932 | 17 |
My Terraform remote state and locks are configured on S3 and DynamoDB under an AWS account. On a GitLab runner, a plan task crashed, and on the next execution plan the following error pops up:
Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException:
The conditional request failed
Lock Info:
ID: <some-hash>
Path: remote-terrform-states/app/terraform.tfstate
Operation: OperationTypePlan
Who: root@runner-abc-project-123-concurrent-0
Version: 0.14.10
Created: 2022-01-01 00:00:00 +0000 UTC
Info: some really nice info
While trying to release this lock in order to run the execution plan again, I get the following error:
terraform force-unlock <some-hash-abc-123>
#output:
Local state cannot be unlocked by another process
How do we release this Terraform lock?
| According to reference of terraform command: force-unlock
Manually unlock the state for the defined configuration.
This will not modify your infrastructure. This command removes the
lock on the state for the current configuration. The behavior of this
lock is dependent on the backend being used. Local state files cannot
be unlocked by another process.
Explanation: apparently the execution plan processes the plan output file locally, and it is applied in the second phase of the Terraform steps, like in the following example:
phase 1: terraform plan -out execution-plan.out
phase 2: terraform apply -input=false execution-plan.out
Make sure the filename is the same in phases 1 and 2.
However, if phase 1 is terminated or crashes, the lock stays attached to the local state file and must therefore be removed in DynamoDB itself, not with the terraform force-unlock command.
Solution: locate the specific item in the DynamoDB Terraform lock table and explicitly delete it; you can do this either in the AWS console or through the API.
For example:
aws dynamodb delete-item \
--table-name terraform-locker-bucket \
--key file://key.json
Contents of key.json:
{
  "LockID": {
    "S": "remote-terrform-states/app/terraform.tfstate"
  }
}
Note that LockID is the lock table's only key attribute, so it is the only field that belongs in --key (and it must be given in DynamoDB JSON form, with the "S" type descriptor); the rest of the lock metadata (ID, Operation, Who, Version, Created, Info) lives in the item's non-key attributes.
| Terraform | 71,940,888 | 12 |
I'm trying to set up Terraform validation on GitLab CI.
However, the build fails with the error: "Terraform has no command named "sh". Did you mean "show"?"
Why does it happen? How could it be fixed?
My .gitlab-ci.yml
image: hashicorp/terraform:light
before_script:
- terraform init
validate:
script:
- terraform validate
| You need to override the entrypoint in the terraform image so you have access to a shell. The image's default entrypoint is the terraform binary itself, so when GitLab tries to launch a shell inside the container, sh ends up being passed to terraform as a subcommand, which is exactly what the error message says.
image:
name: hashicorp/terraform:light
entrypoint: [""]
before_script:
- terraform init
validate:
script:
- terraform validate
You can also take a look at the official GitLab documentation on how to integrate Terraform with GitLab, as they have a template for that.
| Terraform | 67,115,574 | 12 |
Running Terraform v0.11.3, I am trying to merge two maps into a single map using the merge() function. However, I can't get the syntax right. Does merge() support using dynamic variables?
tags = "${merge({
Name = "${var.name}"
Env = "${var.environment}"
AutoSnapshot = "${var.auto_snapshot}"
}, "${var.tags}")}"
| In Terraform > 0.12 this can be done as:
tags = merge(
  tomap({
    Name         = var.name
    Env          = var.environment
    AutoSnapshot = var.auto_snapshot
  }),
  var.tags,
)
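Side note (a sketch, assuming Terraform >= 0.12 as above): the tomap() wrapper is optional here, since merge() accepts plain object values as well:
tags = merge(
  { Name = var.name, Env = var.environment, AutoSnapshot = var.auto_snapshot },
  var.tags,
)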
| Terraform | 66,180,680 | 12 |
terraform {
backend "s3" {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
Is it not possible to provide values for bucket and key above through a variables file?
Because when I try doing the same like this:
terraform {
backend "s3" {
bucket = var.bucket
key = var.key
}
}
, I get the following error:
Error: Variables not allowed
on main.tf line 3, in terraform:
3: bucket = var.bucket
Variables may not be used here.
Error: Variables not allowed
on main.tf line 4, in terraform:
4: key = key
Variables may not be used here.
| Create a file named backend.tfvars with content:
bucket = "mybucket"
key = "path/to/my/key"
Specify this file name in a command line option to the terraform init command:
terraform init -backend-config=backend.tfvars
You need a separate backend config file instead of your usual tfvars file because these values are used when you set up your backend. That means they need to be provided when you run terraform init, not later when you use the backend with commands like terraform apply.
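With this partial configuration, the backend block in your code then declares only the backend type plus any values you do want to hard-code, e.g.:
terraform {
  backend "s3" {
    region = "us-east-1"
  }
}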
See the terraform documentation on partial configuration for more details.
| Terraform | 66,139,798 | 12 |
I'm creating a series of s3 buckets with this definition:
resource "aws_s3_bucket" "map" {
for_each = local.bucket_settings
bucket = each.key
...
}
I'd like to output a list of the website endpoints:
output "website_endpoints" {
# value = aws_s3_bucket.map["example.com"].website_endpoint
value = ["${keys(aws_s3_bucket.map)}"]
}
What's the syntax to pull out a list of the endpoints (rather than the full object properties)?
| If you just want to get a list of website_endpoint, then you can do:
output "website_endpoints" {
value = values(aws_s3_bucket.map)[*].website_endpoint
}
This uses a splat expression.
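If you would rather keep the bucket names as keys, a for expression works too (a sketch):
output "website_endpoints_by_bucket" {
  value = { for name, bucket in aws_s3_bucket.map : name => bucket.website_endpoint }
}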
| Terraform | 65,840,607 | 12 |
I'm trying to create an Elasticsearch cluster using Terraform, but I'm getting this error:
11:58:07 * aws_cloudwatch_log_resource_policy.elasticsearch-log-publishing-policy: Writing CloudWatch log resource policy failed: LimitExceededException: Resource limit exceeded.
11:58:07 * aws_elasticsearch_domain.es2: 1 error(s) occurred:
I initially thought this resource limit error meant it was unable to create log groups. But when I raised a ticket with the AWS team, they said there is "no throttling on CreateLogGroup API for this account in IAD".
We have about 10 Elasticsearch clusters running. I'm not sure which resource limit has been exceeded.
Can someone please explain the above error?
Update:
data "aws_iam_policy_document" "elasticsearch-log-publishing-policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
]
resources = ["arn:aws:logs:*"]
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
}
}
resource "aws_cloudwatch_log_resource_policy" "elasticsearch-log-publishing-policy" {
policy_document = "${data.aws_iam_policy_document.elasticsearch-log-publishing-policy.json}"
policy_name = "elasticsearch-log-publishing-policy"
}
I tried to apply this using terraform apply -target, and I think the error is here. Does AWS have a limit on the number of such custom policies we can create? I could not find an option to request an increase.
|
does AWS have a limit on number of custom policies we create, I could not find an option to request an increase.
Yes, the limit can't be changed, and it is:
Up to 10 CloudWatch Logs resource policies per Region per account. This quota can't be changed.
A practical note (an assumption about your setup): since such a resource policy is account-wide and your policy document already covers arn:aws:logs:*, you can create it once per account and region and share it across all clusters, rather than creating one per cluster; that keeps you under the quota.
| Terraform | 65,615,449 | 12 |
How do you parse a map variable to a string in a resource value with Terraform 0.12?
I have this variable:
variable "tags" {
type = map
default = {
deployment_tool = "Terraform"
code = "123"
}
}
And want this: {deployment_tool=Terraform, code=123}
I've tried the following without success:
resource "aws_ssm_parameter" "myparamstore" {
***
value = {
for tag in var.tags:
join(",",value, join("=",tag.key,tag.values))
}
}
| Replacing ":" with "=" is not a perfect solution; consider a map with a value such as https://example.com: it becomes https=//example.com. That's not good.
So here is my solution:
environment_variables = join(",", [for key, value in var.environment_variables : "${key}=${value}"])
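With the tags variable from the question, this yields (Terraform iterates maps in lexical key order):
output "tags_string" {
  value = join(",", [for key, value in var.tags : "${key}=${value}"])
}
# => "code=123,deployment_tool=Terraform"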
| Terraform | 64,134,699 | 12 |
I am trying to create IAM binding for Bigquery dataset using the resource - google_bigquery_dataset_iam_binding. The requirement is I read the parameters in this resource (dataset_id, role, members) using a variable of the following structure -
bq_iam_role_bindings = {
"member1" = {
"dataset1" : ["role1","role2", "role5"],
"dataset2" : ["role3","role2"],
},
"member2" = {
"dataset3" : ["role1","role4"],
"dataset2" : ["role5"],
}
}
So, I need to loop over this variable and get the roles assigned on each dataset for each member. Here the total number of resources created would be eight (one for each member/dataset/role combination).
I am new to Terraform and only understand how to apply a simple for loop over a map and a for_each loop in a resource. I want to understand how what I am trying to do is possible.
The nearest thing I have found is Map within a map in terraform variables, where I can read the value in a nested map, but in my case I need to extract the key as well.
Can anyone help here, please?
| You could re-organize it into a more for_each-friendly list of objects and store it in a local helper_list.
For example:
variable "bq_iam_role_bindings" {
default = {
"member1" = {
"dataset1" : ["role1","role2", "role5"],
"dataset2" : ["role3","role2"],
},
"member2" = {
"dataset3" : ["role1","role4"],
"dataset2" : ["role5"],
}
}
}
locals {
  helper_list = flatten([
    for member, value in var.bq_iam_role_bindings : [
      for dataset, roles in value : [
        for role in roles : {
          member  = member
          dataset = dataset
          role    = role
        }
      ]
    ]
  ])
}
which will result in helper_list in the form of:
[
{
"dataset" = "dataset1"
"member" = "member1"
"role" = "role1"
},
{
"dataset" = "dataset1"
"member" = "member1"
"role" = "role2"
},
{
"dataset" = "dataset1"
"member" = "member1"
"role" = "role5"
},
{
"dataset" = "dataset2"
"member" = "member1"
"role" = "role3"
},
{
"dataset" = "dataset2"
"member" = "member1"
"role" = "role2"
},
{
"dataset" = "dataset2"
"member" = "member2"
"role" = "role5"
},
{
"dataset" = "dataset3"
"member" = "member2"
"role" = "role1"
},
{
"dataset" = "dataset3"
"member" = "member2"
"role" = "role4"
},
]
The above form is much easier to work with in for_each, e.g.:
resource "google_bigquery_dataset_iam_binding" "reader" {
for_each = { for idx, record in local.helper_list : idx => record }
dataset_id = each.value.dataset
role = each.value.role
members = [
each.value.member
]
}
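One caveat (a design note, not part of the original answer): keying on the list index means that inserting or removing an entry can shuffle resource addresses and force re-creation. A more stable key can be derived from the values themselves:
for_each = {
  for record in local.helper_list :
  "${record.member}-${record.dataset}-${record.role}" => record
}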
| Terraform | 63,500,554 | 12 |
Terraform version: 0.11
I am running multiple EKS clusters and am trying to enable IAM roles for service accounts in all clusters, following this doc:
https://www.terraform.io/docs/providers/aws/r/eks_cluster.html#enabling-iam-roles-for-service-accounts
This works when I hardcode the cluster name in the policy statement and create multiple statements
data "aws_iam_policy_document" "example_assume_role_policy" {
# for cluster 1
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.example1.url, "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:aws-node"]
}
principals {
identifiers = ["${aws_iam_openid_connect_provider.example1.arn}"]
type = "Federated"
}
}
}
Since I have multiple clusters, I want to be able to generate the statement dynamically
so I made the following changes:
I created a count variable and changed the values in principals and condition
count = "${length(var.my_eks_cluster)}"
condition {
test = "StringEquals"
variable = "${replace(element(aws_iam_openid_connect_provider.*.url, count.index), "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:aws-node"]
}
principals {
identifiers = ["${element(aws_iam_openid_connect_provider.*.url, count.index)}"]
type = "Federated"
}
Terraform is now able to find the clusters BUT it also generates multiple policies.
And this will not work, since in the following syntax the assume_role_policy doesn't take a list:
resource "aws_iam_role" "example" {
assume_role_policy = "${data.aws_iam_policy_document.example_assume_role_policy.*.json}"
name = "example"
}
It seems like instead of creating multiple policies, I need to generate multiple statements in one policy (so I can attach it to one IAM role). Has anyone done something similar before? Thanks.
| You only want one policy, so you should not use the count argument in your policy.
What you want to have instead is multiple statements, like this
data "aws_iam_policy_document" "example" {
statement {
# ...
}
statement {
# ...
}
}
Now you could hard-code this directly (maybe that would be a good start to test if it works). If you want to generate this dynamically from a variable you would need a dynamic-block as described here: https://www.terraform.io/docs/configuration/expressions.html
In your case that would probably be
data "aws_iam_policy_document" "example" {
dynamic "statement" {
    for_each = aws_iam_openid_connect_provider.example # assuming the providers are created with count under the name "example"
content {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(statement.value.url, "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:aws-node"]
}
principals {
identifiers = ["${statement.value.arn}"]
type = "Federated"
}
}
}
}
I think that "dynamic" is only available since TF 0.12, though.
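The role then references the single document's rendered JSON, so no splat is needed (a sketch reusing the names from above):
resource "aws_iam_role" "example" {
  name               = "example"
  assume_role_policy = data.aws_iam_policy_document.example.json
}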
| Terraform | 62,184,180 | 12 |
In my Terraform AWS Docker Swarm module I use cloud-init to initialize the EC2 instance. However, Terraform says the resource is ready before cloud-init finishes. Is there a way of making it wait for cloud-init to finish, ideally without SSHing or checking for a port to be up using a null resource?
| Your managers and workers both use template_cloudinit_config. They also have ec2:CreateTags.
You can use an EC2 resource tag like trajano/terraform-docker-swarm-aws/cloudinit-complete to indicate that the cloudinit has finished.
You could add this final part to each to invoke a tagging script:
part {
filename = "tag_complete.sh"
content = local.tag_complete_script
content_type = "text/x-shellscript"
}
And declare tag_complete_script be the following:
locals {
tag_complete_script = <<-EOF
#!/bin/bash
    TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    instance_id=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 create-tags --resources "$instance_id" --tags 'Key=trajano/terraform-docker-swarm-aws/cloudinit-complete,Value=true'
EOF
}
Then with a null_resource, you wait for the tag to appear (wrote this on my phone, so use it for a general idea, but I don't expect that it will work without testing and edits):
resource "null_resource" "wait_for_cloudinit" {
provisioner "local-exec" {
command = <<-EOF
#!/bin/bash
poll_tags="aws ec2 describe-tags --filters 'Name=resource-id,Values=${join(",", aws_instance.managers[*].id)}' 'Name=key,Values=trajano/terraform-docker-swarm-aws/cloudinit-complete' --output text --query 'Tags[*].Value'"
expected='${join(",", formatlist("true", aws_instance.managers[*].id))}'
      tags="$(eval "$poll_tags")"
      while [[ "$tags" != "$expected" ]] ; do
        sleep 5
        tags="$(eval "$poll_tags")"
      done
EOF
}
}
This way you can put depends_on = [null_resource.wait_for_cloudinit] on any resources that need to run after cloud-init has completed.
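For example (the resource type and name here are just hypothetical placeholders for whatever must wait):
resource "aws_ssm_parameter" "post_init_marker" {
  name       = "/example/cloudinit-done"
  type       = "String"
  value      = "ready"
  depends_on = [null_resource.wait_for_cloudinit]
}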
| Terraform | 62,116,684 | 12 |
Does anyone know if it is possible to have a Terraform script that uses multiple provider versions?
For example azurerm version 2.0.0 to create one resource, and 1.4.0 for another?
I tried specifying the providers, as documented here: https://www.terraform.io/docs/configuration/providers.html
However, it doesn't seem to work, as Terraform tries to resolve a single provider that fulfills both 1.4.0 and 2.0.0.
It errors like:
No provider "azurerm" plugins meet the constraint "=1.4.0,=2.0.0".
I'm asking this because we have a large Terraform codebase and I would like to migrate bit by bit, if doable.
There used to be a similar question raised, here: Terraform: How to install multiple versions of provider plugins?
But it got no valid answer.
| How to use multiple versions of the same Terraform provider
This allowed us a smooth transition from helm2 to helm3, while enabling new deployments to use helm3 right away, therefore reducing the accumulation of tech debt.
Of course, you can do the same for most providers.
How we've solved this
So the idea is to download a specific version of our provider (helm 0.10.6 in my case) and move it to one of the filesystem mirrors terraform uses by default. The key part is the renaming of our plugin binary. In the zip we can find terraform-provider-helm_v0.10.6, but we rename it to terraform-provider-helm2_v0.10.6
PLUGIN_PATH=/usr/share/terraform/plugins/registry.terraform.io/hashicorp/helm2/0.10.6/linux_amd64
mkdir -p $PLUGIN_PATH
curl -sLo _ 'https://releases.hashicorp.com/terraform-provider-helm/0.10.6/terraform-provider-helm_0.10.6_linux_amd64.zip'
unzip -p _ 'terraform-provider-helm*' > ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
rm _
chmod 755 ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
Then we declare our two provider plugins.
We can use the hashicorp/helm2 plugin from the filesystem mirror, and let Terraform directly download the latest hashicorp/helm provider, which uses helm3:
terraform {
required_providers {
helm2 = {
source = "hashicorp/helm2"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.0.0"
}
}
}
# you will find the doc here https://registry.terraform.io/providers/hashicorp/helm/0.10.6/docs
provider "helm2" {
install_tiller = false
namespace = "kube-system"
kubernetes {
...
}
}
# you will find the doc at latest version https://registry.terraform.io/providers/hashicorp/helm/latest/docs
provider "helm" {
kubernetes {
...
}
}
When initializing terraform, you will find that
- Finding latest version of hashicorp/helm...
- Finding latest version of hashicorp/helm2...
- Installing hashicorp/helm v2.0.2...
- Installed hashicorp/helm v2.0.2 (signed by HashiCorp)
- Installing hashicorp/helm2 v0.10.6...
- Installed hashicorp/helm2 v0.10.6 (unauthenticated)
Using it
It's pretty straightforward from this point. By default, helm resources will pick up our updated helm provider at v2.0.2. You must explicitly set provider = helm2 on old resources (helm_repository and helm_release in our case). Once migrated, you can remove it to use the default helm provider.
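For example, a sketch of pinning an old resource to the renamed provider (chart arguments elided):
resource "helm_release" "legacy_app" {
  provider = helm2
  # ... name, chart, values, etc.
}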
| Terraform | 61,774,501 | 12 |
I've been trying to set up a Terraform module to create a private cluster, and I'm struggling with a strange situation.
When creating a cluster with a master authorized network, if I do it through the GCP console, I can create the private cluster just fine. But when I do it with Terraform, I get a strange error:
Invalid master authorized networks: network "<cidr>" is not a reserved network, which is required for private endpoints.
The interesting parts of the code are as follows:
....
master_authorized_networks_config {
cidr_blocks {
cidr_block = "<my-network-cidr>"
}
}
private_cluster_config {
enable_private_endpoint = true
enable_private_nodes = true
master_ipv4_cidr_block = "<cidr>"
}
....
Is there something I'm forgetting here?
| According to Google Cloud Platform documentation here, it should be possible to have both private and public endpoints, and the master_authorized_networks_config argument should have networks which can reach either of those endpoints.
If setting the enable_private_endpoint argument to false means that the private endpoint is created, but it also creates the public endpoint, then that is a horribly mis-named argument; enable_private_endpoint is actually flipping the public endpoint off and on, not the private one. Apparently, specifying a private_cluster_config is sufficient to enable the private endpoint, and the flag toggles the public endpoint, if reported behavior is to be believed.
That is certainly the experience that I had: specifying my local IP address in the master_authorized_networks_config caused cluster creation to fail when enable_private_endpoint was true. When I set it to false, I got both endpoints, and the config was not rejected.
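In configuration terms, the variant that worked (a sketch based on the question's block) keeps the nodes private but leaves the public endpoint enabled so it can be reached from the authorized network:
private_cluster_config {
  enable_private_endpoint = false
  enable_private_nodes    = true
  master_ipv4_cidr_block  = "<cidr>"
}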
| Terraform | 57,548,376 | 12 |
I want to set up a Terraform module that assigns a policy to an Azure resource according to Terraforms policy assignment example.
In order to assign the allowed locations policy, I want to pass the list of allowed locations as a list of strings from the variables.tf file to the main.tf, where the assignment is executed.
main.tf
#Allowed Locations Policy Assignment
resource "azurerm_policy_assignment" "allowedlocations" {
name = "allowed-locations"
scope = var.scope_allowedlocations
policy_definition_id = var.policy_allowedlocations.id
description = "This policy enables you to restrict the locations."
display_name = "Allowed Locations"
parameters = <<PARAMETERS
{
"allowedLocations": {
"value": ${var.listofallowedlocations}
}
}
PARAMETERS
}
variables.tf
# Scope of the Allowed Locations policy
variable "scope_allowedlocations" {
description = "The scope of the allowed locations assignment."
default = "Subscription"
}
# Scope of the Allowed Locations policy
variable "policy_allowedlocations" {
description = "The allowed locations policy (created by the policy-define module)."
default = "default"
}
# List of the Allowed Locations
variable "listofallowedlocations" {
type = list(string)
description = "The allowed locations list."
default = [ "West Europe", "North Europe", "East US" ]
}
Executing with terraform plan leads to the following error:
Error: Invalid template interpolation value
on modules/policy-assign/main.tf line 16, in resource "azurerm_policy_assignment" "allowedlocations":
16: "value": ${var.listofallowedlocations}
|----------------
| var.listofallowedlocations is list of string with 3 elements
Cannot include the given value in a string template: string required.
Thus, I don't know exactly how to pass the list from the variables file to the PARAMETERS section of the policy assignment resource. In Terraform's policy assignment example the list is inlined directly in the PARAMETERS section and it works, but there is no passing of variables...:
parameters = <<PARAMETERS
{
"allowedLocations": {
"value": [ "West Europe" ]
}
}
PARAMETERS
| When you are interpolating a value into a string that value must itself be convertible to string, or else Terraform cannot join the parts together to produce a single string result.
There are a few different alternatives here, with different tradeoffs.
The option I personally would choose here is to not use the <<PARAMETERS syntax at all and to just build that entire value using jsonencode:
parameters = jsonencode({
allowedLocations = {
value = var.listofallowedlocations
}
})
This avoids the need for your configuration to deal with any JSON syntax issues at all, and (subjectively) therefore makes the intent clearer and future maintenance easier.
In any situation where the result is a single valid JSON value, I would always choose to use jsonencode rather than the template language. I'm including the other options below for completeness in case a future reader is trying to include a collection value into a string template that isn't producing JSON.
A second option is to write an expression to tell Terraform a way to convert your list value into a string value in a suitable format. In your case you wanted JSON and so jsonencode again would probably be the most suitable choice:
parameters = <<PARAMETERS
{
"allowedLocations": {
"value": ${jsonencode(var.listofallowedlocations)}
}
}
PARAMETERS
In other non-JSON situations, when the result is a simple list of strings, the join function can be useful to just concatenate all of the strings together with a fixed delimiter.
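For example (a sketch using the question's variable):
locals {
  # yields "West Europe, North Europe, East US"
  locations_csv = join(", ", var.listofallowedlocations)
}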
Any of Terraform's functions that produce a single string as a result is a candidate here. The ones under "String Functions" and "Encoding Functions" are the most likely choices.
Finally, for situations where the mapping from the collection value to the resulting string is something custom that no standard function can handle, you can use the template repetition syntax:
parameters = <<CONFIG
%{ for port in var.ports ~}
listen 127.0.0.1:${port}
%{ endfor ~}
CONFIG
In this case, Terraform will evaluate the body of the repetition construct once for each element in var.ports and concatenate all of the results together in order to produce the result. You can generate all sorts of textual output formats using this approach, though if the template gets particularly complicated it could be better to factor it out into a separate file and use the templatefile function to evaluate it.
| Terraform | 57,218,755 | 12 |