Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have deployed a small Java application to my local Kubernetes cluster. I'm currently testing it by port-forwarding the pod and using Postman to exercise my controllers.</p>
<p>However, when testing I get a read timeout exception. No matter how long I set the timeout, the call waits for the entire duration and then throws the exception.</p>
<p>This is strange because it only happens when the application runs in my Kubernetes cluster, not when I run it locally. The exception is thrown from an HttpClient I use to retrieve data from an external third-party API:</p>
<pre><code> @Client(value = "${rawg.api.url}")
public interface RawgClient {
@Get(value = "/{gameSlug}/${rawg.api.key}", produces = APPLICATION_JSON)
HttpResponse<RawgClientGameInfoResponse> retrieveGameInfo(@PathVariable("gameSlug") String gameSlug);
@Get(value = "${rawg.api.key}&search={searchTerm}",produces = APPLICATION_JSON)
HttpResponse<RawgClientSearchResponse> retrieveGameSearchByName(@PathVariable("searchTerm") String searchTerm);
}
</code></pre>
<p>However, when I check the logs after the exception is thrown, I can see that the information was eventually retrieved by the client:</p>
<pre><code> k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:15.220 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.PropertySourcePropertyResolver - Resolved value [?key=****] for property: rawg.api.key
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.242 [default-nioEventLoopGroup-1-2] ERROR i.m.r.intercept.RecoveryInterceptor - Type [com.agl.client.RawgClient$Intercepted] executed with error: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter io.micronaut.http.client.exceptions.ReadTimeoutException: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.exceptions.ReadTimeoutException.<clinit>(ReadTimeoutException.java:26)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.netty.DefaultHttpClient.lambda$exchangeImpl$45(DefaultHttpClient.java:1380)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onError(ReactorSubscriber.java:64)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:295)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:280)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:419)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.propagateDelay(MonoDelay.java:271)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:286)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$init$0(ReactorInstrumentation.java:62)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.lang.Thread.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.257 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: RawgClient
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.259 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class com.agl.client.RawgClient$Intercepted null Definition: com.agl.client.RawgClient$Intercepted
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.259 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: com.agl.client.RawgClient$Intercepted]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.260 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class com.agl.client.RawgClient$Intercepted null Definition: com.agl.client.RawgClient$Intercepted
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.260 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [RawgClient] for qualifier: @Fallback
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.263 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - No qualifying beans of type [RawgClient] found for qualifier: @Fallback
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.276 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: ExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ContentLengthExceededHandler null Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.JsonExceptionHandler null Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.HttpStatusHandler null Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ConversionErrorHandler null Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.DuplicateRouteHandler null Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.validation.exceptions.ConstraintExceptionHandler null Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.URISyntaxHandler null Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.291 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler, Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler, Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler, Definition: io.micronaut.http.server.exceptions.HttpStatusHandler, Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler, Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler, Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler, Definition: io.micronaut.http.server.exceptions.URISyntaxHandler, Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ContentLengthExceededHandler null Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.JsonExceptionHandler null Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.HttpStatusHandler null Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ConversionErrorHandler null Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.DuplicateRouteHandler null Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.validation.exceptions.ConstraintExceptionHandler null Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.URISyntaxHandler null Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.294 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [ExceptionHandler] for qualifier: <ReadTimeoutException,Object>
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.http.exceptions.ContentLengthExceededException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class com.fasterxml.jackson.core.JsonProcessingException,class java.lang.Object] of candidate Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.core.bind.exceptions.UnsatisfiedArgumentException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.http.exceptions.HttpStatusException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.core.convert.exceptions.ConversionErrorException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.web.router.exceptions.DuplicateRouteException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class javax.validation.ConstraintViolationException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class java.net.URISyntaxException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.308 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.web.router.exceptions.UnsatisfiedRouteException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.309 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - No qualifying beans of type [ExceptionHandler] found for qualifier: <ReadTimeoutException,Object>
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.311 [default-nioEventLoopGroup-1-2] ERROR i.m.http.server.RouteExecutor - Unexpected error occurred: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter io.micronaut.http.client.exceptions.ReadTimeoutException: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.exceptions.ReadTimeoutException.<clinit>(ReadTimeoutException.java:26)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.netty.DefaultHttpClient.lambda$exchangeImpl$45(DefaultHttpClient.java:1380)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onError(ReactorSubscriber.java:64)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:295)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:280)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:419)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.propagateDelay(MonoDelay.java:271)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:286)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$init$0(ReactorInstrumentation.java:62)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.lang.Thread.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.323 [default-nioEventLoopGroup-1-2] TRACE i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [Internal Server Error] using codec: io.micronaut.json.codec.JsonMediaTypeCodec@6399551e
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.325 [default-nioEventLoopGroup-1-2] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: T
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.368 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.http.hateoas.JsonError,
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.372 [default-nioEventLoopGroup-1-2] DEBUG i.m.j.m.BeanIntrospectionModule - Updating 5 properties with BeanIntrospection data for type: class io.micronaut.http.hateoas.JsonError
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.394 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.http.hateoas.DefaultLink,
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.397 [default-nioEventLoopGroup-1-2] DEBUG i.m.j.m.BeanIntrospectionModule - Updating 8 properties with BeanIntrospection data for type: class io.micronaut.http.hateoas.DefaultLink
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.414 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Response 500 - PUT /api/games/search
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.418 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.e.ApplicationEventPublisher - Publishing event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.418 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.runtime.http.scope.RequestCustomScope@4f5af8bf] for event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.419 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.runtime.http.scope.RequestCustomScope@4f5af8bf] for event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.451 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.client.netty.DefaultHttpClient - Sending HTTP GET to https://rawg.io/api/games/?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - Accept: application/json
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - host: rawg.io
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - connection: close
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.747 [default-nioEventLoopGroup-1-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xfdc82b86, L:/10.1.0.167:44770 - R:rawg.io/172.67.75.230:443] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.826 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.client.netty.DefaultHttpClient - Received response 301 from https://rawg.io/api/games/?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] DEBUG i.m.h.client.netty.DefaultHttpClient - Sending HTTP GET to https://rawg.io/api/games?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - Accept: application/json
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - host: rawg.io
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - connection: close
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.955 [default-nioEventLoopGroup-1-3] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x7b660180, L:/10.1.0.167:44776 - R:rawg.io/172.67.75.230:443] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:47.302 [default-nioEventLoopGroup-1-3] DEBUG i.m.h.client.netty.DefaultHttpClient - Received response 200 from https://rawg.io/api/games?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:47.302 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - HTTP Client Response Received (200 OK) for Request: GET https://rawg.io/api/games?key=****&search=Horizon
</code></pre>
<p>Does anyone have any thoughts on what could be causing this read timeout when the client is clearly receiving a response?</p>
| Aaron Sumner | <p>Some k8s deployments allow pods only 1 CPU. In that circumstance it is not possible for the client and server to share an event loop (sharing is the default, since it is more efficient when possible), and the result is read timeouts. You should therefore configure a separate event loop for the client and the server to avoid this problem.</p>
<p>See <a href="https://docs.micronaut.io/latest/guide/#clientConfiguration" rel="nofollow noreferrer">https://docs.micronaut.io/latest/guide/#clientConfiguration</a> and the section "Configuring Event Loop Groups"</p>
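<p>For illustration, a minimal sketch of such a configuration based on the linked documentation section — the group name and thread count are arbitrary choices, not values from the question:</p>
<pre><code>micronaut:
  netty:
    event-loops:
      other:
        num-threads: 10
  http:
    client:
      event-loop-group: other
</code></pre>
<p>With this in place, the HTTP client runs on its own event loop group instead of sharing the server's.</p>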
| Graeme Rocher |
<p>I am creating a namespace using <code>kubectl</code> with <code>yaml</code>. The following is my <code>yaml</code> configuration:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: "slackishapp"
labels:
name: "slackishapp"
</code></pre>
<p>But when I run <code>kubectl create -f ./slackish-namespace-manifest.yaml</code>, I get the following error:</p>
<pre><code>error: SchemaError(io.k8s.api.autoscaling.v2beta2.PodsMetricStatus): invalid object doesn't have additional properties.
</code></pre>
<p>What is wrong with my <code>yaml</code>? I have been reading the <a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace" rel="noreferrer">documentation</a> as well, and I don't see any difference from my configuration.</p>
| Set Kyar Wa Lar | <p>There is nothing wrong with your yaml, but I suspect you have the wrong version of kubectl.</p>
<p>kubectl needs to be within one minor version of the cluster you are using, as described <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin" rel="noreferrer">here</a>.</p>
<p>You can check your versions with</p>
<pre><code>kubectl version
</code></pre>
| Andreas Wederbrand |
<p>I am installing CoreDNS using the command below (the OS is CentOS 7.6 and the Kubernetes version is v1.15.2):</p>
<pre><code>kubectl create -f coredns.yaml
</code></pre>
<p>The output is:</p>
<pre><code>[root@ops001 coredns]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
service/kube-dns created
Error from server (BadRequest): error when creating "coredns.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
</code></pre>
<p>This is my coredns.yaml:</p>
<pre><code># __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: coredns
image: gcr.azk8s.cn/google-containers/coredns:1.3.1
imagePullPolicy: IfNotPresent
resources:
limits:
memory: __PILLAR__DNS__MEMORY__LIMIT__
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.254.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
</code></pre>
<p>Am I missing something?</p>
| Dolphin | <p>From this error message</p>
<pre><code>Error from server (BadRequest):
error when creating "coredns.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec:
v1.DeploymentSpec.Template: v
1.PodTemplateSpec.Spec:
v1.PodSpec.Containers: []v1.Container:
v1.Container.Resources:
v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
</code></pre>
<p>This part is the root cause:</p>
<pre><code>unmarshalerDecoder:
quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
</code></pre>
<p>Which <code>quantities</code> are involved?
It seems to be:</p>
<pre><code>v1.ResourceRequirements.Requests: Limits:
</code></pre>
<p>So please change the memory limit from <code>__PILLAR__DNS__MEMORY__LIMIT__</code> to a valid quantity.</p>
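<p>For example, a minimal sketch of the fixed <code>resources</code> block — the <code>170Mi</code> value is only an illustrative choice, not a recommendation:</p>
<pre><code>resources:
  limits:
    memory: 170Mi
  requests:
    cpu: 100m
    memory: 70Mi
</code></pre>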
| ruseel |
<p>I have followed the AWS getting started guide to provision an EKS cluster (3 public subnets and 3 private subnets). After creating it, I get the following API server endpoint <a href="https://XXXXXXXXXXXXXXXXXXXX.gr7.us-east-2.eks.amazonaws.com" rel="nofollow noreferrer">https://XXXXXXXXXXXXXXXXXXXX.gr7.us-east-2.eks.amazonaws.com</a> (replaced the URL with X's for privacy reasons).</p>
<p>Accessing the URL in the browser I get the expected output from the cluster endpoint.</p>
<p>Question: How do I point my registered domain in Route 53 to my cluster endpoint?</p>
<p>I can't use a <code>cname</code> record because my domain is a root domain and will receive an apex domain error.</p>
<p>I don't have access to a static IP, and I don't believe my EKS cluster has a public IP address I can use directly. This means I can't use an A record (as I would need an IP address).</p>
<p>Can I please get help/instructions as to how I can point my domain straight to my cluster?</p>
<p>Below is my AWS VPC architecture:</p>
<p><a href="https://i.stack.imgur.com/3Oc9g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Oc9g.png" alt="enter image description here" /></a></p>
| thatguyjono | <p>Don't try and assign a pretty name to the API endpoint. Your cluster endpoint is the address that's used to talk to the control plane. When you configure your kubectl tool, the api endpoint is what kubectl talks to.</p>
<p>Once you've got an application running on your EKS cluster, and have a load balancer, or Ingress, or something for incoming connections, that's when you worry about creating pretty names.</p>
<p>And yes, if you're dealing with AWS load balancers, you don't get the option of A records, so you can't use the apex of the domain unless you're hosting DNS in Route 53, in which case you can use "alias" records to point the apex of a domain at a load balancer.</p>
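<p>For reference, a hedged sketch of creating such an alias record with the AWS CLI — the domain, zone IDs, and load balancer DNS name below are placeholders, not values from this question:</p>
<pre><code>aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID_OF_YOUR_DOMAIN \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZONE_ID_OF_THE_LOAD_BALANCER",
          "DNSName": "my-ingress-lb-1234567890.us-east-2.elb.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
</code></pre>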
<p>Kubernetes is a massively complex thing to try to understand and get running. Given that this is the type of question you're asking, it sounds like you don't have the full picture yet. I recommend (1) joining the Kubernetes Slack channel, which will be a much faster way to get help than SO, and (2) taking in Jeff Geerling's excellent Kubernetes 101 course on YouTube.</p>
| Dale C. Anderson |
<p>I followed this tutorial for deploying vault into a minikube cluster: <a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes</a>.</p>
<p>I don't understand, however, how this is reproducible. These seem like a lot of manual steps. Is there a way to easily deploy these pods again if I destroy them? Would I need to script this, or can I somehow get the Consul and Vault pods, output them as YAML, and use that to recreate them?</p>
| Aaron | <p>I found this blog post on hashicorp's site which seems to address configuration once you are up and running: <a href="https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration" rel="nofollow noreferrer">https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration</a>.</p>
<p>There's also this: <a href="https://kubevault.com/docs/v2021.08.02/welcome/" rel="nofollow noreferrer">https://kubevault.com/docs/v2021.08.02/welcome/</a></p>
<p>Setting it up before the API is running seems to require either manual steps or a pretty simple shell script.</p>
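<p>As an illustration only, a minimal hedged sketch of what such a script might look like for a single-key dev setup (the pod name, key counts, and file names are assumptions, not values from the tutorial):</p>
<pre><code>#!/bin/sh
# Initialize Vault with a single unseal key and capture the output as JSON.
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > vault-init.json

# Extract the unseal key and unseal the Vault pod.
UNSEAL_KEY=$(jq -r '.unseal_keys_b64[0]' vault-init.json)
kubectl exec vault-0 -- vault operator unseal "$UNSEAL_KEY"
</code></pre>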
| Aaron |
<p>In this document:</p>
<p><a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/kafka</a></p>
<p>it mentions the following:</p>
<pre><code>Note: the deployed ingress must contain the following block:
tcp:
9094: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-0-external:9094"
9095: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-1-external:9094"
9096: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-2-external:9094"
</code></pre>
<p>What does this mean? What is this configuration? Is this Helm chart configuration or Kubernetes configuration?</p>
| 王子1986 | <p>I resolved this by referring to this guide.</p>
<p><a href="https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/</a></p>
<p>I was missing this step:</p>
<pre><code>kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n ingress-nginx
</code></pre>
<p>ingress-nginx-controller-patch.yaml</p>
<pre><code>spec:
template:
spec:
containers:
- name: controller
ports:
- containerPort: 6379
hostPort: 6379
</code></pre>
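<p>For context, the block quoted in the question is meant to end up in the ingress controller's TCP services ConfigMap, which maps raw TCP ports on the controller to Kubernetes services. A hedged sketch of what that can look like once rendered (the namespace and the <code>my-kafka</code> release name are assumptions):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9094": "default/my-kafka-0-external:9094"
  "9095": "default/my-kafka-1-external:9094"
  "9096": "default/my-kafka-2-external:9094"
</code></pre>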
| 王子1986 |
<p>I have a Kubernetes cluster in which I intend to implement a service in a pod. The service will accept a grpc request, start a long-running process, and return to the caller indicating that the process has started. Investigation suggests that <code>IHostedService</code> (<code>BackgroundService</code>) is the way to go for this.</p>
<p>My question is: will use of <code>BackgroundService</code> behave nicely with various features of asp.net and k8s:</p>
<ul>
<li>Will horizontal scaling understand that a service is getting overloaded and spin up a new instance even though the service will appear to have no pending grpc requests because all the work is background (I appreciate there's probably hooks that can be implemented, I'm wondering what's default behaviour)</li>
<li>Will the notion of <code>await</code>ing (allowing the current work to be swapped out so another piece can run) work okay with background services? (I've only experienced it where one received message hits an <code>await</code> and so allows another message to be processed, but background services are not a messaging context)</li>
<li>I think asp.net will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes</li>
<li>What's the best method to mitigate against overloading the service (if horizontal scaling is not an option)? I can have the grpc call return 'too busy', but I would need to detect that condition (not quite sure if that's CPU bound, memory bound, or just the number of background services)</li>
<li>Should I be considering something other than <code>BackgroundService</code> for this task</li>
</ul>
<p>I'm hoping the answer is that "it all just works" but feel it's better to have that confirmed than to just hope...</p>
| Richard Hunt | <blockquote>
<p>Investigation suggests that IHostedService (BackgroundService) is the way to go for this.</p>
</blockquote>
<p>I <a href="https://blog.stephencleary.com/2021/01/asynchronous-messaging-1-basic-distributed-architecture.html" rel="nofollow noreferrer">strongly recommend</a> using a durable queue with a separate background service. It's not that difficult to split into two images, one running ASP.NET GRPC requests, and the other processing the durable queue (this can be a console app - see the Service Worker template in VS). Note that solutions using <strong>non</strong>-durable queues are not reliable (i.e., work may be lost whenever a pod restarts or is scaled down). This includes in-memory queues, which are commonly suggested as a "solution".</p>
<p>If you do make your own background service in a console app, I recommend applying a <a href="https://blog.stephencleary.com/2020/05/backgroundservice-gotcha-startup.html" rel="nofollow noreferrer">few tweaks</a> (noted on my blog):</p>
<ul>
<li>Wrap <code>ExecuteAsync</code> in <code>Task.Run</code>.</li>
<li>Always have a top-level <code>try</code>/<code>catch</code> in <code>ExecuteAsync</code>.</li>
<li>Call <code>IHostApplicationLifetime.StopApplication</code> when the background service stops for any reason.</li>
</ul>
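<p>To make those tweaks concrete, here is a minimal sketch — this is a paraphrase of the pattern rather than code from the blog, and the class name and queue-polling placeholder are assumptions:</p>
<pre><code>using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public sealed class QueueWorker : BackgroundService
{
    private readonly IHostApplicationLifetime _lifetime;
    private readonly ILogger<QueueWorker> _logger;

    public QueueWorker(IHostApplicationLifetime lifetime, ILogger<QueueWorker> logger)
    {
        _lifetime = lifetime;
        _logger = logger;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken) =>
        // Tweak 1: wrap the body in Task.Run so synchronous start-up work cannot block host start-up.
        Task.Run(async () =>
        {
            try
            {
                while (!stoppingToken.IsCancellationRequested)
                {
                    // Dequeue and process the next message from the durable queue here.
                    await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
                }
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                // Normal shutdown; nothing to do.
            }
            // Tweak 2: top-level catch so unexpected failures are logged instead of silently lost.
            catch (Exception ex)
            {
                _logger.LogCritical(ex, "Queue worker failed");
            }
            finally
            {
                // Tweak 3: stop the host when the background service exits for any reason,
                // so the orchestrator (e.g. Kubernetes) notices and restarts the pod.
                _lifetime.StopApplication();
            }
        }, stoppingToken);
}
</code></pre>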
<blockquote>
<p>Will horizontal scaling understand that a service is getting overloaded and spin up a new instance even though the service will appear to have no pending grpc requests because all the work is background (I appreciate there's probably hooks that can be implemented, I'm wondering what's default behaviour)</p>
</blockquote>
<p>One reason I prefer using two different images is that they can scale on different triggers: GRPC requests for the API and queued messages for the worker. Depending on your queue, using "queued messages" as the trigger may require a custom metric provider. I do prefer using "queued messages" because it's a natural scaling mechanism for the worker image; out-of-the-box solutions like CPU usage don't always work well - in particular for asynchronous processors, which you mention you are using.</p>
<blockquote>
<p>Will the notion of awaiting allowing the current process to be swapped out and another run work okay with background services (I've only experienced it where one message received hits an await so allows another message to be processed, but backround services are not a messaging context)</p>
</blockquote>
<p>Background services can be asynchronous without any problems. In fact, it's not uncommon to grab messages in batches and process them all concurrently.</p>
<blockquote>
<p>I think asp.net will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes</p>
</blockquote>
<p>No. ASP.NET only throttles requests. Background services do register with ASP.NET, but that is <em>only</em> to provide a best-effort at graceful shutdown. ASP.NET has no idea how busy the background services are, in terms of pending queue items, CPU usage, or outgoing requests.</p>
<blockquote>
<p>What's the best method to mitigate against overloading the service (if horizontal scaling is not an option) - I can have the grpc call reutrn 'too busy' but would need to detect it (not quite sure if that's cpu bound, memory or just number of background services)</p>
</blockquote>
<p>Not a problem if you use the durable queue + independent worker image solution. GRPC calls can pretty much always stick another message in the queue (very simple and fast), and K8 can autoscale based on your (possibly custom) metric of "outstanding queue messages".</p>
| Stephen Cleary |
<p>My cluster sometimes gets a "burst" of information and generates a large number of Kubernetes Jobs at once; at other times I have ~0 active jobs.</p>
<p>I'm wondering how I can make it autoscale the number of nodes so that it can continuously process all these jobs in a reasonable time frame.</p>
<p>I specifically use AWS EKS and each job takes a few minutes to complete.</p>
| user972014 | <p>EKS allows you to deploy the <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">cluster autoscaler</a>, so when a new Job cannot be scheduled due to a lack of available CPU/memory, an extra node will be added to the cluster.</p>
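<p>Note that the autoscaler only reacts to pods that are unschedulable, so each Job's pods should declare resource requests. A minimal sketch (the image name and request sizes are illustrative assumptions):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: burst-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: my-worker:latest
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
</code></pre>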
| Ilia Kondrashov |
<p>I am trying to get Service <code>label selectors</code> through <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a>. I am using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_service_for_all_namespaces" rel="nofollow noreferrer">list_service_for_all_namespaces</a> method to retrieve the services, and filter it with <code>field_selector</code> parameter like:</p>
<pre><code>...
field_selector="spec.selector={u'app': 'redis'}
...
services = v1.list_service_for_all_namespaces(field_selector=field_selector, watch=False)
for service in services.items:
print(service)
...
</code></pre>
<p>I get this error:</p>
<pre><code>HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"\"spec.selector\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"","reason":"BadRequest","code":400}
</code></pre>
<p>So, it seems that only <code>name</code> and <code>namespace</code> are valid parameters, which is not documented:</p>
<blockquote>
<p>field_selector = 'field_selector_example' # str | A selector to restrict the list of returned objects by their fields. Defaults to everything. (optional)</p>
</blockquote>
<p>For now my workaround is to set the same <em>labels</em> as the <em>label selectors</em> on the service, then retrieve it through the <code>label_selector</code> parameter, but I'd like to be able to get it through the <code>label selectors</code>.</p>
<p>The thing is, from the beginning I have needed to get the endpoints behind the service (the backend pods). The API call does not return this information, so I thought I would get the selectors and match them against the labels on the pods, but now I am realizing the selectors cannot be retrieved either.</p>
<p>This is too limiting. I am thinking maybe my approach is wrong. Does anyone know a way of getting the <code>label selectors</code> from a service?</p>
| suren | <p>You should be able to get the selector from a service object, and then use that to find all the pods that match the selector.</p>
<p>For example (I am hoping I don't have typos, and my python is rusty):</p>
<pre><code>services = v1.list_service_for_all_namespaces(watch=False)
for svc in services.items:
if svc.spec.selector:
# convert the selector dictionary into a string selector
# for example: {"app":"redis"} => "app=redis"
selector = ''
for k,v in svc.spec.selector.items():
selector += k + '=' + v + ','
selector = selector[:-1]
# Get the pods that match the selector
pods = v1.list_pod_for_all_namespaces(label_selector=selector)
for pod in pods.items:
print(pod.metadata.name)
</code></pre>
| AlexBrand |
<p>I have the following ingress.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$2
labels:
app: ingress
spec:
rules:
- host:
http:
paths:
- path: /apistarter(/|$)(.*)
backend:
serviceName: svc-aspnetapistarter
servicePort: 5000
- path: //apistarter(/|$)(.*)
backend:
serviceName: svc-aspnetapistarter
servicePort: 5000
</code></pre>
<p>After deploying my ASP.NET Core 2.2 API application and navigating to <code>http://localhost/apistarter/</code>, the browser debugger console shows errors loading the static content and JavaScript files. In addition, navigating to <code>http://localhost/apistarter/swagger/index.html</code> results in</p>
<pre><code>Fetch error Not Found /swagger/v2/swagger.json
</code></pre>
<p>I am using the SAME ingress for multiple micro-services with different path prefixes. It is running on my local Kubernetes cluster using microk8s, not on any cloud provider yet. I have checked out <a href="https://stackoverflow.com/questions/52404475/how-to-configure-an-asp-net-core-multi-microservice-application-and-azure-aks-in">How to configure an ASP.NET Core multi microservice application and Azure AKS ingress routes so that it doesn't break resources in the wwwroot folder</a> and <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1</a>, but neither of these helps.</p>
| Kok How Teh | <p>Follow these steps to run your code:</p>
<ol>
<li><strong>ingress</strong>: remove URL-rewriting from <em>ingress.yml</em></li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
labels:
app: ingress
spec:
rules:
- host:
http:
paths:
- path: /apistarter # <---
backend:
serviceName: svc-aspnetapistarter
servicePort: 5000
</code></pre>
<ol start="2">
<li><strong>deployment</strong>: pass environment variable with <em>path base</em> in <em>ingress.yml</em></li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
# ..
spec:
# ..
template:
# ..
spec:
# ..
containers:
- name: test01
image: test.io/test:dev
# ...
env:
# define custom Path Base (it should be the same as 'path' in Ingress-service)
- name: API_PATH_BASE # <---
value: "apistarter"
</code></pre>
<ol start="3">
<li><strong>program</strong>: enable loading environment params in <em>Program.cs</em></li>
</ol>
<pre class="lang-cs prettyprint-override"><code>var builder = new WebHostBuilder()
.UseContentRoot(Directory.GetCurrentDirectory())
// ..
.ConfigureAppConfiguration((hostingContext, config) =>
{
// ..
config.AddEnvironmentVariables(); // <---
// ..
})
// ..
</code></pre>
<ol start="4">
<li><strong>startup</strong>: apply <em>UsePathBaseMiddleware</em> in <em>Startup.cs</em></li>
</ol>
<pre class="lang-cs prettyprint-override"><code>public class Startup
{
public Startup(IConfiguration configuration)
{
_configuration = configuration;
}
private readonly IConfiguration _configuration;
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
var pathBase = _configuration["API_PATH_BASE"]; // <---
if (!string.IsNullOrWhiteSpace(pathBase))
{
app.UsePathBase($"/{pathBase.TrimStart('/')}");
}
app.UseStaticFiles(); // <-- StaticFilesMiddleware must follow UsePathBaseMiddleware
// ..
app.UseMvc();
}
// ..
}
</code></pre>
| vladimir |
<p>I want to know if there is versioning for Ingress config similar to what we have in Deployments, so that if there is a misconfiguration I can revert to the previous config.
I would also like to understand the <code>generation</code> field in the Ingress YAML config.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/service-match: 'new-nginx: header("foo", /^bar$/)' #Canary release rule. In this example, the request header is used.
nginx.ingress.kubernetes.io/service-weight: 'new-nginx: 50,old-nginx: 50' #The route weight.
creationTimestamp: null
generation: 1
name: nginx-ingress
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
spec:
rules: ##The Ingress rule.
- host: foo.bar.com
http:
paths:
- backend:
serviceName: new-nginx
servicePort: 80
path: /
- backend:
serviceName: old-nginx
servicePort: 80
path: /
</code></pre>
| Dinesh Kumar | <p>Kubernetes does not offer this natively, and neither does a management tool like Rancher.</p>
<p>If you want to do this, you need an infra-as-code tool, like Terraform, Ansible, etc. The config files for these can be versioned in a repo.</p>
<p>Even without those, you can independently export a given ingress YAML and commit it to a repo.</p>
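<p>A minimal sketch of that manual approach, using the <code>nginx-ingress</code> name from the question (the file name and commit message are just examples):</p>
<pre><code># snapshot the current ingress definition
kubectl get ingress nginx-ingress -o yaml > nginx-ingress.yaml
git add nginx-ingress.yaml
git commit -m "snapshot ingress config"

# roll back later by re-applying the committed version
kubectl apply -f nginx-ingress.yaml
</code></pre>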
| New Alexandria |
<p>I use the Python kubernetes-client and want to wait until the job is done:</p>
<pre class="lang-py prettyprint-override"><code>api_instance.create_namespaced_job("default", body, pretty=True)
</code></pre>
<p>This call just submits the job; it returns the response even though the job is still running. How can I wait for the job to finish?</p>
| Võ Trường Duy | <p>I found the solution. You can recognize the job is complete by watching the jobs and observing the events:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config, watch
config.load_kube_config()
api_client = client.BatchV1Api()
print("INFO: Waiting for event to come up...")
w = watch.Watch()
for event in w.stream(api_client.list_job_for_all_namespaces):
    o = event['object']
    print(o)
    # a Job is finished once its status has a condition of type "Complete" (or "Failed")
    if o.status.conditions and any(
        c.type == "Complete" and c.status == "True" for c in o.status.conditions
    ):
        print("Job %s completed" % o.metadata.name)
        w.stop()
</code></pre>
| smrt28 |
<p>I am trying to get my hands dirty with Kubernetes. I am running the following command:</p>
<pre><code>kubectl get deployment
</code></pre>
<p>and I get the following headers in the output:</p>
<p><a href="https://i.stack.imgur.com/mX2TR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mX2TR.png" alt="kubectll get deployment"></a></p>
<p>I can't figure out the difference between the <code>current</code> and <code>available</code> columns in the output above.</p>
<p>I know that the official documentation gives a small description of each of these fields, but it doesn't answer the following questions:</p>
<ol>
<li>Is <code>Current</code> <= <code>Desired</code> true?</li>
<li>Is <code>Up-to-Date</code> <= <code>Current</code> true?</li>
<li>Is <code>Up-to-Date</code> > <code>Current</code> and <code>Up-to-Date</code> <= <code>Desired</code> true?</li>
<li>Is <code>Available</code> always <= <code>Current</code> OR it can be > <code>Available</code>?</li>
</ol>
<p>In short, what is the relation between all these fields?</p>
| Mangu Singh Rajpurohit | <p>The Deployment object specifies the desired state of your Deployment, and the Deployment Controller drives the current state of the system towards the desired state.</p>
<p>The <code>Desired</code> field specifies the number of replicas you asked for, while the <code>Current</code> field specifies the number of replicas that are currently running in the system. The <code>Up-To-Date</code> field indicates the number of replicas that are up to date with the desired state. The <code>Available</code> field shows the number of replicas that are passing readiness probes (if defined).</p>
<ol>
<li><p>Is <code>Current</code> always <= <code>Desired</code>? No, current can be greater than desired during a deployment update.</p></li>
<li><p>Is <code>Up-to-date</code> always <= <code>Current</code>? I believe the answer here is yes.</p></li>
<li><p>Is <code>Up-to-date</code> > <code>Current</code>? No, up-to-date should be the same as current, or less than current during a deployment update.</p></li>
<li><p>Is <code>Available</code> always <= <code>Current</code>? Yes.</p></li>
</ol>
<p>I encourage you to go through a deployment update and scale out/in while using <code>watch</code> to monitor these fields as the controller converges current state to desired state.</p>
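<p>A quick sketch of that exercise (the deployment name and image are hypothetical):</p>
<pre><code># in one terminal, watch the columns change
kubectl get deployment my-app --watch

# in another terminal, trigger a rolling update and then scale out
kubectl set image deployment/my-app my-app=nginx:1.21
kubectl scale deployment my-app --replicas=5
</code></pre>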
| AlexBrand |
<p>I'm running an OpenShift cluster and am trying to figure out what version of OLM is installed in it. I'm considering an upgrade, but would like more details.</p>
<p>How can I find the version?</p>
| Josiah | <p><strong>From the CLI:</strong></p>
<p>You can change kubectl for oc since you are using OpenShift.</p>
<p>First find the name of an olm-operator pod. I'm assuming Operator Lifecycle Manager is installed in the olm namespace, but it might be "operator-lifecycle-manager".</p>
<pre><code>kubectl get pods -n olm |grep olm-operator
</code></pre>
<p>Then run a command on that pod like this:</p>
<pre><code>kubectl exec -n olm <POD_NAME> -- olm --version
</code></pre>
<p><strong>From the Console:</strong></p>
<p>Navigate to the namespace and find an olm-operator pod. Open the "Terminal" tap and run <code>olm --version</code>.</p>
<p>In either case, the output should be something like this:</p>
<pre><code>OLM version: 0.12.0
git commit: a611449366805935939777d0182a86ba43b26cbd
</code></pre>
| Josiah |
<p>Currently, I'm using Docker Desktop with <strong>WSL2</strong> integration. I found that <strong>Docker Desktop</strong> had automatically created a cluster for me, which means I don't have to install and use <strong>Minikube</strong> or <strong>Kind</strong> to create a cluster.
The problem is: how can I enable an <strong>Ingress Controller</strong> if I use the "built-in" cluster from Docker Desktop?
I tried to create an <strong>Ingress</strong> to check whether this works, but as I guessed, it didn't.</p>
<p>The YAML file I created is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
spec:
minReadySeconds: 30
selector:
matchLabels:
app: webapp
replicas: 1
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: nodejs-helloworld:v1
---
apiVersion: v1
kind: Service
metadata:
name: webapp-service
spec:
selector:
app: webapp
ports:
- name: http
port: 3000
      nodePort: 30090 # NodePort values must be in the 30000-32767 range
type: NodePort #ClusterIP inside cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
spec:
defaultBackend:
service:
name: webapp-service
port:
number: 3000
rules:
- host: ingress.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-service
port:
number: 3000
</code></pre>
<p>I tried to access <em>ingress.local/</em> but it was not successful. (I added ingress.local pointing to 127.0.0.1 in the hosts file, and the <strong>webapp</strong> worked fine at <em>kubernetes.docker.internal:30090</em>.)</p>
<p>Could you please help me to know the root cause?
Thank you.</p>
| tuq | <p>Finally I found the way to fix it: I had to deploy the NGINX ingress controller with this command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>(Following the instructions at <a href="https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac</a>; they work just fine for Docker for Windows.)</p>
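<p>If you want to confirm the controller came up before testing the Ingress, something like this should show the controller pod and its LoadBalancer service (the <code>ingress-nginx</code> namespace below is what that manifest creates, though names may differ between versions):</p>
<pre><code>kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
</code></pre>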
<p>Now I can access <a href="http://ingress.local" rel="noreferrer">http://ingress.local</a> successfully.</p>
| tuq |
<p>I'm new to Kubernetes and I'm trying to understand some security stuff.</p>
<p>My question is about the Group ID (= gid) of the user running the container.</p>
<p>I create a Pod using this official example: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
</code></pre>
<p>In the documentation, they say:</p>
<blockquote>
<p>In the configuration file, the <strong>runAsUser</strong> field specifies that for any
Containers in the Pod, the <strong>first process runs with user ID 1000</strong>. The
<strong>fsGroup</strong> field specifies that <strong>group ID 2000 is associated with all</strong>
<strong>Containers in the Pod</strong>. Group ID 2000 is also associated with the
volume mounted at /data/demo and with any files created in that
volume.</p>
</blockquote>
<p>So, I go into the container:</p>
<pre><code>kubectl exec -it security-context-demo -- sh
</code></pre>
<p>I see that the first process (i.e. with PID 1) is running with user 1000 => OK, that's the behavior I expected.</p>
<pre><code> $ ps -f -p 1
UID PID PPID C STIME TTY TIME CMD
1000 1 0 0 13:06 ? 00:00:00 /bin/sh -c node server.js
</code></pre>
<p>Then, I create a file "testfile" in folder /data/demo. This file belongs to group "2000" because /data/demo has the "s" flag on group permission:</p>
<pre><code>$ ls -ld /data/demo
drwxrwsrwx 3 root 2000 39 Dec 29 13:26 /data/demo
$ echo hello > /data/demo/testfile
$ ls -l /data/demo/testfile
-rw-r--r-- 1 1000 2000 6 Dec 29 13:29 /data/demo/testfile
</code></pre>
<p>Then, I create a subfolder "my-folder" and remove the "s" flag on group permission. I create a file "my-file" in this folder:</p>
<pre><code>$ mkdir /data/demo/my-folder
$ ls -ld /data/demo/my-folder
drwxr-sr-x 2 1000 2000 6 Dec 29 13:26 /data/demo/my-folder
$ chmod g-s /data/demo/my-folder
$ ls -ld /data/demo/my-folder
drwxr-xr-x 2 1000 2000 6 Dec 29 13:26 /data/demo/my-folder
$ touch /data/demo/my-folder/my-file
$ ls -l /data/demo/my-folder/my-file
-rw-r--r-- 1 1000 root 0 Dec 29 13:27 /data/demo/my-folder/my-file
</code></pre>
<p>I'm surprised that this file belongs to group "root", i.e. group with GID 0.
I expected that it should belong to group "2000" according to this sentence in the documentation:</p>
<blockquote>
<p>The fsGroup field specifies that group ID 2000 is associated with all
Containers in the Pod</p>
</blockquote>
<p>With the following commands, I see that user with UID "1000" in the container has primary Unix group "0", not 2000.</p>
<pre><code>$ id
uid=1000 gid=0(root) groups=0(root),2000
$ cat /proc/1/status
...
Pid: 1
...
Uid: 1000 1000 1000 1000
Gid: 0 0 0 0
...
Groups: 2000
...
</code></pre>
<p>Does anyone have some explanations?</p>
<p>Why is not the user's GID set to the value of "fsGroup" field in the Pod's security context?</p>
<p>Why the user's GID is set to 0 = root?</p>
<p>Is it a bug in Kubernetes (I'm using v1.8.0)?</p>
<p>Did I misunderstand the documentation?</p>
<p>Thanks!</p>
| Sylmarch | <p>Unfortunately, setting the primary group ID is currently not supported in Kubernetes, and will default to <code>gid=0</code>.</p>
<p>There is an open issue for implementing this: <a href="https://github.com/kubernetes/features/issues/213" rel="noreferrer">https://github.com/kubernetes/features/issues/213</a></p>
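<p>For reference, the feature tracked in that issue adds a <code>runAsGroup</code> field to the security context. Once it is available, setting the primary GID would look roughly like the sketch below (the value 2000 simply mirrors the <code>fsGroup</code> example above):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 2000   # primary GID of the first process (the proposed field)
    fsGroup: 2000
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
</code></pre>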
| AlexBrand |
<p>I'm trying to verify a CSV I built out for my Kubernetes operator using Operator Framework's operator-sdk. While doing that, I'm running into the following error.</p>
<p>What does this error from <code>operator-courier verify</code> mean? </p>
<pre><code>ERROR: CRD.spec.version does not match CSV.spec.crd.owned.version
</code></pre>
| Josiah | <p>This can occur if you simply have a CRD in your bundle that isn't mentioned in the CSV, but has a <code>spec.version</code> that isn't the same as the CSV.</p>
<p>Otherwise, you have a CSV that is probably something like this:</p>
<pre><code>apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
spec:
  customresourcedefinitions:
owned:
- name: something
version: v1alpha1 <=================
</code></pre>
<p>and a CRD that is something like this</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: something
spec:
version: v1alpha2 <===================
</code></pre>
<p>Those two versions need to align.</p>
<p>You might also see this in your installPlan's <code>status.conditions.message</code> if you get that far:</p>
<pre><code> CustomResourceDefinition.apiextensions.k8s.io
"something.mycompany.com" is invalid: spec.version:
Invalid value: "v1alpha2": must match the first version in spec.versions
</code></pre>
| Josiah |
<p>I was looking into K8s and Amazon Web Services.<br />
I am confused if they are alternative approaches/solutions for the same problem or complementary.<br />
For instance:<br />
In my understanding if we have a classic 3 tier web application we can have it in AWS where we can have an Amazon LB, EC2 and database backend instance.<br />
Or we could put each component in a container and use K8s for orchestration, which will deploy and scale them up/down automatically.</p>
<p>So what exactly is the relationship between AWS and K8s?
Is K8s essentially doing what AWS does, except that all the responsibility for the infrastructure falls on the user?</p>
| Jim | <p>AWS and Kubernetes aren't directly comparable, in my opinion. That would be akin to asking if Ford Motor Company and minivans are comparable.</p>
<p>AWS is a platform that provides you lots of options to run applications, including:</p>
<ol>
<li>a managed VM service called <a href="https://aws.amazon.com/ec2/" rel="nofollow noreferrer">EC2</a> upon which you could deploy and run Kubernetes, and</li>
<li>a managed Kubernetes service called <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">EKS</a>.</li>
</ol>
<p>See <a href="https://aws.amazon.com/kubernetes/" rel="nofollow noreferrer">Kubernetes on AWS</a> for more details of the two main options described above.</p>
| jarmod |
<p>I set up a Kubernetes Cluster on Hetzner following theses steps: <a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner" rel="nofollow noreferrer">https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner</a></p>
<pre><code>Client Version: v1.26.3
Kustomize Version: v4.5.7
Server Version: v1.26.4+k3s1
Mongosh Version: 1.8.1
</code></pre>
<p>I am unable to connect to either my own mongodb server (docker deployment) or a hosted one on <code>mongodb.net</code>:</p>
<pre><code>root@trustsigner-frontend-deployment-59644b6b55-pqgmm:/usr/share/nginx/html# mongosh mongodb+srv://<removed-user>:<removed-password>@cluster0.fdofntt.mongodb.net/test
Current Mongosh Log ID: 6447807561ebcee04b00165d
Connecting to: mongodb+srv://<credentials>@cluster0.fdofntt.mongodb.net/test?appName=mongosh+1.8.1
MongoServerSelectionError: Server selection timed out after 30000 ms
</code></pre>
<p>Same error when using my own one with <code>mongodb://</code> instead of <code>mongodb+srv//</code>.</p>
<p>But surprisingly, it is possible to use the same connection string with MongoDB Compass or the mongosh that is installed on my machine (not in a Kubernetes pod).</p>
<p>Pinging 8.8.8.8 or any other site works and I can fetch via curl, but there is no chance to establish a MongoDB connection...</p>
| cre8 | <p>Experiencing the same issue while using <a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner" rel="nofollow noreferrer">kube-hetzner</a> as you, I looked at the configuration file and found the culprit: by default, outbound traffic is filtered, except for a few popular ports like HTTP or HTTPS.</p>
<p>There are at least 2 solutions:</p>
<ol>
<li><p>you set the variable <code>restrict_outbound_traffic</code> to <em>false</em>:
<a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/kube.tf.example#L395" rel="nofollow noreferrer">https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/kube.tf.example#L395</a></p>
</li>
<li><p>You add a firewall rule to allow outbound traffic on port 27017.</p>
</li>
</ol>
<p>Once done, the connection to a Mongo Atlas cluster is working perfectly!</p>
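<p>For reference, the first option is a one-line change among your <code>kube.tf</code> module inputs (a sketch; all other inputs stay as they are):</p>
<pre><code># kube-hetzner module input - stop filtering outbound traffic
restrict_outbound_traffic = false
</code></pre>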
| Laurent |
<p>I'm trying to install Openshift 3.11 on a one master, one worker node setup.</p>
<p>The installation fails, and I can see in <code>journalctl -r</code>:</p>
<pre><code>2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
</code></pre>
<p>Things I've tried:</p>
<ol>
<li>reboot master node</li>
<li>Ensure that <code>hostname</code> is the same as <code>hostname -f</code> on all nodes</li>
<li>Disable IP forwarding on master node as described on <a href="https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238</a> and <a href="https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux" rel="noreferrer">https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux</a></li>
<li>Applying kube-flannel, on master node as described on <a href="https://stackoverflow.com/a/54779881/265119">https://stackoverflow.com/a/54779881/265119</a></li>
<li><code>unset http_proxy https_proxy</code> on master node as described on <a href="https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637</a></li>
<li>modify <code>/etc/resolv.conf</code> to have <code>nameserver 8.8.8.8</code>, as described on <a href="https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710</a></li>
<li>created a file /etc/cni/net.d/80-openshift-network.conf with content <code>{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }</code>, as described on <a href="https://stackoverflow.com/a/55743756/265119">https://stackoverflow.com/a/55743756/265119</a></li>
</ol>
<p>The last step does appear to have allowed the master node to become ready, however the ansible openshift installer still fails with <code>Control plane pods didn't come up</code>.</p>
<p>For a more detailed description of the problem see <a href="https://github.com/openshift/openshift-ansible/issues/11874" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/11874</a></p>
| Magick | <p>The error was caused by using too recent a version of Ansible.</p>
<p>Downgrading to Ansible 2.6 fixed the problem.</p>
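<p>For example, if Ansible was installed via pip, pinning it back to the 2.6 series looks like this (adjust accordingly if you installed it via yum or apt):</p>
<pre><code>pip uninstall ansible
pip install 'ansible==2.6.*'
</code></pre>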
| Magick |
<p>Sorry, this is my first time working with Redis. I have a redis master deployment and a redis slave deployment (via K8s). The replication from master to slave is working as expected. However, when I kill the master altogether and bring it back up again, the sync wipes out the slave's data as well.</p>
<p>I have tried enabling <code>appendonly</code> on either and both but had no luck.</p>
<p>Question #1: How can I preserve the data in the slave when the master node comes back to life?
Question #2: Is it common practice to sync data back from the slave into the master?</p>
| HelmBurger | <p>Yes, the correct practice would be to promote the slave to master and then slave the restarted node to it to sync the state. If you bring up an empty node that is declared as the master, the slave will faithfully replicate whatever is - or isn't - on it.</p>
<p>You can configure periodic saving to disk, so that you can restart a master node and have it load the state as of the last save to disk. You can also manually cause a save to disk via the SAVE command. See <a href="https://redis.io/docs/management/persistence/" rel="nofollow noreferrer">the persistence chapter</a> in the manual. If you SAVE to disk, then immediately restart the master node, the state as saved to disk will be loaded back up. Any writes that occur between the last SAVE and node shutdown will be lost.</p>
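<p>For illustration, the snapshotting behaviour described above can be driven with <code>redis-cli</code> like this (the save thresholds are only example values):</p>
<pre><code># take a snapshot right now (SAVE blocks the server, BGSAVE forks and runs in the background)
redis-cli SAVE
redis-cli BGSAVE

# configure periodic snapshots at runtime, e.g. after 60s if at least 1000 keys changed
redis-cli CONFIG SET save "900 1 300 10 60 1000"
</code></pre>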
<p>Along these lines, Redis HA is often done with <a href="https://redis.io/docs/management/sentinel/" rel="nofollow noreferrer">Redis Sentinel</a>, which manages auto-promotion and discovery of master nodes within a replicated cluster, so that the cluster can survive and auto-heal from the loss of the current master. This lets slaves replicate from the active master, and on the loss of the master (or a network partition that causes a quorum of sentinels to lose visibility to the master), the Sentinel quorum will elect a new master and coordinate the re-slaving of other nodes to it for ensure uptime. This is an AP system, as Redis replication is eventually consistent, and therefore does have the potential to lose writes which are not replicated to a slave or flushed to disk before node shutdown.</p>
| Chris Heald |
<ol>
<li>I created a self-signed TLS certificate and private key via Terraform. The files are called server.key and server.crt.</li>
<li><p>I create a Kubernetes TLS secret with this certificate and private key using this command:<br>
<em>kubectl create secret tls dpaas-secret -n dpaas-prod --key server.key --cert server.crt</em></p></li>
<li><p>This works fine, nginx ingress SSL termination works, and the following kubectl command: <em>kubectl get secret test-secret -o yaml -n dpaas-prod</em><br>
returns correct output with tls.crt PEM data and tls.key PEM data (see correct output in step 6 below).</p></li>
<li><p>Since we use Terraform, I tried creating the same secret via the Terraform Kubernetes provider with the same server.key and server.crt files. However, this time the command:<br>
<em>kubectl get secret test-secret -o yaml -n dpaas-prod</em><br>
returned weird output for the crt PEM and key PEM (see output in step 5 below) and the SSL termination on my nginx ingress does not work.<br>
This is how I create the Kubernetes secret via Terraform:</p></li>
</ol>
<pre><code> resource "kubernetes_secret" "this" {
metadata {
name = "dpaas-secret"
namespace = "dpaas-prod"
}
data = {
"tls.crt" = "${path.module}/certs/server.crt"
"tls.key" = "${path.module}/certs/server.key"
}
type = "kubernetes.io/tls"
}
</code></pre>
<ol start="5">
<li>Bad output following step 4. Notice the short values of tls.crt and tls.key (secret created via Terraform):</li>
</ol>
<pre><code> apiVersion: v1
data:
tls.crt: bXktdGVzdHMvY2VydHMvc2VydmVyLmNydA==
tls.key: bXktdGVzdHMvY2VydHMvc2VydmVyLmtleQ==
 kind: Secret
metadata:
creationTimestamp: "2019-12-17T16:18:22Z"
name: dpaas-secret
namespace: dpaas-prod
resourceVersion: "9879"
selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
uid: d84db7f0-20e8-11ea-aa92-1269ad9fd693
 type: kubernetes.io/tls
</code></pre>
<ol start="6">
<li>Correct output following step 2 (secret created via kubectl command):</li>
</ol>
<pre><code>apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR4RENDQXF5Z0F3SUJBZ0lRZGM0cmVoOHRQNFZPN3NkTWpzc1lTVEFOQmdrcWhraUc5dzBCQVFzRkFEQkUKTVJBd0RnWURWUVFLRXdkbGVHRnRjR3hsTVJRd0VnWURWUVFMRXd0bGVHRnRjR3hsS
UdSd2N6RWFNQmdHQTFVRQpBd3dSS2k1a2NITXVaWGhoYlhCc1pTNWpiMjB3SGhjTk1Ua3hNakUzTVRZeE5UVXpXaGNOTWpreE1qRTNNRE14Ck5UVXpXakJaTVFzd0NRWURWUVFHRXdKVlV6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTQmtjSE14R0RBV0JnTlYKQkFzV
EQyVjRZVzF3YkdVdVpIQnpMbU52YlRFYU1CZ0dBMVVFQXd3UktpNWtjSE11WlhoaGJYQnNaUzVqYjIwdwpnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDaDU0enBwVTBod1hscm1qVXpOeVl0Ckp5WG9NSFU4WFpXTzhoVG9KZ09YUDU5N
nZFVmJQRXJlQ1VxM1BsZXB5SkRrcHNWbHo1WWc1TWp4NkVGTnlxNVQKOHVLUlVZUXNPVzNhd1VCbzM2Y3RLZEVvci8wa0JLNXJvYTYyR2ZFcHJmNVFwTlhEWnY3T1Y1YU9VVjlaN2FFTwpNNEl0ejJvNWFVYm5mdHVDZVdqKzhlNCtBS1phVTlNOTFCbFROMzFSUUFSR
3RnUzE4MFRzcVlveGV3YXBoS3FRCmUvTm5TeWF6ejUyTU5jeml6WTRpWXlRUU9EbUdEOEtWRGRJbWxJYXFoYXhiVGVTMldWZFJzdmpTa2xVZ0pGMUUKb2VWaWo1KytBd0FBczYwZkI2M1A4eFB1NEJ3cmdGTmhTV2F2ZXdJV1RMUXJPV1I2V2wvWTY1Q3lnNjlCU0xse
gpBZ01CQUFHamdad3dnWmt3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CCkJnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWRJd1FZTUJhQUZPanRBdTJoNDN0WjhkS1YKaHUzc2xVS3VJYTlHTURrR0ExV
WRFUVF5TURDQ0VTb3VaSEJ6TG1WNFlXMXdiR1V1WTI5dGdoc3FMbVJ3Y3k1MQpjeTFsWVhOMExURXVaWGhoYlhCc1pTNWpiMjB3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUlxRVlubHdwQnEyCmNmSnhNUUl0alF4ZTlDK2FDTnZXS1VOZjlwajlhZ3V6YXNxTW9wU
URWTFc1dnZxU21FbHJrVXNBQzJPemZ3K2UKRkxKNUFvOFg3VFcxTHBqbk01Mm1FVjRZYUcvM05hVTg5dWhOb0FHd0ZPbU5TK3ZldU12N3RKQjhsUHpiQ1k3VApKaG9TL2lZVE9jUEZUN1pmNkVycjFtd1ZkWk1jbEZuNnFtVmxwNHZGZk1pNzRFWnRCRXhNaDV3aWU3Q
Wl4Z2tTCmZaVno4QUEzTWNpalNHWFB6YStyeUpJTnpYY0gvM1FRaVdLbzY5SUQrYUlSYTJXUUtxVlhVYmk0bmlZaStDUXcKeTJuaW5TSEVCSDUvOHNSWVZVS1ZjNXBPdVBPcFp0RmdqK1l6d1VsWGxUSytLRTR0R21Ed09teGxvMUNPdGdCUAorLzFXQWdBN1p0QT0KL
S0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb2VlTTZhVk5JY0Y1YTVvMU16Y21MU2NsNkRCMVBGMlZqdklVNkNZRGx6K2ZlcnhGCld6eEszZ2xLdHo1WHFjaVE1S2JGWmMrV0lPVEk4ZWhCVGNxdVUvTGlrVkdFT
ERsdDJzRkFhTituTFNuUktLLzkKSkFTdWE2R3V0aG54S2EzK1VLVFZ3MmIremxlV2psRmZXZTJoRGpPQ0xjOXFPV2xHNTM3YmdubG8vdkh1UGdDbQpXbFBUUGRRWlV6ZDlVVUFFUnJZRXRmTkU3S21LTVhzR3FZU3FrSHZ6WjBzbXM4K2RqRFhNNHMyT0ltTWtFRGc1C
mhnL0NsUTNTSnBTR3FvV3NXMDNrdGxsWFViTDQwcEpWSUNSZFJLSGxZbytmdmdNQUFMT3RId2V0ei9NVDd1QWMKSzRCVFlVbG1yM3NDRmt5MEt6bGtlbHBmMk91UXNvT3ZRVWk1Y3dJREFRQUJBb0lCQUZaNjFqdmUvY293QytrNwozM3JSMUdSOTZyT1JYcTIxMXpNW
mY2MVkwTVl6Ujc1SlhrcVRjL0lSeUlVRW1kS292U3hGSUY5M2VGdHRtU0FOCnpRUCtaUXVXU3dzUUhhZDVyWUlSZzVRQkVzeis3eWZxaVM1NkNhaVlIamhLdHhScVNkTk5tSmpkSlBHV3UyYWQKZEc4V2pOYUhFTnZqVkh3Q0RjdU5hVGJTSHhFOTAwSjhGQTg0c3d2M
lZFUGhSbExXVjJudVpLTko5aGIrY2IzVQpsZ2JrTnVxMkFsd2Y3MkRaTVRXZ21DM3N1Z004eGYwbWFCRWV3UXdETVdBZis2dWV6MEJ5V0hLdThwNHZRREJvCjBqQVYzOGx6UHppTDU3UTZYbFdnYjIxWUh2QmJMSVVKcEFRMGdrcGthaEFNVmJlbHdiSDJVR25wOXcrb
zU3MnIKTmhWMFJXRUNnWUVBeEtFU3FRWEV5dmpwWU40MGxkbWNhTmdIdFd1d3RpUThMMWRSQitEYXBUbWtZbzFlWXJnWgpzNi9HbStRalFtNlU1cG04dHFjTm1nL21kTDJzdk9uV1Y1dnk5eThmZHo3OFBjOFlobXFJRE5XRE9tZG9wUVJsCmxsVUZ6S0NwRmVIVTJRW
URYYjBsNWNvZzYyUVFUaWZIdjFjUGplWlltc2I5elF0cDd6czJZMGtDZ1lFQTBzcFoKTWRRUUxiRkZkWDlYT05FU2xvQlp3Slg5UjFQZVA0T2F4eUt2a01jQXFFQ0Npa05ZU3FvOU55MkZQNVBmQlplQgpWbzYvekhHR0dqVkFPQUhBczA5My8zZUZxSFRleWVCSzhQR
kJWMHh5em9ZZThxYUhBR1JxVnpjS240Zy9LVjhWClpjVGMwTm5aQzB5b09NZkhYUTVnQm1kWnpBVXBFOHlqZzhucGV0c0NnWUVBd0ZxU1ZxYytEUkhUdk4ranNiUmcKUG5DWG1mTHZ2RDlXWVRtYUc0cnNXaFk1cWUrQ0ZqRGpjOVRSQmsvMzdsVWZkVGVRVlY2Mi82L
3VVdVg2eGhRNwppeGtVWnB2Q3ZIVHhiY1hheUNRUFUvN0xrYWIzeC9hMUtvdWlVTHdhclQxdmE1OW1TNTF1SlkzSEJuK3RNOGZXCnNHZ0szMVluOThJVEp6T3pQa1UrdjRFQ2dZQUpRQkFCKzhocCtPbVBhbk10Yng5ZHMydzg0MWdtRlN3ZnBXcloKYWxCQ0RqbWRLS
mVSOGJxaUxDNWJpWWZiYm1YUEhRTDBCWGV0UlI0WmNGVE5JR2FRZCtCUU9iS0gzZmtZNnRyZgpEL2RLR1hUQVUycHdRNWFSRWRjSTFNV0drcmdTM0xWWHJmZnl3bHlmL2xFempMRFhDSloyTVhyalZTYWtVOHFwCk1lY3BHUUtCZ0c0ZjVobDRtL1EvaTNJdGZJbGw4W
DFRazFUSXVDK0JkL1NGM0xVVW5JUytyODA4dzVuaTNCUnEKNXgvQjFnRUhZbTNjSTROajR5ZzEvcE1CejhPMk1PeFhVbVNVZVh6dit1MG5oOFQxUE96eDJHOTNZaVlOL0cvNQpjMlBMSFMvTTlmVjhkTEVXL0hBVFM3K0hsMDFGQlVlREhrODQrVXlha2V2ZFU2djdVZ
2ErCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
creationTimestamp: "2019-12-17T16:16:56Z"
name: dpaas-secret
namespace: dpaas-prod
resourceVersion: "9727"
selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
uid: a5739134-20e8-11ea-a977-0a0ee9b505f9
type: kubernetes.io/tls
</code></pre>
<p>Question: why is the Kubernetes secret's PEM data short (corrupt?) when adding the secret with Terraform?<br>
We use Terraform version 0.12.8 and Kubernetes 1.13.</p>
<p>For reproducing this, here are the file links:</p>
<ul>
<li>server.crt - <a href="https://drive.google.com/open?id=1vFakHWkx9JxyDAEzFE_5fJomEkmQQ7Wt" rel="nofollow noreferrer">https://drive.google.com/open?id=1vFakHWkx9JxyDAEzFE_5fJomEkmQQ7Wt</a> </li>
<li>server.key - <a href="https://drive.google.com/open?id=1wc5Xn-yHWDDY9mFQ2l42k2KqeCObb-L3" rel="nofollow noreferrer">https://drive.google.com/open?id=1wc5Xn-yHWDDY9mFQ2l42k2KqeCObb-L3</a></li>
</ul>
| Assaf | <p>The problem is that you are encoding the <strong>paths</strong> of the certificate files into the secret and not the <strong>contents</strong> of the files.</p>
<p>You can see this is the case if you base64-decode the secret strings in your example:</p>
<pre><code>$ echo -n bXktdGVzdHMvY2VydHMvc2VydmVyLmNydA== | base64 -d
my-tests/certs/server.crt
$ echo -n bXktdGVzdHMvY2VydHMvc2VydmVyLmtleQ== | base64 -d
my-tests/certs/server.key
</code></pre>
<p>So, instead of this:</p>
<pre><code>data = {
"tls.crt" = "${path.module}/certs/server.crt"
"tls.key" = "${path.module}/certs/server.key"
}
</code></pre>
<p>do this:</p>
<pre><code>data = {
"tls.crt" = file("${path.module}/certs/server.crt")
"tls.key" = file("${path.module}/certs/server.key")
}
</code></pre>
| John |
<p>I'm setting up a 2-node cluster in Kubernetes: 1 master node and 1 worker node.
After setting up the master node I installed docker, kubeadm, kubelet and kubectl on the worker node and then ran the join command. On the master node I see 2 nodes in Ready state (master and worker), but when I try to run any kubectl command on the worker node I get the connection refused error below. I do not see any admin.conf and nothing is set in .kube/config. Do these files also need to be on the worker node, and if so how do I get them? How do I resolve the error below? Appreciate your help.</p>
<pre><code>root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

root@kubework:/etc/kubernetes# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#
</code></pre>
| Nitish Goel | <blockquote>
<p>root@kubework:/etc/kubernetes# kubectl get nodes The connection to the
server localhost:8080 was refused - did you specify the right host or
port?</p>
</blockquote>
<p>kubectl is configured and working by default on the master. It requires a kube-apiserver pod and <code>~/.kube/config</code>.</p>
<p>On worker nodes we don't need to run kube-apiserver; what we want is to use the master's configuration to talk to it.
To achieve that, copy the <code>~/.kube/config</code> file from the master to <code>~/.kube/config</code> on the worker. Here <code>~</code> refers to the home directory of the user executing kubectl on the worker and on the master (which may of course be different users).<br />
Once that is done, you can use the <code>kubectl</code> command from the worker node exactly as you do from the master node.</p>
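<p>A minimal sketch of that copy step, run from the worker node (user and hostname are examples):</p>
<pre><code>mkdir -p ~/.kube
scp &lt;user&gt;@&lt;master-host&gt;:~/.kube/config ~/.kube/config

# kubectl on the worker should now reach the API server
kubectl get nodes
</code></pre>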
| davidxxx |
<p>I'm trying to run the <a href="https://github.com/GoogleCloudPlatform/elasticsearch-docker/blob/master/5/README.md" rel="nofollow noreferrer">elasticsearch6</a> container on a Google Cloud instance. Unfortunately the container always ends up in CrashLoopBackOff.
This is what I did:</p>
<h3>install gcloud and kubectl</h3>
<pre><code>curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://packages.cloud.google.com/apt cloud-sdk-$(lsb_release -c -s) main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-sdk kubectl
</code></pre>
<h3>configure gcloud</h3>
<pre><code>gcloud init
gcloud config set compute/zone europe-west3-a # For Frankfurt
</code></pre>
<h3>create kubernetes cluster</h3>
<pre><code>gcloud container clusters create elasticsearch-cluster --machine-type=f1-micro --num-nodes=3
</code></pre>
<h3>Activate pod</h3>
<pre><code>kubectl create -f pod.yml
apiVersion: v1
kind: Pod
metadata:
name: test-elasticsearch
labels:
name: test-elasticsearch
spec:
containers:
- image: launcher.gcr.io/google/elasticsearch6
name: elasticsearch
</code></pre>
<p>After this I get the status:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
test-elasticsearch 0/1 CrashLoopBackOff 10 31m
</code></pre>
<p>A <code>kubectl logs test-elasticsearch</code> does not show any output.</p>
<p>And here the output of <code>kubectl describe po test-elasticsearch</code> with some info XXX out.</p>
<pre><code>Name: test-elasticsearch
Namespace: default
Node: gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv/XX.XXX.X.X
Start Time: Sat, 12 May 2018 14:54:36 +0200
Labels: name=test-elasticsearch
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container elasticsearch
Status: Running
IP: XX.XX.X.X
Containers:
elasticsearch:
Container ID: docker://bb9d093df792df072a762973066d504a4e7d73b0e87d0236a94c3e8b972d9c41
Image: launcher.gcr.io/google/elasticsearch6
Image ID: docker-pullable://launcher.gcr.io/google/elasticsearch6@sha256:1ddafd5293dbec8fb73eabffa29614916e4933bb057db50231084d89f4a0b3fa
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Sat, 12 May 2018 14:55:06 +0200
Finished: Sat, 12 May 2018 14:55:09 +0200
Ready: False
Restart Count: 2
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-XXXXX (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-XXXXX:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-XXXXX
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned test-elasticsearch to gke-elasticsearch-cluste-def
Normal SuccessfulMountVolume 51s kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv MountVolume.SetUp succeeded for volume "default-token-XXXXX"
Normal Pulling 22s (x3 over 49s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv pulling image "launcher.gcr.io/google/elasticsearch6"
Normal Pulled 22s (x3 over 49s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Successfully pulled image "launcher.gcr.io/google/elasticsearch6"
Normal Created 22s (x3 over 48s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Created container
Normal Started 21s (x3 over 48s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Started container
Warning BackOff 4s (x3 over 36s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Back-off restarting failed container
Warning FailedSync 4s (x3 over 36s) kubelet, gke-elasticsearch-cluste-default-pool-XXXXXXXX-wtbv Error syncing pod
</code></pre>
| Fabian | <p>The problem was the f1-micro instance. It doesn't have enough memory to run Elasticsearch. Only after upgrading to an instance with 4 GB of memory does it work. Unfortunately this is way too expensive for me, so I have to look for something else.</p>
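<p>For reference, recreating the cluster with a larger machine type is the same command from the question with a different <code>--machine-type</code> (e2-medium is just one example of a machine with 4 GB of memory):</p>
<pre><code>gcloud container clusters create elasticsearch-cluster --machine-type=e2-medium --num-nodes=3
</code></pre>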
| Fabian |
<p>Using Spring Boot and managing database changes by Liquibase all changes are executed on application start. This is totally fine for fast running changes.</p>
<p>Some changes, e.g. adding a DB index, can run for a while. If the application is running on K8s, it can happen that liveness/readiness checks trigger an application restart. In this case Liquibase causes an endless loop.</p>
<p>Is there a pattern for managing long-running scripts with Liquibase? Any examples?
One approach might be splitting the changes into two groups:</p>
<ul>
<li>Execute before application start.</li>
<li>Or execute while application is already up and running.</li>
</ul>
| lunanigra | <p>You can use an <a href="https://www.liquibase.com/blog/using-liquibase-in-kubernetes" rel="nofollow noreferrer">init container</a> to initialize your database before application startup.</p>
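<p>A rough sketch of that approach is shown below. The image tag, JDBC URL, credentials and changelog location are placeholders, and the changelog would typically be mounted or baked into the image; the application container would then start with Liquibase disabled (e.g. <code>spring.liquibase.enabled=false</code>) so the liveness/readiness probes only ever see the app after the migrations have finished:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: liquibase-migrations        # runs to completion before the app container starts
        image: liquibase/liquibase:4.23
        args:
        - --changelog-file=changelog.xml
        - --url=jdbc:postgresql://db:5432/app
        - --username=app
        - --password=secret
        - update
      containers:
      - name: my-app
        image: my-registry/my-app:latest
</code></pre>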
| bilak |
<p>I installed a one node Kubernetes with <code>kubeadm</code>. This is on a <code>vServer</code> "bare metal" with dedicated external static IP.</p>
<p>With these components/settings:</p>
<ul>
<li>calico (default settings)</li>
<li>MetalLB (Layer 2 config with address range <code>192.168.1.240-192.168.1.250</code>)</li>
<li>Traefik (default settings)</li>
</ul>
<p>Now a traefik service is exposed as an "external IP" but the IP is <code>192.168.1.240</code>. This is external from k8s point of view but how do I expose the ingress service to the internet?</p>
<p>I do not want to set up an additional external load balancer. How can I achieve this?</p>
| Dieshe | <p>MetalLB is not needed at all. When you install Traefik, add this values file (as <code>traefik.yaml</code> in this case):</p>
<pre><code>service:
externalIPs:
- <your_external_static_ip_here_without_the_brackets>
</code></pre>
<p>and then install it like this: <code>helm install --values=./traefik.yaml traefik traefik/traefik -n traefik --create-namespace</code></p>
| Dieshe |
<p>I need to configure a TCP port on my AKS Cluster to allow RabbitMQ to work</p>
<p>I have installed nginx-ingress with helm as follows:</p>
<pre><code>kubectl create namespace ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
</code></pre>
<p>I have setup an A record with our DNS provider to point to the public IP of the ingress controller.</p>
<p>I have created a TLS secret (to enable https)</p>
<p>I have created an ingress route with:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: rabbit-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- my.domain.com
secretName: tls-secret
rules:
- http:
paths:
- backend:
serviceName: rabbitmq-cluster
servicePort: 15672
path: /(.*)
</code></pre>
<p>I can navigate to my cluster via the domain name from outside and see the control panel (internally on 15672) with valid https. So the ingress is up and running, and I can create queues etc... so rabbitmq is working correctly.</p>
<p>However, I can't get the TCP part to work to post to the queues from outside the cluster.</p>
<p>I have edited the YAML of what I believe is the ConfigMap (azure - cluster - configuration - nginx-ingress-ingress-nginx-controller) for the controller (nginx-ingress-ingress-nginx-controller) via the Azure portal interface and added this to the end:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
'5672': 'default/rabbitmq-cluster:5672'
</code></pre>
<p>I have then edited the YAML for the service itself via the Azure portal and added this to the end:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: amqp
protocol: TCP
port: 5672
</code></pre>
<p>However, when I try to hit my domain using a test client the request just times out. (The client worked when I used a LoadBalancer and just hit the external IP of the cluster, so I know the client code should work)</p>
<p>Is there another step that I should be doing?</p>
| Mark McGookin | <p>I believe the issue here was that Helm was managing so much of the configuration itself that my manual customisations weren't taking effect.</p>
<p>I uninstalled the ingress with helm and changed the ingress creation script to this:</p>
<pre class="lang-yaml prettyprint-override"><code>helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set tcp.5672="default/rabbitmq-cluster:5672"
</code></pre>
<p>This pre-configures the TCP port forwarding so I don't have to do anything else. I don't know whether it was the cause, but this seemed to 'break' my SSL implementation, so I upgraded the Ingress creation script from v1beta1 to v1 and HTTPS was working perfectly again.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rabbit-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- my.domain.com
secretName: tls-secret
rules:
- host: my.domain.com
http:
paths:
- path: /(.*)
pathType: Prefix
backend:
service:
name: rabbitmq-cluster
port:
number: 15672
</code></pre>
| Mark McGookin |
<p>I have set up an AWS kops cluster for Kubernetes. I have multiple microservices where each application needs to interact with the others.</p>
<p><strong>Scenario: My ta2carbon app tries to invoke a function in the ta1carbon app through the service (DNS) name.</strong></p>
<p><strong>Result: It fails with a timeout error because it tries to hit port 80 (but the configured port is 3000).</strong></p>
<p>my nodejs app console log,
apiUrl: <a href="http://ta1carbon/api/app1/app1Func2" rel="nofollow noreferrer">http://ta1carbon/api/app1/app1Func2</a></p>
<pre><code>{ Error: connect ETIMEDOUT 100.66.7.165:80
at Object._errnoException (util.js:992:11)
at _exceptionWithHostPort (util.js:1014:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1186:14)
code: 'ETIMEDOUT',
errno: 'ETIMEDOUT',
syscall: 'connect',
address: '100.66.7.165',
port: 80 }
</code></pre>
<p>same error logs for curl, when i tried to curl my ta1carbon app inside ta2carbon pod.</p>
<pre><code>root@ta2carbon-5fdcfb97cc-8j4nl:/home/appHome# curl -i http://ta1carbon/api/app1/app1Func2
curl: (7) Failed to connect to ta1carbon port 80: Connection timed out
</code></pre>
<p>But the port defined in my service.yaml is 3000, not 80!
Below are the YAML configurations of the services for both microservices.</p>
<p>ta1carbon service yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ta1carbon
labels:
app: ta1carbon
spec:
ports:
- port: 3000
targetPort: 3000
type: ClusterIP
selector:
app: ta1carbon
</code></pre>
<p>ta2carbon service yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ta2carbon
labels:
app: ta2carbon
spec:
ports:
- port: 3001
targetPort: 3001
type: ClusterIP
selector:
app: ta2carbon
</code></pre>
<p>And below is the describe service details for both ta1carbon and ta2 carbon.</p>
<pre><code>kubectl describe service ta1carbon
Name: ta1carbon
Namespace: default
Labels: app=ta1carbon
Annotations: <none>
Selector: app=ta1carbon
Type: ClusterIP
IP: 100.66.7.165
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 100.96.1.13:3000
Session Affinity: None
Events: <none>
kubectl describe service ta2carbon
Name: ta2carbon
Namespace: default
Labels: app=ta2carbon
Annotations: <none>
Selector: app=ta2carbon
Type: ClusterIP
IP: 100.67.129.126
Port: <unset> 3001/TCP
TargetPort: 3001/TCP
Endpoints: 100.96.1.12:3001
Session Affinity: None
Events: <none>
</code></pre>
<p>So based on what I observe, for the URL <a href="http://ta1carbon/api/app1/app1Func2" rel="nofollow noreferrer">http://ta1carbon/api/app1/app1Func2</a>
the service DNS name <strong>ta1carbon</strong> is being resolved to <strong>100.67.24.69:80</strong>, resulting in a timeout.</p>
<p>However, if I curl <strong>100.67.24.69:3000</strong> from inside the ta2carbon pod I get a <strong>success response</strong>.</p>
<p>Also, if I change my service YAML to <strong>- port: 80</strong> and deploy and test again, I get a <strong>success response</strong>.</p>
<p>I find this behaviour in Kubernetes quite weird; I'm not sure whether I'm making a mistake or it's an environment issue.</p>
<p>My query is -</p>
<p><strong>Why is it resolving the service ta1carbon to 100.67.24.69:80 and timing out, when the port should have been 3000?</strong></p>
<p>Any input on this would be much appreciated. Please let me know what is missing here.</p>
| Shruthi Bhaskar | <p>DNS resolves a domain name to an IP address, not an IP address + port.</p>
<p>There are two potential solutions:</p>
<ol>
<li><p>Modify your application source to issue API requests to <code>http://ta1carbon:3000</code></p></li>
<li><p>Set the <code>port</code> on your <code>ta1carbon</code> service to <code>80</code>.</p></li>
</ol>
<p>I recommend going with option 2. In this scenario, you are taking advantage of the power of Kubernetes services. Kubernetes will expose the service on port 80, but send requests to the pods backing the service on port 3000 (because of the <code>targetPort: 3000</code>).</p>
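<p>With option 2, the <code>ta1carbon</code> service from the question would look like this (only <code>port</code> changes; <code>targetPort</code> stays at 3000 so the pod keeps listening where it does today):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ta1carbon
  labels:
    app: ta1carbon
spec:
  ports:
  - port: 80          # port the DNS name ta1carbon is reached on
    targetPort: 3000  # port the pod actually listens on
  type: ClusterIP
  selector:
    app: ta1carbon
</code></pre>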
| AlexBrand |
<p>I have created a hashicorp vault deployment and configured kubernetes auth. The vault container calls kubernetes api internally from the pod to do k8s authentication, and that call is failing with 500 error code (connection refused). I am using docker for windows kubernetes.</p>
<p>I added the below config to vault for kubernetes auth mechanism.</p>
<p><strong>payload.json</strong></p>
<pre><code>{
"kubernetes_host": "http://kubernetes",
"kubernetes_ca_cert": <k8s service account token>
}
</code></pre>
<pre><code>curl --header "X-Vault-Token: <vault root token>" --request POST --data @payload.json http://127.0.0.1:8200/v1/auth/kubernetes/config
</code></pre>
<p>I got 204 response as expected.</p>
<p>And I created a role for kubernetes auth using which I am trying to login to vault:</p>
<p><strong>payload2.json</strong></p>
<pre><code>{
"role": "tanmoy-role",
"jwt": "<k8s service account token>"
}
</code></pre>
<pre><code>curl --request POST --data @payload2.json http://127.0.0.1:8200/v1/auth/kubernetes/login
</code></pre>
<p>The above curl is giving below response:</p>
<blockquote>
<p>{"errors":["Post <a href="http://kubernetes/apis/authentication.k8s.io/v1/tokenreviews" rel="nofollow noreferrer">http://kubernetes/apis/authentication.k8s.io/v1/tokenreviews</a>: dial tcp 10.96.0.1:80: connect: connection refused"]}</p>
</blockquote>
<p>Below is my kubernetes service up and running properly and I can also access kubernetes dashboard by using proxy.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13d
</code></pre>
<p>I am not able to figure out why 'kubernetes' service is not accessible from inside the container. Any help would be greatly appreciated.</p>
<p><strong>Edit 1.</strong> My vault pod and service are working fine:</p>
<p><strong>service</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vault-elb-int LoadBalancer 10.104.197.76 localhost,192.168.0.10 8200:31650/TCP,8201:31206/TCP 26h
</code></pre>
<p><strong>Pod</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
vault-84c65db6c9-pj6zw 1/1 Running 0 21h
</code></pre>
<p><strong>Edit 2.</strong>
As John suggested, I changed the 'kubernetes_host' in payload.json to '<a href="https://kubernetes" rel="nofollow noreferrer">https://kubernetes</a>'. But now I am getting this error:</p>
<pre><code>{"errors":["Post https://kubernetes/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority"]}
</code></pre>
| Tanmoy Banerjee | <p>Your login request is being sent to the <code>tokenreview</code> endpoint on port 80. I think this is because your <code>kubernetes_host</code> specifies a <code>http</code> URL. The 500 response is because it's not listening on port 80, but on 443 instead (as you can see in your service list output).</p>
<p>Try changing to <code>https</code> when configuring the auth, i.e. </p>
<pre><code>payload.json
{
"kubernetes_host": "https://kubernetes",
"kubernetes_ca_cert": <k8s service account token>
}
</code></pre>
| John |
<p>I use the "<a href="https://hub.docker.com/r/chentex/random-logger/" rel="nofollow noreferrer">chentex/random-logger</a>" image, which writes its logs to stdout/stderr in the container. I want to make a deployment YAML which runs chentex's image and puts its logs in a file inside a shared volume. Can I do this without modifying the image?</p>
<p>This is the simple deployment of the image:</p>
<pre><code>apiVersion: v1
kind: Deployment
metadata:
name: random-logger
spec:
replicas: 1
template:
metadata:
labels:
app: random-logger
spec:
containers:
- name: random-logger
image: chentex/random-logger:latest
</code></pre>
| Yagel | <p>It is best practice to send log messages to <code>stdout</code> for applications running in a container. The <code>chentex/random-logger</code> just <a href="https://github.com/chentex/random-logger/blob/master/entrypoint.sh#L10-L16" rel="nofollow noreferrer">follows this approach</a> without any option to configure this, but we can bring up a hack like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: random-logger
spec:
selector:
matchLabels:
app: random-logger
template:
metadata:
labels:
app: random-logger
spec:
containers:
- name: random-logger
image: chentex/random-logger:latest
command: ["sh", "-c", "./entrypoint.sh &> /logfile"]
</code></pre>
<p>When requesting the logs from the running <code>pod</code> there is nothing to see:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl logs random-logger-76c6fd98d5-8d5fm
</code></pre>
<p>The application logs are written to <code>logfile</code> within the container:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec random-logger-76c6fd98d5-8d5fm cat /logfile
2019-02-28T00:23:23+0000 DEBUG first loop completed.
2019-02-28T00:23:25+0000 ERROR something happened in this execution.
2019-02-28T00:23:29+0000 INFO takes the value and converts it to string.
2019-02-28T00:23:31+0000 WARN variable not in use.
2019-02-28T00:23:37+0000 INFO takes the value and converts it to string.
</code></pre>
<p>Although this is possible, it is in general not advised. See the Kubernetes documentation about <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">Logging Architecture</a> for more background information.</p>
| webwurst |
<p>I'm using <a href="http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html" rel="nofollow noreferrer">slapd-mdb</a> and I'm looking for a way to programmatically update "maxsize" parameter using ldapmodify or similar: <a href="http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html" rel="nofollow noreferrer">http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html</a></p>
<p>My main problem is that I have a huge dataset and I need more space.</p>
<p>Any suggestions on how to update OpenLDAP configuration programmatically would be appreciated.</p>
<p>My environment is Kubernetes and I deployed OpenLDAP as a container.</p>
| Michel Gokan Khan | <p>The <a href="http://www.openldap.org/doc/admin24/quickstart.html" rel="nofollow noreferrer">"Quickstart"</a> section of the OpenLDAP documentation includes an mdb sample configuration:</p>
<pre><code>dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
OlcDbMaxSize: 1073741824
olcSuffix: dc=<MY-DOMAIN>,dc=<COM>
olcRootDN: cn=Manager,dc=<MY-DOMAIN>,dc=<COM>
olcRootPW: secret
olcDbDirectory: /usr/local/var/openldap-data
olcDbIndex: objectClass eq
</code></pre>
<ul>
<li><p>Replace the placeholders in <code>olcSuffix</code>, <code>olcRootDN</code> and <code>olcRootPW</code> with your values, change the <code>OlcDbMaxSize</code> value to suit your requirement.</p></li>
<li><p>Import your configration database:</p></li>
</ul>
<pre><code>su root -c /usr/local/sbin/slapadd -n 0 -F /usr/local/etc/slapd.d -l /usr/local/etc/openldap/slapd.ldif
</code></pre>
<ul>
<li>Start SLAPD:</li>
</ul>
<pre><code>su root -c /usr/local/libexec/slapd -F /usr/local/etc/slapd.d
</code></pre>
| Richard Neish |
<p>I have used the nginx-ingress controller as a sub-chart and I want to override controller.service.nodePorts.http in the subchart. I tried a few things and nothing seems to work. Here is what I've tried:</p>
<ul>
<li>using --set controller.service.nodePorts.http=32080 during helm install command</li>
<li>declaring this path in my chart's value.yaml</li>
</ul>
<p>I've also gone over the Helm documentation for overriding sub-chart values, but nothing seems to work.</p>
<p>Any pointers on what I may be missing? Thanks in advance...</p>
| Gaurav Sharma | <p>When overriding values of a sub-chart, you need to nest those configurations under the name of the subchart. For example in values.yaml:</p>
<pre><code>mysubchart:
x: y
</code></pre>
<p>In your case, if you imported the nginx controller chart as <code>nginx-controller</code>, you could add this to the main chart:</p>
<pre><code>nginx-controller:
controller:
service:
nodePorts:
http: "32080"
</code></pre>
<p>This topic is covered in the helm docs under: <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart" rel="noreferrer">https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart</a></p>
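<p>The same nesting applies on the command line, so the <code>--set</code> flag from the question needs the sub-chart name as a prefix (assuming the sub-chart was imported as <code>nginx-controller</code>):</p>
<pre><code>helm install --set nginx-controller.controller.service.nodePorts.http=32080 ./mychart
</code></pre>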
| itaysk |
<p>I would like to run some <code>kubectl</code> commands to verify the cluster after installing Helm charts. I could not find any documentation around this. In Helm, there's the concept of showing notes as part of <code>NOTES.txt</code>, but it doesn't look like you can run any commands at that stage.</p>
<p>Is this currently impossible to do with Helm?</p>
| nixgadget | <p>You can define a <code>job</code> that is executed at a certain point in the <a href="https://helm.sh/docs/topics/charts_hooks/#hooks-and-the-release-lifecycle" rel="nofollow noreferrer">lifecycle</a> during <code>helm install</code>. The <a href="https://helm.sh/docs/topics/charts_hooks/#the-available-hooks" rel="nofollow noreferrer">list of available hooks</a> also contains a <code>post-install</code> hook you are probably looking for. An <a href="https://helm.sh/docs/topics/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">example</a> can be found in the official documentation.</p>
<p>You basically provide a Kubernetes Job, add necessary helm labels and then also an annotation like this:</p>
<pre class="lang-yaml prettyprint-override"><code> annotations:
"helm.sh/hook": post-install
</code></pre>
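<p>For example, a minimal post-install hook Job might look like the sketch below (the image, command and delete policy are illustrative, and the Job's ServiceAccount needs RBAC permissions for whatever it queries):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install-check"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded   # clean up the Job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-check
        image: bitnami/kubectl:latest              # any image that ships kubectl
        command: ["kubectl", "get", "pods", "-n", "{{ .Release.Namespace }}"]
</code></pre>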
<p>In case you are looking for something running on the client side, maybe you can <a href="https://helm.sh/docs/topics/plugins/" rel="nofollow noreferrer">use or create a Helm plugin</a>. There is a list in the official documentation: <a href="https://helm.sh/docs/community/related/#helm-plugins" rel="nofollow noreferrer">Helm Plugins</a>. You can find some more by filtering GitHub repositories for the topic <a href="https://github.com/topics/helm-plugin" rel="nofollow noreferrer">helm-plugin</a>.</p>
<p>There are ideas for <a href="https://github.com/helm/community/blob/master/hips/archives/helm/helm-v3/005-plugins.md" rel="nofollow noreferrer">future development</a> to support <code>Lua</code> for scripting plugins. But the current format will still be supported.</p>
| webwurst |
<p>We got a couple spring boot applications in k8s that write both application log and tomcat access log to stdout.</p>
<p>When the log throughput is really high (caused either by the number of requests or the amount of application logging), it sometimes happens that log lines get interleaved.</p>
<p>In our case this looks like this:</p>
<pre><code>[04/Aug/2021:13:39:27 +0200] - "GET /some/api/path?listWithIds=22838de1,e38e2021-08-04 13:39:26.774 ERROR 8 --- [ SomeThread-1] a.b.c.foo.bar.FooBarClass : Oh no, some error occured
e7fb,cd089756,1b6248ee HTTP/1.1" 200 (1 ms)
</code></pre>
<p>desired state:</p>
<pre><code>[04/Aug/2021:13:39:27 +0200] - "GET /some/api/path?listWithIds=22838de1,e38ee7fb,cd089756,1b6248ee HTTP/1.1" 200 (1 ms)
2021-08-04 13:39:26.774 ERROR 8 --- [ SomeThread-1] a.b.c.foo.bar.FooBarClass : Oh no, some error occured
</code></pre>
<p>Is there some way to prevent this?
Maybe a Tomcat, Java or Spring Boot setting?
Or a setting at the container level to make sure that each line is buffered correctly?</p>
| FloxD | <p><code>System.out</code> had better be thread-safe, but that doesn't mean it won't interleave text when multiple threads write to it. Writing both application logs and HTTP server logs to the same stream seems like a mistake to me for at least this reason, but others as well.</p>
<p>If you want to aggregate logs together, using a character stream is <em>not</em> the way to do it. Instead, you need to use a logging framework that understands separate log-events which it can write coherently to that aggregate destination.</p>
<p>You may need to write your own <code>AccessLogValve</code> subclass which uses your logging framework instead of writing directly to a stream.</p>
| Christopher Schultz |
<p>With <code>helm inspect [CHART]</code> I can view the content of <code>chart.yaml</code> and <code>values.yaml</code> of a chart. Is there a way to also view the template files of a chart? Preferably through a Helm command.</p>
<p>On a sidenote: this seems like a pretty important feature to me. I would always want to know what the chart exactly does before installing it. Or is this not what <code>helm inspect</code> was intended for? Might the recommended way be to simply check GitHub for details how the chart works?</p>
| Nick Muller | <p><code>helm install yourchart --dry-run --debug</code></p>
<p>This will print to stdout all the rendered templates in the chart (and won't install the chart)</p>
| itaysk |
<p>I am trying to run my Spark job on an Amazon EKS cluster. My Spark job requires some static data (reference data) on each data node/worker/executor, and this reference data is available in S3.</p>
<p>Can somebody kindly help me find a clean and performant solution to mount an S3 bucket on pods?</p>
<p>The S3 API is an option and I am using it for my input records and output results. But the reference data is static, so I don't want to download it on each run/execution of my Spark job. In the first run the job would download the data, and upcoming jobs would check whether the data is already available locally so there is no need to download it again.</p>
| Ajeet | <p>We recently open-sourced a project that aims to automate these steps for you: <a href="https://github.com/IBM/dataset-lifecycle-framework" rel="noreferrer">https://github.com/IBM/dataset-lifecycle-framework</a></p>
<p>Basically you can create a dataset:</p>
<pre><code>apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
name: example-dataset
spec:
local:
type: "COS"
accessKeyID: "iQkv3FABR0eywcEeyJAQ"
secretAccessKey: "MIK3FPER+YQgb2ug26osxP/c8htr/05TVNJYuwmy"
endpoint: "http://192.168.39.245:31772"
bucket: "my-bucket-d4078283-dc35-4f12-a1a3-6f32571b0d62"
region: "" #it can be empty
</code></pre>
<p>And then you will get a PVC you can mount in your pods.</p>
| Yiannis Gkoufas |
<p>In my application, I have a REST server which locally interacts with a database via the command line (it's a long story). Anyway, the database is mounted on a local SSD on the node. I can guarantee that only pods of that type will be scheduled in the node pool, as I have tainted the nodes and added tolerations to my pods.</p>
<p>What I want to know is, how can I prevent kubernetes from scheduling multiple instances of my pod on a single node? I want to avoid this as I want my pod to be able to consume as much CPU as possible, and I also don't want multiple pods to interact via the local ssd.</p>
<p>How do I prevent scheduling of more than one pod of my type onto a node? I was thinking of DaemonSets at first, but down the line I want to set my node pool to autoscale, so that when I have n nodes in my pool and request n+1 replicas, the node pool automatically scales up.</p>
| Andy | <p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><code>Daemonsets</code></a> in combination with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes" rel="nofollow noreferrer"><code>nodeSelector</code> or <code>affinity</code></a>. Alternatively you could configure <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node" rel="nofollow noreferrer"><code>podAntiAffinity</code></a> on your <code>Pod</code>s, for example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: rest-server
spec:
selector:
matchLabels:
app: rest-server
replicas: 3
template:
metadata:
labels:
app: rest-server
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- rest-server
topologyKey: "kubernetes.io/hostname"
containers:
- name: rest-server
image: nginx:1.12-alpine
</code></pre>
| webwurst |
<p>So I'm working on a project that involves managing many postgres instances inside of a k8s cluster. Each instance is managed using a <code>Stateful Set</code> with a <code>Service</code> for network communication. I need to expose each <code>Service</code> to the public internet via DNS on port 5432. </p>
<p>The most natural approach here is to use the k8s <code>Load Balancer</code> resource and something like <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external dns</a> to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html" rel="noreferrer">idle connection timeout</a>. AWS ELBs have a maximum idle timeout limit of 4000 seconds. There are many long running analytical queries/transactions that easily exceed that amount of time, not to mention potentially long-running operations like <code>pg_restore</code>. </p>
<p>So I need some kind of solution that allows me to work around the limitations of Load Balancers. <code>Node IPs</code> are out of the question since I will need port <code>5432</code> exposed for every single postgres instance in the cluster. <code>Ingress</code> also seems less than ideal since it's a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some configmap chicanery, but I'm a little worried about committing to hacks like that for a large project. <code>ExternalName</code> is intriguing but even if I can find better documentation on it I think it may end up having similar limitations as <code>NodeIP</code>. </p>
<p>Any suggestions would be greatly appreciated. </p>
| Lee Hampton | <p>The Kubernetes ingress controller implementation <a href="https://github.com/heptio/contour" rel="nofollow noreferrer">Contour</a> from Heptio can <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#tcp-proxying" rel="nofollow noreferrer">proxy <code>TCP</code> streams</a> when they are encapsulated in <code>TLS</code>. This is required to use the <code>SNI</code> handshake message to direct the connection to the correct backend service.</p>
<p>Contour can handle <code>Ingresses</code>, but additionally introduces a new ingress API, <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md" rel="nofollow noreferrer">IngressRoute</a>, which is implemented via a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow noreferrer"><code>CRD</code></a>. The TLS connection can be <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#tls-passthrough-to-the-backend-service" rel="nofollow noreferrer">terminated at your backend</a> service. An <code>IngressRoute</code> might look like this:</p>
<pre><code>apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
name: postgres
namespace: postgres-one
spec:
virtualhost:
fqdn: postgres-one.example.com
tls:
passthrough: true
tcpproxy:
services:
- name: postgres
port: 5432
routes:
- match: /
services:
- name: dummy
port: 80
</code></pre>
| webwurst |
<p>I'm still getting to grips with PromQL. I wrote this query in an attempt to detect the number of kubernetes pods that existed in the last 24 hours within a given namespace.</p>
<p><strong>My process here was:</strong></p>
<ul>
<li>Get the metric filtered to the relevant name-spaces (any airflow ones).</li>
<li>Get that metric over 24 hours.
<ul>
<li>Each pod will just have lots of duplicates of the same creation time here.</li>
</ul>
</li>
<li>Use <code>increase()</code> to get the range vectors for each pod back into instant vectors. The value will always be 0 as the creation time does not increase.</li>
<li>Now that we have 1 value per pod, use <code>count()</code> to see how many existed in that time frame.</li>
</ul>
<pre><code>count(increase(kube_pod_created{namespace=~".*-airflow"}[1d]))
</code></pre>
<p>Can anyone that knows prometheus well tell me if this logic follows? Since it isn't a normal database/etc I'm having trouble working out how to validate this query. It "looks" like it probably does the right thing when expanded out to a day though.</p>
| John Humphreys | <p>I'd recommend substituting <code>increase()</code> with <code>count_over_time()</code>, since <code>increase</code> may miss short-lived pods with a lifetime shorter than 2x the scrape interval. The following query should return the total number of pods seen during the last 24 hours:</p>
<pre><code>count(count_over_time(kube_pod_created{namespace=~".*airflow"}[24h]))
</code></pre>
| valyala |
<p>I've seen <a href="https://cloudplatform.googleblog.com/2018/05/Kubernetes-best-practices-Resource-requests-and-limits.html" rel="nofollow noreferrer">articles recommending</a> that resource requests/limit should be implemented. However, none I've found that discuss on <em>what</em> numbers to fill in.</p>
<p>For example, consider a container use zero CPU while idle, 80% under normal user requests and 200% CPU when hit by some rare requests:</p>
<ul>
<li>If I put the maximum, 2000m as CPU request then a core would sit idle most of the time</li>
<li>On the other hand, if I request 800m and several pods are hitting their CPU limit at the same time the context switch overhead will kicks in</li>
</ul>
<p>There are also cases like</p>
<ul>
<li>Internal tools that sit idle most of the time, then jump to 200% on active use</li>
<li>Apps that have different peak time. For example, a SaaS that people use during working hours and a chatbot that start getting load after people leave work. It'd be nice if they could share the unused capacity.</li>
</ul>
<p>Ideally <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">vertical pod autoscaler</a> would probably solve these problems automatically, but it is still in alpha today.</p>
| willwill | <p>What I've been doing is to use telegraf to collect resource usage, and use the 95th percentile while the limit is set to 1 CPU and twice the memory request.</p>
<p><img src="https://i.stack.imgur.com/FOFRu.png" alt="Screenshot"></p>
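<p>As an illustration of that rule of thumb (the numbers below are made up, not measured): if the 95th percentile usage of a container is roughly 300m CPU and 400Mi memory, the resources section might be set like this:</p>
<pre><code>resources:
  requests:
    cpu: 300m       # ~95th percentile of observed CPU usage
    memory: 400Mi   # ~95th percentile of observed memory usage
  limits:
    cpu: 1000m      # capped at 1 CPU
    memory: 800Mi   # twice the memory request
</code></pre>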
<p>The problems with this method are:</p>
<ul>
<li>Apps that use multiple cores during startup but stay under one core for the rest of their life will take longer to start. I've observed a 2-minute Spring startup become 5 minutes.</li>
<li>Apps that are rarely used will have fewer resources reserved, and so have to rely on bursting capacity when they get invoked. This could be a problem if they see a surge in popularity.</li>
</ul>
| willwill |
<p>I'm looking for a way to uniquely identify a cluster:</p>
<ul>
<li>Something that can't be moved to another cluster such as a secret</li>
<li>Something that can be accessed by the application (e.g. an environment variable, or stored in an object that can populate env vars)</li>
<li>Something that is unlikely to change over time for a given cluster.</li>
</ul>
<p>What would that something be?</p>
| znat | <p>With OpenShift 4.x, you can find the unique Cluster ID for each cluster in the <code>clusterversion</code> CRD:</p>
<pre><code>$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
</code></pre>
<p>The <code>clusterversion</code> object looks like this:</p>
<pre><code>$ oc get clusterversion version -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
name: version
[..]
spec:
channel: stable-4.4
clusterID: 990f7ab8-109b-4c95-8480-2bd1deec55ff
[..]
</code></pre>
<p>Source: <a href="https://docs.openshift.com/container-platform/4.2/support/gathering-cluster-data.html#support-get-cluster-id_gathering-cluster-data" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.2/support/gathering-cluster-data.html#support-get-cluster-id_gathering-cluster-data</a></p>
| Simon |
<p>I have a Redis pod, and I expect connection requests to this pod from different clusters and from applications not running in the cloud.
Since Redis does not work with the HTTP protocol, accessing it via the Route I have defined below does not work with the connection string "<code>route-redis.local:6379</code>".</p>
<ul>
<li>route.yml</li>
</ul>
<pre><code>apiVersion: v1
kind: Route
metadata:
name: redis
spec:
host: route-redis.local
to:
kind: Service
name: redis
</code></pre>
<ul>
<li><p>service.yml</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: redis
spec:
ports:
- port: 6379
targetPort: 6379
selector:
name: redis
</code></pre>
</li>
</ul>
<p>You may have encountered this situation. In short, is there any way to access to the redis pod via route? If not, how do you solve this problem?</p>
| seyid yagmur | <p>You already discovered that Redis does not work via the HTTP protocol, which is correct as far as I know. Routes work by inspecting the HTTP Host header for each request, which will not work for Redis. This means that you will <strong>not be able to use Routes for non-HTTP workload</strong>.</p>
<p>Typically, such non-HTTP services are exposed via a <code>Service</code> and <code>NodePorts</code>. This means that each Worker Node that is part of your cluster will open this port and will forward the traffic to your application.</p>
<p>You can find more information in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p>
</blockquote>
<p>You can define a NodePort like so (this example is for MySQL, which is also non-HTTP workload):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
name: mysql
spec:
type: NodePort
ports:
- port: 3306
nodePort: 30036
name: http
selector:
name: mysql
</code></pre>
<p>Of course, your administrator may limit the access to these ports, so it may or may not be possible to use these types of services on your OpenShift cluster.</p>
| Simon |
<p>I have an application running inside <code>kubernetes</code> which has a file mounted through <code>configmaps</code>. Now, from inside the application I want to perform some action when this file (from configmap) gets updated (lets say through <code>kubectl update configmaps xyz</code> command).</p>
<p>Lets say I have created a configmap using the following command:</p>
<pre><code>kubectl create configmap myy-config --from-file=config.json
</code></pre>
<p>and I have my Deployment created like this:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
-
image: "xyz"
name: myapp
ports:
-
containerPort: 5566
volumeMounts:
-
mountPath: /myapp/config
name: config
dnsPolicy: ClusterFirstWithHostNet
volumes:
-
name: config
configMap:
name: my-config
</code></pre>
<p>Now, if I do <code>kubectl exec -it <pod> sh</code> I can see the file. If I edit the configmap using <code>kubectl edit configmap my-config</code> and change the content, the application running in my pod doesn't get the file changed notification. I am using GO Lang for the application and it doesn't receive the fsnotify on the file <code>/myapp/config/config.json</code> even though I can see that after the edit, the file has changed.</p>
<p>If I run the same application in my laptop, of course, the code gets the fsnotify and my application updates it configuration. The same code from within the kubernetes with the file coming from configmap, it doesn't work. I have read other SOF questions <a href="https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes">like this</a> and various others, but nothing specifically has solution for the problem I face.</p>
<p>I understand that the file (which comes from the configmap) is a symlink and the actual file is in a folder called <code>..data/config.json</code>. I tried to add that file also, but still not getting the fsnotify signal. Is it possible to get fsnotify signal for files which come from configmap (as well as secrets) within the application? If so, can someone please help me and show how to do it (preferably in GO lang)?</p>
| Amudhan | <p>You might be experience a problem <a href="https://medium.com/@xcoulon/kubernetes-configmap-hot-reload-in-action-with-viper-d413128a1c9a" rel="nofollow noreferrer">like this</a>:</p>
<blockquote>
<p>When a ConfigMap changes, the real path to the config files it contains changed, but this is kinda “hidden” by 2 levels of symlinks: [..]</p>
</blockquote>
<p>So it seems you need to follow the chain of symlinks and watch that. Since your application is written in <code>go</code> you could just use <a href="https://github.com/spf13/viper" rel="nofollow noreferrer"><code>spf13/viper</code></a> since the feature <a href="https://github.com/spf13/viper/commit/e0f7631cf3ac7e7530949c7e154855076b0a4c17" rel="nofollow noreferrer">WatchConfig and Kubernetes</a> was added.</p>
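<p>A minimal sketch of the <code>viper</code> approach (assuming the ConfigMap is mounted at <code>/myapp/config/config.json</code> as in the question) could look like this:</p>
<pre><code>package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
	"github.com/spf13/viper"
)

func main() {
	// Point viper at the mounted ConfigMap file.
	viper.SetConfigFile("/myapp/config/config.json")
	if err := viper.ReadInConfig(); err != nil {
		log.Fatalf("reading config: %v", err)
	}

	// viper follows the ConfigMap symlink dance internally,
	// so this callback fires when the ConfigMap is updated.
	viper.OnConfigChange(func(e fsnotify.Event) {
		log.Printf("config changed: %s", e.Name)
		// re-read values / reconfigure the application here
	})
	viper.WatchConfig()

	select {} // block forever; a real app would run its server here
}
</code></pre>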
<p>Alternatively you can get notified by the Kubernetes API on <a href="https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/#list-list-or-watch-objects-of-kind-configmap-1" rel="nofollow noreferrer">changes of a ConfigMap</a>. This requires configuring some <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-examples" rel="nofollow noreferrer">access rules</a> upfront most probably.</p>
| webwurst |
<p>I want to display pod details in the following format using promql/Prometheus.</p>
<p><a href="https://i.stack.imgur.com/AGPBm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AGPBm.png" alt="Image1" /></a></p>
<p>Furthermore, I want to display CPU and memory utilization of application/component in below format using promql</p>
<p><a href="https://i.stack.imgur.com/SLrzt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SLrzt.png" alt="Image2" /></a></p>
<p>promql query: sum(container_memory_working_set_bytes) by (pod)</p>
<p>I can get the consumed memory by pod using above query.</p>
<p>How do I calculate the percentage of memory used? I am not able to fetch the memory limit of a StatefulSet pod using PromQL.
Could you please suggest any query/API details?</p>
| Peter | <h3>Per-pod CPU usage in percentage (the query doesn't return CPU usage for pods without CPU limits)</h3>
<pre><code>100 * max(
rate(container_cpu_usage_seconds_total[5m])
/ on (container, pod)
kube_pod_container_resource_limits{resource="cpu"}
) by (pod)
</code></pre>
<p>The <code>kube_pod_container_resource_limits</code> metric can be scraped incorrectly if scrape config for <a href="https://github.com/kubernetes/kube-state-metrics/" rel="noreferrer">kube-state-metrics</a> pod is improperly configured. In this case the original <code>pod</code> label for this metric is moved to the <code>exported_pod</code> label because of <code>honor_labels</code> behavior - see <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="noreferrer">these docs</a> for details. In this case <a href="https://docs.victoriametrics.com/MetricsQL.html#label_replace" rel="noreferrer">label_replace</a> function must be used for moving <code>exported_pod</code> label to <code>pod</code> label:</p>
<pre><code>100 * max(
rate(container_cpu_usage_seconds_total[5m])
/ on (container, pod)
label_replace(kube_pod_container_resource_limits{resource="cpu"}, "pod", "$1", "exported_pod", "(.+)")
) by (pod)
</code></pre>
<h3>Per-pod memory usage in percentage (the query doesn't return memory usage for pods without memory limits)</h3>
<pre><code>100 * max(
container_memory_working_set_bytes
/ on (container, pod)
kube_pod_container_resource_limits{resource="memory"}
) by (pod)
</code></pre>
<p>If the <code>kube_pod_container_resource_limits</code> metric is scraped incorrectly as mentioned above, then the <a href="https://docs.victoriametrics.com/MetricsQL.html#label_replace" rel="noreferrer">label_replace</a> function must be used for moving <code>exported_pod</code> label value to <code>pod</code>:</p>
<pre><code>100 * max(
container_memory_working_set_bytes
/ on (container, pod)
label_replace(kube_pod_container_resource_limits{resource="memory"}, "pod", "$1", "exported_pod", "(.+)")
) by (pod)
</code></pre>
| valyala |
<p>In an Openshift environment (Kubernetes v1.18.3+47c0e71)
I am trying to run a very basic container which will contain:</p>
<ul>
<li>Alpine (latest version)</li>
<li>JDK 1.8</li>
<li>Jmeter 5.3</li>
</ul>
<p>I just want it to boot and run in a container, expecting connections to run Jmeter CLI from the command line terminal.</p>
<p>I have gotten this to work perfectly in my local Docker distribution. This is the Dockerfile content:</p>
<pre><code>FROM alpine:latest
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
USER root
ARG TZ="Europe/Amsterdam"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/ \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
WORKDIR ${JMETER_HOME}
</code></pre>
<p>For some reason, when I configure a Pod with a container using that exact configuration, previously uploaded to a private Docker image registry, it does not work.</p>
<p>This is the Deployment configuration (yaml) file (very basic as well):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jmeter
namespace: myNamespace
labels:
app: jmeter
group: myGroup
spec:
selector:
matchLabels:
app: jmeter
replicas: 1
template:
metadata:
labels:
app: jmeter
spec:
containers:
- name: jmeter
image: myprivateregistry.azurecr.io/jmeter:dev
resources:
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 500Mi
imagePullPolicy: Always
restartPolicy: Always
imagePullSecrets:
- name: myregistrysecret
</code></pre>
<p>Unfortunately, I am not getting any logs:</p>
<p><a href="https://i.stack.imgur.com/cGcHx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cGcHx.png" alt="enter image description here" /></a></p>
<p>A screenshot of the Pod events:</p>
<p><a href="https://i.stack.imgur.com/Biu06.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Biu06.png" alt="enter image description here" /></a></p>
<p>Unfortunately, I am also not able to access the terminal of the container:
<a href="https://i.stack.imgur.com/MGFvy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MGFvy.png" alt="enter image description here" /></a></p>
<p>Any idea on:</p>
<ul>
<li>how to get further logs?</li>
<li>what is going on?</li>
</ul>
| ElPiter | <p>On your local machine, you are likely using <code>docker run -it <my_container_image></code> or similar. Using the <code>-it</code> option will run an interactive shell in your container without you specifying a <code>CMD</code> and will keep that shell running as the primary process started in your container. So by using this command, you are basically already specifying a command.</p>
<p>Kubernetes expects that the container image contains a process that is run on start (<code>CMD</code>) and that will run as long as the container is alive (for example a webserver).</p>
<p>In your case, Kubernetes is starting the container, but you are not specifying what should happen when the container image is started. This leads to the container immediately terminating, which is what you can see in the Events above. Because you are using a <code>Deployment</code>, the failing Pod is then restarted again and again.</p>
<p>A possible workaround to this is to run the <code>sleep</code> command in your container on startup by specifying a <code>command</code> in your Pod like so:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: command-demo
labels:
purpose: demonstrate-command
spec:
containers:
- name: command-demo-container
image: alpine
    command: ["/bin/sleep", "infinity"]
restartPolicy: OnFailure
</code></pre>
<p>(<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">Kubernetes documentation</a>)</p>
<p>This will start the Pod and immediately run the <code>/bin/sleep infinity</code> command, leading to the primary process being this <code>sleep</code> process that will never terminate. Your container will now run indefinitely. Now you can use <code>oc rsh &lt;name_of_the_pod&gt;</code> to connect to the container and run anything you would like interactively (for example <code>jmeter</code>).</p>
| Simon |
<p>I would like to run a sequence of Kubernetes jobs one after another. It's okay if they are run on different nodes, but it's important that each one run to completion before the next one starts. Is there anything built into Kubernetes to facilitate this? Other architecture recommendations also welcome!</p>
| Alex Flint | <p>This requirement to add control flow, even if it's a simple sequential flow, is outside the scope of Kubernetes native entities as far as I know.<br>
There are many workflow engine implementations for Kubernetes, most of them are focusing on solving CI/CD but are generic enough for you to use however you want.</p>
<ul>
<li>Argo: <a href="https://applatix.com/open-source/argo/" rel="nofollow noreferrer">https://applatix.com/open-source/argo/</a>
Adds a custom resource definition to Kubernetes for Workflow entities</li>
<li>Brigade: <a href="https://brigade.sh/" rel="nofollow noreferrer">https://brigade.sh/</a>
Takes a more serverless-like approach and is built on JavaScript, which is very flexible</li>
<li>Codefresh: <a href="https://codefresh.io" rel="nofollow noreferrer">https://codefresh.io</a>
Has a unique approach where you can use the SaaS to easily get started without complicated installation and maintenance, and you can point Codefresh at your Kubernetes nodes to run the workflow on. </li>
</ul>
<p>Feel free to Google for "Kubernetes Workflow", and discover the right platform for yourself.</p>
<p>Disclaimer: I work at Codefresh</p>
| itaysk |
<p>I am running different versions of our application in different namespaces and I have set up a prometheus and grafana stack to monitor them. I am using below promql for getting the cpu usage of different pods (as percentage of 1 core) and the value that it returns is matching the values that I get from the <code>kubectl top pods -n namespace</code>:</p>
<pre><code>sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m])) by (pod)*100
</code></pre>
<p>The problem is that I want to get the total CPU usage of all pods in a namespace cluster-wide. I tried different queries, but the values they return do not match the total CPU usage that I get from the above PromQL or <code>kubectl top pods -n namespace</code>.</p>
<p>The promql queries that I tried:</p>
<pre><code>sum (rate (container_cpu_usage_seconds_total{namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m])) by (namespace)
sum (rate (container_cpu_usage_seconds_total{namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m]))
</code></pre>
<p>I am using the <code>Singlestat</code> panel for this, and in the <code>visualization</code> options, under the <code>Value</code> section, I tried different <code>show</code> methods such as Average, Total, and Current, but none returned the correct value.</p>
<p>My question is how I can get the total cpu usage of all the pods in a namespace cluster-wide?</p>
| AVarf | <p>The following PromQL query should return summary per-namespace CPU usage across all the pods in Kuberentes (the CPU usage is expressed in the number of used CPU cores):</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)
</code></pre>
<p>The <code>{container!~""}</code> filter is needed for removing CPU usage metrics for <code>cgroups hierarchy</code>, since these metrics are already included with non-empty <code>container</code> labels.</p>
<p>The query uses the following PromQL functions:</p>
<ul>
<li><a href="https://docs.victoriametrics.com/MetricsQL.html#sum" rel="nofollow noreferrer">sum()</a></li>
<li><a href="https://docs.victoriametrics.com/MetricsQL.html#rate" rel="nofollow noreferrer">rate()</a></li>
</ul>
| valyala |
<p>I am building an application which should execute tasks in separate containers/pods.
This application will be running in a specific namespace, and the new pods must be created in the same namespace as well.</p>
<p>I understand we can do something similar via custom CRDs and Operators, but I found that overly complicated and it requires Go knowledge.</p>
<p>Is there any way this could be achieved without having to learn Operators and Go?</p>
<p>I am OK with using <code>kubectl</code> or the <code>api</code> within my container and want to connect to the cluster and to the same namespace.</p>
| Manish Bansal | <p>Yes, this is certainly possible using a <code>ServiceAccount</code> and then connecting to the API from within the Pod.</p>
<ul>
<li><p>First, create a <code>ServiceAccount</code> in your namespace using</p>
<pre><code>kubectl create serviceaccount my-service-account
</code></pre>
</li>
<li><p>For your newly created <code>ServiceAccount</code>, give it the permissions you want using <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer"><code>Roles</code> and <code>RoleBindings</code></a> (a full sketch follows after this list). The subject would be something like this:</p>
<pre><code>subjects:
- kind: ServiceAccount
name: my-service-account
namespace: my-namespace
</code></pre>
</li>
<li><p>Then, add the <code>ServiceAccount</code> to the Pod from where you want to create other Pods from (see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">documentation</a>). Credentials are automatically mounted inside the Pod using <code>automountServiceAccountToken</code>.</p>
</li>
<li><p>Now from inside the Pod you can either use <code>kubectl</code> or call the API using the credentials inside the Pod. There are libraries for a lot of programming languages to talk to Kubernetes, use those.</p>
</li>
</ul>
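<p>To make the RBAC step above concrete, a minimal sketch of a <code>Role</code> and <code>RoleBinding</code> that allow Pod creation in the namespace could look like this (all names are placeholders):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: my-namespace
rules:
- apiGroups: [""]          # "" is the core API group, where Pods live
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace
</code></pre>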
| Simon |
<p>I need to do configs for Openshift and Kubernetes in one repo. I use Templates in Openshift, but Kubernetes can't understand what a Template is. What can I use that will work in both OpenShift and vanilla Kubernetes?</p>
| Mankasss | <p>As you noted, <code>Templates</code> is an OpenShift-specific resource. I would not recommend using it but instead use something like <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> to do templating for both OpenShift and also for Kubernetes.</p>
<p>Since Kubernetes 1.14, there is the <code>kubectl kustomize</code> command built in, so you do not even need to install another tool for it.</p>
<p>Here is a basic tutorial for Kustomize in the Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/</a></p>
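<p>As a small, hypothetical example, a <code>kustomization.yaml</code> that works the same way on both OpenShift and vanilla Kubernetes might look like this (file names are placeholders):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# plain manifests understood by both OpenShift and Kubernetes
resources:
- deployment.yaml
- service.yaml

# simple "templating": stamp a common label onto every resource
commonLabels:
  app: my-app
</code></pre>
<p>You can then render or apply it with <code>kubectl kustomize ./</code> or <code>kubectl apply -k ./</code> (and the same with <code>oc</code> on recent OpenShift versions).</p>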
| Simon |
<p>I'm using Kubernetes with kube-state-metrics and Prometheus/grafana to graph various metrics of the Kubernetes Cluster.</p>
<p>Now I'd like to Graph how many <strong>new</strong> PODs have been created per Hour over Time.</p>
<p>The Metric <code>kube_pod_created</code> contains the Creation-Timestamp as Value but since there is a Value in each Time-Slot, the following Query also returns Results >0 for Time-Slots where no new PODs have been created:</p>
<pre><code>count(rate(kube_pod_created[1h])) by(namespace)
</code></pre>
<p>Can I use the Value in some sort of criteria to only count if Value is within the "current" Time-Slot ?</p>
| powo | <p>The following query returns the number of pods created during the last hour:</p>
<pre><code>count(last_over_time(kube_pod_created[1h]) > time() - 3600)
</code></pre>
<p>How does it work?</p>
<p>The <code>last_over_time(kube_pod_created[1h])</code> returns creation timestamps for pods, which were active during the last hour (see <a href="https://docs.victoriametrics.com/MetricsQL.html#last_over_time" rel="nofollow noreferrer">last_over_time()</a> docs). This includes pods, which could be started long time ago and are still active alongside pods, which where created during the last hour.</p>
<p>We need to filter out pods which were created more than an hour ago. This is performed by comparing pod creation timestamps to <code>time() - 3600</code> (see <a href="https://docs.victoriametrics.com/MetricsQL.html#time" rel="nofollow noreferrer">time()</a> docs). Such a comparison removes time series for pods created more than an hour ago. See <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#comparison-binary-operators" rel="nofollow noreferrer">these docs</a> for details on how comparison operators work in PromQL.</p>
<p>Then the outer <a href="https://docs.victoriametrics.com/MetricsQL.html#count" rel="nofollow noreferrer">count()</a> returns the number of time series, which equals to the number of pods created during the last hour.</p>
| valyala |
<p>I made an OKD 4.9 cluster.</p>
<p>I successfully installed OpenShift and made a cluster, but now I have a question.</p>
<p>There is a sentence in the OKD documentation:</p>
<p>"The only supported values is 3, which is the default value"</p>
<p>I think there is an important reason not to use 5 or 7 masters.</p>
<p>Could you guys tell me why?</p>
| jonghyun Kim | <blockquote>
<p>I think there is an important reason not to use 5 or 7 masters.</p>
<p>Could you guys tell me why?</p>
</blockquote>
<p>Technically it is still possible to have 5 or 7 Master Nodes, there is nothing in place to stop you. It is just not supported, so if you have a problem with it you'll have to deal with it yourself.</p>
<p>However, you should understand why this recommendation is there. Many Kubernetes Master components (<strong>Controller Manager</strong>, <strong>Scheduler</strong>) are only active on one single Master Node anyway, while the same components on all other Masters remain on standby. The <strong>API Server</strong> is active-active, so the API Server scales nicely.</p>
<p>For <strong>etcd</strong>, Raft is leader-based; the leader handles all client requests which need cluster consensus. Any request that requires consensus sent to a follower is automatically forwarded to the leader. So for write operations or for consistent reads, there is an overhead if you have more than 3 etcd members. If you have more etcd members, this overhead gets larger, but YMMV.</p>
<p>So it comes down to additional <em>overhead</em> with 5 or 7 Master Nodes.</p>
| Simon |
<p>After upgrading <code>kubectl</code>, I am unable to log in to the OpenShift cluster.</p>
<p>When I try to log in with the <code>oc login</code> command, it returns the error message below:</p>
<pre><code>Error: unknown command "login" for "kubectl"
Did you mean this?
logs
plugin
Run 'kubectl --help' for usage.
</code></pre>
<p>Below are some version details from the machine where I have the login issue:</p>
<pre><code># kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p>Below are some version details from a machine where I have no issue; I am able to log in there and didn't upgrade the kubectl version:</p>
<pre><code># kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v0.0.0-master+$Format:%h$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2+f2384e2", GitCommit:"f2384e2", GitTreeState:"clean", BuildDate:"2020-06-16T03:21:27Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Addition: I have checked by downgrading the <code>kubectl</code> version, but I get the same error on the machine where I am unable to log in using <code>oc login</code>.</p>
| Imtiaz Ahmed | <p>It looks like you have symlinked <code>oc</code> to <code>kubectl</code> somehow.</p>
<p>As you note, <code>kubectl</code> does not have a <code>login</code> method, you need to actually use the <code>oc</code> CLI tool to log into your OpenShift cluster. This will get the proper tokens that you need to talk to the OpenShift API.</p>
<p>Alternatively, you can get the necessary token via the OpenShift Web Console (top right, "Copy Login Command" or something like that).</p>
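<p>The copied command is then used roughly like this (the token and API URL below are placeholders):</p>
<pre><code># log in with a token obtained from the OpenShift Web Console
oc login --token=&lt;your-token&gt; --server=https://api.your-cluster.example.com:6443
</code></pre>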
| Simon |
<p>I've set up cert-manager on MicroK8s following <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">these</a> instructions. I had it working 6 months ago but have since had to start again from scratch. Now when I set up my Cluster Issuer I'm getting the error below.</p>
<p>Everything else seems fine and in a good state. I'm struggling to know where to start debugging this.</p>
<pre><code>Error initializing issuer: Get "https://acme-v02.api.letsencrypt.org/directory": remote error: tls: handshake failure
</code></pre>
<p>Cluster Issuer yaml</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: <myemail>
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: prod-issuer-account-key
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p><strong>UPDATE</strong>
Some extra info</p>
<p>All pods for cert manager are running, here are the logs</p>
<p>cert-manager pod <a href="https://pastebin.com/aT5jAVbt" rel="nofollow noreferrer">logs</a>
cert-manager-cainjector logs only show some warnings about deprecated APIs
cert-manager-webhook <a href="https://pastebin.com/9jbFjdtY" rel="nofollow noreferrer">logs</a></p>
<p><a href="https://pastebin.com/V59FdgaK" rel="nofollow noreferrer">Describe ClusterIssuer</a></p>
<p>I've tried to get a cert for an ingress resource but it errors saying the cluster issuer isn't in a ready state</p>
| matt_lethargic | <p>After uninstalling and reinstalling everything including Microk8s I tried again no luck. Then I tried using the latest helm chart v1.0.2 which had a newer cert-manager version, seemed to work straight away.</p>
<p>Another note, mainly to myself. This issue was also caused by having search domains setup in netplan, once removed everything started working.</p>
| matt_lethargic |
<p>I am setting up HPA on custom metrics - basically on no. of threads of a deployment.</p>
<p>I have created a PrometheusRule to get the average number of threads (over 5 minutes). I am putting continuous load on the container to increase the threads, and the average value is going up accordingly.</p>
<p>I started with 2 replicas, and when the current value crosses the target value, I am not seeing my deployment scale out.</p>
<p>As you can see, I have set the target to 44 and the current value has been 51.55 for more than 10 minutes, but there is still no scale-up.
<a href="https://i.stack.imgur.com/Uo6No.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uo6No.png" alt="enter image description here" /></a></p>
<p><strong>Version Info</strong></p>
<ul>
<li>Kubernetes (AKS) : 1.19.11</li>
<li>Prometheus : 2.22.1</li>
<li>Setup done via prometheus-operator (0.7)</li>
<li>Autoscaling api version : autoscaling/v2beta2</li>
</ul>
<p><strong>Prometheus Rule</strong></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: rdp-rest
namespace: default
labels:
app.kubernetes.io/name: node-exporter
app.kubernetes.io/version: 1.0.1
prometheus: k8s
role: alert-rules
run: rdp-rest
app: rdp-rest
spec:
groups:
- name: hpa-rdp-rest
interval: 10s
rules:
- expr: 'avg_over_time(container_threads{container="rdp-rest"}[5m])'
record: hpa_custom_metrics_container_threads_rdp_rest
labels:
service: rdp-rest
</code></pre>
<p><strong>Manifests</strong> - <a href="https://github.com/prometheus-operator/kube-prometheus/tree/release-0.7/manifests" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus/tree/release-0.7/manifests</a></p>
<p><strong>Update (6th July)</strong> - HPA with custom metrics is working fine for other technologies like Node.js, nginx, etc., but not for the Netty API.</p>
<p>Any thoughts?</p>
| Sunil Agarwal | <p>Finally after a week, found the root cause.</p>
<p>The issue was with the labels. I had two deployments with the same label, so internally the HPA was getting stats for all the pods with that label and then scaling up/down based on them. As soon as I corrected the labels, the HPA worked as expected.</p>
<p>Oddly, the same query in the Prometheus UI shows stats for ONLY one type of pod.
It looks like some internal quirk or bug: even though we provide a deployment name, the HPA fetches stats based on the label selector.</p>
<p>Point to remember: always double-check your labels.</p>
| Sunil Agarwal |
<p>I want to monitor disk usages of persistent volumes in the cluster. I am using <a href="https://github.com/coreos/kube-prometheus" rel="noreferrer">CoreOS Kube Prometheus</a>. A dashboard is trying to query with a metric called <strong>kubelet_volume_stats_capacity_bytes</strong> which is not available anymore with Kubernetes versions starting from v1.12.</p>
<p>I am using Kubernetes version v1.13.4 and <a href="https://github.com/MaZderMind/hostpath-provisioner" rel="noreferrer">hostpath-provisioner</a> to provision volumes based on persistent volume claim. I want to access current disk usage metrics for each persistent volume.</p>
<ul>
<li><p><strong>kube_persistentvolumeclaim_resource_requests_storage_bytes</strong> is available but it shows only the persistent claim request in bytes</p></li>
<li><p><strong>container_fs_usage_bytes</strong> does not fully cover my problem.</p></li>
</ul>
| Cemal Unal | <p>Per-PVC disk space usage in percentage can be determined with the following query:</p>
<pre><code>100 * sum(kubelet_volume_stats_used_bytes) by (persistentvolumeclaim)
/
sum(kubelet_volume_stats_capacity_bytes) by (persistentvolumeclaim)
</code></pre>
<p>The <code>kubelet_volume_stats_used_bytes</code> metric shows per-PVC disk space usage in bytes.</p>
<p>The <code>kubelet_volume_stats_capacity_bytes</code> metric shows per-PVC disk size in bytes.</p>
| valyala |
<p>I want to generate a password in a Helm template, this is easy to do using the <code>randAlphaNum</code> function. However the password will be changed when the release is upgraded. Is there a way to check if a password was previously generated and then use the existing value? Something like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: db-details
data:
{{ if .Secrets.db-details.db-password }}
db-password: {{ .Secrets.db-details.db-password | b64enc }}
{{ else }}
db-password: {{ randAlphaNum 20 | b64enc }}
{{ end }}
</code></pre>
| Mikhail Janowski | <p>You can build on <a href="https://stackoverflow.com/users/435563/shaunc">shaunc</a>'s idea to use the <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="noreferrer">lookup</a> function to fix the original poster's code like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: db-details
data:
{{- if .Release.IsInstall }}
db-password: {{ randAlphaNum 20 | b64enc }}
{{ else }}
# `index` function is necessary because the property name contains a dash.
# Otherwise (...).data.db_password would have worked too.
db-password: {{ index (lookup "v1" "Secret" .Release.Namespace "db-details").data "db-password" }}
{{ end }}
</code></pre>
<p>Only creating the <code>Secret</code> when it doesn't yet exist won't work because Helm will delete objects that are no longer defined during the upgrade.</p>
<p>Using an annotation to keep the object around has the disadvantage that it will not be deleted when you delete the release with <code>helm delete ...</code>.</p>
| Jan Dubois |
<p>I see that Kubernetes uses pods, and in each pod there can be multiple containers.</p>
<p>Example I create a pod with</p>
<pre><code>Container 1: Django server - running at port 8000
Container 2: Reactjs server - running at port 3000
</code></pre>
<p>Since the containers inside a pod can't have port conflicts, then it's better to put all of them in one container, because I see the advantage of using containers is that there is no need to worry about port conflicts.</p>
<pre><code>Container 1: BOTH Django server - running at port 8000 and Reactjs server - running at port 3000
</code></pre>
<p>No need of container2.</p>
<p>and also</p>
<p>When I run different Docker containers on my PC, I can't access them via localhost.</p>
<p>But then how is this possible inside a pod with multiple containers?</p>
<p>What's the difference between Docker containers run on a PC and those inside a pod?</p>
| Santhosh | <p>The typical way to think about this delineation is "which parts of my app scale together?"</p>
<p>So for your example, you probably wouldn't even choose a common pod for them. You should have a Django pod and separately, a ReactJS server pod. Thus you can scale these independently.</p>
<p>The typical case for deploying pods with multiple containers is a pattern called "sidecar", where the added container enhances some aspect of the deployed workload, and always scales right along with that workload container. Examples are:</p>
<ol>
<li>Shipping logs to a central log server</li>
<li>Security auditing</li>
<li>Purpose-built Proxies - e.g. handles DB connection details</li>
<li>Service Mesh (intercepts all network traffic and handles routing, circuit breaking, load balancing, etc.)</li>
</ol>
<p>As for deploying the software into the same container, this would only be appropriate if the two pieces being considered for co-deployment into the same container are developed by the same team and address the same concerns (that is - they really are only one piece when you think about it). If you can imagine them being owned/maintained by distinct teams, let those teams ship a clean container image with a contract to use networking ports for interaction.</p>
<p>Some of the details are this:
Pods share a networking and IPC namespace. Thus one container in a pod can modify iptables and the modification applies to all other containers in that pod. This may help guide your choice: which containers should have <em>that</em> intimate a relationship to each other?
Specifically, I am referring to Linux namespaces, a feature of the kernel that allows different processes to share a resource but not "see" each other. Containers are normal Linux processes, but with a few other Linux features in place to stop them from seeing each other. <a href="https://www.youtube.com/watch?v=zGw_xKF47T0&t=369s" rel="nofollow noreferrer">This video</a> is a great intro to these concepts. (The timestamp in the link lands on a succinct slide/moment.)</p>
<p>Edit - I noticed the question edited to be more succinctly about networking. The answer is in the Namespace feature of the Linux kernel that I mentioned. Every process belongs to a Network namespace. Without doing anything special, it would be the default network namespace. Containers usually launch into their own network namespace, depending on the tool you use to launch them. Linux then includes a feature where you can virtually connect two namespaces - this is called a Veth Pair (Pair of Virtual Ethernet devices, connected). After a Veth pair is setup between the default namespace and the container's namespace, both get a new eth device, and can talk to each other. Not all tools will setup that veth pair by default (example: Kubernetes will not do this by default). You can, however, tell Kubernetes to launch your pod in "host" networking mode, which just uses the system's default network namespace so the veth pair is not even required.</p>
| Chris Trahey |
<p>I have a Kubernetes v1.17.0 cluster with multiple nodes. I've created PVC with access mode set to RWO. From the Kubernetes docs:</p>
<blockquote>
<p>ReadWriteOnce -- the volume can be mounted as read-write by a single node</p>
</blockquote>
<p>I'm using a Cinder volume plugin which doesn't support ReadWriteMany.</p>
<p>When I create two different deployments that mount the same PVC, Kubernetes sometimes schedules them on two different nodes, which causes the pods to fail.</p>
<p>Is this desired behaviour or is there a problem in my configuration?</p>
| Lukas | <p>As I gathered from your answers to the comments, you do not want to use affinity rules but want the scheduler to perform this work for you.</p>
<p>It seems that this issue has been known since at least 2016 but has not yet been resolved, as the scheduling is considered to be working as expected: <a href="https://github.com/kubernetes/kubernetes/issues/26567" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/26567</a></p>
<p>You can read the details in the issue, but the core problem seems to be that in the definition of Kubernetes, a <code>ReadWriteOnce</code> volume can never be accessed by two Pods at the same time. By definition. What would need to be implemented is a flag saying "it is OK for this RWO volume to be accessed by two Pods at the same time, even though it is RWO". But this functionality has not been implemented yet.</p>
<p>In practice, you can typically work around this issue by using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate Deployment Strategy</a>: <code>.spec.strategy.type: Recreate</code>. Alternatively, use the affinity rules as described by the other answers.</p>
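<p>For reference, a sketch of the relevant Deployment fragment (everything except the strategy stanza is a placeholder):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # Recreate stops the old Pod before starting the new one,
  # so the RWO volume never has to be mounted on two nodes at once
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image:latest
</code></pre>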
| Simon |
<p>Typically, if I have a remote server, I can access it using ssh, and VS Code provides a beautiful extension for editing and debugging code on the remote server. But when I create pods in Kubernetes, I can't really ssh into the container, so I cannot edit the code inside the pod or machine. The Kubernetes plugin in VS Code does not really help either, because that plugin is used to deploy the code. So I was wondering whether there is a way to edit code inside a pod using VS Code.</p>
<p>P.S. Alternatively, if there is a way to ssh into a pod in Kubernetes, that will do too.</p>
| user3086871 | <p>If your requirement is for <code>kubectl edit xxx</code> to use VS Code:</p>
<p>The solution is:</p>
<ul>
<li><p>For <strong>Linux,macos</strong>: <code>export EDITOR='code --wait'</code></p>
</li>
<li><p>For <strong>Windows</strong>: <code>set EDITOR=code --wait</code></p>
</li>
</ul>
| Hlex |
<p>I have a question regarding the timeout of exec-type container probes in OpenShift/Kubernetes.
My OpenShift version is 3.11, which uses Kubernetes 1.11.</p>
<p>I have defined a readiness probe as stated below</p>
<pre><code>readinessProbe:
exec:
command:
- /bin/sh
- -c
- check_readiness.sh
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 1
</code></pre>
<p>According to the OpenShift documentation, the timeoutSeconds parameter has no effect on exec-type container probes.
check_readiness.sh is a long-running script and may take more than 5 minutes to return.
After the container started, I logged into the container to check the status of the script.
What I found is that after approximately 2 minutes another check_readiness.sh script was started while the first one was still running, and another one after approximately 2 more minutes.</p>
<p>Can someone explain what OpenShift or Kubernetes is doing with the probe in this case?</p>
| dsingh | <p>Yes, that is correct, Container Execution Checks do not support the <code>timeoutSeconds</code> argument. However, <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/application_health.html" rel="nofollow noreferrer">as the documentation notes</a>, you can implement similar functionality with the <code>timeout</code> command:</p>
<pre><code>[...]
readinessProbe:
exec:
command:
- /bin/bash
- '-c'
- timeout 60 /opt/eap/bin/livenessProbe.sh
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
[...]
</code></pre>
<p>So in your case I am guessing the following is happening:</p>
<ul>
<li>Your container is started.</li>
<li>After the duration <code>initialDelaySeconds</code> (30 seconds in your case), the first readiness probe is started and your script is executed.</li>
<li>Then, after <code>periodSeconds</code> (30s) the next probe is launched, in your case leading to the script being executed the second time.</li>
<li>Every 30s, the script is started again, even though the previous iteration(s) are still running.</li>
</ul>
<p>So in your case you should either use the <code>timeout</code> command as seen in the documentation or increase the <code>periodSeconds</code> to make sure the two scripts are not executed simultaneously.</p>
<p>In general, I would recommend that you make sure your readiness-check-script returns much faster than multiple minutes to avoid these kind of problems.</p>
| Simon |
<p>I am trying to deploy a docker image on a kubernetes cluster.</p>
<p>What I want to achieve on the cluster is the same output as I achieve when I run this command locally (The output will be some generated files)</p>
<pre><code>sudo docker run \
--env ACCEPT_EULA="I_ACCEPT_THE_EULA" \
--volume /my-folder:/opt/data \
--volume /my-folder:/opt/g2 \
test/installer:3.0.0
</code></pre>
<p>What I have created for the deployment on the kubernetes side is this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: default
spec:
template:
metadata:
name: test
labels:
app: test
spec:
volumes:
- name: nfs-volume
nfs:
# URL for the NFS server
server: SERVER_HOSTNAME
path: PATH
containers:
- name: test-container
image: DOCKER_IMAGE
resources:
requests:
memory: "1Gi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "2000m"
env:
- name: ACCEPT_EULA
value: "I_ACCEPT_THE_EULA"
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs
replicas: 1
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
</code></pre>
<p>The problem I have is with these two arguments; I cannot figure out how to perform the related actions on the Kubernetes side. Any suggestions?</p>
<pre><code>--volume /my-folder:/opt/data
--volume /my-folder:/opt/g2
</code></pre>
<p>Currently I get errors like:
cp: cannot create directory '/opt/test': Permission denied</p>
| Ioan Kats | <p>try this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: default
spec:
template:
metadata:
name: test
labels:
app: test
spec:
      volumes:
      # keep the NFS volume from your original spec, since it is still mounted below
      - name: nfs-volume
        nfs:
          # URL for the NFS server
          server: SERVER_HOSTNAME
          path: PATH
      # hostPath volume backing the two --volume flags from the docker run command
      - name: my-folder
        hostPath:
          path: /my-folder
containers:
- name: test-container
image: test/installer:3.0.0
resources:
requests:
memory: "1Gi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "2000m"
env:
- name: ACCEPT_EULA
value: "I_ACCEPT_THE_EULA"
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs
- name: my-folder
mountPath: /opt/data
- name: my-folder
mountPath: /opt/g2
replicas: 1
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
</code></pre>
| Bijan |
<p>I tried to set an externalIP with the command below:</p>
<p><code>oc patch svc <serviceName> -p '{"spec":{"externalIPs":["giving.my.ip.here"]}}'</code></p>
<p>but getting below error</p>
<p><code>Error from server (Forbidden): services "<myServiceName>" is forbidden: spec.externalIPs: Forbidden: externalIPs have been disabled</code></p>
| Ranjeet | <p><code>Forbidden: externalIPs have been disabled</code> is likely due to the configuration of your OpenShift cluster that currently does not allow you to create Services with an external IP.</p>
<p>So you may need to contact your OpenShift Administrator to allow these.</p>
<p>In OpenShift 3.x, you need to specify <code>networkConfig.externalIPNetworkCIDRs</code> in the <code>master-config.yaml</code> (see <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/tcp_ingress_external_ports.html" rel="nofollow noreferrer">documentation</a>).
In OpenShift 4.x, this needs to be configured in the <code>Network</code> configuration <code>spec.externalIP.policy.allowedCIDRs</code> (see <a href="https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html#example-policy-objects_configuring-externalip" rel="nofollow noreferrer">documentation</a>)</p>
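<p>For OpenShift 4.x, a rough sketch of that configuration could look like this (the CIDR is a placeholder; it is applied to the cluster-scoped <code>Network</code> resource named <code>cluster</code>):</p>
<pre><code>apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    policy:
      allowedCIDRs:
      - 192.168.100.0/24   # range from which externalIPs may be chosen
</code></pre>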
| Simon |
<p>I am trying to expose my RabbitMQ deployment and access it on my browser. For the deployment I created the following yml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blur-rabbitmq
labels:
app: blur-rabbitmq
spec:
replicas: 1
selector:
matchLabels:
app: blur-rabbitmq
template:
metadata:
labels:
app: blur-rabbitmq
spec:
containers:
- name: blur-rabbitmq
image: rabbitmq:3-management
ports:
- containerPort: 15672
</code></pre>
<p>And for the service the following:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: blur-service
labels:
app: blur-rabbitmq
spec:
selector:
app: blur-rabbitmq
type: NodePort
ports:
- port: 8080
protocol: TCP
targetPort: 15672
</code></pre>
<p>After creating the deployment and the service, I expected to access the RabbitMQ homepage on localhost:8080, but it's not working. What am I missing? Any idea?</p>
| marcelo | <p>You need to either port forward a local port to the cluster via <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer"><code>kubectl port-forward</code></a> or you need to create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> object to map a public IP to the service endpoint within the cluster.</p>
<p>Basically, the cluster has its own internal network, and you need to instruct Kubernetes to "punch a hole" for you to access the service endpoint within that network from outside.</p>
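<p>For example, to reach the Service defined above without exposing it publicly, a quick sketch would be:</p>
<pre><code># forward local port 8080 to port 8080 of the blur-service Service
# (which in turn targets port 15672 in the RabbitMQ container)
kubectl port-forward service/blur-service 8080:8080

# the management UI is then reachable at http://localhost:8080
</code></pre>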
| asthasr |
<p>As a beginner, I have tried k9s and the Kubernetes '<code>kubectl top nodes</code>' command for CPU and memory usage, and the values match. Meanwhile, I tried the Prometheus UI with '<code>avg(container_cpu_user_seconds_total{node="dev-node01"})</code>' and '<code>avg(container_cpu_usage_seconds_total{node="dev-node01"})</code>' for dev-node01, but I can't get matching values. Any help would be appreciated.</p>
| Aksahy Awate | <p>The following PromQL query returns CPU usage (as the number of used CPU cores) by pod in every namespace:</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace, pod)
</code></pre>
<p>The <code>container!=""</code> filter is needed for filtering out cgroups hierarchy metrics - see <a href="https://stackoverflow.com/questions/69281327/why-container-memory-usage-is-doubled-in-cadvisor-metrics/69282328#69282328">this answer</a> for details.</p>
<p>The following query returns memory usage by pod in every namespace:</p>
<pre><code>sum(container_memory_usage_bytes{container!=""}) by (namespace, pod)
</code></pre>
<p>The following query returns CPU usage (as the number of used CPU cores) by node:</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
</code></pre>
<p>The following query returns memory usage by node:</p>
<pre><code>sum(container_memory_usage_bytes{container!=""}) by (node)
</code></pre>
| valyala |
<p>I have a <strong>deployment.yaml</strong> containing a deployment of <em>3 containers</em> + <em>an LB service</em> and a <strong>cloudbuild.yaml</strong> containing <em>steps to build container images every time there's a new commit to a certain branch on a Bitbucket git repo</em>.</p>
<p>All is working fine except that my deployment isn't updated whenever there's a new image version (<em>I used the :latest tag in the deployment</em>). To change this, I understood that my deployment images should use something unique, other than :latest, such as a git commit SHA.</p>
<p>Problem:
<strong>I'm not sure how to update the image declarations during the GCB CI process so that they contain the new commit SHA.</strong></p>
<p>YAML's: <a href="https://paste.ee/p/CsETr" rel="nofollow noreferrer">https://paste.ee/p/CsETr</a></p>
| dzhi | <p>Found a solution by using image tag/URI placeholders in the deployment file and substituting them with sed at build time.</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: dev
name: app
labels:
app: app
spec:
replicas: 3
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
initContainers:
- name: init
image: INIT_IMAGE_NAME
imagePullPolicy: Always
command: ['sh', '-c', 'cp -r /app /srv; chown -R 82:82 /srv/app']
volumeMounts:
- name: code
mountPath: /srv
containers:
- name: nginx
image: NGINX_IMAGE_NAME
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: code
mountPath: /srv
- name: php-socket
mountPath: /var/run
livenessProbe:
httpGet:
path: /health.html
port: 80
httpHeaders:
- name: X-Healthcheck
value: Checked
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
readinessProbe:
httpGet:
path: /health.html
port: 80
httpHeaders:
- name: X-Healthcheck
value: Checked
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
- name: php
image: PHP_IMAGE_NAME
imagePullPolicy: Always
volumeMounts:
- name: code
mountPath: /srv
- name: php-socket
mountPath: /var/run
livenessProbe:
httpGet:
path: /health.html
port: 80
httpHeaders:
- name: X-Healthcheck
value: Checked
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
readinessProbe:
httpGet:
path: /health.html
port: 80
httpHeaders:
- name: X-Healthcheck
value: Checked
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
volumes:
- name: code
emptyDir: {}
- name: php-socket
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
namespace: dev
name: app-service
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: app
</code></pre>
<p><strong>cloudbuild.yaml</strong></p>
<pre><code>steps:
# Build Images
- id: Building Init Image
name: gcr.io/cloud-builders/docker
args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA', '-f', 'init.dockerfile', '.']
- id: Building Nginx Image
name: gcr.io/cloud-builders/docker
args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA', '-f', 'nginx.dockerfile', '.']
waitFor: ['-']
- id: Building PHP-FPM Image
name: gcr.io/cloud-builders/docker
args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA', '-f', 'php.dockerfile', '.']
waitFor: ['-']
# Push Images
- id: Pushing Init Image
name: gcr.io/cloud-builders/docker
args: ['push','eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA']
- id: Pushing Nginx Image
name: gcr.io/cloud-builders/docker
args: ['push','eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA']
- id: Pushing PHP-FPM Image
name: gcr.io/cloud-builders/docker
args: ['push','eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA']
# Update Image Tags
- id: 'Setting Init Image Tag'
name: ubuntu
args: ['bash','-c','sed -i "s,INIT_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA," deployment.yaml']
- id: 'Setting Nginx Image Tag'
name: ubuntu
args: ['bash','-c','sed -i "s,NGINX_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA," deployment.yaml']
- id: 'Setting PHP Image Tag'
name: ubuntu
args: ['bash','-c','sed -i "s,PHP_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA," deployment.yaml']
# Update Deployment
- id: Updating Deployment
name: gcr.io/cloud-builders/kubectl
args: ['apply','-f','deployment.yaml']
env:
- CLOUDSDK_COMPUTE_ZONE=europe-west2-b
- CLOUDSDK_CONTAINER_CLUSTER=clusterx
# Images
images:
- eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA
- eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA
- eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA
# Tags
tags:
- master
- dev
- init
</code></pre>
| dzhi |
<p>I deploy Prometheus on my minikube cluster which has 5 nodes. I also deploy a microservice on the cluster. Prometheus can collect the data normally.</p>
<p>I use <code>wrk2</code> which is a workload generator to send requests to my microservices. Jaeger shows that the requests are processed normally.</p>
<p>The following is what confuses me. After I test the service for <code>duration</code> seconds, I try to use <code>sum(irate(container_cpu_usage_seconds_total{{{constraint}}}[{duration}s])) by (container, pod)</code> to get the CPU usage of the pods. However, the vast majority of pods show zero CPU usage, which means no query results. I was very surprised by this, because during those <code>duration</code> seconds I increased the load on the service (i.e., sent a lot of requests to it), yet it didn't increase the CPU usage compared to when there was no load.</p>
<p>The following is the Python function I used to query Prometheus:</p>
<pre class="lang-py prettyprint-override"><code># endtime=starttime+duration, starttime is the time when I start wrk2 to generate workload
def get_cpu_usage(self, starttime, endtime, duration, diaplay=False):
# Define Prometheus query to get CPU usage for each service
constraint = f'namespace="{self.namespace}", container!="POD", container!=""'
prometheus_query = (
f"sum(irate(container_cpu_usage_seconds_total{{{constraint}}}[{duration}s])) by (container, pod)"
+ " / " + f"(sum(container_spec_cpu_quota{{{constraint}}}/({duration}*1000)) by (container, pod)) * 100"
)
# Send query to Prometheus endpoint
sleep(1)
response = requests.get(self.prometheus_url + '/api/v1/query_range', params={
'query': prometheus_query,
'start': starttime,
'end': endtime,
'step': 1
})
# Parse response
usage = response.json()
cpu_result = pd.DataFrame(columns=["microservice", "pod", "usage"])
</code></pre>
<p>Is there any bug in the code or in the Prometheus settings?</p>
| MissSirius | <p>Prometheus requires at least two <a href="https://docs.victoriametrics.com/keyConcepts.html#raw-samples" rel="nofollow noreferrer">raw samples</a> on the lookbehind window specified in square brackets for calculating the <a href="https://docs.victoriametrics.com/MetricsQL.html#irate" rel="nofollow noreferrer">irate()</a>. Otherwise it returns an empty result.</p>
<p>P.S. It is recommended to use <a href="https://docs.victoriametrics.com/MetricsQL.html#rate" rel="nofollow noreferrer">rate()</a> instead of <code>irate()</code>, since <code>irate()</code> doesn't catch spikes. See <a href="https://valyala.medium.com/why-irate-from-prometheus-doesnt-capture-spikes-45f9896d7832" rel="nofollow noreferrer">this article</a> for details.</p>
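<p>As a minimal sketch (assuming the default scrape interval of 15-30 seconds), a lookbehind window of a few minutes guarantees multiple raw samples per series; <code>my-namespace</code> is just a placeholder:</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{namespace="my-namespace", container!="POD", container!=""}[5m])) by (container, pod)
</code></pre>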
| valyala |
<p>I am trying to create a GKE cluster of node size 1. However, it always create a cluster of 3 nodes. Why is that? </p>
<pre><code>resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "gke-node-pool" {
name = "sonarqube"
location = "asia-southeast1"
cluster = google_container_cluster.gke-cluster.name
node_count = 1
node_config {
machine_type = "n1-standard-1"
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = "sonarqube"
}
}
}
</code></pre>
<p><a href="https://i.stack.imgur.com/6bwMX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bwMX.png" alt="enter image description here"></a></p>
| Jiew Meng | <p>Ok, found I can do so using <code>node_locations</code>: </p>
<pre><code>resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
node_locations = [
"asia-southeast1-a"
]
remove_default_node_pool = true
initial_node_count = 1
}
</code></pre>
<p>Without that, it seems GKE will create 1 node per zone. </p>
| Jiew Meng |
<p>I have the following terraform configuration in my terraform code</p>
<pre><code>data "external" "region" {
program = ["sh", "test.sh"]
query = {
aws_region = var.aws_region
vault_url = var.vault_url
vault_role = var.vault_role
}
}
provider "vault" {
address = "http://3.218.2.138:8200"
token = data.external.region.result["vault_token"]
}
</code></pre>
<p>The external program runs the command <code>vault login -method=aws role=test-role</code> and then returns a vault token.</p>
<p>Is there a way to avoid this external program and have the vault token generated whenever I execute terraform apply and terraform show?</p>
<p>So basically, is there a way to avoid executing the external script and still get the vault token?</p>
| jyothi swarup | <p>A typical approach for this is to run <code>vault login</code> (or some other equivalent process) <em>before</em> running Terraform, and then have Terraform read those ambient credentials the same way that the Vault client itself would.</p>
<p>Although Terraform providers typically accept credentials as part of their configurations to allow for more complex cases, the ideal way to pass credentials to a Terraform provider is indirectly via whatever mechanism is standard for the system in question. For example, the Terraform AWS provider understands how to read credentials the same way as the <code>aws</code> CLI does, and the Vault provider looks for credentials in the same environment variables that the Vault CLI uses.</p>
<p>Teams that use centralized systems like Vault with Terraform will typically <a href="https://learn.hashicorp.com/tutorials/terraform/automate-terraform" rel="nofollow noreferrer">run Terraform in automation</a> so that the configuration for those can be centralized, rather than re-implemented for each user running locally. Your automation script can therefore obtain a temporary token from Vault prior to running Terraform and then explicitly revoke the token after Terraform returns, even if the Terraform operation itself fails.</p>
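<p>As a rough sketch of such a wrapper (assuming the standard <code>VAULT_ADDR</code> environment variable and the <code>test-role</code> AWS role from the question):</p>
<pre><code># Log in before running Terraform; the Vault CLI stores the resulting
# token where both the CLI and the Terraform Vault provider look for it.
export VAULT_ADDR="http://3.218.2.138:8200"
vault login -method=aws role=test-role

terraform apply

# Revoke the token once Terraform has finished.
vault token revoke -self
</code></pre>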
| Martin Atkins |
<p>What is the best way to create a persistent volume claim with ReadWriteMany attaching the volume to multiple pods? </p>
<p>Based off the support table in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes</a>, GCEPersistentDisk does not support ReadWriteMany natively.</p>
<p>What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or Glusterfs? Are there recommendations on what I should be using that is production ready? </p>
<p>I was able to get an NFS pod deployment configured following the steps here - <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="noreferrer">https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266</a> however it seems a bit hacky and adds another layer of complexity. It also seems to only allow one replica (which makes sense as the disk can't be mounted multiple times) so if/when the pod goes down, my persistent storage will as well.</p>
| leeman24 | <p>It's possible now with <a href="https://cloud.google.com/filestore/" rel="noreferrer">Cloud Filestore</a>. </p>
<p>First create a Filestore instance.</p>
<pre class="lang-sh prettyprint-override"><code>gcloud filestore instances create nfs-server
--project=[PROJECT_ID]
--zone=us-central1-c
--tier=STANDARD
--file-share=name="vol1",capacity=1TB
--network=name="default",reserved-ip-range="10.0.0.0/29"
</code></pre>
<p>Then create a persistent volume in GKE.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: fileserver
spec:
capacity:
storage: 1T
accessModes:
- ReadWriteMany
nfs:
path: /vol1
server: [IP_ADDRESS]
</code></pre>
<p>[IP_ADDRESS] is available in filestore instance details.</p>
<p>You can now request a persistent volume claim. </p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: fileserver-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: "fileserver"
resources:
requests:
storage: 100G
</code></pre>
<p>Finally, mount the volume in your pod. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my container
image: nginx:latest
volumeMounts:
- mountPath: /workdir
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: fileserver-claim
readOnly: false
</code></pre>
<p>Solution is detailed here : <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="noreferrer">https://cloud.google.com/filestore/docs/accessing-fileshares</a></p>
| Antoine |
<p>I'm using Shedlock version 4.22.0 with JDBC integration.
I have one configuration class</p>
<pre><code>@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
public class ShedLockConfiguration {
@Bean
public LockProvider lockProvider(DataSource dataSource) {
return new JdbcTemplateLockProvider(dataSource);
}
}
</code></pre>
<p>For execution I'm using the @SchedulerLock annotation, e.g.</p>
<pre><code>@SchedulerLock(name = "MyTask", lockAtMostFor = "PT1M", lockAtLeastFor = "PT1M")
</code></pre>
<p>Everything works perfectly on a local PC, but when the app is deployed to Kubernetes (AKS), no tasks are triggered. Logs are empty and the app does nothing after startup.</p>
<p>I also added debug logging to the Spring configuration:</p>
<pre><code>logging:
level:
net.javacrumbs.shedlock: DEBUG
</code></pre>
<p>Nothing is logged</p>
| Bożek Mariusz | <p>Execution should be triggered by a <code>@Scheduled</code> <a href="https://spring.io/guides/gs/scheduling-tasks/" rel="nofollow noreferrer">annotation</a>. ShedLock only prevents parallel task execution, it does not schedule anything. If you have the <code>@Scheduled</code> annotation, check the <code>shedlock</code> table in the DB as described in <a href="https://github.com/lukas-krecan/ShedLock#troubleshooting" rel="nofollow noreferrer">troubleshooting</a>.</p>
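<p>A minimal sketch of a task that combines both annotations (class name, cron expression and task name are just placeholders):</p>
<pre><code>import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MyTaskRunner {

    // Spring triggers the execution via @Scheduled; ShedLock only ensures
    // that at most one instance executes the task at a time.
    @Scheduled(cron = "0 */15 * * * *")
    @SchedulerLock(name = "MyTask", lockAtMostFor = "PT1M", lockAtLeastFor = "PT1M")
    public void runMyTask() {
        // task logic goes here
    }
}
</code></pre>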
| Lukas |
<p>I have a simple Kubernetes deployment. It consists of a single, unreplicated container. There is no service exposing the container. The container has a health check which checks that it is correctly configured and can communicate with its external dependencies. I update the deployment using <code>kubectl apply</code>.</p>
<p>After updating the deployment, I would like to check that the new version has been rolled out completely and is passing its health check. I can't work out how to configure my deployment to achieve that.</p>
<p>I have tried various combinations of liveness and readiness probes, deployment strategies and ready/progress deployment properties. I've tried inspecting the status of the deployment, its pods and the rollout command. All to no avail.</p>
<p>I get the impression that I should be looking at deployment conditions to understand the status, but I can't find clear documentation of what those conditions are or how to bring them into being.</p>
| Ben Butler-Cole | <p>You have not mentioned your deployment strategy. But one generic problem I have seen with k8s deployments is that if the application fails to boot up, it will be restarted infinitely. So you might have to <code>kubectl delete deploy/******</code> explicitly after detecting the deployment failed status. (There is also <code>failureThreshold</code> for probes, but I haven't tried it yet.)</p>
<p>Case <strong>Recreate</strong>:</p>
<p>You can use the combination of <code>progressDeadlineSeconds</code> and <code>readinessProbe</code>. Let's say your application needs 60 seconds to boot-up/spin-up. You need to configure <code>progressDeadlineSeconds</code> to a bit more than 60 seconds, just to be on the safe side. Now, after running your <code>kubectl apply -f my-deploy.yaml</code>, run the <code>kubectl rollout status deploy/my-deployment</code> command. For me it looks like this:</p>
<pre><code>12:03:37 kubectl apply -f deploy.yaml
12:03:38 deployment "my-deployment" configured
12:04:18 kubectl rollout status deploy/my-deployment
12:04:18 Waiting for rollout to finish: 0 of 1 updated replicas are available (minimum required: 1)...
12:04:44 deployment "my-deployment" successfully rolled out
</code></pre>
<p>Once you execute the <code>rollout</code> command, kubectl will keep waiting till it has some answer. Also it returns with a proper exit code <code>echo $?</code> - you can check this programmatically and delete the deployment.</p>
<p>Case <strong>rollingUpdate</strong>:</p>
<p>If you have multiple replicas, then the above mentioned trick should work.
If you have just one replica, then use <code>maxUnavailable: 0</code> and <code>maxSurge: 1</code> along with the above config.</p>
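<p>A minimal sketch of that strategy block in the Deployment spec:</p>
<pre><code>spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
</code></pre>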
| Amrit |
<p>I have all my JSON files located in Redis cache.</p>
<p>I want to get it served through Nginx.</p>
<pre><code> # redis-cli -h redis-master
redis-master:6379> get "zips/80202.json"
"{\"zipCode\":\"80202\",\"City\":\"DENVER\",\"StateCode\":\"CO\"}"
</code></pre>
<p>I would like to request the url to be like,</p>
<pre><code>http://nginx-host/zips/80202.json
</code></pre>
<p>where <code>nginx-host</code> and <code>redis-master</code> are the services in Kubernetes.</p>
<p>expecting response with,</p>
<pre><code>{"zipCode":"80202","City":"DENVER","StateCode":"CO"}
</code></pre>
<p>I am not finding a clear idea on how to configure Nginx. I am running with Kubernetes containers, so services are local with no authentication required to the Redis servers.</p>
| Kannaiyan | <p>Here is the nginx configuration used to serve with redis key,</p>
<pre><code>resolver local=on ipv6=off;
server {
listen 9000;
location / {
set $target '';
access_by_lua '
local key = ngx.var.request_uri
if not key then
ngx.log(ngx.ERR, "no key requested")
return ngx.exit(400)
end
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second
local ok, err = red:connect("redis-master.default.svc.cluster.local", 6379)
if not ok then
ngx.log(ngx.ERR, "failed to connect to redis: ", err)
return ngx.exit(500)
end
local value, err = red:get(key)
if not value then
ngx.log(ngx.ERR, "failed to get redis key: ", err)
ngx.status = 500
ngx.say("Something not right")
return ngx.exit(ngx.OK)
end
if value == ngx.null then
ngx.log(ngx.ERR, "no host found for key ", key)
ngx.status = 404
ngx.say("Not found")
return ngx.exit(ngx.OK)
end
if value then
ngx.say(value)
return
end
ngx.var.target = http
';
}
}
</code></pre>
| Kannaiyan |
<p>I have an application with the structure below, in which multiple services have their own Dockerfile. I would like to deploy my application to Kubernetes via Jenkins using Helm, but I cannot decide on the best way to handle this.</p>
<p>Should I try to use multi-stage builds, and if so, how? Should I create a separate Helm chart for each of them, or is there a way to handle this with one Helm chart?</p>
<pre><code>└── app-images-dashboard
├── Readme.md
├── cors-proxy
│ ├── Dockerfile
│ ├── lib
│ │ ├── cors-anywhere.js
│ │ ├── help.txt
│ │ ├── rate-limit.js
│ │ └── regexp-top-level-domain.js
│ ├── package.json
│ └── server.js
└── app-images-dashboard
├── Dockerfile
├── components
│ └── image_item.js
├── images
│ └── beta.png
├── index.html
├── main.js
└── stylesheets
└── style.css
</code></pre>
| semural | <p>A Helm chart represents a whole application. You have 1 application with 2 slices, so you need only 1 Helm chart.</p>
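<p>For example, a single chart can simply contain one template per component (file names are illustrative):</p>
<pre><code>app-images-dashboard/
├── Chart.yaml
├── values.yaml
└── templates
    ├── cors-proxy-deployment.yaml
    ├── dashboard-deployment.yaml
    └── service.yaml
</code></pre>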
| Softlion |
<p><code>OpenShift 4.6</code></p>
<p>There is a basic setup in OpenShift: [Pod <- Service <- Route].
A service running in the pod has an HTTP endpoint that takes 90 seconds or longer to respond. In some cases this is normal, so I would like to allow this behavior.</p>
<p>Nevertheless, after a request to the route is sent (and no response has arrived back), some time later (approx. 60-70 seconds) the route responds with <code>HTTP 504</code>:</p>
<pre><code><html>
<body>
<h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body>
</html>
</code></pre>
<p>I am not sure at what point OpenShift decides to break the circuit, and I can't find any configuration options that allow changing this timeout.</p>
<p><strong>How to set custom timeout for a service and a pod to extend duration of request-response cycle?</strong></p>
| diziaq | <p>You might be looking for the <code>haproxy.router.openshift.io/timeout</code> annotation, with which you can annotate your Route:</p>
<pre><code>oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
</code></pre>
<p>You can find more information about Route configuration in the documentation: <a href="https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html#nw-configuring-route-timeouts_route-configuration" rel="noreferrer">https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html#nw-configuring-route-timeouts_route-configuration</a></p>
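<p>If you prefer to keep it in the manifest, the same annotation can be set on the Route directly (the timeout value is just an example):</p>
<pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myroute
  annotations:
    haproxy.router.openshift.io/timeout: 120s
</code></pre>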
| Simon |
<p>I am trying to implement a CI/CD pipeline for my microservice, which is built with Spring Boot. I am using my SVN repository, Kubernetes and Jenkins to implement the pipeline. While exploring deployment with Kubernetes and Jenkins, I found tutorials and many videos on deploying to both test and prod environments by defining them in the Jenkinsfile and adding a shell script to the Jenkins configuration. </p>
<p><strong>Confusion</strong> </p>
<p>Here is my doubt: when we deploy to the test environment, how can we deploy the same thing to the prod environment after proper testing is finished? Do I need to add a separate shell script for prod? Or do we deploy serially using one script for both test and prod? </p>
| Mr.DevEng | <p>It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough - it will deploy to both clusters (or environments).</p>
<p>Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before it's deployed to prod.</p>
<p>As k8s is container based, deploying the same image to different envs is really easy. You just build your spring boot app once, and then deploy it to different envs as needed.</p>
<p>A simple pipeline:</p>
<ol>
<li>Code pushed and build triggered.</li>
<li>Build with unit tests.</li>
<li>Generate the docker image and push to registry.</li>
<li>Run your kubectl / helm / etc. to deploy the newly built image on
STAGING</li>
<li>Check if the deployment was successful</li>
</ol>
<p>If you want to deploy the same to prod, continue the pipeline with (you can pause here for QA as well <a href="https://jenkins.io/doc/pipeline/steps/pipeline-input-step/" rel="nofollow noreferrer">https://jenkins.io/doc/pipeline/steps/pipeline-input-step/</a>):</p>
<ol start="6">
<li>Run your kubectl / helm / etc. to deploy the newly built image on
PRODUCTION</li>
<li>Check if the deployment was successful</li>
</ol>
<p>If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger this).</p>
<p>If your QA and PM are techies, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.</p>
<p><strong>EDIT</strong> (response to comment):
You are making REST calls to the k8s API. Even <code>kubectl apply -f foo.yaml</code> makes this REST call. It doesn't matter from where you are making this call - given that your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use <code>kubectl --context <staging-cluster> apply -f foo.yaml</code>. You can pick the context name from a Jenkins env variable or some other mechanism.</p>
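<p>As a rough illustration (context names are placeholders):</p>
<pre><code># Deploy the already-built image to staging first...
kubectl --context staging-cluster apply -f foo.yaml
kubectl --context staging-cluster rollout status deploy/my-deployment

# ...and after QA sign-off, apply the same manifest/image to production.
kubectl --context prod-cluster apply -f foo.yaml
kubectl --context prod-cluster rollout status deploy/my-deployment
</code></pre>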
| Amrit |
<p>I am trying to create/apply this kubectl .yaml file <a href="https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/deployment.yaml" rel="nofollow noreferrer">https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/deployment.yaml</a> to AKS via Terraform's null_resource to install Azure AD Pod Identity. It is needed to deploy the Azure Gateway Ingress Controller.
Using Windows 10 with VS Code</p>
<p><strong>main.tf:</strong></p>
<pre><code>data "template_file" "aad_pod" {
template = "${file("${path.module}/templates/aad_pod.yaml")}"
}
resource "null_resource" "aad_pod_deploy" {
triggers = {
manifest_sha1 = "${sha1("${data.template_file.aad_pod.rendered}")}"
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aad_pod.rendered}\nEOF"
}
}
</code></pre>
<p>After terraform apply I have this error:</p>
<pre><code>Error: Error running command 'kubectl apply -f -<<EOF
'cutted listing of yaml file'
EOF': exit status 1. Output: << was unexpected at this time.
</code></pre>
<p>Any help will be appreciated</p>
| S0meth1ng | <p>Because of differences between Unix-like operating systems and Windows, it's rarely possible to use <code>local-exec</code> in a portable way unless your use-case is very simple. This is one of the reasons why <a href="https://www.terraform.io/docs/provisioners/index.html#provisioners-are-a-last-resort" rel="noreferrer">provisioners are a last resort</a>.</p>
<p>I think the most portable answer would be to use <a href="https://registry.terraform.io/providers/hashicorp/kubernetes" rel="noreferrer">the official Kubernetes provider</a> to interact with Kubernetes here. Alternatively, if using <code>kubectl</code>'s input format in particular is important for what you are doing, you could use <a href="https://github.com/gavinbunney/terraform-provider-kubectl" rel="noreferrer">a community-maintained <code>kubectl</code> provider</a>.</p>
<pre><code>resource "kubectl_manifest" "example" {
yaml_body = data.template_file.aad_pod.rendered
}
</code></pre>
<hr />
<p>If you have a strong reason to use the <code>local-exec</code> provisioner rather than a native Terraform provider, you'll need to find a way to write a command that can be interpreted in a compatible way by both a Unix-style shell and by Windows's command line conventions. I expect it would be easier to achieve that by writing the file out to disk first and passing the filename to <code>kubectl</code>, because that avoids the need to use any special features of the shell and lets everything be handled by <code>kubectl</code> itself:</p>
<pre><code>resource "local_file" "aad_pod_deploy" {
filename = "${path.module}/aad_pod.yaml"
content = data.template_file.aad_pod.rendered
provisioner "local-exec" {
command = "kubectl apply -f ${self.filename}"
}
}
</code></pre>
<p>There are still some caveats to watch out for with this approach. For example, if you run Terraform under a directory path containing spaces then <code>self.filename</code> will contain spaces and therefore probably won't be parsed as you want by the Unix shell or by the <code>kubectl</code> Windows executable.</p>
| Martin Atkins |
<p>There's a need to create an AWS security group rule using Terraform, then triggering a null resource.</p>
<p>E.G.</p>
<ul>
<li>health_blue. (aws_security_group_rule)</li>
<li>wait_for_healthcheck. (null_resource)</li>
</ul>
<p>I have already tried adding a dependency between the security group rule and the null resource, but the null resource is not always triggered, or it not triggered before the rule is created or destroyed.</p>
<p>The null resource needs to be triggered when the security group rule is created, amended, or destroyed.</p>
<p>Here is the config:</p>
<pre><code>resource "aws_security_group_rule" "health_blue" {
count = data.external.blue_eks_cluster.result.cluster_status == "online" ? 1 : 0
description = "LB healthchecks to blue cluster"
cidr_blocks = values(data.aws_subnet.eks_gateway).*.cidr_block
from_port = 80
protocol = "tcp"
security_group_id = data.aws_security_group.blue_cluster_sg[0].id
to_port = 80
type = "ingress"
}
resource "null_resource" "wait_for_healhtcheck" {
triggers = {
value = aws_security_group_rule.health_blue[0].id
}
provisioner "local-exec" {
command = "echo 'Waiting for 25 seconds'; sleep 25"
}
depends_on = [aws_security_group_rule.health_blue]
}
</code></pre>
<p>Any tips or pointers would be much appreciated :~)</p>
| Theo Sweeny | <p>With the configuration you showed here, <code>null_resource.wait_for_healhtcheck</code> depends on <code>aws_security_group_rule.health_blue</code>.</p>
<p>(You currently have that dependency specified redundantly: the reference to <code>aws_security_group_rule.health_blue</code> in <code>triggers</code> already establishes the dependency, so the <code>depends_on</code> argument is doing nothing here and I would suggest removing it.)</p>
<p>The general meaning of a dependency in Terraform is that any actions taken against the dependent object must happen after any actions taken against its dependency. In your case, Terraform guarantees that after it has created the plan if there are any actions planned for both of these resources then the action planned for <code>aws_security_group_rule.health_blue</code> will always happen first during the apply step.</p>
<p>You are using the <code>triggers</code> argument of <code>null_resource</code>, which adds an additional behavior that's implemented by the <code>hashicorp/null</code> provider rather than by Terraform Core itself: during planning, <code>null_resource</code> will compare the <code>triggers</code> value from the prior state with the <code>triggers</code> value in the current configuration and if they are different then it will propose the action of replacing the (purely conceptual) <code>null_resource</code> object.</p>
<p>Because <code>triggers</code> includes <code>aws_security_group_rule.health_blue[0].id</code>, <code>triggers</code> will take on a new value each time the security group is planned for creation or replacing. Therefore taken altogether your configuration declares the following:</p>
<ul>
<li>There are either zero or one <code>aws_security_group_rule.health_blue</code> objects.</li>
<li>Each time the <code>id</code> attribute of the security group changes, the <code>null_resource.wait_for_healhtcheck</code> must be replaced.</li>
<li>Whenever creating or replacing <code>null_resource.wait_for_healhtcheck</code>, run the given provisioner.</li>
<li>Therefore, if the <code>id</code> attribute of the security group changes there will always be both a plan to create (or replace) <code>aws_security_group_rule.health_blue</code> <em>and</em> a plan to replace <code>null_resource.wait_for_healhtcheck</code>. The dependency rules mean that the creation of the security group will happen before the creation of the <code>null_resource</code>, and therefore before running the provisioner.</li>
</ul>
<p>Your configuration as shown therefore seems to meet your requirements as stated. However, it does have one inconsistency which could potentially cause a problem: you haven't accounted for what ought to happen if there are zero instances of <code>aws_security_group_rule.health_blue</code>. In that case <code>aws_security_group_rule.health_blue[0].id</code> isn't valid because there isn't a zeroth instance of that resource to refer to.</p>
<p>To address that I would suggest a simplification: the <code>null_resource</code> resource isn't really adding anything here that you couldn't already do with the <code>aws_security_group_rule</code> resource directly:</p>
<pre><code>resource "aws_security_group_rule" "health_blue" {
count = data.external.blue_eks_cluster.result.cluster_status == "online" ? 1 : 0
description = "LB healthchecks to blue cluster"
cidr_blocks = values(data.aws_subnet.eks_gateway).*.cidr_block
from_port = 80
protocol = "tcp"
security_group_id = data.aws_security_group.blue_cluster_sg[0].id
to_port = 80
type = "ingress"
provisioner "local-exec" {
command = "echo 'Waiting for 25 seconds'; sleep 25"
}
}
</code></pre>
<p>Provisioners for a resource run as part of the creation action for that resource, and changing the configuration of a security group rule should cause it to get recreated, so with the above configuration the <code>sleep 25</code> command will run each time an instance of the security group rule is created, without any need for a separate resource.</p>
<p>This solution <em>does</em> assume that you only need to run <code>sleep 25</code> when creating (or replacing) the security group rule. The <code>null_resource</code> approach would be needed if the goal were to run a provisioner in response to <em>updating</em> some other resource, because in that case the <code>null_resource</code> resource would act as a sort of adapter to allow treating an update of any value in <code>triggers</code> to be treated as if it were a replacement of a resource.</p>
| Martin Atkins |
<p>I am trying to run a simple wordcount application in Spark on Kubernetes. I am getting following issue.</p>
<pre><code>Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-wordcount-1545506479587-driver] in namespace: [non-default-namespace] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:57)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:55)
at scala.Option.map(Option.scala:146)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:55)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:89)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2788)
... 20 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
</code></pre>
<p>I have followed all the steps mentioned in the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac" rel="noreferrer">RBAC setup</a>. The only thing I could not do was create the ClusterRoleBinding spark-role, since I don't have access to the default namespace. Instead I created a RoleBinding.</p>
<pre><code>kubectl create rolebinding spark-role --clusterrole=edit --serviceaccount=non-default-namespace:spark --namespace=non-default-namespace
</code></pre>
<p>I am using following spark-submit command.</p>
<pre><code>spark-submit \
--verbose \
--master k8s://<cluster-ip>:<port> \
--deploy-mode cluster --supervise \
--name spark-wordcount \
--class WordCount \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-test \
--conf spark.kubernetes.driver.limit.cores=1 \
--conf spark.kubernetes.executor.limit.cores=1 \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.container.image=<image> \
--conf spark.kubernetes.namespace=non-default-namespace \
--conf spark.kubernetes.driver.pod.name=spark-wordcount-driver \
local:///opt/spark/work-dir/spark-k8s-1.0-SNAPSHOT.jar
</code></pre>
<p>Update:
I was able to fix the first SocketTimeoutException issue. I did not have a network policy defined, so the driver and executors were not able to talk to each other; this was the reason why it was timing out. I changed the network policy from default-deny-all to allow-all for ingress and egress and the timeout exception went away. However, I am still getting the "Operation get for kind Pod not found" error with the following exception.</p>
<pre><code>Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
</code></pre>
<p>Any suggestion or help will be appreciated.</p>
| hp2326 | <p>This is because your DNS is unable to resolve <code>kubernetes.default.svc</code>, which in turn could be an issue with your networking and iptables.</p>
<p>Run this on the specific node:</p>
<pre><code>kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
</code></pre>
<p>and check</p>
<pre><code>nslookup kubernetes.default.svc
</code></pre>
<p>Edit:
I had this issue because, in my case, flannel was using a different network (10.244.x.x) while my Kubernetes cluster was configured with the network 172.x.x.x.</p>
<p>I had blindly applied the default manifest from <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a>, in which the pod network is configured to 10.244.x.x. To fix it, I downloaded the file, changed it to the correct pod network and applied it.</p>
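<p>Roughly, the fix looked like this (a sketch; the CIDR must match the pod network your cluster was initialised with):</p>
<pre><code># Download the manifest instead of applying it directly
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Edit the net-conf.json section of the ConfigMap so that "Network" matches
# your cluster's pod CIDR (e.g. 172.16.0.0/16 instead of 10.244.0.0/16),
# then apply the modified file.
kubectl apply -f kube-flannel.yml
</code></pre>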
| user303730 |
<p>I cannot really understand the purpose and usage of topologyKey in pod affinity. The documentations says:</p>
<blockquote>
<p><strong>topologyKey</strong> is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.</p>
</blockquote>
<p>And example usage is as follows:</p>
<pre><code>kind: Pod
metadata:
name: with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S1
topologyKey: topology.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
containers:
- name: with-pod-affinity
image: k8s.gcr.io/pause:2.0
</code></pre>
<p>So where does <strong>topology.kubernetes.io/zone</strong> come from? How can I know what value should I provide for this <strong>topologyKey</strong> field in my yaml file, and what happens if I just put a random string here? Should I label my node and use the key of this label in topologyKey field?</p>
<p>Thank you.</p>
| yrazlik | <p>Required as part of a <strong>affinity.podAffinity</strong> or <strong>affinity.podAntiAffinity</strong> <em>spec</em> section, the <strong>topologyKey</strong> field is used by the scheduler to determine the domain for Pod placement.</p>
<p>The topologyKey <em>domain</em> is used to determine the placement of the Pods being scheduled relative to the Pods identified by the <strong>...labelSelector.matchExpressions</strong> section.</p>
<p>With <strong>podAffinity</strong>, a Pod <em>will</em> be scheduled in the same domain as the Pods that match the expression.</p>
<p>Two common label options are <strong>topology.kubernetes.io/zone</strong> and <strong>kubernetes.io/hostname</strong>. Others can be found in the Kubernetes <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/" rel="noreferrer">Well-Known Labels, Annotations and Taints</a> documentation.</p>
<ul>
<li><strong>topology.kubernetes.io/zone</strong>: Pods will be scheduled <em>in the same zone</em> as a Pod that matches the expression.</li>
<li><strong>kubernetes.io/hostname</strong>: Pods will be scheduled <em>on the same hostname</em> as a Pod that matches the expression.</li>
</ul>
<p>For <strong>podAntiAffinity</strong>, the opposite is true: Pods <em>will not</em> be scheduled in the same domain as the Pods that match the expression.</p>
<p>The Kubernetes <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="noreferrer"><strong>Assigning Pods to Nodes</strong> documentation (Inter-pod affinity and anti-affinity section)</a> provides additional explanation.</p>
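<p>As a short illustration, a sketch of <strong>podAntiAffinity</strong> that keeps replicas of the same app off the same node (the label key and value are placeholders):</p>
<pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - my-app
      topologyKey: kubernetes.io/hostname
</code></pre>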
| Drew |
<p>This thread shows multiple containers can be created using YAML configuration, creating one Pod with multiple containers inside: <a href="https://stackoverflow.com/questions/67082250/create-a-pod-with-a-single-app-container-with-multiple-images-in-kubernetes-clus">Create a pod with a single app container with multiple images in kubernetes cluster</a></p>
<p>Does the <code>oc new-app</code> have this functionality too? Are there any any other CLI oc tools which would allow this function?</p>
| solarflare | <p>So it seems that <code>oc new-app</code> does not let you create a Pod with multiple container images as far as I can see.</p>
<p>User "bodo" above had the right idea I think. You can use the following command to generate the YAML file and then edit that YAML to add more containers to the Pod:</p>
<pre><code>oc new-app --name my-application --docker-image=example.com/repository/myimage:latest --dry-run=true -o yaml
</code></pre>
| Simon |
<p><em>Situation</em></p>
<p>I want to have a <code>pool</code> of <code>serving</code> pods (say I have 50 <code>serving</code> pods lying around). They will be exposed via a LoadBalancer Service.</p>
<p>I want to make sure that:</p>
<ol>
<li>each pod serves only one TCP connection and keep this connection alive until the client terminates it. Until the end of this pod's life, it will not receive any other TCP connections.</li>
<li>Once the client terminates the connection, the pod cleans up and destroys it self.</li>
<li>Another pod is spinned up to match the desired replica number. Since it's not currently serving any TCP connection, it can be chosen to serve the next TCP connection that goes into the LoadBalancer service.</li>
</ol>
<p><em>Example</em></p>
<pre><code>1. Initially, a `Deployment` specifies a pool of 2 pods behind a LoadBalancer Service.
[1] [2]
------LoadBalancer-------
2. A client initiates a TCP connection to the LoadBalancer (e.g. telnet loadbalancer.domain.com 80) and the TCP connection is routed to the first vacant pod.
[1] [2]
|
|
------LoadBalancer-------
|
|
cl1
3. 24 hours after that (assuming data has been passing between client1 & pod1), another client hits to the same Load Balancer public domain. Since pod1 is serving client1, I want the second client to be routed to another vacant pod, such as pod 2.
[1] [2]
| |
| |
------LoadBalancer-------
| |
| |
cl1 cl2
4. 24 hours after that, client 1 terminates the connection, I want pod1 to do clean up and destroy itself shortly afterwards. No new connections should be routed to it. That leaves pod2 the only one still running.
[2]
|
|
------LoadBalancer-------
|
|
cl2
5. `Deployment` will create additional pods to ensure the number of replicas. So pod3 is created.
[1] [3]
|
|
------LoadBalancer-------
|
|
cl1
6. Another client hits the same endpoint and is routed to a vacant pod (pod 3 in this case).
[1] [3]
| |
| |
------LoadBalancer-------
| |
| |
cl1 cl3
And so on and so forth.
</code></pre>
<p>Does anybody have any ideas how this can be solved on K8s?</p>
| Tran Triet | <p>So I guess this would work.</p>
<p>First set a ReadinessProbe to poll an endpoint/tcp port on your pod very agressivly (as often as possible).</p>
<p>Then as soon as you get a connection make sure that the ReadinessProbe fails, but at the same time make sure the LivenessProbe DOESN'T fail. </p>
<p>Finally, after the client disconnects terminate the application.</p>
<p>Your deployment needs enough replicas to serve all clients that can connect simultaneously, since one pod will never serve two clients.</p>
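<p>A rough sketch of such probe settings (paths are placeholders; <code>/busy</code> is assumed to start failing as soon as the pod has accepted its client, while <code>/health</code> keeps succeeding):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /busy        # fails once a client is connected -> pod removed from the service
    port: 80
  periodSeconds: 1     # poll as aggressively as possible
  failureThreshold: 1
livenessProbe:
  httpGet:
    path: /health      # keeps returning 2xx so the pod is not restarted
    port: 80
  periodSeconds: 5
</code></pre>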
| Andreas Wederbrand |
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=dir:///workspace"
- "--dockerfile=/workspace/Dockerfile"
- "--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"
volumeMounts:
- name: kaniko-secret
mountPath: /secret
- name: context
mountPath: /workspace
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /secret/kaniko-secret.json
restartPolicy: Never
volumes:
- name: kaniko-secret
secret:
secretName: kaniko-secret
- name: context
hostPath:
path: /home/sabadsulla/kanikodir
</code></pre>
<p>I am running kaniko on a kubernetes pod to build a docker image and pushing to the GCR.</p>
<p>When I use Google Cloud Storage for the CONTEXT_PATH it works fine, but when I need to use a local directory (meaning the shared volumes of the pods) as the CONTEXT_PATH, it throws an error:</p>
<pre><code>"Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
</code></pre>
<p>Usage:</p>
<pre><code>I tried with args "--context=/workspace" , "--context=dir://workspace" , it gives the same error
</code></pre>
| user8024713 | <p>The folder layout looks like this.</p>
<p>In host:</p>
<pre><code>/home/sabadsulla/kanikodir/Dockerfile
</code></pre>
<p>When it turns to PV/PVC, in pod container</p>
<pre><code>/workspace/Dockerfile
</code></pre>
<p>Then for the <code>kaniko executor</code>, if we map the context to <code>workspace</code>, the Dockerfile path relative to the context is just <code>Dockerfile</code>, so:</p>
<pre><code>--context=/workspace
--dockerfile=Dockerfile
</code></pre>
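<p>Applied to the Pod spec from the question, the args would then look roughly like this:</p>
<pre><code>args:
- "--context=/workspace"
- "--dockerfile=Dockerfile"
- "--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"
</code></pre>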
| Larry Cai |
<p>I have an Openshift 4.6 platform running an applicative pod.
We use Postman to send requests to the pod.</p>
<p>The application pod returns a 200 HTTP response code, but we get a 502 in Postman.
So there is an interim component inside OpenShift/K8s that transforms the 200 into a 502.</p>
<p>Is there a way to debug/trace more information in Egress ?</p>
<p>Thanks</p>
<p>Nicolas</p>
| Nicolas Malinconico | <p>The HTTP 502 error is likely returned by the OpenShift Router that is forwarding your request to your application.</p>
<p>In practice this often means that the OpenShift Router (HAProxy) is sending the request to your application and it does not receive any or an unexpected answer from your application.</p>
<p>So I would recommend that you check your applications logs if there is any error in your application and if your application returns a valid HTTP answer. You can test this by using <code>curl localhost:<port></code> from your application Pods to see if there is a response being returned.</p>
| Simon |
<p>While going through the Helm documentation, I came across the rollback feature.
It's a cool feature, but I have some doubts about its implementation.</p>
<p>How have they implemented it? If they use some datastore to preserve the old release config, what datastore is it?</p>
<p>Is there any upper limit on consecutive rollbacks? If so, up to how many rollbacks does it support? Can we change this limit?</p>
| lokanadham100 | <p>As the <a href="https://docs.helm.sh/helm/#helm-rollback" rel="nofollow noreferrer">documentation</a> says, it rolls back the entire release. Helm generally stores release metadata in its own configmaps. Every time you release changes, it appends them to the existing data. Your changes can include a new deployment image, new configmaps, storage, etc. On rollback, everything goes back to the previous version.</p>
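<p>For example, the stored revisions can be inspected and rolled back with (release name is a placeholder):</p>
<pre><code># List the revisions Helm has recorded for a release
helm history my-release

# Roll back to a specific revision
helm rollback my-release 2
</code></pre>
<p>The number of retained revisions (and therefore how far back you can roll) is bounded by Helm's history-max setting; in newer Helm versions it can be changed via the <code>--history-max</code> flag of <code>helm upgrade</code>.</p>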
| Amrit |
<p>I am trying to achieve zero-downtime deployment on k8s. My deployment has one replica. The pod probes look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
namespace: ${KUBE_NAMESPACE}
spec:
selector:
matchLabels:
app: app
replicas: 1
template:
metadata:
labels:
app: app
spec:
containers:
- name: app-container
imagePullPolicy: IfNotPresent
image: ${DOCKER_IMAGE}:${IMAGE_TAG}
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 5
periodSeconds: 10
terminationGracePeriodSeconds: 130
</code></pre>
<p>However, every time after <code>kubectl rollout status</code> returns and reports the rollout finished, I experience a short period of <code>bad gateway</code>.</p>
<p>Then I added a test in which I let <code>/health</code> return 500 in <code>prestop</code> and wait at least 20 seconds before actually stopping the pod.</p>
<pre><code># If the app test the /tmp/prestop file exists, it will return 500.
lifecycle:
preStop:
exec:
command: ["/bin/bash", "-c", "touch /tmp/prestop && sleep 20"]
</code></pre>
<p>Then I found that after k8s stops the pod, traffic can still flow to the old pod (if I visit /health, I get a 500 result).</p>
<p>So it looks like the load balancer decides which pods can be used solely based on the probe result. Since the probes have a period, there is always a small window in which the pod has stopped but the load balancer doesn't know yet and can still direct traffic to it, hence the user experiences downtime.</p>
<p>So my question is: in order to have a zero-downtime deployment, it seems a must to let the probes know the pod is stopping before actually stopping it. Is this right, or am I doing something wrong?</p>
| kkpattern | <p>After digging around Google and doing some tests. I found it's not needed to manually replying 500 to probes after prestop.</p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>At the same time as the kubelet is starting graceful shutdown, the control plane removes that shutting-down Pod from Endpoints (and, if enabled, EndpointSlice) objects where these represent a Service with a configured selector. ReplicaSets and other workload resources no longer treat the shutting-down Pod as a valid, in-service replica. Pods that shut down slowly cannot continue to serve traffic as load balancers (like the service proxy) remove the Pod from the list of endpoints as soon as the termination grace period begins.</p>
</blockquote>
<p>The pod won't get traffic after shutdown starts. But I also found this <a href="https://github.com/kubernetes/kubernetes/issues/47597" rel="nofollow noreferrer">issue</a>, which says there was indeed a delay between starting to shut down a pod and actually removing it from the endpoints.</p>
<p>So instead of returning 500 to the probes in prestop, I simply sleep 60 seconds in prestop. At the same time, the /health check returns 200 with a status telling whether the node is in running or prestop status. Then I did a rollout and got the following result:</p>
<pre class="lang-sh prettyprint-override"><code>b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717529.114602
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717530.59488
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717532.094305
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717533.5859041
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717535.086944
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717536.757241
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"running"}' at 1612717538.57626
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717540.3773062
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717543.2204192
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717544.7196548
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717546.550169
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717548.01408
b'{"node_id":"a5c387f5df30","node_start_at":1612706851,"status":"prestop"}' at 1612717549.471266
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717551.387528
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717553.49984
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717555.404394
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717558.1528351
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717559.64011
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717561.294955
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717563.366436
b'{"node_id":"17733ca118f4","node_start_at":1612717537,"status":"running"}' at 1612717564.972768
</code></pre>
<p>The a5c387f5df30 node still got traffic after the prestop hook was called. After around 10 seconds, it stopped receiving traffic. So it's not related to anything I did in prestop; it's purely a delay.</p>
<p>I did this test on AWS EKS with Fargate. I don't know what the situation is on other k8s platforms.</p>
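<p>For reference, with this approach the prestop hook reduces to a plain sleep (a sketch; the duration just needs to cover the endpoint-removal delay observed above):</p>
<pre><code>lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-c", "sleep 60"]
</code></pre>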
| kkpattern |
<p>A few pods in my OpenShift cluster are still restarted multiple times after deployment.</p>
<p>with describe output:<br />
<code>Last State: Terminated</code>
<code>Reason: OOMKilled</code><br />
<code>Exit Code: 137</code></p>
<p>Also, memory usage is well below the memory limits.
Is there any other parameter that I am missing to check?</p>
<p>There are no issues with the cluster in terms of resources.</p>
| dev | <p>„OOMKilled“ means your container memory limit was reached and the container was therefore restarted.</p>
<p>Especially Java-based applications can consume a large amount of memory when starting up. After the startup, the memory usage often drops considerably.</p>
<p>So in your case, increase the memory limit (<code>resources.limits.memory</code>) to avoid these OOMKills. Note that the <code>requests</code> value can still be lower and should roughly reflect what your container consumes after the startup.</p>
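<p>A minimal sketch of the corresponding container resources block (the numbers are placeholders; the limit leaves headroom for the startup spike):</p>
<pre><code>resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"
</code></pre>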
| Simon |
<p>I have a question about best practices for designing deployments and/or stateful sets for stateful applications like WordPress and co. My current idea is to make a fully dynamic image for one specific CMS, with the idea that I can mount the project data (themes, files, etc.) into it. In the case of WordPress that would be wp-content/themes. Or is that the wrong way? Is it better to build the image with the right data already in it and not worry about the deployment, because you already have everything?</p>
<p>What are your experiences with stateful apps and how did you solve those "problems".</p>
<p>thanks for answers :)</p>
| Melvin | <p>I don't think Wordpress is really stateful in this matter and it should be deployed like a regular deployment.</p>
<p>Stateful Set is typically things like databases that needs storage. As an example, Cassandra would typically be a Stateful Set with mounted Volume Claims. When one instance dies, a new one is brought up with the same name, IP address and volume as the old one. After a short while it should be part of the cluster again.</p>
<p>With deployments you will not get the same name or IP address and you can't mount Volume Claims.</p>
| Andreas Wederbrand |
<p>What would be the best setup to run <code>sonatype\nexus3</code> in Kubernetes that allows using the Docker repositories? </p>
<p>Currently I have a basic setup:</p>
<ul>
<li>Deployment of <code>sonatype\nexus3</code></li>
<li>Internal service exposing port 80 and 5000</li>
<li>Ingress + kube-lego provides HTTPS access to the Nexus UI</li>
</ul>
<p>How do I get around the limitation of ingress that doesn't allow more than one port?</p>
| Niel de Wet | <h1>tl;dr</h1>
<p>Nexus needs to be served over SSL, otherwise docker won't connect to it. This can be achieved with a k8s ingress + <a href="https://github.com/jetstack/kube-lego/" rel="noreferrer">kube-lego</a> for a <a href="https://letsencrypt.org/" rel="noreferrer">Let's Encrypt</a> certificate. Any other real certificate will work as well. However, in order to serve both the nexus UI and the docker registry through one ingress (thus, one port) one needs a reverse proxy behind the ingress to detect the docker user agent and forward the request to the registry.</p>
<pre><code> --(IF user agent docker) --> [nexus service]nexus:5000 --> docker registry
|
[nexus ingress]nexus.example.com:80/ --> [proxy service]internal-proxy:80 -->|
|
--(ELSE ) --> [nexus service]nexus:80 --> nexus UI
</code></pre>
<hr>
<h2>Start nexus server</h2>
<p><em>nexus-deployment.yaml</em>
This makes use of an azureFile volume, but you can use any volume. Also, the secret is not shown, for obvious reasons.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nexus
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: nexus
spec:
containers:
- name: nexus
image: sonatype/nexus3:3.3.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8081
- containerPort: 5000
volumeMounts:
- name: nexus-data
mountPath: /nexus-data
resources:
requests:
cpu: 440m
memory: 3.3Gi
limits:
cpu: 440m
memory: 3.3Gi
volumes:
- name: nexus-data
azureFile:
secretName: azure-file-storage-secret
shareName: nexus-data
</code></pre>
<p>It is always a good idea to add health and readiness probes, so that kubernetes can detect when the app goes down. Hitting the <code>index.html</code> page doesn't always work very well, so I'm using the REST API instead. This requires adding the Authorization header for a user with the <code>nx-script-*-browse</code> permission. Obviously you'll have to first bring the system up without probes to set up the user, then update your deployment later.</p>
<pre><code> readinessProbe:
httpGet:
path: /service/siesta/rest/v1/script
port: 8081
httpHeaders:
- name: Authorization
# The authorization token is simply the base64 encoding of the `healthprobe` user's credentials:
# $ echo -n user:password | base64
value: Basic dXNlcjpwYXNzd29yZA==
initialDelaySeconds: 900
timeoutSeconds: 60
livenessProbe:
httpGet:
path: /service/siesta/rest/v1/script
port: 8081
httpHeaders:
- name: Authorization
value: Basic dXNlcjpwYXNzd29yZA==
initialDelaySeconds: 900
timeoutSeconds: 60
</code></pre>
<p>Because nexus can sometimes take a long time to start, I use a very generous initial delay and timeout.</p>
<p><em>nexus-service.yaml</em> Expose port 80 for the UI, and port 5000 for the registry. This must correspond to the port configured for the registry through the UI.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nexus
name: nexus
namespace: default
selfLink: /api/v1/namespaces/default/services/nexus
spec:
ports:
- name: http
port: 80
targetPort: 8081
- name: docker
port: 5000
targetPort: 5000
selector:
app: nexus
type: ClusterIP
</code></pre>
<h2>Start reverse proxy (nginx)</h2>
<p><em>proxy-configmap.yaml</em> The <em>nginx.conf</em> is added as ConfigMap data volume. This includes a rule for detecting the docker user agent. This relies on the kubernetes DNS to access the <code>nexus</code> service as upstream.</p>
<pre><code>apiVersion: v1
data:
nginx.conf: |
worker_processes auto;
events {
worker_connections 1024;
}
http {
error_log /var/log/nginx/error.log warn;
access_log /dev/null;
proxy_intercept_errors off;
proxy_send_timeout 120;
proxy_read_timeout 300;
upstream nexus {
server nexus:80;
}
upstream registry {
server nexus:5000;
}
server {
listen 80;
server_name nexus.example.com;
keepalive_timeout 5 5;
proxy_buffering off;
# allow large uploads
client_max_body_size 1G;
location / {
# redirect to docker registry
if ($http_user_agent ~ docker ) {
proxy_pass http://registry;
}
proxy_pass http://nexus;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
}
}
}
kind: ConfigMap
metadata:
creationTimestamp: null
name: internal-proxy-conf
namespace: default
selfLink: /api/v1/namespaces/default/configmaps/internal-proxy-conf
</code></pre>
<p><em>proxy-deployment.yaml</em> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: internal-proxy
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
proxy: internal
spec:
containers:
- name: nginx
image: nginx:1.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
volumeMounts:
- name: internal-proxy-conf
mountPath: /etc/nginx/
env:
# This is a workaround to easily force a restart by incrementing the value (numbers must be quoted)
# NGINX needs to be restarted for configuration changes, especially DNS changes, to be detected
- name: RESTART_
value: "0"
volumes:
- name: internal-proxy-conf
configMap:
name: internal-proxy-conf
items:
- key: nginx.conf
path: nginx.conf
</code></pre>
<p><em>proxy-service.yaml</em> The proxy is deliberately of type <code>ClusterIP</code> because the ingress will forward traffic to it. Port 443 is not used in this example.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: internal-proxy
namespace: default
spec:
selector:
proxy: internal
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
type: ClusterIP
</code></pre>
<h2>Create Ingress</h2>
<p><em>nexus-ingress.yaml</em> This step assumes you have an nginx ingress controller. If you have a certificate you don't need an ingress and can instead expose the proxy service, but you won't have the automation benefits of kube-lego.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nexus
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
spec:
tls:
- hosts:
- nexus.example.com
secretName: nexus-tls
rules:
- host: nexus.example.com
http:
paths:
- path: /
backend:
serviceName: internal-proxy
servicePort: 80
</code></pre>
| Niel de Wet |
<p>I am unable to mount a Kubernetes secret to the <code>${HOME}/.ssh/id_rsa</code> path.</p>
<p>Following is my <code>secrets.yaml</code>, created using:</p>
<pre><code> kubectl create secret generic secret-ssh-auth --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=keys/id_rsa
</code></pre>
<pre><code>apiVersion: v1
data:
ssh-privatekey: abcdefgh
kind: Secret
metadata:
name: secret-ssh-auth
namespace: app
type: kubernetes.io/ssh-auth
---
apiVersion: v1
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
name: mysecret
namespace: app
type: Opaque
</code></pre>
<p>Following is my <code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-helm-test
labels:
helm.sh/chart: helm-test-0.1.0
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
template:
metadata:
labels:
app.kubernetes.io/name: helm-test
app.kubernetes.io/instance: nginx
spec:
serviceAccountName: nginx-helm-test
securityContext:
{}
containers:
- name: helm-test
securityContext:
{}
image: "nginx:1.16.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
env:
- name: HOME
value: /root
volumeMounts:
- mountPath: ${HOME}/.ssh/id_rsa
name: sshdir
readOnly: true
- name: foo
mountPath: /etc/foo
readOnly: true
volumes:
- name: sshdir
secret:
secretName: secret-ssh-auth
- name: foo
secret:
secretName: mysecret
</code></pre>
<p>All I want is to mount the <code>ssh-privatekey</code> value at <code>${HOME}/.ssh/id_rsa</code>, but for some reason the above mount does not happen.</p>
<p>At the same time, I am able to see the <code>foo</code> secret correctly at <code>/etc/foo/username</code>. Exhausted, to be honest, but I still want to finish this.</p>
<p>What am I doing wrong?</p>
| Noobie | <p>A K8s Secret of <a href="https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets" rel="nofollow noreferrer">type: kubernetes.io/ssh-auth</a> (i.e. an ssh-key secret) does not work out of the box as a mount point for <code>SSH</code>, since it mounts the key under the filename <code>ssh-privatekey</code>. To fix this you have to do a few things:</p>
<ol>
<li>You need to mount the <code>ssh-privatekey</code> key to <code>id_rsa</code> filename via <a href="https://kubernetes.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths" rel="nofollow noreferrer">secret:items:key projection</a> in your volume definition.</li>
<li>Mount the secret so it is NOT group/world readable, because the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#secret-files-permissions" rel="nofollow noreferrer">default mode/permissions</a> is <code>0644</code> (i.e. add <code>defaultMode: 0400</code> to your volume definition).</li>
</ol>
<p>Here is what I believe you need to change in your <code>deployment.yaml</code> to fix this problem:</p>
<pre class="lang-yaml prettyprint-override"><code>...
volumeMounts:
- mountPath: ${HOME}/.ssh
name: sshdir
readOnly: true
volumes:
- name: sshdir
secret:
secretName: secret-ssh-auth
defaultMode: 0400
items:
- key: ssh-privatekey
path: id_rsa
</code></pre>
| Neon |
<p>I'm trying to connect to an existing Kubernetes cluster that's running on AWS and run arbitrary commands on it using Java. Specifically, we are using fabric8 (although I am open to another API if you can provide a sufficient answer using one). The reason I need to do this in Java is because we plan to eventually incorporate this into our existing JUnit live tests.</p>
<p>For now I just need an example of how to connect to the server and get all of the pod names as an array of Strings. Can somebody show me a simple, concise example of how to do this?</p>
<p>i.e. I want the equivalent of this bash script using a Java API (again preferably using fabric8, but I'll accept another API if you know one):</p>
<pre><code>#!/bin/bash
kops export kubecfg --name $CLUSTER --state=s3://$STATESTORE
kubectl get pod -o=custom-columns=NAME:.metadata.name -n=$NAMESPACE
</code></pre>
| Alex Parker | <p>Here is the official Java client for Kubernetes.</p>
<p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p>
<p>It gives you a clean interface and lets you write Java code to execute against Kubernetes.</p>
<p>As shown on the documentation page, to list all pods:</p>
<pre><code>import io.kubernetes.client.ApiClient;
import io.kubernetes.client.ApiException;
import io.kubernetes.client.Configuration;
import io.kubernetes.client.apis.CoreV1Api;
import io.kubernetes.client.models.V1Pod;
import io.kubernetes.client.models.V1PodList;
import io.kubernetes.client.util.Config;
import java.io.IOException;
public class Example {
public static void main(String[] args) throws IOException, ApiException{
ApiClient client = Config.defaultClient();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
}
}
</code></pre>
<p>Hope it helps.</p>
| Kannaiyan |
<p>I have a K8s cluster, currently running on a single node (master+kubelet, 172.16.100.81). I have a config server image which I will run in a pod. The image talks to another pod named eureka-server. Both images are Spring Boot applications, and the eureka-server's HTTP address and port are defined by me. I need to pass the eureka-server's HTTP address and port to the config pod so that it can talk to the eureka-server.</p>
<p>I start the eureka server (pseudo code):</p>
<pre><code>kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
</code></pre>
<p>Then I use command "docker pull" to download config server image and run it as below:</p>
<pre><code>kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
</code></pre>
<p>With these steps, I did not find a way to pass the eureka-server HTTP address (master IP address 172.16.100.81:31000) to the config server. Are there methods I could use to pass the variable eureka-server=172.16.100.81:31000 to the config server pod? I know I should use an Ingress for K8s networking, but currently I use NodePort.</p>
| user84592 | <p>Generally, you don't need a NodePort when you want two pods to communicate with each other. A simpler ClusterIP service is enough.</p>
<p>Whenever you expose a deployment with a service, it becomes internally discoverable through the cluster DNS. Both of your exposed services can be reached from inside the cluster at
<code>http://config-server.default</code> and <code>http://eureka-server.default</code> on their service ports. <code>default</code> is the namespace here.</p>
<p><code>172.16.100.81:31000</code> will make it accessible from <em>outside</em> the cluster.</p>
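<p>If you want to make the address explicit instead of relying on the application to build it, a minimal sketch of the config-server Deployment is shown below. The environment variable name <code>EUREKA_SERVER_URL</code> and the service port <code>8761</code> are assumptions here; use whatever variable your Spring Boot application actually reads and the port your eureka-server Service exposes:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical sketch - the env var name and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: config-server
  template:
    metadata:
      labels:
        app: config-server
    spec:
      containers:
      - name: config-server
        image: config-server-image
        ports:
        - containerPort: 8888
        env:
        # cluster-internal DNS name of the eureka-server Service
        - name: EUREKA_SERVER_URL
          value: "http://eureka-server.default.svc.cluster.local:8761"
</code></pre>
<p>This way the config server never needs to know the node IP or the NodePort; it always reaches eureka through the stable Service name.</p>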
| Amrit |
<p>I've previously used both types, I've also read through the docs at:</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a>
<a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a></p>
<p>However, it's still not clear what the difference is. Both seem to support the same storage types; the only thing that comes to mind is that there seems to be a 'provisioning' aspect to persistent volumes.</p>
<p><strong>What is the practical difference?
Are there advantages / disadvantages between the two - or for what use case would one be better suited than the other?</strong></p>
<p><strong>Is it perhaps just 'syntactic sugar'?</strong></p>
<p>For example, NFS could be mounted as a volume or as a persistent volume. Both require an NFS server, and both will have their data 'persisted' between mounts. What difference would there be in this situation?</p>
| Chris Stryczynski | <p>Volume decouples the storage from the Container. Its lifecycle is coupled to a pod. It enables safe container restarts and sharing data between containers in a pod.</p>
<p>Persistent Volume decouples the storage from the Pod. Its lifecycle is independent. It enables safe pod restarts and sharing data between pods.</p>
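<p>As a rough sketch of the practical difference (the NFS server <code>nfs.example.com</code> and the paths below are assumptions, not taken from your setup): with a plain volume the storage details live inside each Pod spec, while with a PersistentVolume/PersistentVolumeClaim the same details are declared once at the cluster level and Pods only reference a claim:</p>
<pre class="lang-yaml prettyprint-override"><code># Plain volume: NFS details are embedded in the Pod spec itself.
apiVersion: v1
kind: Pod
metadata:
  name: app-inline
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    nfs:
      server: nfs.example.com   # assumed NFS server
      path: /exports/data
---
# PV/PVC: the NFS details are declared once, independently of any Pod.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: ""          # bind to the statically created PV above
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
# The Pod references only the claim, not the storage backend.
apiVersion: v1
kind: Pod
metadata:
  name: app-pvc
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc
</code></pre>
<p>The data on the NFS share is the same in both cases; the PV/PVC variant just moves the storage definition out of the Pod so it can be provisioned, managed and swapped independently of the workloads that use it.</p>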
| itaysk |
<p>I am new to OpenShift and I have a deployment config there. I know how to see the node (IP) of my current pods.</p>
<p>The deployment config has been deployed many times, and the pods have been redeployed to many nodes as well. How can I check the history of nodes that the pods have been deployed to?</p>
| user13904118 | <p>Note that in OpenShift, or Kubernetes in general, it should not matter which node your application runs on. This should be completely transparent to you, and you should not need to care too much about which exact node your application is running on.</p>
<blockquote>
<p>I am new to OpenShift and I have a deployment config there. And I know how to see the node (ip) of my current pods,</p>
</blockquote>
<p>You can see the assigned node in the OpenShift Web Console when selecting a certain Pod (there is a field for it). You can also use the following option with <code>oc get pods</code> to show the node where the Pod is running when using <code>oc</code>:</p>
<pre><code>oc get pods -o wide
</code></pre>
<blockquote>
<p>the deployment config have been deployed many times, and the pods have been redeployed to many nodes as well. how can I check the history nodes that the pods have been deployed to?</p>
</blockquote>
<p>When a Pod is deleted, the definition is not kept. This means this information is not available unless you specifically query it.</p>
| Simon |
<p>With the following code, I'm able to fetch all the Pods running in a cluster. How can I find the Pod Controller (Deployment/DaemonSet) using the Kubernetes go-client library?</p>
<pre class="lang-golang prettyprint-override"><code>var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the kubeClient
kubeClient, err := kubernetes.NewForConfig(config)
metricsClient, err := metricsv.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := kubeClient.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, pod := range pods.Items {
fmt.Println(pod.Name)
// how can I get the Pod controller? (Deployment/DaemonSet)
// e.g. fmt.Println(pod.Controller.Name)
}
</code></pre>
| AngryPanda | <p>By following @Jonas' suggestion I was able to get the Pod's controller. Here's a fully working sample:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"flag"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
"path/filepath"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the kubeClient
kubeClient, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := kubeClient.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, pod := range pods.Items {
if len(pod.OwnerReferences) == 0 {
fmt.Printf("Pod %s has no owner", pod.Name)
continue
}
var ownerName, ownerKind string
switch pod.OwnerReferences[0].Kind {
case "ReplicaSet":
replica, repErr := kubeClient.AppsV1().ReplicaSets(pod.Namespace).Get(context.TODO(), pod.OwnerReferences[0].Name, metav1.GetOptions{})
if repErr != nil {
panic(repErr.Error())
}
ownerName = replica.OwnerReferences[0].Name
ownerKind = "Deployment"
case "DaemonSet", "StatefulSet":
ownerName = pod.OwnerReferences[0].Name
ownerKind = pod.OwnerReferences[0].Kind
default:
fmt.Printf("Could not find resource manager for type %s\n", pod.OwnerReferences[0].Kind)
continue
}
fmt.Printf("POD %s is managed by %s %s\n", pod.Name, ownerName, ownerKind)
}
}
</code></pre>
| AngryPanda |
<p>Azure admins created a cluster for us.
On a VM I installed <code>"az cli" and "kubectl"</code>.
With my account from the Azure Portal I can see that Kubernetes Service and the Resource Group to which it belongs.
At the level of that cluster in the Azure Portal I can see that I have a role:</p>
<blockquote>
<p>"AKS Cluster Admin Operator"</p>
</blockquote>
<p>I am logged in on the VM with my account. I need to configure kubectl to work with our cluster.
When I try to execute:</p>
<pre><code>az aks get-credentials --resource-group FRONT-AKS-NA2 --name front-aks
</code></pre>
<p>I am getting error:</p>
<blockquote>
<p>ForbiddenError: The client 'my_name@my_comp.COM' with object id
'4ea46ad637c6' does not have authorization to perform action
'Microsoft.ContainerService/managedClusters/listClusterUserCredential/action'
over scope
'/subscriptions/89e05d73-8862-4007-a700-0f895fc0f7ea/resourceGroups/FRONT-AKS-NA2/providers/Microsoft.ContainerService/managedClusters/front-aks'
or the scope is invalid. If access was recently granted, please
refresh your credentials.</p>
</blockquote>
| vel | <p>In my case, refreshing the recently granted credentials with the following command helped:</p>
<pre class="lang-bash prettyprint-override"><code>az account set --subscription "your current subscription name"
</code></pre>
<p>It leads to a re-login and fixes the issue.</p>
| vladimir |
<p>I have an EKS cluster with a DaemonSet that mounts an S3 bucket into all pods.</p>
<p>Whenever there is some issue or a pod restarts, the mounted volume becomes inaccessible and throws the error below.</p>
<pre><code>Transport endpoint is not connected
</code></pre>
<p>To resolve this error, I have to manually unmount the volume and restart the daemon.</p>
<pre><code>umount /mnt/data-s3-fuse
</code></pre>
<p>What could be the permanent solution for this issue?</p>
<p>My DaemonSet file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
app: s3-provider
name: s3-provider
namespace: airflow
spec:
template:
metadata:
labels:
app: s3-provider
spec:
containers:
- name: s3fuse
image: image
lifecycle:
preStop:
exec:
command: ["/bin/sh","-c","umount -f /opt/airflow/dags"]
securityContext:
privileged: true
capabilities:
add:
- SYS_ADMIN
# use ALL entries in the config map as environment variables
envFrom:
- configMapRef:
name: s3-config
volumeMounts:
- name: devfuse
mountPath: /dev/fuse
- name: mntdatas3fs
mountPath: /opt/airflow/dags:shared
volumes:
- name: devfuse
hostPath:
path: /dev/fuse
- name: mntdatas3fs
hostPath:
path: /mnt/data-s3-fuse
</code></pre>
<p>and my pod YAML is:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
namespace: airflow
spec:
containers:
- image: nginx
name: s3-test-container
securityContext:
privileged: true
volumeMounts:
- name: mntdatas3fs
mountPath: /opt/airflow/dags:shared
livenessProbe:
exec:
command: ["ls", "/opt/airflow/dags"]
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
volumes:
- name: mntdatas3fs
hostPath:
path: /mnt/data-s3-fuse
</code></pre>
<p>I am using the below code for the s3 kubernetes fuse.</p>
<p><a href="https://github.com/freegroup/kube-s3" rel="nofollow noreferrer">https://github.com/freegroup/kube-s3</a></p>
| Lijo Abraham | <p>OK, I think I solved it.
It seems like sometimes the pod loses the connection, resulting in "Transport endpoint is not connected".
The workaround I found is to add an init container that tries to unmount the folder beforehand. That seems to fix the issue.
Note that you want to mount a higher-level folder, so you have access to the mount point on the node.
I will let it run and see if the problem comes back; it has fixed the issue once here so far:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: s3-provider
name: s3-provider
spec:
selector:
matchLabels:
app: s3-provider
template:
metadata:
labels:
app: s3-provider
spec:
initContainers:
- name: init-myservice
image: bash
command: ['bash', '-c', 'umount -l /mnt/data-s3-fs/root ; true']
securityContext:
privileged: true
capabilities:
add:
- SYS_ADMIN
# use ALL entries in the config map as environment variables
envFrom:
- configMapRef:
name: s3-config
volumeMounts:
- name: devfuse
mountPath: /dev/fuse
- name: mntdatas3fs-init
mountPath: /mnt:shared
containers:
- name: s3fuse
image: 963341077747.dkr.ecr.us-east-1.amazonaws.com/kube-s3:1.0
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command: ["bash", "-c", "umount -f /srv/s3-mount/root"]
securityContext:
privileged: true
capabilities:
add:
- SYS_ADMIN
# use ALL entries in the config map as environment variables
envFrom:
- configMapRef:
name: s3-config
env:
- name: S3_BUCKET
value: s3-mount
- name: MNT_POINT
value: /srv/s3-mount/root
- name: IAM_ROLE
value: none
volumeMounts:
- name: devfuse
mountPath: /dev/fuse
- name: mntdatas3fs
mountPath: /srv/s3-mount/root:shared
volumes:
- name: devfuse
hostPath:
path: /dev/fuse
- name: mntdatas3fs
hostPath:
type: DirectoryOrCreate
path: /mnt/data-s3-fs/root
- name: mntdatas3fs-init
hostPath:
type: DirectoryOrCreate
path: /mnt
</code></pre>
| GuySoft |
<p>We followed the solution suggested in <a href="https://stackoverflow.com/questions/71957287/apache-ignite-c-sharp-client-connection-configuration-for-kubernetes">Apache Ignite C# Client Connection configuration for kubernetes</a> as a thick client to connect to the Ignite cluster running in Kubernetes.</p>
<p>We get the below error message on start:</p>
<pre><code>failed to start: System.EntryPointNotFoundException: Unable to find an entry point named 'dlopen' in shared library 'libcoreclr.so'.
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.NativeMethodsCore.dlopen(String filename, Int32 flags)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.Load(String dllPath)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.LoadDll(String filePath, String simpleName)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String configJvmDllPath, ILogger log)
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
</code></pre>
<p>We included openjdk8 in the Docker image. Here is the Dockerfile:</p>
<pre><code>
#FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
#WORKDIR /app
#EXPOSE 80
#EXPOSE 443
ARG REPO=mcr.microsoft.com/dotnet/runtime
FROM $REPO:3.1.24-alpine3.15 AS base
# Install ASP.NET Core
RUN aspnetcore_version=3.1.24 \
&& wget -O aspnetcore.tar.gz https://dotnetcli.azureedge.net/dotnet/aspnetcore/Runtime/$aspnetcore_version/aspnetcore-runtime-$aspnetcore_version-linux-musl-x64.tar.gz \
&& aspnetcore_sha512='1341b6e0a9903b253a69fdf1a60cd9e6be8a5c7ea3c4a52cd1a8159461f6ba37bef7c2ae0d6df5e1ebd38cd373cf384dc55c6ef876aace75def0ac77427d3bb0' \
&& echo "$aspnetcore_sha512 aspnetcore.tar.gz" | sha512sum -c - \
&& tar -oxzf aspnetcore.tar.gz -C /usr/share/dotnet ./shared/Microsoft.AspNetCore.App \
&& rm aspnetcore.tar.gz
RUN apk add openjdk8
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk
ENV PATH="$JAVA_HOME/bin:${PATH}"
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
...
RUN dotnet restore "API.csproj"
COPY . .
WORKDIR "API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]````
</code></pre>
| Prem | <p>This Ignite issue on Alpine Linux was fixed in 2.13, which was released yesterday - please try upgrading.</p>
<p><a href="https://issues.apache.org/jira/browse/IGNITE-16749" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/IGNITE-16749</a>
<a href="https://www.nuget.org/packages/Apache.Ignite/2.13.0" rel="nofollow noreferrer">https://www.nuget.org/packages/Apache.Ignite/2.13.0</a></p>
<hr />
<p>Additionally, set <strong>LD_LIBRARY_PATH</strong> environment variable:</p>
<ul>
<li><strong>openjdk8</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/amd64/server</code></li>
<li><strong>openjdk11</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/server</code></li>
</ul>
<p>Do not set <code>JAVA_HOME</code> and <code>PATH</code> - it is not necessary.</p>
| Pavel Tupitsyn |
<p>I have created ConfigMap from Openshift 4 Windows 10 CLI:</p>
<pre><code>.\oc create configmap my-cacerts --from-file=cacerts
</code></pre>
<p>I can see ConfigMap with name <strong>my-cacerts</strong> and download binary file <strong>cacerts</strong> from it using web interface of Openshift 4</p>
<p>Now I mount it (part of <strong>my-deployment.yaml</strong>)</p>
<pre><code>containers:
volumeMounts:
- name: my-cacerts-volume
mountPath: /etc/my/cacerts
volumes:
- name: my-cacerts-volume
config-map:
name: my-cacerts
</code></pre>
<p>Unfortunately, <em>/etc/my/cacerts</em> is mounted as an empty folder and not as a single binary file.</p>
<p>How can I mount <strong>cacerts</strong> as a file and not as a directory?</p>
<p><strong>Update:</strong></p>
<p>If I issue</p>
<pre><code>.\oc get configmap my-cacerts
</code></pre>
<p>There is following output:</p>
<pre><code>apiVersion: v1
binaryData:
cacerts: ... big long base64...
kind: ConfigMap
metadata: ...
</code></pre>
<p>If I issue</p>
<pre><code>.\oc describe pod my-pod
</code></pre>
<p>I get</p>
<pre><code>Volumes:
my-cacerts-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
</code></pre>
| tillias | <p>Your <code>volumes</code> definition is incorrect: the field <code>config-map</code> <strong>does not exist and is invalid</strong> (the correct key is <code>configMap</code>), but it seems the API is silently falling back to an <code>EmptyDir</code> here, thus leading to an empty directory.</p>
<p>When you create a <code>ConfigMap</code> using the <code>oc</code> command above, the result will be a <code>ConfigMap</code> that looks like this (note that there is one key called "cacerts"):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-cacerts
data:
cacerts: |
Hello world!
</code></pre>
<p>In the <code>volumes</code> section, then use <code>configMap:</code> together with <code>subPath</code> as follows to mount a only a single key ("cacerts") from your <code>ConfigMap</code>:</p>
<pre><code>$ oc edit deployment my-deployment
[..]
spec:
containers:
- image: registry.fedoraproject.org/fedora-minimal:33
name: fedora-minimal
volumeMounts:
- mountPath: /etc/my/cacerts
name: my-cacerts-volume
subPath: cacerts
[..]
volumes:
- configMap:
name: my-cacerts
defaultMode: 420
name: my-cacerts-volume
</code></pre>
<p>This then results in:</p>
<pre><code>$ oc rsh ...
sh-5.0$ ls -l /etc/my/cacerts
-rw-r--r--. 1 root 1000590000 13 Dec 3 19:11 /etc/my/cacerts
sh-5.0$ cat /etc/my/cacerts
Hello world!
</code></pre>
<p>You can also leave <code>subPath</code> out and set <code>/etc/my/</code> as the destination for the same result, as for each key there will be a file:</p>
<pre><code>[..]
volumeMounts:
- mountPath: /etc/my/
name: my-cacerts-volume
[..]
volumes:
- configMap:
name: my-cacerts
name: my-cacerts-volume
</code></pre>
<p>For the right syntax, you can also check <a href="https://docs.openshift.com/container-platform/4.6/builds/builds-configmaps.html#builds-configmaps-use-case-consuming-in-volumes_builds-configmaps" rel="nofollow noreferrer">the documentation</a></p>
| Simon |
<p>I have a Python process that I want to fire up every <em>n</em> minutes in a Kubernetes cronjob and read a number of messages (say 5) from a queue, and then process/convert some files and run analysis on results based on these queue messages. If the process is still running after <em>n</em> minutes, I don't want to start a new process. In total, I would like a number of these (say 3) of these to be able to run at the same time, however, there can never be more than 3 processes running at the same time. To try and implement this, I tried the following (simplified):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: some-job
namespace: some-namespace
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
parallelism: 3
template:
spec:
containers:
- name: job
image: myimage:tag
imagePullPolicy: Always
command: ['python', 'src/run_job.py']
</code></pre>
<p>Now what this amounts to is a maximum of three processes running at the same time due to 'parallelism' being 3, and concurrencyPolicy being "Forbid", even if the processes go over the 5 minute mark.</p>
<p>The problem I specifically have is that one pod (e.g. pod 1) can take longer than the other two to finish, which means that pod 2 and 3 might finish after a minute, while pod one only finishes after 10 minutes due to processing of larger files from the queue.</p>
<p>Where I thought that <code>parallelism: 3</code> would cause pod 2 and 3 to be deleted and replaced after finishing (when new cron interval hits), they are not and have to wait for pod 1 to finish before starting three new pods when the cron interval hits again.</p>
<p>When I think about it, this functionality makes sense given the specification and meaning of what a cronjob is. However, I would like to know if it would be able to have these pods/processes not be dependent on one another for restart without having to define a duplicate cronjob, all running one process.</p>
<p>Otherwise, maybe I would like to know if it's possible to easily launch more duplicate cronjobs without copying them into multiple manifests.</p>
| Tim | <p>Duplicate CronJobs seem to be the way to achieve what you are looking for. Produce 3 duplicates, each running a single job at a time. You could template the job manifest and generate multiple copies, as in the following example and the sketch after the link. The example is not in your problem context, but you can get the idea:
<a href="http://kubernetes.io/docs/tasks/job/parallel-processing-expansion" rel="nofollow noreferrer">http://kubernetes.io/docs/tasks/job/parallel-processing-expansion</a></p>
| gordanvij |
<p>I executed a scenario where I deployed a Microsoft SQL database on my K8s cluster using a PV and PVC. It works well, but I see some strange behaviour. I created a PV, but it is only visible on one node and not on the other worker nodes. What am I missing here? Any inputs, please?</p>
<p>Background:</p>
<p>Server 1 - Master</p>
<p>Server 2 - Worker</p>
<p>Server 3 - Worker</p>
<p>Server 4 - Worker</p>
<p>Pod : "MyDb" is running on Server (Node) 4 without any replica set. I am guessing because my POD is running on server-4, PV got created on server four when created POD and refer PVC (claim) in it.</p>
<p>Please let me know your thought on this issue or share your inputs about mounting shared disk in production cluster. </p>
<p>Those who want to deploy SQL DB on K8s cluster, can refer blog posted by Philips. Link below,</p>
<p><a href="https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-1/" rel="nofollow noreferrer">https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-1/</a> (without PV)</p>
<p><a href="https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-2/" rel="nofollow noreferrer">https://www.phillipsj.net/posts/sql-server-on-linux-on-kubernetes-part-2/</a> (with PV and Claim)</p>
<p>Regards,
Farooq</p>
<hr>
<p>Please see below my findings on my original problem statement.
Problem: The pod for SQL Server was created. At runtime K8s created this pod on server-4 and hence created the PV on server-4. However, on the other nodes the PV path (/tmp/sqldata) wasn't created.</p>
<ul>
<li>I shut down the server-4 node and ran the command to delete the SQL pod (no replica set was used initially).</li>
<li>The status of the pod changed to "Terminating".</li>
<li>Nothing happened for a while.</li>
<li>I restarted server-4 and noticed the pod got deleted immediately.</li>
</ul>
<p>Next step:
- I stopped server-4 again and created the same pod.
- The pod was created on the server-3 node at runtime, and I see the PV (/tmp/sqldata) was created on server-3 as well. However, all my data (e.g. sample tables) was lost. It is a fresh new PV on server-3 now.</p>
<p>I am assuming a PV would be a mounted volume from external storage and not storage/disk from a node in the cluster.</p>
| Solutions Architect | <blockquote>
<p>I am guessing because my POD is running on server-4, PV got created on server four when created POD and refer PVC (claim) in it.</p>
</blockquote>
<p>This is more or less correct, and you should be able to verify it by simply deleting the Pod and recreating it (since you say you do not have a ReplicaSet doing that for you). The <code>PersistentVolume</code> will then be visible on the node where the Pod is scheduled.</p>
<p>Edit: The above assumes that you are using an external storage provider such as NFS or AWS EBS (see <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">possible storage providers</a> for Kubernetes). With <code>HostPath</code> the above does NOT apply and a PV will be created locally on a node (and will not be mounted to another node).</p>
<p>There is no reason to mount the <code>PersistentVolume</code> also to the other nodes. Imagine having hundreds of nodes, would you want to mount your <code>PersistentVolume</code> to all of them, while your Pod is just running on one?</p>
<p>You are also asking about "shared" disks. The <code>PersistentVolume</code> created in the blog post you linked is using <code>ReadWriteMany</code>, so you actually can start multiple Pods accessing the same volume (given that your storage supports that as well). But your software (a database in your case) needs to support having multiple processes accessing the same data.</p>
<p>Especially when considering databases, you should also look into <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a>, as this basically allows you to define Pods that always use the same storage, which can be very interesting for databases. Whether you <a href="https://cloud.google.com/blog/products/databases/to-run-or-not-to-run-a-database-on-kubernetes-what-to-consider" rel="nofollow noreferrer">should run or not run databases on Kubernetes</a> is a whole different topic...</p>
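<p>To give a rough idea of what that looks like (the image, mount path, credentials and sizes below are placeholders/assumptions, not your actual setup), a StatefulSet declares its storage through <code>volumeClaimTemplates</code>, so each replica gets its own PersistentVolumeClaim that survives Pod restarts and rescheduling:</p>
<pre class="lang-yaml prettyprint-override"><code># Minimal sketch only - adjust image, credentials and storage to your environment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest   # assumed image
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: "ChangeMe!Placeholder"    # use a Secret in a real setup
        volumeMounts:
        - name: data
          mountPath: /var/opt/mssql        # SQL Server data directory
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>
<p>Kubernetes then creates one PVC per replica (for example <code>data-mydb-0</code>), and the same claim is re-attached wherever that Pod is rescheduled, which is the behaviour you were expecting in your experiment.</p>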
| Simon |