<p>I am running 3 mongodb pods and a separate service and persistent volume claim for each pod. I want to set up MongoDB replication among the 3 pods. I logged into the 1st pod, ran the mongo command, and then configured the hosts as <strong>podname.servicename.namespace.svc.cluster.local:27017</strong> for each pod.</p> <pre><code>rs.initiate(
  {
    "_id": "rs0",
    "members": [
      { "_id": 0, "host": "mongo-.mongo.default.svc.cluster.local:27017", "priority": 10 },
      { "_id": 1, "host": "mongo-1.mongo.default.svc.cluster.local:27017", "priority": 9 },
      { "_id": 2, "host": "mongo-2.mongo.default.svc.cluster.local:27017", "arbiterOnly": true }
    ]
  }
)
</code></pre> <p>I am getting an error like this:</p> <blockquote> <p>replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-1.mongo.default.svc.cluster.local:27017 failed with Error connecting to mongo-1.mongo.default.svc.cluster.local:27017 (10.36.0.1:27017) :: caused by :: Connection refused, mongo-2.mongo.default.svc.cluster.local:27017 failed with Error connecting to mongo-2.mongo.default.svc.cluster.local:27017 (10.44.0.3:27017) :: caused by :: Connection refused</p> </blockquote> <p>Here I have a doubt about whether the cluster-IP or the node-IP is taken as the host while doing MongoDB replication in a Kubernetes cluster.</p> <p>Could anybody suggest how to configure the host name while doing MongoDB replication in Kubernetes?</p>
BSG
<p>You must explicitly bind <code>mongod</code> to the non-loopback interface since mongo 3.6, according to <a href="https://docs.mongodb.com/manual/reference/configuration-options/#net.bindIp" rel="nofollow noreferrer">the fine manual</a></p> <p>You can test that theory yourself by exec-ing into <code>mongo-1.mongo.default</code> and attempting to manually connect to <code>mongo-2.mongo.default</code>, which I am about 90% certain will fail for you in the same way it fails for <code>mongod</code>.</p>
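<p>For illustration, a minimal sketch of what that looks like in the StatefulSet's container spec (the image tag and replica-set name are assumptions; adjust to your setup) -- passing <code>--bind_ip_all</code> makes <code>mongod</code> listen on all interfaces so the other members can reach it:</p> <pre><code>containers:
- name: mongo
  image: mongo:3.6
  command:
  - mongod
  - --replSet
  - rs0
  - --bind_ip_all   # or --bind_ip 0.0.0.0, or net.bindIp in mongod.conf
  ports:
  - containerPort: 27017
    name: mongo
</code></pre>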
mdaniel
<p>I'm trying to scale my Docker containers with Minikube on Windows 10 Enterprise Edition. However, I'm running into a few conflicts with Hyper-V and VirtualBox. I know Docker requires Hyper-V to run properly, while Minikube requires VirtualBox (it shows an error if Hyper-V is enabled):</p> <pre><code>C:\WINDOWS\system32&gt;minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
160.27 MB / 160.27 MB [============================================] 100.00% 0s
E0822 11:42:07.898412 13028 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: This computer is running Hyper-V. VirtualBox won't boot a 64bits VM when Hyper-V is activated. Either use Hyper-V as a driver, or disable the Hyper-V hypervisor. (To skip this check, use --virtualbox-no-vtx-check).
</code></pre> <p>If I disable Hyper-V, I'm able to start Minikube properly, but Docker does not work and shows an error to enable Hyper-V.</p> <p>I also tried running Minikube with the Hyper-V driver, but I also get this error:</p> <pre><code>C:\WINDOWS\system32&gt;minikube start --vm-driver hyperv
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0822 11:44:32.323877 13120 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: no External vswitch found. A valid vswitch must be available for this command to run. Check https://docs.docker.com/machine/drivers/hyper-v/.
</code></pre> <p>Any solution to this?</p>
Rking14
<blockquote> <p>I also tried running minikube with Hyper-V driver, but also get this error:</p> </blockquote> <p>There is an explicit warning about that HyperV and vswitch situation in their docs: <a href="https://github.com/kubernetes/minikube/blob/v0.28.2/docs/drivers.md#hyperv-driver" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/v0.28.2/docs/drivers.md#hyperv-driver</a></p> <p>Although in <code>minikube</code>'s defense, it also does say <strong>right at the end</strong> the URL one should go to in order to read about the <code>--hyperv-virtual-switch</code> flag one should provide in addition to <code>--vm-driver=hyperv</code>.</p> <p>Navigating to that linked docker page, it even provides a step-by-step with screenshots example of how to accomplish that.</p>
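<p>For reference, a rough sketch of the two steps (the switch name and network adapter name are placeholders; pick your own in Hyper-V Manager or an elevated PowerShell):</p> <pre><code>PS C:\WINDOWS\system32&gt; New-VMSwitch -Name "minikube-switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
PS C:\WINDOWS\system32&gt; minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube-switch"
</code></pre>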
mdaniel
<p>I have created and deployed an application on ICP 2.1 and have exposed a NodePort as my service. I am able to navigate to the URL from the NodePort. How do I go to a specific path on the URL directly from the NodePort? I am using a yaml file to create the deployment and service. Where should I specify the path?</p>
vidya
<blockquote> <p>How do i go to a specific path on the url directly from the Nodeport</p> </blockquote> <p>The short version is that you don't, since a <code>NodePort</code> is merely an exposure of an existing <code>port:</code> from a <code>Service</code>.</p> <p>The medium length version is that using an ingress controller (such as the "default" nginx one) would allow you to <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/README/#app-root" rel="nofollow noreferrer">add an app-root</a> to the <code>Ingress</code> resource, and then use the <code>NodePort</code> belonging to the ingress controller rather than the <code>NodePort</code> of the upstream <code>Service</code> itself. I'm pretty sure <a href="https://kubernetes.github.io/ingress-nginx/ingress-controller-catalog/" rel="nofollow noreferrer">all of the ingress controllers</a> support that kind of behavior since it's a fairly common scenario.</p> <p>The long version is that you can manually put something like <code>nginx</code> or <code>haproxy</code> between your <code>Service</code> and the upstream <code>Pod</code> to artificially inject a URI prefix, with the disadvantage that if you wish to do that more than once, you'd be better off using an <code>Ingress</code> resource so that functionality is handled for you in a standardized way.</p>
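<p>As a hedged sketch of the ingress-controller approach (the hostname, service name, and <code>/myapp</code> root are placeholders), an <code>Ingress</code> using the nginx <code>app-root</code> annotation would look roughly like:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /myapp
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 80
</code></pre> <p>The controller then redirects requests for <code>/</code> to the configured app-root, and you reach it via the ingress controller's own <code>NodePort</code>.</p>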
mdaniel
<p>I'm trying to setup Redis cluster in Kubernetes. The major requirement is that all of nodes from Redis cluster have to be available from outside of Kubernetes. So clients can connect every node directly. But I got no idea how to configure service that way. </p> <p>Basic config of cluster right now. It's ok for services into k8s but no full access from outside.</p> <pre><code> apiVersion: v1 kind: ConfigMap metadata: name: redis-cluster labels: app: redis-cluster data: redis.conf: |+ cluster-enabled yes cluster-require-full-coverage no cluster-node-timeout 15000 cluster-config-file /data/nodes.conf cluster-migration-barrier 1 appendonly no protected-mode no --- apiVersion: v1 kind: Service metadata: annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: "false" name: redis-cluster labels: app: redis-cluster spec: type: NodePort ports: - port: 6379 targetPort: 6379 name: client - port: 16379 targetPort: 16379 name: gossip selector: app: redis-cluster --- apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: redis-cluster labels: app: redis-cluster spec: serviceName: redis-cluster replicas: 6 template: metadata: labels: app: redis-cluster spec: hostNetwork: true containers: - name: redis-cluster image: redis:4.0.10 ports: - containerPort: 6379 name: client - containerPort: 16379 name: gossip command: ["redis-server"] args: ["/conf/redis.conf"] readinessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 20 periodSeconds: 3 volumeMounts: - name: conf mountPath: /conf readOnly: false volumes: - name: conf configMap: name: redis-cluster items: - key: redis.conf path: redis.conf </code></pre>
Shtlzut
<p>Given:</p> <pre><code> spec: hostNetwork: true containers: - name: redis-cluster ports: - containerPort: 6379 name: client </code></pre> <p>It appears that your <code>StatefulSet</code> is misconfigured, since if <code>hostNetwork</code> is <code>true</code>, you have to provide <code>hostPort</code>, and that value should match <code>containerPort</code>, according to the PodSpec docs:</p> <blockquote> <p><code>hostPort integer</code> - Number of port to expose on the host. If specified, this must be a valid port number, 0 &lt; x &lt; 65536. If HostNetwork is specified, this must match ContainerPort.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#containerport-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#containerport-v1-core</a></p>
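<p>A minimal sketch of the corrected port declarations (keeping the names from the question's manifest) would be:</p> <pre><code>spec:
  hostNetwork: true
  containers:
  - name: redis-cluster
    ports:
    - containerPort: 6379
      hostPort: 6379
      name: client
    - containerPort: 16379
      hostPort: 16379
      name: gossip
</code></pre>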
mdaniel
<p>I have this weird error plaguing me.</p> <p>I am trying to get an activemq pod running with a kubernetes stateful set, volume attached. </p> <p>The activemq is just a plain old vanila docker image, picked it from here <a href="https://hub.docker.com/r/rmohr/activemq/" rel="nofollow noreferrer">https://hub.docker.com/r/rmohr/activemq/</a></p> <pre><code>INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@3fee9989: startup date [Thu Aug 23 22:12:07 GMT 2018]; root of context hierarchy INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb] INFO | KahaDB is version 6 INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) is starting INFO | Listening for connections at: tcp://activemq-0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector openwire started INFO | Listening for connections at: amqp://activemq-0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector amqp started INFO | Listening for connections at: stomp://activemq-0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector stomp started INFO | Listening for connections at: mqtt://activemq-0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector mqtt started WARN | [email protected]@65a15628{/,null,STARTING} has uncovered http methods for path: / INFO | Listening for connections at ws://activemq-0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector ws started INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) started INFO | For help or more information please see: http://activemq.apache.org WARN | Store limit is 102400 mb (current store usage is 6 mb). The data directory: /opt/activemq/data/kahadb only has 95468 mb of usable space. 
- resetting to maximum available disk space: 95468 mb WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null} java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:336)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:304)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.security.SecurityHandler.doStart(SecurityHandler.java:391)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.security.ConstraintSecurityHandler.doStart(ConstraintSecurityHandler.java:449)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.Server.start(Server.java:387)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at 
org.eclipse.jetty.server.Server.doStart(Server.java:354)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171] at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171] at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:265)[spring-core-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.config.MethodInvokingBean.invokeWithTargetException(MethodInvokingBean.java:119)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.config.MethodInvokingFactoryBean.afterPropertiesSet(MethodInvokingFactoryBean.java:106)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1692)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1630)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:742)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.apache.xbean.spring.context.ResourceXmlApplicationContext.&lt;init&gt;(ResourceXmlApplicationContext.java:64)[xbean-spring-4.2.jar:4.2] at org.apache.xbean.spring.context.ResourceXmlApplicationContext.&lt;init&gt;(ResourceXmlApplicationContext.java:52)[xbean-spring-4.2.jar:4.2] at org.apache.activemq.xbean.XBeanBrokerFactory$1.&lt;init&gt;(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4] at org.apache.activemq.xbean.XBeanBrokerFactory.createApplicationContext(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4] at 
org.apache.activemq.xbean.XBeanBrokerFactory.createBroker(XBeanBrokerFactory.java:67)[activemq-spring-5.15.4.jar:5.15.4] at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:71)[activemq-broker-5.15.4.jar:5.15.4] at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:54)[activemq-broker-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:87)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)[activemq-console-5.15.4.jar:5.15.4] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171] at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171] at org.apache.activemq.console.Main.runTaskClass(Main.java:262)[activemq.jar:5.15.4] at org.apache.activemq.console.Main.main(Main.java:115)[activemq.jar:5.15.4] </code></pre> <p>The kubernete activemq pod is running fine if we don't define it with stateful sets.</p> <p>Below is the spec</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: activemq namespace: dev labels: app: activemq spec: replicas: 1 serviceName: activemq-svc selector: matchLabels: app: activemq template: metadata: labels: app: activemq spec: securityContext: runAsUser: 1000 fsGroup: 2000 runAsNonRoot: false containers: - name: activemq image: "mydocker/amq:latest" imagePullPolicy: "Always" ports: - containerPort: 61616 name: port-61616 - containerPort: 8161 name: port-8161 volumeMounts: - name: activemq-data mountPath: "/opt/activemq/data" restartPolicy: Always imagePullSecrets: - name: regsecret tolerations: - effect: NoExecute key: appstype operator: Equal value: ibd-mq affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: appstype operator: In values: - dev-mq volumeClaimTemplates: - metadata: name: activemq-data spec: accessModes: - ReadWriteOnce storageClassName: "gp2-us-east-2a" resources: requests: storage: 100Gi </code></pre>
virtuvious
<blockquote> <p>WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null}</p> <p>java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false</p> </blockquote> <p>Unless you altered the <code>activemq</code> userid in your image, then that filesystem permission issue is caused by this stanza in your <code>PodSpec</code>:</p> <pre><code>spec: securityContext: runAsUser: 1000 fsGroup: 2000 runAsNonRoot: false </code></pre> <p>failing to match up with the userid configuration in <code>rmohr/activemq:5.15.4</code>:</p> <pre><code>$ docker run -it --entrypoint=/bin/bash rmohr/activemq:5.15.4 -c 'id -a' uid=999(activemq) gid=999(activemq) groups=999(activemq) </code></pre>
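<p>Assuming you keep the stock image and the <code>999</code> uid/gid shown above, a sketch of a <code>securityContext</code> that matches it (rather than the unrelated <code>1000</code>/<code>2000</code>) would be:</p> <pre><code>spec:
  securityContext:
    runAsUser: 999
    fsGroup: 999
</code></pre> <p>That way the mounted <code>/opt/activemq/data</code> volume ends up writable by the <code>activemq</code> user.</p>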
mdaniel
<p>I created an InitializerConfiguration that adds my initializer for pods.</p> <p>The documentation says to use a Deployment (<a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly</a>). However, doing so results in my initializer Pod being stuck in "pending" because it's waiting for itself to initialize it. I tried overriding the pending initializers to an empty list in the pod spec of the Deployment, but that seems to be ignored.</p> <p>What's the correct way to deploy a Pod initializer without deadlocking?</p> <p>I found a couple bug reports that seem related, but no solutions that worked for me: * <a href="https://github.com/kubernetes/kubernetes/issues/51485" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/51485</a> (based on this one I added the "initialize" verb for pods to the ClusterRole system:controller:replicaset-controller, but that didn't help either)</p>
cberner
<blockquote> <p>However, doing so results in my initializer Pod being stuck in "pending" because it's waiting for itself to initialize it</p> </blockquote> <p>But the docs say:</p> <blockquote> <p>You should first deploy the initializer controller and make sure that it is working properly before creating the <code>initializerConfiguration</code>. Otherwise, any newly created resources will be stuck in an uninitialized state.</p> </blockquote> <p>So it sounds to me like you will want to <code>kubectl delete initializerConfiguration --all</code> (or, of course, the specific name of the <code>initializerConfiguration</code>), allow your initializer Pod to start successfully, <strong>then</strong> <code>kubectl create -f my-initializer-config.yaml</code> or whatever.</p>
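<p>As a sketch of that sequence (the deployment name and configuration filename are placeholders):</p> <pre><code>kubectl delete initializerConfiguration --all
kubectl rollout status deploy/my-initializer   # wait for the controller Pod to come up
kubectl create -f my-initializer-config.yaml
</code></pre>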
mdaniel
<p>I have a single node kubernetes deployment running on a home server, on which I have several services running. Since it's a small local network, I wanted to block off a portion of the local address range that the rest of my devices use for pod ips, and then route to them directly.</p> <p>For example, if I have a web server running, instead of exposing port 80 as an external port and port forwarding from my router to the worker node, I would be able to port forward directly to the pod ip.</p> <p>I haven't had much luck finding information on how to do this though, is it possible? </p> <p>I'm new to kubernetes so I am sure I am leaving out important information, please let me know so I can update the question.</p> <hr> <p>I got this working by using the macvlan CNI plugin from the <a href="https://github.com/containernetworking/cni" rel="nofollow noreferrer">reference plugins</a>. Using kubeadm to set up the cluster, these plugins are already installed and the cluster will be configured to use them. The only thing to do is drop in a cni.conf (in <code>/etc/cni/net.d</code>). Mine looks like this </p> <pre><code>{ "name": "net", "type": "macvlan", "mode": "bridge", "master": "eno1", "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.0.0.0/8", "gateway": "10.0.0.1", "rangeStart": "10.0.10.2", "rangeEnd": "10.0.10.254" }]], "routes": [ { "dst": "0.0.0.0/0" } ] } } </code></pre> <p>Putting this in place is all that is needed for coredns to start up and any pods you run will have ips from the range defined in the config. Since this is on the same subnet as the rest of my lan, I can freely ping these containers and my router even lets me play with their settings since they have mac addresses (if you dont want this use ipvlan instead of macvlan, you'll still be able to ping and port forward and everything, your router just wont be enumerating all the devices since they dont have hardware addresses). </p> <p>Couple of caveats:</p> <ol> <li><p>Services won't work since they're all "fake" (e.g. they dont have interfaces its all iptables magic that makes them work). There's probably a way to make them work but it wasn't worth it for my use case</p></li> <li><p>For whatever reason the DNS server keeps revering to 10.96.0.1. I have no idea where it got that address from, but I have been working around it by defining <code>dnsPolicy: None</code> and setting <code>dnsConfig.nameservers[0]</code> to my routers IP. There's probably a better solution for for it.</p></li> <li><p>You should run kubeadm with <code>--service-cidr 10.0.10.0/24 --pod-network-cidr 10.0.10.0/24</code> or it seems like kubelet (or something) doesn't know how to talk to the pods. I actually don't know if <code>--service-cidr</code> matters but it seems like a good idea</p></li> <li><p>Out of the box, your pods wont be able to talk to the master since they are using macvlan devices enslaving its ethernet and for whatever reason macvlan doesn't let you talk between host and guest devices. As you can imagine this isnt a good thing. Solution is to manually add a macvlan device on the host with the same subnet as your pods.</p></li> <li><p>It seems like even ports you don't expose from the pod are usable from the lan devices (which isnt cool), probably since the iptables rules think that anything on the lan is cluster-internal. 
I haven't put much time into debugging this.</p></li> </ol> <p>This is probably some kind of cardinal sin for people used to using kubernetes in production, but its kind of cool and useful for a home setup, though it certainly feels like a hack sometimes. </p>
Max Ehrlich
<p>I believe the answer to your question is to use the <a href="https://github.com/containernetworking/plugins#ipam-ip-address-allocation" rel="nofollow noreferrer">dhcp IPAM</a> plugin to <a href="https://github.com/containernetworking/cni#readme" rel="nofollow noreferrer">CNI</a>, but being mindful about Pod address recycling. I say be mindful because it <em>might not</em> matter, unless you have high frequency Pod termination, but on the other hand I'm not sure where it falls on the Well That's Unfortunate™ spectrum if a Pod IP is recycled in the cluster.</p> <p>The bad news is that I have not had any experience with these alternative CNI plugins to be able to speak to the sharp edges one will need to be mindful of, so hopefully if someone else has then they can chime in.</p>
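<p>For reference, a sketch based on the asker's own macvlan config above (the interface name is an assumption), swapping <code>host-local</code> for <code>dhcp</code>; note the dhcp IPAM plugin also needs its daemon running on each host (e.g. <code>/opt/cni/bin/dhcp daemon</code>):</p> <pre><code>{
  "name": "net",
  "type": "macvlan",
  "mode": "bridge",
  "master": "eno1",
  "ipam": {
    "type": "dhcp"
  }
}
</code></pre>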
mdaniel
<p>I have installed Grafana from this git repo <a href="https://github.com/ibrahiem94/prometheus-kubernetes" rel="nofollow noreferrer">Grafana-Kubernetes</a>. After the installation, the Grafana web monitor link is available at <code>{aws LB}/api/v1/namespaces/monitoring/services/grafana/proxy</code>. But when I navigate to any link, it is redirected to <code>{aws LB}/{service like dashboard or datasource}</code> instead of <code>{aws LB}/api/v1/namespaces/monitoring/services/grafana/proxy/{dashboard or datasource}</code>.</p> <p>How can I solve this?</p>
Ibrahim Mohamed
<blockquote> <p>how can I solve this?</p> </blockquote> <p>Exposing functionality outside of the cluster is designed to be done with either a <code>Service</code> of type <code>LoadBalancer</code>, where kubernetes will create the ELB on your behalf, or with an <code>ingress-controller</code> such that you only have to have a single TCP/IP port or ELB but that dispatches to all the <code>Service</code>s in the cluster based on <code>Host:</code> header virtual-hosting.</p> <p>In short, you are trying to use <code>kubectl</code> in a manner that it wasn't designed to be used.</p>
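<p>As an illustration (the namespace, selector label, and Grafana's default port <code>3000</code> are assumptions to verify against that repo's manifests), a <code>LoadBalancer</code> <code>Service</code> would look roughly like:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
  namespace: monitoring
spec:
  type: LoadBalancer
  selector:
    app: grafana
  ports:
  - port: 80
    targetPort: 3000
</code></pre>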
mdaniel
<p>I am looking at a guide to install Kubernetes on AWS EC2 instances using kops (<a href="https://medium.com/containermind/how-to-create-a-kubernetes-cluster-on-aws-in-few-minutes-89dda10354f4" rel="nofollow noreferrer">Link</a>). I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control and etcd nodes. Is it possible to set an IP in some configuration file so that my cluster is created with a specific IP on my control node and my etcd node? If a control node restarts and does not have an Elastic IP, its IP changes, and a big number of issues start. I want to prevent this problem, or at least change my control node IP after deployment.</p>
Miguel Conde
<blockquote> <p>I want to install a Kubernetes cluster, but I want assign Elastic IP at least to my control and etcd nodes</p> </blockquote> <p>The correct way, and the way almost every provisioning tool that I know of does this, is to use either an Elastic Load Balancer (ELB) or the new Network Load Balancer (NLB) to put an abstraction layer in front of the master nodes for exactly that reason. So it does one step better than just an EIP and assigns one EIP per Availability Zone (AZ), along with a stable DNS name. It's my recollection that the masters can also keep themselves in sync with the ELB (unknown about the NLB, but certainly conceptually possible), so if new ones come online they register with the ELB automatically</p> <p>Then, a similar answer for the etcd nodes, and for the same reason, although as far as I know etcd has no such ability to keep the nodes in sync with the fronting ELB/NLB so that would need to be done with the script that provisions any new etcd nodes</p>
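<p>For what it's worth, kops exposes exactly this in its cluster spec (editable via <code>kops edit cluster</code>); a minimal sketch that fronts the API servers with a load balancer rather than a bare EIP:</p> <pre><code>spec:
  api:
    loadBalancer:
      type: Public
</code></pre>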
mdaniel
<p>I'm having trouble connecting to the kubernetes python client even though I'm following the examples <a href="https://github.com/kubernetes-client/python/blob/master/examples/example1.py" rel="noreferrer">here</a> in the api. </p> <p>Basically this line can't connect to the kubernetes client: </p> <pre><code>config.load_kube_config() </code></pre> <p><strong>What I'm doing:</strong> </p> <p>I have a Dockerfile file like this that I'm building my image with. This is just a simple python/flask app. </p> <pre><code>FROM python:2 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY requirements.txt /usr/src/app/ RUN pip install --no-cache-dir -r requirements.txt COPY . /usr/src/app EXPOSE 5000 CMD [ "python", "./app.py" ] </code></pre> <p>This is my requirements.txt:</p> <pre><code>Flask==1.0.2 gunicorn==19.8.1 kubernetes==6.0.0 requests # Apache-2.0 </code></pre> <p>After building the Dockerfile it outputs: </p> <pre><code> Successfully built a2590bae9fd9 Successfully tagged testapp:latest </code></pre> <p>but when I do <code>docker run a2590bae9fd9</code> I receive an error:</p> <pre><code>Traceback (most recent call last): File "./app.py", line 10, in &lt;module&gt; config.load_kube_config() File "/usr/local/lib/python2.7/site- packages/kubernetes/config/kube_config.py", line 470, in load_kube_config config_persister=config_persister) File "/usr/local/lib/python2.7/site- packages/kubernetes/config/kube_config.py", line 427, in _get_kube_config_loader_for_yaml_file with open(filename) as f: IOError: [Errno 2] No such file or directory: '/root/.kube/config' </code></pre> <p>I thought it might've been my python directory but I checked and its running in /usr/local/bin/python. </p> <p>I'm really stumped - any suggestions/tips? thank you. </p>
helloworld
<p>You don't want <code>config.load_kube_config()</code>, you want <a href="https://github.com/kubernetes-client/python/blob/6.0.0/examples/in_cluster_config.py#L21" rel="noreferrer"><code>config.load_incluster_config()</code></a></p> <p>If you need to distinguish between your setup and when it's running in a <code>Pod</code>, one mechanism is <code>if os.getenv('KUBERNETES_SERVICE_HOST'): config.load_incluster_config()</code> since that for sure will be in the environment while in a <code>Pod</code>, and is unlikely to be in your local environment.</p>
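<p>A minimal sketch combining both code paths (the pod-listing part mirrors the example in the question; the in-cluster <code>ServiceAccount</code> still needs RBAC permission to list pods):</p> <pre><code>import os
from kubernetes import client, config

if os.getenv('KUBERNETES_SERVICE_HOST'):
    config.load_incluster_config()   # running inside a Pod: use the mounted ServiceAccount
else:
    config.load_kube_config()        # local development: use ~/.kube/config

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.name)
</code></pre>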
mdaniel
<p>Running Ubuntu 18.04 </p> <p>kubectl : 1.10</p> <p>Google Cloud SDK 206.0.0 alpha 2018.06.18 app-engine-python 1.9.70 app-engine-python-extras 1.9.70 beta 2018.06.18 bq 2.0.34 core 2018.06.18 gsutil 4.32</p> <pre><code>helm init $HELM_HOME has been configured at /home/jam/snap/helm/common. Error: error installing: Post https://&lt;ip&gt;/apis/extensions/v1beta1/namespaces/kube-system/deployments: error executing access token command "/usr/lib/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=fork/exec /usr/lib/google-cloud-sdk/bin/gcloud: no such file or directory output= stderr= </code></pre> <p>I have copy pasted the command and it runs fine</p> <p>Any help ? </p>
hounded
<p><code>snap</code> is like docker in that I believe its filesystem and your filesystem intersect only in very controlled ways -- otherwise the isolation feature would be null and void. In docker, you can "volume mount" a directory from the host FS into the "guest" FS, so if snap permits such a thing: you'd want to make <code>/usr/lib/google-cloud-sdk</code> available to the snap's FS -- or, of course, just download (or compile) the <code>helm</code> binary like a normal person since it's literally one statically linked go binary</p>
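<p>A sketch of the "just download the binary" route (the release version in the URL is only an example; pick whichever helm release you actually need):</p> <pre><code>curl -fsSL -o helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -xzf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init
</code></pre>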
mdaniel
<p>I have an ingress-nginx controller handling traffic to my Kubernetes cluster hosted on GKE. I set it up using helm installation instructions from docs:</p> <p><a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">Docs here</a></p> <p>For the most part everything is working, but if I try to set cache related parameters via a <code>server-snippet</code> annotation, all of the served content that should get the cache-control headers comes back as a <code>404</code>.</p> <p>Here's my <code>ingress-service.yaml</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-read-timeout: "4000" nginx.ingress.kubernetes.io/proxy-send-timeout: "4000" nginx.ingress.kubernetes.io/server-snippet: | location ~* \.(js|css|gif|jpe?g|png)$ { expires 1M; add_header Cache-Control "public"; } spec: tls: - hosts: - example.com secretName: example-com rules: - host: example.com http: paths: - path: / backend: serviceName: client-cluster-ip-service servicePort: 5000 - path: /api/ backend: serviceName: server-cluster-ip-service servicePort: 4000 </code></pre> <p>Again, it's only the resources that are matched by the regex that come back as <code>404</code> (all <code>.js</code> files, <code>.css</code> files, etc.).</p> <p>Any thoughts on why this would be happening?</p> <p>Any help is appreciated!</p>
Murcielago
<p>Those <code>location</code> blocks are <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="noreferrer">last and/or longest match wins</a>, and since the ingress <strong>itself</strong> is not serving any such content, the nginx relies on a <a href="https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass" rel="noreferrer"><code>proxy_pass</code> directive</a> pointing at the upstream server. Thus, if you are getting 404s, it's very likely because <em>your</em> <code>location</code> is matching, thus interfering with the <code>proxy_pass</code> one. There's a pretty good chance you'd actually want <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="noreferrer"><code>configuration-snippet:</code></a> instead, likely in combination with <a href="https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if" rel="noreferrer"><code>if ($request_uri ~* ...) {</code></a> to add the header.</p> <p>One can try this locally with a trivial nginx.conf pointing at <code>python3 -m http.server 9090</code> or whatever fake upstream target.</p> <p>Separately, for debugging nginx ingress problems, it is often invaluable to consult its actual <code>nginx.conf</code>, which one can grab from any one of the ingress Pods, and/or consulting the logs of the ingress Pods where nginx will emit helpful debugging text.</p>
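<p>A hedged sketch of that approach, reusing the regex from the question (<code>expires</code> and <code>add_header</code> are both valid inside <code>if in location</code>):</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($request_uri ~* "\.(js|css|gif|jpe?g|png)$") {
    expires 1M;
    add_header Cache-Control "public";
  }
</code></pre>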
mdaniel
<p>I am trying to build an API, which can send back my pods' resource usage. Looking at the <a href="https://i.stack.imgur.com/9wPbP.png" rel="nofollow noreferrer">resources being used by the pods</a>, I am not able to figure out the go-client API to send the request to. Any help will be very appreciated.</p>
Shibu
<ol> <li><p>I'm pretty sure the kubernetes-dashboard uses XHR to obtain that data, so you can make the same requests your browser does, provided your <code>serviceAccount</code> has the correct credentials to interrogate the kubernetes-dashboard API.</p></li> <li><p>Either way, that timeseries data surfaced by kubernetes-dashboard actually comes from heapster, not from the kubernetes API itself, so the kubernetes go-client wouldn't be involved; rather it would be a query to the heapster <code>Service</code> (which IIRC does not require authentication, although it <em>would</em> require constructing the correct heapster query syntax, which kubernetes-dashboard is doing on your behalf).</p></li> </ol>
mdaniel
<p>I have a deployment with a PHP image that connects to a Redis deployment on port 6379.</p> <p>The problem is that the PHP application connects to the host 127.0.0.1 of its own pod, but Redis is in another pod (and has its own ClusterIP service).</p> <p>I can't change the app code, so I want to redirect port 6379 of the PHP pod to the same port on the Redis pod.</p> <p>How can I do this?</p>
mayconfsbrito
<p>kubernetes uses <a href="http://www.dest-unreach.org/socat/" rel="nofollow noreferrer"><code>socat</code></a> for doing port-forwarding from <code>kubectl</code>, so if they trust it that much, it seems reasonable you could trust it, too.</p> <p>Place it in a 2nd container that runs alongside your php container, run <code>socat</code> in forwarding mode, and hope for the best:</p> <pre><code>containers:
- name: my-php
  # etc
- name: redis-hack
  image: my-socat-container-image-or-whatever
  command:
  - socat
  - TCP4-LISTEN:6379,fork
  - TCP4:$(REDIS_SERVICE_HOST):$(REDIS_SERVICE_PORT)
  # assuming, of course, your Redis ClusterIP service is named `redis`; adjust accordingly
</code></pre> <p>Since all containers in a Pod share the same network namespace, the second container will also listen on "127.0.0.1".</p> <p>Having said all of that, as commentary on your situation, it is a <strong>terrible</strong> situation to introduce this amount of hackery to work around a very, very simple problem of just not having the app hard-code "127.0.0.1" as the redis connection host.</p>
mdaniel
<p>I have tried to install Kubernetes on 3 separate Ubuntu 16.04 machines, with poor results. On all three machines, the recommended installation, using snap and conjure-up did not work:</p> <pre><code>gknight@pz1:~$ sudo snap install conjure-up --classic [sudo] password for gknight: gknight@pz1:~$ sudo reboot gknight@pz1:~$ conjure-up kubernetes dropping privs did not work </code></pre> <p>This is the snap version:</p> <pre><code>gknight@pz1:~$ snap --version snap 2.33.1ubuntu2 snapd 2.33.1ubuntu2 series 16 ubuntu 16.04 kernel 4.4.0-130-generic </code></pre> <p>On two, local, machines, the repository method worked:</p> <pre><code>sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add add the following to sources.list.d, as kubernetes.list: deb http://apt.kubernetes.io/ kubernetes-xenial main apt-get update apt-get install -y kubelet kubeadm kubectl kubernetes-cni </code></pre> <p>But, on a remote 512mb KVM VPS (PnZ Hosting), although Docker installs and runs just fine, when I install kubelet, etc. and do <em>nothing</em> else, it soon runs the uptime load average up to 12 or so, and I can barely get through to it to reboot. There are no obvious error messages (and swap is turned off).</p> <p>So, does the "conjure-up" method work on <em>any</em> Ubuntu 16.04 today? </p> <p>What is Kubernetes <em>doing</em> that's taking over the KVM machine?</p> <p>Finally, is there any other way to install Kubernetes?</p>
Gene Knight
<blockquote> <p>remote 512mb KVM VPS</p> </blockquote> <p>That's almost certainly the problem, as I don't know of very much software nowadays that will run in that little memory. It matches your experience that the machine will start swapping like mad, driving the I/O pressure through the roof</p>
mdaniel
<p>I have kubernetes cluster which I created with <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> on AWS. I'm trying to use <a href="https://github.com/box/kube-applier" rel="nofollow noreferrer">kube-applier</a> to apply <code>yaml</code> configuration to my cluster: I created a deployment with <code>kube-applier</code>:</p> <pre><code>apiVersion: &quot;extensions/v1beta1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;kube-applier&quot; namespace: &quot;kube-system&quot; spec: # spec </code></pre> <p>and started it in <code>kube-system</code> namespaces as suggested in a README: <code>kubectl --namespace=kube-system apply -f deployment.yaml</code>.</p> <p>But then <code>kube-applier</code> fails with this error when received new file to apply:</p> <pre><code>$ kubectl apply -f /git/deployment.yaml Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc43034ca91 0xc43034ca91 kube-system kube-applier /git/applier-deployment.yaml 0xc421432531 false} from server for: &quot;/git/deployment.yaml&quot;: deployments.extensions &quot;kube-applier&quot; is forbidden: User &quot;system:serviceaccount:kube-system:default&quot; cannot get deployments.extensions in the namespace &quot;kube-system&quot; </code></pre> <p>How can I grant permissions to <code>kube-applier</code> pod to apply configurations in other namespaces?</p> <p>Kubernetes version: <code>Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;9&quot;, GitVersion:&quot;v1.9.6&quot;, GitCommit:&quot;9f8ebd171479bec0ada837d7ee641dec2f8c6dd1&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-03-21T15:13:31Z&quot;, GoVersion:&quot;go1.9.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}</code></p>
Kirill
<blockquote> <p>How can I grant permissions to kube-applier pod to apply configurations in other namespaces?</p> </blockquote> <p>Create, or find, a <code>ClusterRole</code> with the correct resource permissions, then bind the <code>ServiceAccount</code> to it using a <code>ClusterRoleBinding</code> like so</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: # C.R.B. don't have a "namespace:" name: my-kube-applier roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: default namespace: kube-system </code></pre> <p><strong>BUT</strong>, as @jhernandez said, you will really want to create a dedicated <code>ServiceAccount</code> for <code>kube-applier</code> instead of granting what I presume is a very, very privileged <code>ClusterRole</code> to the <code>default</code> S.A. (which is what I did in the <em>example</em> above, but you should not do for real)</p> <p>Creating a new <code>ServiceAccount</code> is super cheap: <code>kubectl -n kube-system create sa kube-applier</code> and then replace <code>name: default</code> with <code>name: kube-applier</code> in the <code>subjects:</code> block above.</p> <p>Ideally one would create a customized least-privilege <code>ClusterRole</code> rather than using a massive hammer like <code>cluster-admin</code>, but generating the <em>correct</em> one would take some serious typing, so I'll leave that to your discretion.</p>
mdaniel
<p>I have two different Minecraft server containers running, both set to use the default TCP port 25565. To keep things simple for laymen to connect, I would like to have a subdomain dedicated to each server, say mc1.example.com and mc2.example.com, such that they only put the address in and the client connects. </p> <p>For an HTTP(s) service, the NGINX L7 ingress works fine, but it doesn't seem to work for Minecraft. NodePort works well, but then each server would need a different port.</p> <p>This is also installed on bare metal - there is not a cloud L4 load balancer available, and a very limited pool of IP addresses (assume there are not enough to cover all the various Minecraft servers).</p> <p>Can the L7 ingress be modified to redirect mc1.example.com to the correct container's port 25565? Would I need to use something like <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>?</p>
user1943640
<blockquote> <p>This is also installed on bare metal - there is not a cloud L4 load balancer available, and a very limited pool of IP addresses (assume there are not enough to cover all the various Minecraft servers).</p> </blockquote> <p>If you don't have enough IP addresses, then MetalLB won't help you, since it just uses BGP to v-host for you, but you'd still have to have virtual addresses to hand out. Based on your description of the situation and your problem, I'd venture to say you're trying to do this on the cheap, and it is -- as one might expect -- hard to work without resources.</p> <p>That said:</p> <p>As best I can tell, there is no redirect in <a href="https://wiki.vg/Protocol" rel="nofollow noreferrer">the modern Minecraft protocol</a>, but interestingly enough, during <a href="https://wiki.vg/Protocol#Handshake" rel="nofollow noreferrer">the Handshake</a> the client does actually send the hostname to which it is attempting to connect. That may or may not be something that <a href="https://github.com/SpigotMC/BungeeCord#readme" rel="nofollow noreferrer">BungeeCord</a> takes advantage of; I didn't study its source code.</p> <p>It could therefore be <strong>theoretically</strong> possible to make a Minecraft-specific virtual-hosting proxy, since there are already quite a few implementations of the protocol. But one would have to study all the messages in the protocol to ensure they contain a reference to the actual connection id; otherwise you'd have to resort to just <code>(client-ip, client-port)</code> identification tuples, effectively turning your server into a reverse NAT/PAT implementation. That may be fine, just watch out.</p>
mdaniel
<p>I'm trying to deploy my Dockerized React app to Kubernetes. I believe I've dockerized it correctly, but i'm having trouble accessing the exposed pod.</p> <p>I don't have experience in Docker or Kubernetes, so any help would be appreciated.</p> <p>My React app is just static files (from npm run build) being served from Tomcat.</p> <p>My Dockerfile is below. In summary, I put my app in the Tomcat folder and expose port 8080.</p> <pre><code>FROM private-docker-registry.com/repo/tomcat:latest EXPOSE 8080:8080 # Copy build directory to Tomcat webapps directory RUN mkdir -p /tomcat/webapps/app COPY /build/sample-app /tomcat/webapps/app # Create a symbolic link to ROOT -- this way app starts at root path (localhost:8080) RUN ln -s /tomcat/webapps/app /tomcat/webapps/ROOT # Start Tomcat ENTRYPOINT ["catalina.sh", "run"] </code></pre> <p>I build and pushed the Docker image to the Private Docker Registry. I verified that container runs correctly by running it like this:</p> <pre><code>docker run -p 8080:8080 private-docker-registry.com/repo/sample-app:latest </code></pre> <p>Then, if I go to localhost:8080, I see the homepage of my React app.</p> <p>Now, the trouble I'm having is deploying to Kubernetes and accessing the app externally.</p> <p>Here's my deployment.yaml file:</p> <pre><code>kind: Deployment apiVersion: apps/v1beta2 metadata: name: sample-app namespace: dev labels: app: sample-app spec: replicas: 1 selector: matchLabels: app: sample-app template: metadata: labels: app: sample-app spec: containers: - name: sample-app image: private-docker-registry.com/repo/sample-app:latest ports: - containerPort: 8080 protocol: TCP nodeSelector: TNTRole: luxkube --- kind: Service apiVersion: v1 metadata: name: sample-app labels: app: sample-app spec: selector: app: sample-app type: NodePort ports: - port: 80 targetPort: 8080 protocol: TCP </code></pre> <p>I created the deployment and service by running kubectl --namespace=dev create -f deployment.yaml</p> <pre><code>Output of 'describe deployment' Name: sample-app Namespace: dev CreationTimestamp: Sat, 21 Jul 2018 12:27:30 -0400 Labels: app=sample-app Annotations: deployment.kubernetes.io/revision=1 Selector: app=sample-app Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=sample-app Containers: sample-app: Image: private-docker-registry.com/repo/sample-app:latest Port: 8080/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: sample-app-bb6f59b9 (1/1 replicas created) Events: &lt;none&gt; Output of 'describe service' Name: sample-app Namespace: fab-dev Labels: app=sample-app Annotations: &lt;none&gt; Selector: app=sample-app Type: NodePort IP: 10.96.29.199 Port: &lt;unset&gt; 80/TCP TargetPort: 8080/TCP NodePort: &lt;unset&gt; 34604/TCP Endpoints: 192.168.138.145:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>Now I don't know which IP and port I should be using to access the app. I have tried every combination but none has loaded my app. I believe the port should be 80, so if I just have the IP, i shuold be able to go to the browser and access the React app by going to http://.</p> <p>Does anyone have suggestions?</p>
user1596241
<p>The short version is that the Service is listening on the same TCP/IP port on every Node in your cluster (<code>34604</code>) as is shown in the output of <code>describe service</code>:</p> <pre><code>NodePort: &lt;unset&gt; 34604 </code></pre> <p>If you wish to access the application through a "nice" URL, you'll want a load balancer that can translate the hostname into the in-cluster IP and port combination. That's what an Ingress controller is designed to do, but it isn't the only way -- changing the Service to be <code>type: LoadBalancer</code> will do that for you, if you're running in a cloud environment where Kubernetes knows how to programmatically create load balancers for you.</p>
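<p>Concretely, with the <code>Service</code> shown above you can already reach the app at <code>http://&lt;any-node-ip&gt;:34604</code>. If the cluster runs in a cloud that supports it, switching to a load balancer is a one-line change -- a sketch, keeping everything else from the question's manifest:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
</code></pre>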
mdaniel
<p>I am trying using Kubernetes Java client for few use cases.</p> <p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p> <p>Our Kubernetes cluster is been implemented with OpenId authentication.</p> <p>Unfortunately, the Java client doesn't support OpenId auth.</p> <p><strong>Java code:</strong></p> <pre><code>final ApiClient client = io.kubernetes.client.util.Config.defaultClient(); Configuration.setDefaultApiClient(client); CoreV1Api api = new CoreV1Api(); V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null); for (V1Pod item : list.getItems()) { System.out.println(item.getMetadata().getName()); } </code></pre> <p><strong>Error:</strong></p> <pre><code>13:25:22.549 [main] ERROR io.kubernetes.client.util.KubeConfig - Unknown auth provider: oidc Exception in thread &quot;main&quot; io.kubernetes.client.ApiException: Forbidden at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882) at io.kubernetes.client.ApiClient.execute(ApiClient.java:798) at io.kubernetes.client.apis.CoreV1Api.listPodForAllNamespacesWithHttpInfo(CoreV1Api.java:18462) at io.kubernetes.client.apis.CoreV1Api.listPodForAllNamespaces(CoreV1Api.java:18440) </code></pre> <p>Is there any plan to support OpenId auth with the Java client. Or, is there any other way?</p>
user1578872
<p><a href="https://github.com/kubernetes-client/java/tree/client-java-parent-2.0.0-beta1/util/src/main/java/io/kubernetes/client/util/credentials" rel="nofollow noreferrer">Apparently not</a>, but by far the larger question is: what would you <em>expect</em> to happen with an <code>oidc</code> <code>auth-provider</code> in a Java setting? Just use the <code>id-token</code>? Be able to use the <code>refresh-token</code> and throw an exception if unable to reacquire an <code>id-token</code>? Some callback system for you to manage that lifecycle on your own?</p> <p>Trying to do oidc from a <em>library</em> is fraught with peril, since it is almost certain that there is no "user" to interact with.</p> <blockquote> <p>Is there any plan to support OpenId auth with the Java client</p> </blockquote> <p>Only the project maintainers could answer that, and it is unlikely they know to prioritize that kind of work when there is no issue describing what you would expect to happen. Feel free to <a href="https://github.com/kubernetes-client/java/issues" rel="nofollow noreferrer">create one</a>.</p> <blockquote> <p>Or, is there any other way?</p> </blockquote> <p>In the meantime, you still have <a href="https://github.com/kubernetes-client/java/blob/client-java-parent-2.0.0-beta1/util/src/main/java/io/kubernetes/client/util/Config.java#L63" rel="nofollow noreferrer"><code>Config.fromToken()</code></a> where you can go fishing in your <code>.kube/config</code> and pull out the existing <code>id-token</code> then deal with any subsequent <code>ApiException</code> which requires using the <code>refresh-token</code>, because you will know more about what tradeoffs your client is willing to make.</p>
mdaniel
<p>I started a pod in kubernetes cluster which can call kubernetes api via go-sdk (like in this example: <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration</a>). I want to listen some external events in this pod (e.g. GitHub web-hooks), fetch <code>yaml</code> configuration files from repository and apply them to this cluster.</p> <p>Is it possible to call <code>kubectl apply -f &lt;config-file&gt;</code> via kubernetes API (or better via golang SDK)?</p>
Kirill
<p>As yaml directly: no, not that I'm aware of. But if you increase the <code>kubectl</code> verbosity (<code>--v=100</code> or such), you'll see that the first thing <code>kubectl</code> does to your yaml file is convert it to json, and then <code>POST</code> <em>that</em> content to the API. So the spirit of the answer to your question is "yes."</p> <p>This <a href="https://github.com/box/kube-applier#readme" rel="nofollow noreferrer">box/kube-applier</a> project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.</p>
mdaniel
<p>I have a integration Kubernetes cluster in AWS, and I want to conduct end-to-end tests on that cluster. </p> <p>I currently use Deployments and Services.</p> <p>The proposed approach is to use Ingress and configure it to use cookie injection for ensuring access to web page that implements the following logic:</p> <ol> <li><p>When <em>special</em> header is recognized in request -> allow the agent to enter the webpage (used for automated tests)</p></li> <li><p>When <em>special</em> header is not recognized in request -> display popup for basic http authentication (used for normal users).</p></li> </ol> <p>I also want to use a single entry point for both cases (the same url).</p> <p>I've been browsing official documentation, but didn't find the specified use case, nor did I find any examples that might be close to what I want to achieve.</p> <p>I'm interested if anyone has used similar approach or something that might be similar in use.</p> <p>Many thanks!</p>
Bartosz Zawadzki
<p>It sounds like either <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">configuration-snippet</a> or a full-blown <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/" rel="nofollow noreferrer">custom template</a> may do what you want, along with the nginx <a href="http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if" rel="nofollow noreferrer"><code>if</code></a> and <a href="http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header" rel="nofollow noreferrer"><code>add_header</code></a> using something like:</p> <pre><code>if ($http_user_agent ~ "My Sekrit Agent/666") { add_header Authentication "Basic dXNlcm5hbWU6cGFzc3dvcmQ="; } </code></pre>
mdaniel
<p>Let's say I have a database with schema of v1, and an application which is tightly coupled to that schema of v1. i.e. SQLException is thrown if the records in the database don't match the entity classes.</p> <p>How should I deploy a change which alters the database schema, and deploys the application which having a race condition. i.e. user queries the app for a field which no longer exists.</p>
aclowkay
<p>This problem actually isn't specific to kubernetes, it happens in any system with more than one server -- kubernetes just makes it more front-and-center because of how automatic the rollover is. The words "tightly coupled" in your question are a dead giveaway of the <em>real</em> problem here.</p> <p>That said, the "answer" actually will depend on which of the following mental models are better for your team:</p> <ul> <li>do not make two consecutive schemas contradictory</li> <li>use a "maintenance" page that keeps traffic off of the pods until they are fully rolled out</li> <li>just accept the <code>SQLException</code>s and add better retry logic to the consumers</li> </ul> <p>We use the first one, because the kubernetes rollout is baked into our engineering culture and we <em>know</em> that pod-old and pod-new will be running simultaneously and thus schema changes need to be incremental and backward compatible for at minimum one generation of pods.</p> <p>However, sometimes we just accept that the engineering effort to do that is more cost than the 500s that a specific breaking change will incur, so we cheat and scale the replicas low, then roll it out and warn our monitoring team that there will be exceptions but they'll blow over. We can do that partially because the client has retry logic built into it.</p>
mdaniel
<p>I have created a k8s cluster with RHEL7 with kubernetes packages GitVersion:"<strong>v1.8.1</strong>". I'm trying to deploy wordpress on my custom cluster. But pod creation is always stuck in ContainerCreating state.</p> <pre><code>phani@k8s-master]$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default wordpress-766d75457d-zlvdn 0/1 ContainerCreating 0 11m kube-system etcd-k8s-master 1/1 Running 0 1h kube-system kube-apiserver-k8s-master 1/1 Running 0 1h kube-system kube-controller-manager-k8s-master 1/1 Running 0 1h kube-system kube-dns-545bc4bfd4-bb8js 3/3 Running 0 1h kube-system kube-proxy-bf4zr 1/1 Running 0 1h kube-system kube-proxy-d7zvg 1/1 Running 0 34m kube-system kube-scheduler-k8s-master 1/1 Running 0 1h kube-system weave-net-92zf9 2/2 Running 0 34m kube-system weave-net-sh7qk 2/2 Running 0 1h </code></pre> <p>Docker Version:1.13.1</p> <pre><code>Pod status from descibe command Normal Scheduled 18m default-scheduler Successfully assigned wordpress-766d75457d-zlvdn to worker1 Normal SuccessfulMountVolume 18m kubelet, worker1 MountVolume.SetUp succeeded for volume "default-token-tmpcm" Warning DNSSearchForming 18m kubelet, worker1 Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local Warning FailedCreatePodSandBox 14m kubelet, worker1 Failed create pod sandbox. Warning FailedSync 25s (x8 over 14m) kubelet, worker1 Error syncing pod Normal SandboxChanged 24s (x8 over 14m) kubelet, worker1 Pod sandbox changed, it will be killed and re-created. </code></pre> <p>from the kubelet log I observed below error on worker</p> <pre><code>error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" </code></pre> <p>But kubelet is stable no problems seen on worker.</p> <p>How do I solve this problem?</p> <p>I checked the cni failure, I couldn't find anything.</p> <pre><code>~]# ls /opt/cni/bin bridge cnitool dhcp flannel host-local ipvlan loopback macvlan noop ptp tuning weave-ipam weave-net weave-plugin-2.3.0 </code></pre> <p>In journal logs below messages are repetitively appeared . seems like scheduler is trying to create the container all the time.</p> <pre><code>Jun 08 11:25:22 worker1 kubelet[14339]: E0608 11:25:22.421184 14339 remote_runtime.go:115] StopPodSandbox "47da29873230d830f0ee21adfdd3b06ed0c653a0001c29289fe78446d27d2304" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Jun 08 11:25:22 worker1 kubelet[14339]: E0608 11:25:22.421212 14339 kuberuntime_manager.go:780] Failed to stop sandbox {"docker" "47da29873230d830f0ee21adfdd3b06ed0c653a0001c29289fe78446d27d2304"} Jun 08 11:25:22 worker1 kubelet[14339]: E0608 11:25:22.421247 14339 kuberuntime_manager.go:580] killPodWithSyncResult failed: failed to "KillPodSandbox" for "7f1c6bf1-6af3-11e8-856b-fa163e3d1891" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jun 08 11:25:22 worker1 kubelet[14339]: E0608 11:25:22.421262 14339 pod_workers.go:182] Error syncing pod 7f1c6bf1-6af3-11e8-856b-fa163e3d1891 ("wordpress-766d75457d-spdrb_default(7f1c6bf1-6af3-11e8-856b-fa163e3d1891)"), skipping: failed to "KillPodSandbox" for "7f1c6bf1-6af3-11e8-856b-fa163e3d1891" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded" </code></pre>
phanikumar ch
<blockquote> <p>Failed create pod sandbox.</p> </blockquote> <p>... is almost always a <a href="https://github.com/containernetworking/cni#readme" rel="nofollow noreferrer">CNI</a> failure; I would check on the <em>node</em> that all the weave containers are happy, and that <code>/opt/cni/bin</code> is present (or its weave equivalent)</p> <p>You may have to check both the <code>journalctl -u kubelet.service</code> as well as the docker logs for any containers running to discover the full scope of the error on the node.</p>
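<p>A rough triage checklist for that, run against the <em>node</em> where the sandbox failed (the label assumes the weave daemonset from your output):</p> <pre><code># are the weave pods on that node healthy?
kubectl -n kube-system get pods -o wide | grep weave

# is the CNI config and binary directory populated on the node itself?
ls /etc/cni/net.d /opt/cni/bin

# and the node-side logs
journalctl -u kubelet.service --since "1 hour ago" | grep -i cni
docker ps -a | grep weave
</code></pre>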
mdaniel
<p>I am new to the Kubernetes and especially using helm. I installed the charts and it works fine with default values. I want to the add smtp server setting in the values.yml file for the chart. I am confused on how to inject the values while installing the chart. This is the chart that I am using <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a>. After installing the helm chart with default values I see that there is a deployment called prometheus-operator-grafana which has values GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD but I am not sure where these values are coming from. Help with how these values work and how to inject them would be appreciated.</p>
Anshul Tripathi
<p>The interaction between parent and child chart values is summarize very well in this SO answer: <a href="https://stackoverflow.com/questions/49580938/helm-overriding-chart-and-values-yaml-from-a-base-template-chart#49581516">helm overriding Chart and Values yaml from a base template chart</a></p> <p>There are two separate grafana chart mechanisms that control such a thing: <a href="https://github.com/helm/charts/blob/master/stable/grafana/values.yaml#L130-L131" rel="nofollow noreferrer"><code>adminUser</code> and <code>adminPassword</code></a> or <a href="https://github.com/helm/charts/blob/master/stable/grafana/values.yaml#L134-L137" rel="nofollow noreferrer"><code>admin.existingSecret</code> along with <code>admin.userKey</code> and <code>admin.passwordkey</code></a></p> <p>Thus, <code>helm ... --set grafana.adminUser=ninja --set grafana.adminPassword=hunter2</code> will do what you want. <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator#grafana" rel="nofollow noreferrer">The fine manual</a> even says they are using grafana as a subchart, and documents that exact setting as the first value underneath the <code>grafana.enabled</code> setting. Feel free to file an issue with the helm chart to spend the extra characters and document the <code>grafana.adminUser</code> setting, too</p>
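<p>Equivalently, if you prefer a values file over <code>--set</code> flags, a sketch would be:</p> <pre><code># my-values.yaml
grafana:
  adminUser: ninja
  adminPassword: hunter2
</code></pre> <p>passed via <code>-f my-values.yaml</code> on the install or upgrade command; the release and file names here are up to you.</p>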
mdaniel
<p>so I need to connect to the python kubernetes client through a pod. I've been trying to use <code>config.load_incluster_config()</code>, basically following the example from <a href="https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py" rel="noreferrer">here</a>. However it's throwing these errors. </p> <pre><code> File "/Users/myname/Library/Python/2.7/lib/python/site-packages/kubernetes/config/incluster_config.py", line 93, in load_incluster_config cert_filename=SERVICE_CERT_FILENAME).load_and_set() File "/Users/myname/Library/Python/2.7/lib/python/site- packages/kubernetes/config/incluster_config.py", line 45, in load_and_set self._load_config() File "/Users/myname/Library/Python/2.7/lib/python/site-packages/kubernetes/config/incluster_config.py", line 51, in _load_config raise ConfigException("Service host/port is not set.") </code></pre> <p>I'm using Python 2.7 and Minikube Any hints or suggestions would be really appreciated. thank you.</p>
helloworld
<blockquote> <p>so I need to connect to that pod somehow through the python api</p> </blockquote> <p>I am pretty sure you misunderstood <a href="https://stackoverflow.com/a/51035715/225016">my answer</a>, and/or I misunderstood your question. One should only use <code>load_incluster_config</code> when ... in-cluster ... otherwise it will attempt to use <code>/var/run/secrets/kubernetes.io/etcetc</code> and not find them (above and beyond the missing env-var in the actual error you cited above). However, if you had guarded the <code>load_incluster_config()</code> with the <code>if os.getenv('KUBERNETES_SERVICE_HOST'):</code> as suggested, then it wouldn't run that code and this question here wouldn't be a problem.</p> <p>If you have built a docker image, but <em>did not deploy it into kubernetes</em>, then that wasn't clear.</p> <hr> <p>If you just want to use the python api to <em>access</em> the cluster, but not from <em>within</em> the cluster, <code>config.load_kube_config()</code> is in fact the correct method call, but you will absolutely need to provide a working <code>kubeconfig</code>, whether at <code>/root/.kube/config</code> or at another place specified by the env-var <code>KUBECONFIG</code> (I mean, usually; I haven't specifically looked into the python library to see if that env-var is honored).</p>
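<p>For concreteness, a minimal sketch of the guard I was describing; nothing here is specific to your app, it just picks the config loader based on the environment:</p> <pre><code>import os
from kubernetes import client, config

if os.getenv('KUBERNETES_SERVICE_HOST'):
    # running inside a Pod: the ServiceAccount token and CA are mounted for us
    config.load_incluster_config()
else:
    # running on your laptop: fall back to ~/.kube/config (or $KUBECONFIG)
    config.load_kube_config()

v1 = client.CoreV1Api()
print(v1.list_namespace())
</code></pre>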
mdaniel
<p>I need to deploy GitLab with Helm on Kubernetes. I have the problem: PVC is Pending.</p> <p>I see <code>volume.alpha.kubernetes.io/storage-class: default</code> in PVC description, but I set value <code>gitlabDataStorageClass: gluster-heketi</code> in values.yaml. And I fine deploy simple nginx from article <a href="https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md" rel="nofollow noreferrer">https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md</a> Yes, I use distribute storage GlusterFS <a href="https://github.com/gluster/gluster-kubernetes" rel="nofollow noreferrer">https://github.com/gluster/gluster-kubernetes</a></p> <pre><code># kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE gitlab1-gitlab-data Pending 19s gitlab1-gitlab-etc Pending 19s gitlab1-postgresql Pending 19s gitlab1-redis Pending 19s gluster1 Bound pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 43m </code></pre> <p>Structure for single of pangings:</p> <pre><code># kubectl get pvc gitlab1-gitlab-data -o yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: volume.alpha.kubernetes.io/storage-class: default creationTimestamp: 2018-05-29T19:43:18Z finalizers: - kubernetes.io/pvc-protection name: gitlab1-gitlab-data namespace: default resourceVersion: "263950" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gitlab1-gitlab-data uid: 8958d4f5-6378-11e8-8f10-4ccc6a60fcbe spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi status: phase: Pending </code></pre> <p>In describe I see:</p> <pre><code># kubectl describe pvc gitlab1-gitlab-data Name: gitlab1-gitlab-data Namespace: default StorageClass: Status: Pending Volume: Labels: &lt;none&gt; Annotations: volume.alpha.kubernetes.io/storage-class=default Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 2m (x43 over 12m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set </code></pre> <p>My values.yaml file:</p> <pre><code># Default values for kubernetes-gitlab-demo. # This is a YAML-formatted file. # Required variables # baseDomain is the top-most part of the domain. Subdomains will be generated # for gitlab, mattermost, registry, and prometheus. # Recommended to set up an A record on the DNS to *.your-domain.com to point to # the baseIP # e.g. *.your-domain.com. A 300 baseIP baseDomain: my-domain.com # legoEmail is a valid email address used by Let's Encrypt. It does not have to # be at the baseDomain. legoEmail: [email protected] # Optional variables # baseIP is an externally provisioned static IP address to use instead of the provisioned one. 
#baseIP: 95.165.135.109 nameOverride: gitlab # `ce` or `ee` gitlab: ce gitlabCEImage: gitlab/gitlab-ce:10.6.2-ce.0 gitlabEEImage: gitlab/gitlab-ee:10.6.2-ee.0 postgresPassword: NDl1ZjNtenMxcWR6NXZnbw== initialSharedRunnersRegistrationToken: "tQtCbx5UZy_ByS7FyzUH" mattermostAppSecret: NDl1ZjNtenMxcWR6NXZnbw== mattermostAppUID: aadas redisImage: redis:3.2.10 redisDedicatedStorage: true redisStorageSize: 5Gi redisAccessMode: ReadWriteOnce postgresImage: postgres:9.6.5 # If you disable postgresDedicatedStorage, you should consider bumping up gitlabRailsStorageSize postgresDedicatedStorage: true postgresAccessMode: ReadWriteOnce postgresStorageSize: 30Gi gitlabDataAccessMode: ReadWriteOnce #gitlabDataStorageSize: 30Gi gitlabRegistryAccessMode: ReadWriteOnce #gitlabRegistryStorageSize: 30Gi gitlabConfigAccessMode: ReadWriteOnce #gitlabConfigStorageSize: 1Gi gitlabRunnerImage: gitlab/gitlab-runner:alpine-v10.6.0 # Valid values for provider are `gke` for Google Container Engine. Leaving it blank (or any othervalue) will disable fast disk options. #provider: gke # Gitlab pages # The following 3 lines are needed to enable gitlab pages. # pagesExternalScheme: http # pagesExternalDomain: your-pages-domain.com # pagesTlsSecret: gitlab-pages-tls # An optional reference to a tls secret to use in pages ## Storage Class Options ## If defined, volume.beta.kubernetes.io/storage-class: &lt;storageClass&gt; ## If not defined, but provider is gke, will use SSDs ## Otherwise default: volume.alpha.kubernetes.io/storage-class: default gitlabConfigStorageClass: gluster-heketi gitlabDataStorageClass: gluster-heketi gitlabRegistryStorageClass: gluster-heketi postgresStorageClass: gluster-heketi redisStorageClass: gluster-heketi healthCheckToken: 'SXBAQichEJasbtDSygrD' # Optional, for GitLab EE images only #gitlabEELicense: base64-encoded-license # Additional omnibus configuration, # see https://docs.gitlab.com/omnibus/settings/configuration.html # for possible configuration options #omnibusConfigRuby: | # gitlab_rails['smtp_enable'] = true # gitlab_rails['smtp_address'] = "smtp.example.org" gitlab-runner: checkInterval: 1 # runnerRegistrationToken must equal initialSharedRunnersRegistrationToken runnerRegistrationToken: "tQtCbx5UZy_ByS7FyzUH" # resources: # limits: # memory: 500Mi # cpu: 600m # requests: # memory: 500Mi # cpu: 600m runners: privileged: true ## Build Container specific configuration ## # builds: # cpuLimit: 200m # memoryLimit: 256Mi # cpuRequests: 100m # memoryRequests: 128Mi ## Service Container specific configuration ## # services: # cpuLimit: 200m # memoryLimit: 256Mi # cpuRequests: 100m # memoryRequests: 128Mi ## Helper Container specific configuration ## # helpers: # cpuLimit: 200m # memoryLimit: 256Mi # cpuRequests: 100m # memoryRequests: 128Mi </code></pre> <p>You can see I have the StorageClass:</p> <pre><code># kubectl get sc NAME PROVISIONER AGE gluster-heketi kubernetes.io/glusterfs 48m </code></pre>
Ivan
<p>Without a link to the actual helm you used, it's impossible for anyone to troubleshoot why the go-template isn't correctly consuming your <code>values.yaml</code>.</p> <blockquote> <p>I see <code>volume.alpha.kubernetes.io/storage-class: default</code> in PVC description, but I set value <code>gitlabDataStorageClass: gluster-heketi</code> in values.yaml</p> </blockquote> <p>I can appreciate you set whatever you wanted in values.yaml, but as long as that <code>StorageClass</code> doesn't match any existing <code>StorageClass</code>, I'm not sure what positive thing will materialize from there. You can certainly try creating a <code>StorageClass</code> named <code>default</code> containing the same values as your <code>gluster-heketi</code> SC, or update the PVC to use the correct SC.</p> <p>To be honest, this may be a bug in the helm chart, but until it is fixed (and/or we get the link to the chart to help you know how to adjust your yaml) if you want your GitLab to deploy, you will need to work around this bad situation manually.</p>
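<p>By way of illustration only, the manual workaround could look something like this: clone your working SC under the name the annotation is asking for. The heketi URL is a placeholder; copy the real <code>parameters:</code> from <code>kubectl get sc gluster-heketi -o yaml</code>:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # placeholder: use your actual heketi endpoint
</code></pre>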
mdaniel
<p>From <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="noreferrer">assess cluster api</a>, i know that the pod in the cluster can using the clusterIp service kubernetes.default.svc to access the api server, but i am curious about how it works. </p> <p>The pod in the cluster would only try to access the clusterip defined in the kubernetes.default.svc, the clusterip is nothing different with the other cluster ip except the svc's name.</p> <p>So how can a http request to the specific clusterip be routed to the api server, does it configured by the api server proxy when create the kubernetes.default.svc?</p>
user2992389
<blockquote> <p>The pod in the cluster would only try to access the clusterip defined in the kubernetes.default.svc, the clusterip is nothing different with the other cluster ip except the svc's name.</p> </blockquote> <p>Absolutely correct</p> <blockquote> <p>So how can a http request to the specific clusterip be routed to the api server, does it configured by the api server proxy when create the kubernetes.default.svc?</p> </blockquote> <p>This magic happens via <code>kube-proxy</code>, which <em>usually</em> delegates down to <code>iptables</code>, although I think in more recent kubernetes installs they are using ipvs to give a lot more control over ... well, almost everything. The <code>kube-proxy</code> receives its instructions from the API informing it of any changes, which it applies to the individual Nodes to keep the world in sync.</p> <p>If you have access to the Nodes, you can run <code>sudo iptables -t nat -L -n</code> and see all the <code>KUBE-SERVICE-*</code> rules that are defined -- usually with helpful comments, even -- and see how they are mapped from the <code>ClusterIP</code> down to the IPs of the Pods which match the selector on the <code>Service</code>.</p>
mdaniel
<p>I currently have kubectl v1.10.6 which I need to access my cluster, however I'm also trying to connect to a different cluster thats running on v1.5.</p> <p>How and whats the best practice in having multiple version of a package on my computer? I could downgrade my package to v1.5, but that would require me to upgrade my kubectl back to v1.10 every time I need to access my other cluster. I'm currently running Ubuntu 16.04 (if that helps)</p>
AlphaCR
<p>They're statically linked, and have no dependencies, so there's no need to use a dependency manager for them:</p> <pre><code>$ curl -sSfo /usr/local/bin/kubectl-1.9 \ https://storage.googleapis.com/kubernetes-release/release/v1.9.11/bin/linux/amd64/kubectl $ chmod 755 /usr/local/bin/kubectl-1.9 </code></pre>
mdaniel
<p>I use <a href="https://gist.github.com/ruzickap/c072cdfe480ca52bd32b6c4fcf8397a2" rel="nofollow noreferrer">https://gist.github.com/ruzickap/c072cdfe480ca52bd32b6c4fcf8397a2</a> for deploy kubernetes</p> <p>Need:</p> <pre><code># Create kubespray config file cat &gt; ~/.kubespray.yml &lt;&lt; EOF kubespray_git_repo: "https://github.com/kubespray/kubespray.git" kubespray_path: "$PWD/kubespray" loglevel: "info" EOF </code></pre> <p>Can I use kubespray without <code>kubespray_git_repo</code>, <code>kubespray_path</code> ?</p>
Anton Patsev
<blockquote> <p>Can i use kubespray without kubespray_git_repo, kubespray_path ?</p> </blockquote> <p>Without question, yes; arguably it will <em>most often</em> be used without those things (I have actually never even heard of <code>kubespray prepare</code>):</p> <ol> <li><a href="https://github.com/kubernetes-incubator/kubespray/archive/v2.5.0.tar.gz" rel="nofollow noreferrer">download a release</a></li> <li><code>pip install ansible</code> (if you don't already have ansible; you can also <code>brew install ansible</code> if you are on a Mac and/or have concerns about <code>pip</code> messing up your global pythonpath)</li> <li><a href="https://github.com/kubernetes-incubator/kubespray/tree/v2.5.0#ansible" rel="nofollow noreferrer">use <code>ansible-playbook</code></a></li> <li>declare victory</li> </ol>
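<p>In shell terms, those steps are roughly the following; the inventory path is just an example, since the layout differs slightly between releases, so follow the README of the release you grabbed:</p> <pre><code>curl -sSL https://github.com/kubernetes-incubator/kubespray/archive/v2.5.0.tar.gz | tar xz
cd kubespray-2.5.0
pip install ansible

# build an inventory describing your machines per the release's README, then:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b
</code></pre>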
mdaniel
<p>Using docker can simplify CI/CD but also introduce the complexity, not everybody able to hold the docker network though selecting open source solutions like Flannel, Calico. So why don't use host network in docker, or what lost if use host network in docker. I know the port conflict is one point, any others?</p>
Simon
<p>There are two parts to an answer to your question:</p> <ol> <li>Pods must have individual, cluster-routable, IP addresses and one should be <strong>very cautious</strong> about recycling them</li> <li>You can, if you wish, not use any software defined network (SDN)</li> </ol> <p>So with the first part, it is usually a huge hassle to provision a big enough CIDR to house the address range required for supporting <em>every Pod</em> that is running across every Namespace, and have the space be big enough to avoid recycling addresses for a very long time. Thus, having an SDN allows using "fake" addresses that one need not bother the "real" network with knowing about. No routers need to be updated, no firewalls, no DHCP, whatever.</p> <p>That said, as with the second part, you don't have to use an SDN: that's exactly what the <a href="https://github.com/containernetworking/cni#what-is-cni" rel="nofollow noreferrer">container network interface (CNI)</a> is designed to paper over. You can use the CNI provider that makes you the happiest, including using <a href="https://github.com/containernetworking/plugins/tree/master/plugins/ipam/static" rel="nofollow noreferrer">static IP addresses</a> or <a href="https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp#dhcp-plugin" rel="nofollow noreferrer">the outer network's DHCP server</a>.</p> <p>But your comment about port collisions is pretty high up the list of reasons one wouldn't just want to <code>hostNetwork: true</code> and be done with it; I'm actually not certain if the default kubernetes scheduler is aware of <code>hostNetwork: true</code> and the declared <code>ports:</code> on the <code>containers:</code> in order to avoid co-scheduling two containers that would conflict. I guess try it and see, or, better yet, don't try it -- use CNI so the next poor person who tries to interact with your cluster doesn't find a snowflake setup.</p>
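<p>For completeness, opting out of the SDN at the Pod level is just the following; the port is shown purely as an example of where the collisions would come from:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: on-the-host
spec:
  hostNetwork: true
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80   # now literally port 80 on the Node, for every such Pod
</code></pre>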
mdaniel
<p>I'm trying to deploy Postgres on my Kubernetes cluster and I have been successful to do this, but then I don't know how I can import my data which are in csv format. I already have the scripts which gets the path to data and create a database in a local instance of postgres, but when I deploy postgres on Kubernetes cluster then those scripts wont work because I can't see those script inside the pod. </p> <p>I was looking for a solution to execute the scripts from host to inside the pod, or I can expose the directory of scripts and data to postgres pod. I've found the hostpath solution, but I don't know how to define multiple volumes for a deployment. (I'm using Rook cluster to provision the volume) Maybe a way to define a hostpath volume alongside a Rook volume so I can have access to the scripts and csv files inside the hostpath and then create the database inside the Rook volume. I don't know of this makes sense, but I would appreciate if someone help me with this. </p>
Fatemeh Rouzbeh
<p>If you're using the <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">official docker image</a>, or an image that is derived from it but didn't destroy its entrypoint, then they have <a href="https://hub.docker.com/_/postgres#how-to-extend-this-image" rel="nofollow noreferrer">documentation about <code>/docker-entrypoint-initdb.d/*.sql</code></a>, with the tl;dr as</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-initdb-configmap
data:
  import_csv.sql: |
    COPY my_table FROM '/whatever/path/you/want.csv' WITH (FORMAT csv);
    /* etc */
---
kind: Pod
spec:
  volumes:
  - name: my-initdb-configmap
    configMap:
      name: my-initdb-configmap
  containers:
  - volumeMounts:
    - name: my-initdb-configmap
      mountPath: /docker-entrypoint-initdb.d
      readOnly: true
    # ...
</code></pre> <p>type deal </p>
mdaniel
<p>Circleci and many other ci tools show real time logs during the job running. It like the <code>tail -f</code> in linux but it also show all the previous logs. </p> <p>I am trying to sync specific logs from kubernetes to s3 and then move the update to the browse it that possible?</p>
aisensiy
<blockquote> <p>I am trying to sync specific logs from kubernetes to s3 and then move the update to the browse it that possible?</p> </blockquote> <p>IMHO you would want to have a "tee" mechanism to divert the log streams to <em>each</em> destination, since those two destinations have vastly different access (and retry!) mechanisms. AFAIK kubernetes allows unlimited(?) numbers of <code>kubectl logs -f</code> connections, so you would want one process that connects to the pod and relays those bytes out to the browser, and a separate process that connects and relays the bytes to S3. You <em>could</em> have one process that does both, but that runs the risk of a single bug wiping out <em>both</em> streams, making everyone unhappy.</p> <p>I used <code>kubectl logs -f</code> as a "shortcut" for this answer, but I am super positive there's an API endpoint for doing that, I just don't remember what it is offhand -- <code>kubectl --v=100 logs -f $pod</code> will show it to you.</p>
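<p>If memory serves, the endpoint in question is the Pod's <code>log</code> subresource, so a rough equivalent (still using kubectl's <code>--raw</code> convenience; the namespace and pod name are placeholders) would be:</p> <pre><code>kubectl get --raw "/api/v1/namespaces/default/pods/$POD/log?follow=true"
</code></pre>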
mdaniel
<p>I have been running a pod for more than a week and there has been no restart since started. But, still I am unable to view the logs since it started and it only gives logs for the last two days. Is there any log rotation policy for the container and how to control the rotation like based on size or date?</p> <p>I tried the below command but shows only last two days logs.</p> <pre><code>kubectl logs POD_NAME --since=0 </code></pre> <p>Is there any other way?</p>
user1578872
<blockquote> <p>Is there any log rotation policy for the container and how to control the rotation like based on size or date</p> </blockquote> <p>The log rotation is controlled by the docker <code>--log-driver</code> and <code>--log-opts</code> (or their <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file" rel="noreferrer"><code>daemon.json</code></a> equivalent), which for any sane system has file size and file count limits to prevent a run-away service from blowing out the disk on the docker host. <em>that answer also assumes you are using docker, but that's a fairly safe assumption</em></p> <blockquote> <p>Is there any other way?</p> </blockquote> <p>I <strong>strongly</strong> advise something like <a href="https://github.com/kubernetes/kubernetes/tree/v1.10.4/cluster/addons/fluentd-elasticsearch" rel="noreferrer">fluentd-elasticsearch</a>, or <a href="https://github.com/Graylog2/graylog-docker#readme" rel="noreferrer">graylog2</a>, or Sumologic, or Splunk, or whatever in order to egress those logs from the hosts. No serious cluster would rely on infinite log disks nor on using <code>kubectl logs</code> in a <code>for</code> loop to search the output of Pods. To say nothing of egressing the logs from the kubernetes containers themselves, which is almost essential for keeping tabs on the health of the cluster.</p>
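<p>For the docker side of that, a typical <code>daemon.json</code> stanza looks like the following; the sizes are just examples, tune them to your disk budget:</p> <pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
</code></pre> <p>Bear in mind the kubelet serves container logs from those json files, so <code>kubectl logs</code> can only ever show you what rotation has not yet thrown away.</p>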
mdaniel
<p>I am trying to get Ansible AWX installed on my Kubernetes cluster but the RabbitMQ container is throwing "Failed to get nodes from k8s" error.</p> <p><strong>Below are the version of platforms I am using</strong> </p> <pre><code>[node1 ~]# kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:50:45Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Kubernetes is deployed via the <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a> playbook v2.5.0 and all the services and pods are up and running. (CoreDNS, Weave, IPtables) </p> <p>I am deploying <a href="https://github.com/ansible/awx" rel="nofollow noreferrer">AWX</a> via the 1.0.6 release using the 1.0.6 images for awx_web and awx_task.</p> <p>I am using an external PostgreSQL database at v10.4 and have verified the tables are being created by awx in the db.</p> <p><strong>Troubleshooting steps I have tried.</strong></p> <ul> <li>I tried to deploy AWX 1.0.5 with the etcd pod to the same cluster and it has worked as expected </li> <li>I have deployed a stand alone <a href="https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets" rel="nofollow noreferrer">RabbitMQ cluster</a> in the same k8s cluster trying to mimic the AWX rabbit deployment as much as possible and it works with the rabbit_peer_discovery_k8s backend.</li> <li>I have tried tweeking some of the rabbitmq.conf for AWX 1.0.6 with no luck it just keeps thowing the same error.</li> <li>I have verified the /etc/resolv.conf file has the kubernetes.default.svc.cluster.local entry</li> </ul> <p><strong>Cluster Info</strong></p> <pre><code>[node1 ~]# kubectl get all -n awx NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/awx 1 1 1 0 38m NAME DESIRED CURRENT READY AGE rs/awx-654f7fc84c 1 1 0 38m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/awx 1 1 1 0 38m NAME DESIRED CURRENT READY AGE rs/awx-654f7fc84c 1 1 0 38m NAME READY STATUS RESTARTS AGE po/awx-654f7fc84c-9ppqb 3/4 CrashLoopBackOff 11 38m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/awx-rmq-mgmt ClusterIP 10.233.10.146 &lt;none&gt; 15672/TCP 1d svc/awx-web-svc NodePort 10.233.3.75 &lt;none&gt; 80:31700/TCP 1d svc/rabbitmq NodePort 10.233.37.33 &lt;none&gt; 15672:30434/TCP,5672:31962/TCP 1d </code></pre> <p>AWX RabbitMQ error log</p> <pre><code>[node1 ~]# kubectl logs -n awx awx-654f7fc84c-9ppqb awx-rabbit 2018-07-09 14:47:37.464 [info] &lt;0.33.0&gt; Application lager started on node '[email protected]' 2018-07-09 14:47:37.767 [info] &lt;0.33.0&gt; Application os_mon started on node '[email protected]' 2018-07-09 14:47:37.767 [info] &lt;0.33.0&gt; Application crypto started on node '[email protected]' 2018-07-09 14:47:37.768 [info] &lt;0.33.0&gt; Application cowlib started on node '[email protected]' 2018-07-09 14:47:37.768 [info] &lt;0.33.0&gt; Application xmerl started on node '[email protected]' 2018-07-09 14:47:37.851 [info] &lt;0.33.0&gt; Application mnesia started on node '[email protected]' 2018-07-09 14:47:37.851 [info] &lt;0.33.0&gt; Application recon started on node '[email protected]' 2018-07-09 14:47:37.852 [info] &lt;0.33.0&gt; Application jsx started on node '[email protected]' 2018-07-09 14:47:37.852 [info] &lt;0.33.0&gt; Application asn1 started on node '[email protected]' 2018-07-09 14:47:37.852 [info] &lt;0.33.0&gt; Application public_key started on 
node '[email protected]' 2018-07-09 14:47:37.897 [info] &lt;0.33.0&gt; Application ssl started on node '[email protected]' 2018-07-09 14:47:37.901 [info] &lt;0.33.0&gt; Application ranch started on node '[email protected]' 2018-07-09 14:47:37.901 [info] &lt;0.33.0&gt; Application ranch_proxy_protocol started on node '[email protected]' 2018-07-09 14:47:37.901 [info] &lt;0.33.0&gt; Application rabbit_common started on node '[email protected]' 2018-07-09 14:47:37.907 [info] &lt;0.33.0&gt; Application amqp_client started on node '[email protected]' 2018-07-09 14:47:37.909 [info] &lt;0.33.0&gt; Application cowboy started on node '[email protected]' 2018-07-09 14:47:37.957 [info] &lt;0.33.0&gt; Application inets started on node '[email protected]' 2018-07-09 14:47:37.964 [info] &lt;0.193.0&gt; Starting RabbitMQ 3.7.4 on Erlang 20.1.7 Copyright (C) 2007-2018 Pivotal Software, Inc. Licensed under the MPL. See http://www.rabbitmq.com/ ## ## ## ## RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc. ########## Licensed under the MPL. See http://www.rabbitmq.com/ ###### ## ########## Logs: &lt;stdout&gt; Starting broker... 2018-07-09 14:47:37.982 [info] &lt;0.193.0&gt; node : [email protected] home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.conf cookie hash : at619UOZzsenF44tSK3ulA== log(s) : &lt;stdout&gt; database dir : /var/lib/rabbitmq/mnesia/[email protected] 2018-07-09 14:47:39.649 [info] &lt;0.201.0&gt; Memory high watermark set to 11998 MiB (12581714329 bytes) of 29997 MiB (31454285824 bytes) total 2018-07-09 14:47:39.652 [info] &lt;0.203.0&gt; Enabling free disk space monitoring 2018-07-09 14:47:39.653 [info] &lt;0.203.0&gt; Disk free limit set to 50MB 2018-07-09 14:47:39.658 [info] &lt;0.205.0&gt; Limiting to approx 1048476 file handles (943626 sockets) 2018-07-09 14:47:39.658 [info] &lt;0.206.0&gt; FHC read buffering: OFF 2018-07-09 14:47:39.658 [info] &lt;0.206.0&gt; FHC write buffering: ON 2018-07-09 14:47:39.660 [info] &lt;0.193.0&gt; Node database directory at /var/lib/rabbitmq/mnesia/[email protected] is empty. Assuming we need to join an existing cluster or initialise from scratch... 2018-07-09 14:47:39.660 [info] &lt;0.193.0&gt; Configured peer discovery backend: rabbit_peer_discovery_k8s 2018-07-09 14:47:39.660 [info] &lt;0.193.0&gt; Will try to lock with peer discovery backend rabbit_peer_discovery_k8s 2018-07-09 14:47:39.660 [info] &lt;0.193.0&gt; Peer discovery backend does not support locking, falling back to randomized delay 2018-07-09 14:47:39.660 [info] &lt;0.193.0&gt; Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay. 
2018-07-09 14:47:39.665 [info] &lt;0.193.0&gt; Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}}, {inet,[inet],nxdomain}]} 2018-07-09 14:47:39.665 [error] &lt;0.192.0&gt; CRASH REPORT Process &lt;0.192.0&gt; with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164 in application_master:init/4 line 134 2018-07-09 14:47:39.666 [info] &lt;0.33.0&gt; Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164 {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,\"{failed_connect,[{to_address,{\\"kubernetes.default.svc.cluster.local\\",443}},\n {inet,[inet],nxdomain}]}\"}},[{rabbit_mnesia,init_from_config,0,[{file,\"src/rabbit_mnesia.erl\"},{line,164}]},{rabbit_mnesia,init_with_lock,3,[{file,\"src/rabbit_mnesia.erl\"},{line,144}]},{rabbit_mnesia,init,0,[{file,\"src/rabbit_mnesia.erl\"},{line,111}]},{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,run_step,2,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,'-run_boot_steps/1-lc$^0/1-0-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit_boot_steps,run_boot_steps,1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit,start,2,[{file,\"src/rabbit.erl\"},{line,793}]}]}}}}}"} Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,"{failed_connect,[{to_address,{\"kubernetes.defau Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done </code></pre> <p>Kubernetes API service </p> <pre><code>[node1 ~]# kubectl describe service kubernetes Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: &lt;none&gt; Selector: &lt;none&gt; Type: ClusterIP IP: 10.233.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.237.34.19:6443,10.237.34.21:6443 Session Affinity: ClientIP Events: &lt;none&gt; </code></pre> <p>nslookup from a busybox in the same kubernetes cluster</p> <pre><code>[node2 ~]# kubectl exec -it busybox -- sh / # nslookup kubernetes.default.svc.cluster.local Server: 10.233.0.3 Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local Name: kubernetes.default.svc.cluster.local Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local </code></pre> <p>Please let me know if I am missing anything that could help troubleshooting.</p>
kaylor
<p>I <em>believe</em> the solution is to omit <a href="https://github.com/ansible/awx/blob/1.0.6/installer/roles/kubernetes/templates/deployment.yml.j2#L40" rel="nofollow noreferrer">the explicit kubernetes host</a>. I can't think of any good reason one would need to <em>specify</em> the kubernetes api host from inside the cluster.</p> <p>If for some terrible reason the RMQ plugin requires it, then try swapping in the <code>Service</code> IP (assuming your SSL cert for the master has its <code>Service</code> IP in the SANs list).</p> <hr> <p>As for <em>why</em> it is doing such a silly thing, the only good reason I can think of is that the RMQ <code>PodSpec</code> has somehow gotten a <code>dnsPolicy</code> of something other than <code>ClusterFirst</code>. If you truly wish to troubleshoot the RMQ Pod, then you can provide an explicit <code>command:</code> to run some debugging bash commands first, in order to interrogate the state of the container at launch, and then <code>exec /launch.sh</code> to resume booting up RMQ (<a href="https://github.com/ansible/awx-rabbitmq/blob/7f20bd7dc057beb4070d5a6032c3c70ac0b7f724/Dockerfile#L14" rel="nofollow noreferrer">as they do</a>)</p>
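<p>As a concrete (if hypothetical) example of that debugging trick, something along these lines in the rabbitmq container spec would dump the DNS state before handing control back to their launcher; <code>getent</code> assumes a glibc-based image, swap in <code>nslookup</code> if that is what the image ships:</p> <pre><code>command: ["/bin/sh", "-c"]
args:
- |
  cat /etc/resolv.conf
  getent hosts kubernetes.default.svc.cluster.local || echo "lookup still failing"
  exec /launch.sh
</code></pre>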
mdaniel
<p>I'm trying to install metricbeat helm chart to forward my kubernetes metrics to elasticsearch.</p> <p>Default configuration works but when I configure output to elasticsearch, the pod tell me </p> <blockquote> <blockquote> <p>Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'metricbeat.yml')</p> </blockquote> </blockquote> <p>I download the <a href="https://github.com/helm/charts/blob/master/stable/metricbeat/values.yaml" rel="nofollow noreferrer">values.yaml</a> and modify output.file in both daemonset and deployment from </p> <pre><code>output.file: path: "/usr/share/metricbeat/data" filename: metricbeat rotate_every_kb: 10000 number_of_files: 5 </code></pre> <p>to </p> <pre><code>output.file: enable: false output.elasticsearch: enable: true hosts: ["http://192.168.10.156:9200/"] </code></pre> <p>How do I modify the config to forward metrics to elasticsearch?</p>
Leisen Chang
<p>According to <a href="https://www.elastic.co/guide/en/beats/metricbeat/6.x/file-output.html#_literal_enabled_literal_6" rel="noreferrer">the fine manual</a>, the property is actually <code>enabled:</code> not <code>enable:</code> so I would presume you actually want:</p> <pre><code>output.file: enabled: false </code></pre> <p>Although to be honest, I always thought you could have as many outputs as you wish, but <a href="https://discuss.elastic.co/t/more-than-one-namespace-configured-accessing-output/146646" rel="noreferrer">that is clearly not true</a></p>
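<p>So for the values you pasted, the corrected block would be roughly the following (the same caveat applies to the elasticsearch side, which also spells the setting <code>enabled</code>):</p> <pre><code>output.file:
  enabled: false
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.10.156:9200"]
</code></pre>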
mdaniel
<p>I have a Docker image which runs a python subprocess, which is a <code>node.js</code> server exposing an end point <code>/check</code>. The whole thing is put inside a Kubernetes pod and uses <code>/check</code> as the readinessProbe endpoint.</p> <p>Now at some point, I want to close this endpoint or force-fail all the requests coming at it. Ideally, I want to do this via higher-level entities (i.e. Kubernetes lifecycle hooks) so as not to touch the lower-level implementation (such as opening a new endpoint /stop that switch some boolean flag and force the /check to fail)</p> <p>Is that possible at all? If not, what is the best alternative?</p>
KeenSeeker99
<blockquote> <p>Is that possible at all? If not, what is the best alternative?</p> </blockquote> <p>I believe there are a few:</p> <ul> <li>remote address filtering</li> <li>magic headers</li> <li>a formal proxy container</li> </ul> <h3>remote address</h3> <p>Requests to <code>/check</code> coming from kubernetes will come from the Node's SDN IP address (so if a Node's SDN subnet is <code>10.10.5.0/24</code>, then requests will come from 10.10.5.1), so you could permit the checks from the <code>.1</code> of the <code>/24</code> from which the Pod's IP was assigned.</p> <h3>magic headers</h3> <p>The <code>httpGet</code> <code>readinessProbe</code> allows <code>httpHeaders:</code> so you could turn on HTTP Basic auth for <code>/check</code> and then put the <code>- name: Authorization value: Basic xxyyzz==</code> in the <code>httpHeaders:</code></p> <h3>a formal proxy container</h3> <p>Add a 2nd container to the <code>Pod</code> that runs <code>haproxy</code> and filters <code>/check</code> requests to return 401 or 404 or whatever you want. Since all containers in a Pod share the same networking namespace, configuring <code>haproxy</code> to speak to your node.js server will be super trivial, and your <code>readinessProbe</code> (as well as the <code>livenessProbe</code>) can continue to use the URL because only kubernetes will have access to it by using the non-haproxy container's <code>port</code>. To complete that loop, point the <code>Service</code> at the <code>haproxy</code> container's port.</p>
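<p>A sketch of the "magic headers" option, since it is the least invasive; the port and the credential value are placeholders for whatever your node.js server actually expects:</p> <pre><code>readinessProbe:
  httpGet:
    path: /check
    port: 5000
    httpHeaders:
    - name: Authorization
      value: "Basic xxyyzz=="
  periodSeconds: 10
</code></pre>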
mdaniel
<p>currently I recently switched our PostgreSQL cluster from a simple "bare-metal" (vms) workload to a containerised K8s cluster (also on vms).</p> <p>Currently we run <code>zalando-incubator/postgres-operator</code> and use Local Volume's with <code>volumeMode: FileSystem</code> the volume itself is a "simple" xfs volume mounted on the host.</p> <p>However we actually seen performance drops up to 50% on the postgres cluster inside k8s. Some heavy join workloads actually perform way worse than on the old cluster which did not use containers at all.</p> <p>Is there a way to tune the behavior or at least measure the performance of I/O to find the actual bottleneck (i.e. what is a good way to measure I/O, etc.)</p>
Christian Schmitt
<blockquote> <p>Is there a way to tune the behavior</p> </blockquote> <p>Be cognizant of two things that <em>might</em> be impacting your in-cluster behavior: increased cache thrashing and the inherent problem of running concurrent containers on a Node. If you haven't already tried it, you may want to use taints and tolerations to sequester your PG Pods away from other Pods and see if that helps.</p> <blockquote> <p>what is a good way to measure I/O, etc.</p> </blockquote> <p>I would expect the same <code>iostat</code> tools one is used to using would work on the <code>Node</code>, since no matter how much kernel namespace trickery is going on, it's still the Linux kernel.</p> <p>Prometheus (and likely a ton of other such toys) surfaces some I/O specific metrics for containers, and I would presume they are at the scrape granularity, meaning you can increase the scrape frequency, bearing in mind the observation cost impacting your metrics :-(</p> <p>It appears <a href="https://docs.docker.com/config/thirdparty/prometheus/" rel="nofollow noreferrer">new docker daemons ship with Prom metrics</a>, although I don't know what version introduced that functionality. There is <a href="https://docs.docker.com/config/containers/runmetrics/#tips-for-high-performance-metric-collection" rel="nofollow noreferrer">a separate page</a> discussing the implications of high frequency metric collection. There also appears to be <a href="https://github.com/ncabatoff/process-exporter/tree/v0.2.11#readme" rel="nofollow noreferrer">a Prometheus exporter</a> for monitoring arbitrary processes, above and beyond the <a href="https://github.com/wrouesnel/postgres_exporter/tree/v0.4.6#readme" rel="nofollow noreferrer">PostgreSQL specific exporter</a>.</p> <hr> <p>Getting into <em>my opinion</em>, it may be a very reasonable experiment to go head-to-head with ext4 versus a non-traditional FS like <code>xfs</code>. I can't even fathom how much extra production experience has gone into ext4, merely by the virtue of almost every Linux on the planet deploying on it by default. You may have great reasons for using xfs, but I just wanted to ensure you had at least considered that xfs might have performance characteristics that make it problematic in a shared environment like a kubernetes cluster.</p>
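<p>If you want to try the sequestering experiment, the moving parts are roughly the following; the node name and the key/value are made up:</p> <pre><code>kubectl taint nodes db-node-1 dedicated=postgres:NoSchedule
</code></pre> <p>plus, in the postgres pod template:</p> <pre><code>tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "postgres"
  effect: "NoSchedule"
nodeSelector:
  dedicated: postgres   # assumes you also label the node accordingly
</code></pre>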
mdaniel
<p>I see a lot of traction for spark over kubernetes. Is it better over running spark on Hadoop? Both the approaches runs in distributive approach. Can someone help me understand the difference/comparision between running spark on kubernetes vs Hadoop ecosystem?</p> <p>Thanks</p>
Premchand
<blockquote> <p>Can someone help me understand the difference/comparision between running spark on kubernetes vs Hadoop ecosystem?</p> </blockquote> <p>Be forewarned this is a theoretical answer, because I don't run Spark anymore, and thus I haven't run Spark on kubernetes, but I have maintained both a Hadoop cluster and now a kubernetes cluster, and so I can speak to some of their differences.</p> <p>Kubernetes is as much a battle hardened resource manager with api access to all its components as a reasonable person could wish for. It provides very painless declarative resource limitations (both cpu and ram, plus even syscall capacities), very, <em>very</em> painless log egress (both back to the user via <code>kubectl</code> and out of the cluster using multiple flavors of log management approaches), unprecedented level of metrics gathering and egress allowing one to keep an eye on the health of the cluster and the jobs therein, and the list goes on and on.</p> <p>But perhaps the biggest reason one would choose to run Spark on kubernetes is the same reason one would choose to run kubernetes at all: shared resources rather than having to create new machines for different workloads (well, plus all of those benefits above). So if you have a Spark cluster, it is very, very likely it is going to burn $$$ while a job isn't actively running on it, versus kubernetes will cheerfully schedule other jobs onto those Nodes while they aren't running Spark jobs. Yes, I am aware that Mesos and Yarn are "generic" cluster resource managers, but it has not been my experience that they are as painless or ubiquitous as kubernetes.</p> <p>I would welcome someone posting the counter narrative, or contributing more hands-on experience of Spark on kubernetes, but those are the big differences as I see them.</p>
mdaniel
<p>I have laravel image defined with Dockerfile like so...</p> <pre><code>FROM php:7.2-fpm # Copy composer.lock and composer.json COPY composer.lock composer.json /var/www/ # Set working directory WORKDIR /var/www # Install dependencies RUN apt-get update -y &amp;&amp; apt-get install -y openssl zip unzip git libpng-dev # Install extensions RUN docker-php-ext-install gd # Install composer RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer # Add user for laravel application RUN groupadd -g 1000 www RUN useradd -u 1000 -ms /bin/bash -g www www # Copy existing application directory contents COPY . /var/www # Copy existing application directory permissions COPY --chown=www:www . /var/www # Change current user to www USER www # Expose port 8181 and start php-fpm server EXPOSE 8181 CMD ["php-fpm"] </code></pre> <p>I have nginx ingress installed with helm like so...</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-resource annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-body-size: 10m ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/load-balancer: "ip-hash" spec: tls: - hosts: - api.myweb.com - www.myweb.com secretName: secret rules: - host: api.myweb.com http: paths: - backend: serviceName: backend-golang servicePort: 8080 - host: www.myweb.com http: paths: - backend: serviceName: frontend-laravel servicePort: 8181 </code></pre> <p>With this configuration, it return <code>502 Bad Gateway</code> when I access www.myweb.com</p> <p>Should I tell nginx that root folder was at <code>/var/www/public</code>..?? How do I do that?</p> <p>Should I tell nginx that index file was at <code>index.php</code>..?? How do I do that?</p> <p>My reference was this <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose</a>. But I don't want to use custom nginx,, instead I want to use nginx ingress from helm..</p> <p>Thank you...</p>
karina
<blockquote> <p>But I don't want to use custom nginx,, instead I want to use nginx ingress from helm..</p> </blockquote> <p>I am pretty sure you are conflating two different nginx-es here: the ingress-controller's nginx is designed strictly for doing vhost upstream routing, and is only one of many ingress controller implementations. The other nginx in your story is responsible for dealing with the "HTTP" bits that sit upstream of your php cgi-bin (which is effectively what it is, "fpm" aside). In both cases you don't have to use nginx, you can use other servers, but the fact that in your situation they both are nginx is what I think is leading to the confusion.</p> <p>You will want a "local" nginx inside your php container to deal with those HTTP bits since (AFAIK) fpm isn't bright enough to do that on its own.</p> <p>I did wonder if you could use the nginx Ingress annotations to inject the necessary <code>location {}</code> blocks into the ingress controller's nginx's config file, but I am just not sure if nginx needs to be able to see any of the files on the disk of the Pod to work correctly, in which case the annotation trickery won't work.</p> <p>You are penny pinching, since running a separate nginx in your cluster will likely be a rounding error compared to the overall memory and CPU budget, and <strong>for sure</strong> has already cost you (and me) more glucose to think through this edge-case than it would to just create the nginx.conf and be done with it.</p>
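<p>If it helps, the "local" nginx config for a Laravel app behind php-fpm is usually some variation of the following; the listen port, paths, and fpm address follow your Dockerfile and image defaults, so treat them as assumptions:</p> <pre><code>server {
    listen 8181;
    root /var/www/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # php-fpm running in the same Pod
    }
}
</code></pre> <p>with that nginx added as a second container in the same Pod and the <code>Service</code> pointed at 8181.</p>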
mdaniel
<p>I have a frontend application that works perfectly fine when I have just one instance of the application running in a kubernetes cluster. But when I scale up the deployment to have 3 replicas it shows a blank page on the first load and then after the refresh, it loads the page. As soon as I scale down the app to 1, it starts loading fine again. Here is the what the console prints in the browser.</p> <blockquote> <p>hub.xxxxx.me/:1 Refused to execute script from '<a href="https://hub.xxxxxx.me/static/js/main.5a4e61df.js" rel="nofollow noreferrer">https://hub.xxxxxx.me/static/js/main.5a4e61df.js</a>' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.</p> </blockquote> <p><a href="https://i.stack.imgur.com/GYDaY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GYDaY.png" alt="enter image description here"></a> Adding the screenshot as well. Any ideas what might be the case. I know it is an infrastructure issue since it happens only when I scale the application.</p> <p>One thing I noticed is that 2 pods have a different js file that the other pod. </p> <blockquote> <p>2 pods have this file - build/static/js/main.b6aff941.js</p> <p>The other pod has this file - build/static/js/main.5a4e61df.js</p> </blockquote> <p>I think the mismatch is causing the problem. Any Idea how to fix this mismatch issue so that the pods always have the same build?</p>
Anshul Tripathi
<blockquote> <p>I think the mismatch is causing the problem. Any Idea how to fix this mismatch issue so that the pods always have the same build?</p> </blockquote> <p>Yes, this is actually pretty common in a build where those resources change like that. You actually won't want to use the traditional rolling-update mechanism, because your deployment is closer to a blue-green one: only one "family" of Pods should be in service at a time, else the <strong>html</strong> from Pod 1 is served but the subsequent request for the <strong>javascript</strong> from Pod 2 is 404</p> <p>There is also the pretty grave risk of a browser having a cached copy of the HTML, but kubernetes can't -- by itself -- help you with that.</p> <p>One pretty reasonable solution is to scale the Deployment to one replica, do the image patch, wait for the a-ok, then scale them back up, so there is only one source of truth for the application running in the cluster at a time. A rollback would look very similar: scale 1, rollback the deployment, scale up</p> <p>An alternative mechanism would be to use label patching, to atomically switch the <code>Service</code> (and presumably thus the <code>Ingress</code>) over to the new Pods all at once, but that would require having multiple copies of the application in the cluster at the same time, which for a front-end app is likely more trouble than it's worth.</p>
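<p>In <code>kubectl</code> terms, the "scale down, patch, scale back up" dance is roughly the following; the deployment, container, and image names are placeholders:</p> <pre><code>kubectl scale deploy/frontend --replicas=1
kubectl set image deploy/frontend frontend=registry.example.com/frontend:new-tag
kubectl rollout status deploy/frontend
kubectl scale deploy/frontend --replicas=3
</code></pre>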
mdaniel
<p>I have a docker image which is been deployed into kubernetes.</p> <p>The docker file is as follows.</p> <pre><code>FROM alpine/jdk1.8:latest RUN mkdir -p /opt/test/app COPY app.war /opt/test/app/app.war CMD java -jar /opt/test/app/app.war </code></pre> <p>This application uses hibernate and getting the below error when trying to load jar file for connection.</p> <pre><code>loggerName="org.hibernate.orm.url" threadName="main" txnId="" HHH10000002: File or directory named by URL [file:/opt/test/app/app.war!/WEB-INF/classes] could not be found. URL will be ignored java.io.FileNotFoundException: /opt/test/app/app.war!/WEB-INF/classes (No such file or directory) at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:225) at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:155) at java.util.jar.JarFile.&lt;init&gt;(JarFile.java:166) at java.util.jar.JarFile.&lt;init&gt;(JarFile.java:103) at org.hibernate.boot.archive.internal.JarFileBasedArchiveDescriptor.resolveJarFileReference(JarFileBasedArchiveDescriptor.java:165) at org.hibernate.boot.archive.internal.JarFileBasedArchiveDescriptor.visitArchive(JarFileBasedArchiveDescriptor.java:51) at org.hibernate.boot.archive.scan.spi.AbstractScannerImpl.scan(AbstractScannerImpl.java:47) at org.hibernate.boot.model.process.internal.ScanningCoordinator.coordinateScan(ScanningCoordinator.java:75) at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.prepare(MetadataBuildingProcess.java:98) at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.&lt;init&gt;(EntityManagerFactoryBuilderImpl.java:199) at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:34) at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilder(HibernatePersistenceProvider.java:165) at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilderOrNull(HibernatePersistenceProvider.java:114) at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilderOrNull(HibernatePersistenceProvider.java:71) at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:52) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:55) </code></pre> <p>Is there any permission to be given in docker file or in kubernetes deployment yaml file?</p> <p>Hibernate config,</p> <pre><code>Map&lt;String, String&gt; Props = new HashMap&lt;String, String&gt;(); conProps.put("javax.persistence.jdbc.url", "JDBC_URL"); conProps.put("javax.persistence.jdbc.password", "PASSWORD"); conProps.put("javax.persistence.jdbc.user", "USER"); conProps.put("oracle.net.ssl_cipher_suites","CIPHER"); conProps.put("javax.persistence.jdbc.driver", "oracle.jdbc.OracleDriver"); Persistence.createEntityManagerFactory("appjdbc", conProps); </code></pre> <p>I checked the hibernate-core.jar and the below code gets executed. 
Not sure, if it is supposed to return JarProtocolArchiveDescriptor but returns JarFileBasedArchiveDescriptor.</p> <pre><code>public ArchiveDescriptor buildArchiveDescriptor(URL url, String entry) { final String protocol = url.getProtocol(); if ( "jar".equals( protocol ) ) { return new JarProtocolArchiveDescriptor( this, url, entry ); } else if ( StringHelper.isEmpty( protocol ) || "file".equals( protocol ) || "vfszip".equals( protocol ) || "vfsfile".equals( protocol ) ) { final File file = new File( extractLocalFilePath( url ) ); if ( file.isDirectory() ) { return new ExplodedArchiveDescriptor( this, url, entry ); } else { return new JarFileBasedArchiveDescriptor( this, url, entry ); } } else { //let's assume the url can return the jar as a zip stream return new JarInputStreamBasedArchiveDescriptor( this, url, entry ); } } </code></pre>
user1578872
<blockquote> <p>loggerName="org.hibernate.orm.url" threadName="main" txnId="" HHH10000002: File or directory named by URL [file:/opt/test/app/app.war!/WEB-INF/classes] could not be found. URL will be ignored</p> </blockquote> <p>No, it's not a permission denied, it's using an incorrect URL scheme. <code>file://thing</code> is fine, but using the "bang" syntax requires prefixing the URL with <code>jar:</code>, like so:</p> <pre><code>jar:file:///opt/test/app/app.war!/WEB-INF/classes </code></pre> <p>Without more context I can't say whether that's a hibernate bug or a your-configuration bug, but I can say with high confidence that the error message is exactly correct: there is no such directory as <code>app.war!</code></p>
mdaniel
<p>I have a statefulset which has a nodeSelector:</p> <pre><code> nodeSelector: app: licensed </code></pre> <p>When I assign the node with <code>app: licensed</code>, I can see the pod is schedule on a specific node.</p> <p>But when I remove the label from the node, I don't see k8s remove the pod from that node. I have to explicitly delete the pod.</p> <p>Is it a kubernetes feature? Or did I use the nodeSelector correctly?</p>
Kintarō
<p>The thing you're looking for is likely <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">taints and tolerations</a> if you want only designated things to run on designated Nodes. The <code>nodeSelector:</code> you used is used in the scheduling decision, but is not used in the reconciliation loop <em>(as you experienced)</em></p> <p>There is more fine grained control via the <code>effect: NoSchedule</code>, which as it implies will keep a Pod merely from being scheduled if it doesn't tolerate the taint, and <code>effect: NoExecute</code> which is closer to what you're asking for where if a taint <em>appears</em> on a Node, and the Pod doesn't tolerate it, the Pod will be evicted and not rescheduled upon any such tainted Node. Modern versions of kubernetes allow even finer grained control via the <code>PreferNoSchedule</code> and the opposite end of that with <code>tolerationSeconds</code> to give some padding around how soon any such eviction may happen</p>
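<p>To make that concrete, a minimal sketch (the node name below is made up, and the taint key/value just mirror your existing label):</p> <pre class="lang-sh prettyprint-override"><code># NoExecute evicts already-running Pods that do not tolerate the taint
kubectl taint nodes my-licensed-node app=licensed:NoExecute
</code></pre> <p>and the matching toleration in the StatefulSet's pod template spec:</p> <pre class="lang-yaml prettyprint-override"><code>tolerations:
- key: app
  operator: Equal
  value: licensed
  effect: NoExecute
</code></pre>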
mdaniel
<p>I got my application working in docker-compose, then converted it using kompose. However, I have a problem when I want to get data from the back-end. I can no longer use my previous method because of the url.</p> <pre><code>@app.route("/") def home(): url = "http://backend:5000/" try: res = requests.get(url) except Exception: return "Error with {}".format(url) dictFromServer = res.json() return dictFromServer['message'] </code></pre> <p>What is the best way to get the url to work?</p>
jor2
<p>The traditional way that is done is to use environment variables that are injected into your container's runtime. It is fine to have reasonable defaults so that local development can continue as it did before:</p> <pre class="lang-python prettyprint-override"><code>import os

def home():
    be_host = os.getenv('BACKEND_SERVICE_HOST', 'backend')
    be_port = os.getenv('BACKEND_SERVICE_PORT', '5000')
    url = 'http://{}:{}'.format(be_host, be_port)
</code></pre> <p>Those environment variables and their values are <a href="https://github.com/kubernetes/kubernetes/blob/v1.12.2/pkg/kubelet/envvars/envvars.go#L45-L48" rel="nofollow noreferrer">injected by kubelet</a> based on the names of the <code>Service</code> objects in the same namespace as the running Pod. In the example above, it would mean a <code>Service</code> was named <code>backend</code> and that service exposed a <code>ports:</code> entry on 5000 (pointing at whatever <code>containerPorts:</code> are exposed on the Pod itself).</p> <p>You can, of course, declare your own environment variables, if that's too much magic for your team.</p>
mdaniel
<p>Is there anyway to convert configuration files like xml to kubernetes configmap yaml file without using kubectl command ? Let’s say if I want to create xml files dynamically which in turn stored in git repo as configmap yaml and some operator can monitor for yaml changes and deploy it to the cluster.</p>
srikanth
<blockquote> <p>configuration files like xml to kubernetes configmap yaml file without using kubectl command</p> </blockquote> <p>Sure, because the only thing <code>kubectl</code> does with <code>yaml</code> is immediately convert it to <code>json</code> and then <code>POST</code> (or <code>PUT</code> or whatever) to the kubernetes api with a <code>content-type: application/json;charset=utf-8</code> header (you can watch that take place via <code>kubectl --v=100 create -f my-thing.yaml</code>)</p> <p>So, the answer to your question is to use your favorite programming language that has libraries for json (or the <a href="https://github.com/stedolan/jq#readme" rel="nofollow noreferrer">positively amazing jq</a>), package the XML as necessary, then use something like <a href="https://github.com/box/kube-applier#readme" rel="nofollow noreferrer">kube-applier</a> to monitor and roll out the change:</p> <pre class="lang-py prettyprint-override"><code># coding=utf-8
import json
import sys

result = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    # etc etc (metadata with at least a name)
    "data": {},
}
for fn in sys.argv[1:]:
    with open(fn) as fh:
        body = fh.read()
    # ConfigMap "data" is a mapping of filename -> file contents
    result["data"][fn] = body
json.dump(result, sys.stdout)
# or whatever
</code></pre>
mdaniel
<p>I'm writing an Ansible task to deploy GitLab in my k3s environment.</p> <p>According to the doc, I need to execute this to install GitLab using Helm:</p> <pre class="lang-sh prettyprint-override"><code>$ helm install gitlab gitlab/gitlab \ --set global.hosts.domain=DOMAIN \ --set [email protected] </code></pre> <p>But the <code>community.kubernetes.helm</code> doesn't handle <code>--set</code> parameters and only call helm with the <code>--values</code> parameter.</p> <p>So my Ansible task looks like this:</p> <pre class="lang-yaml prettyprint-override"><code>- name: Deploy GitLab community.kubernetes.helm: update_repo_cache: yes release_name: gitlab chart_ref: gitlab/gitlab release_namespace: git release_values: global.hosts.domain: example.com certmanager-issuer.email: [email protected] </code></pre> <p>But the helm chart still return the error <code>You must provide an email to associate with your TLS certificates. Please set certmanager-issuer.email.</code></p> <p>I've tried manually in a terminal, and it seems that the GitLab helm chart requires <code>--set</code> parameters and fail with <code>--values</code>. But <code>community.kubernetes.helm</code> doesn't.</p> <p>What can I do?<br /> Is there a bug on GitLab helm chart side?</p>
Binary Brain
<blockquote> <p>it seems that the GitLab helm chart requires --set parameters and fail with --values</p> </blockquote> <p>That is an erroneous assumption; what you are running into is that <code>--set</code> splits on <code>.</code> because otherwise providing fully-formed YAML on the command line would be painful</p> <p>The correct values are using sub-objects where the <code>.</code> occurs:</p> <pre class="lang-yaml prettyprint-override"><code>- name: Deploy GitLab community.kubernetes.helm: update_repo_cache: yes release_name: gitlab chart_ref: gitlab/gitlab release_namespace: git release_values: global: hosts: # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L47 domain: example.com # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L592-595 certmanager-issuer: email: [email protected] </code></pre>
mdaniel
<p>is it possible to deploy an ingress controller (nginx) without a public ip address? </p> <p>Thanks!</p>
sokolata
<blockquote> <p>is it possible to deploy an ingress controller (nginx) without a public ip address?</p> </blockquote> <p>Without question, yes: if the Ingress controller's <code>Service</code> is of <code>type: NodePort</code>, then the Ingress controller's private IP address is <strong>every</strong> <code>Node</code>'s IP address, on the port(s) pointing to <code>:80</code> and <code>:443</code> of the <code>Service</code>. Secretly, that's exactly what is happening anyway with <code>type: LoadBalancer</code>, just with the extra sugar coating of the cloud provider mapping between the load balancer's IP address and the binding to the <code>Node</code>'s ports.</p> <p>So, to close that loop: if you wished to have a 100% internal Ingress controller, then use <code>hostNetwork: true</code> and bind the Ingress controller's <code>ports:</code> to be the <strong>host</strong>'s port 80 and 443; then, make a DNS (A record|CNAME record) for each virtual-host that resolves to the address of every <code>Node</code> in the cluster, and poof: 100% non-Internet-facing Ingress controller.</p>
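<p>A rough sketch of that last arrangement, as it would appear in the ingress controller's Deployment or DaemonSet (the container name is a placeholder):</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        # with hostNetwork, the controller's :80 and :443 are the Node's :80 and :443
        ports:
        - containerPort: 80
        - containerPort: 443
</code></pre>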
mdaniel
<p>It might be a question based on curiosity which couldn't find help on google.</p> <p>Consider this part of the yaml for a headless service:</p> <pre><code>ports: - port: abcd --&gt; this line </code></pre> <p>My doubt is when the cluster-ip for a headless service is already none (as it is a set of pods that it points to), what is the use of having the port for a service? The dns record from the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">documentation</a> for services states that:</p> <blockquote> <p>“Headless” (without a cluster IP) Services are also assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set.</p> </blockquote> <p>Hence, if the dns that is allocated to the headless services is solely used to have endpoints into the pods, is there any use-case of having the port functionality in a headless service?</p> <p>I have seen issues that people have faced while excluding the port value from the definition of headless service (<a href="https://github.com/kubernetes/kubernetes/issues/55158" rel="noreferrer">here</a>). This seems to have been fixed. But then, do we really have a use-case for the port functionality of a headless service?</p>
Akash
<blockquote> <p>But then, do we really have a use-case for the port functionality of a headless service?</p> </blockquote> <p>IMHO, yes: because the very idea of a <code>Service</code> is not "a random IP address" -- otherwise it would be called <code>DHCPIPAddress</code>. The idea of a <code>Service</code> in kubernetes is that you can consume some network functionality using one or more tuples of <code>(address, protocol, port)</code> just like in the non-kubernetes world.</p> <p>So it can be fine if you don't care about the port of a headless <code>Service</code>, in which case toss in <code>ports:\n- port: 80\n</code> and call it a draw, but the <strong>benefit</strong> of a headless <code>Service</code> is to expose an extra-cluster network resource in a manner that kubernetes itself cannot manage. I used that very trick to help us transition from one cluster to another by creating a headless <code>Service</code>, whose name was what the previous <code>Deployment</code> expected, with the named <code>ports:</code> that the previous <code>Deployment</code> expected, but pointing to an IP that I controlled, not within the SDN.</p> <p>Doing that, all the traditional kubernetes <code>kube-dns</code> and <code>$(SERVICE_THING_HOST)</code> and <code>$(SERVICE_THING_PORT)</code> injection worked as expected, but abstracted away the fact that said <code>_HOST</code> temporarily lived outside the cluster.</p>
mdaniel
<p>Background: </p> <pre><code>$ kubectl get services -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx LoadBalancer 10.108.245.210 &lt;pending&gt; 80:30742/TCP,443:31028/TCP 41m $ kubectl cluster-info dump | grep LoadBalancer 14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail </code></pre> <p>k8s cluster is up and running fine. - </p> <pre><code>$ ls /etc/kubernetes/manifests etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml ~$ kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 21h ingress-nginx default-http-backend ClusterIP 10.100.2.163 &lt;none&gt; 80/TCP 21h ingress-nginx ingress-nginx LoadBalancer 10.108.221.18 &lt;pending&gt; 80:32010/TCP,443:31271/TCP 18h kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 21h </code></pre> <p>How do I link the cloud provider to kubernetes cluster in the existing setup? </p>
rufus
<p>I would expect <code>grep -r cloud-provider= /etc/kubernetes/manifests</code> to either show you where the flag is being explicitly set to <code>--cloud-provider=</code> (that is, the empty value), or let you know that there is no such flag, in which case you'll need(?) to add them in three places:</p> <ul> <li><a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.8.1/roles/kubernetes/master/templates/manifests/kube-apiserver.manifest.j2#L124" rel="nofollow noreferrer"><code>kube-apiserver.yaml</code></a></li> <li><a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.8.1/roles/kubernetes/master/templates/manifests/kube-controller-manager.manifest.j2#L50" rel="nofollow noreferrer"><code>kube-controller-manager.yaml</code></a></li> <li>in <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.8.1/roles/kubernetes/node/templates/kubelet.standard.env.j2#L142" rel="nofollow noreferrer"><code>kubelet.service</code></a> or however you are currently running <code>kubelet</code></li> </ul> <p>I said "need(?)" because I thought that I read once upon a time that the kubernetes components were good enough at auto-detecting their cloud environment, and thus those flags were only required if you needed to improve or alter the default behavior. However, I just checked <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">the v1.13 page</a> and there doesn't seem to be any "optional" about it. They've even gone so far as to now make <code>--cloud-config=</code> seemingly mandatory, too</p>
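<p>For illustration only, the added flag usually ends up looking something like this (<code>aws</code> is an assumed provider name; the exact file locations depend on how your cluster was built):</p> <pre><code># in the command: of kube-apiserver.yaml and kube-controller-manager.yaml
- --cloud-provider=aws
# and wherever your kubelet flags live (systemd unit, env file, etc.)
--cloud-provider=aws
</code></pre>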
mdaniel
<p>Is it possible to take an image or a snapshot of container running inside pod using <code>kubectl</code>?</p> <p>Via <code>docker</code>, it is possible to use the <code>docker commit</code> command that creates an image of a container from which we can spawn more containers. I wanted to understand if there was something similar that we could do with <code>kubectl</code>.</p>
John
<p>No, partially because that's not in the kubernetes mental model of anything one would <em>wish</em> to do to a cluster, and partially because docker is not the only container runtime kubernetes uses. Every runtime one <em>could</em> use underneath kubernetes would need to support that operation, and I doubt they do.</p> <p>You are welcome to do your own <code>docker commit</code> either by getting a shell on the Node, or by running a privileged Pod then connecting to the <code>docker.sock</code> via a <code>volumeMount</code> and running it that way</p>
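<p>If you take the privileged Pod route, the rough shape is below; the image, Pod name, and Node name are placeholders, and it assumes the Node's runtime really is docker with its socket at the usual path:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: commit-helper
spec:
  # pin to the Node that is running the container you want to snapshot
  nodeName: the-right-node
  containers:
  - name: docker-cli
    image: docker
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
</code></pre> <p>then <code>kubectl exec</code> into it and run <code>docker commit</code> as you would anywhere else.</p>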
mdaniel
<p>The <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">documentation</a> doesn’t go into detail. I imagine I would at least need an iam role. </p>
mdornfe1
<p>This is the one used by kubespray, and is very likely indicative of a rational default:</p> <p><a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json</a></p> <p>with the tl;dr of that link being to create an <code>Allow</code> for the following actions:</p> <ul> <li><code>s3:*</code></li> <li><code>ec2:Describe*</code></li> <li><code>ec2:AttachVolume</code></li> <li><code>ec2:DetachVolume</code></li> <li><code>route53:*</code></li> </ul> <p>(although I would bet that <code>s3:*</code> is too wide, I don't have the information handy to provide a more constrained version; similar observation on the <code>route53:*</code>)</p> <p>All of the <code>Resource</code> keys for those are <code>*</code> except the <code>s3:</code> one which restricts the resource to buckets beginning with <code>kubernetes-*</code> -- unknown if that's just an example, or there is something special in the kubernetes prefixed buckets. Obviously you might have a better list of items to populate the <code>Resource</code> keys to genuinely restrict attachable volumes (just be careful with dynamically provisioned volumes, as would be created by <code>PersistentVolume</code> resources)</p>
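<p>Condensed into a policy document, the linked file works out to roughly the following (same caveats as above about tightening <code>s3:*</code>, <code>route53:*</code>, and the <code>Resource</code> keys):</p> <pre><code>{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::kubernetes-*"] },
    { "Effect": "Allow", "Action": ["ec2:Describe*", "ec2:AttachVolume", "ec2:DetachVolume"], "Resource": "*" },
    { "Effect": "Allow", "Action": ["route53:*"], "Resource": "*" }
  ]
}
</code></pre>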
mdaniel
<p>I have made some changes in order to fix a AWS related bug in Kubernetes and I would like to test this changes in AWS.</p> <p>I build Kubernetes locally using: <code>build/run.sh make</code> and then I tried to use kops setting as <code>kubernetesVersion</code> the output <code>_output/dockerized</code> but it doesn't work probably because kops doesn't support it.</p> <p>Is there a simple way to deploy a local build kubernetes on AWS?</p>
b1zzu
<blockquote> <p>Is there a simple way to deploy a local build kubernetes on AWS?</p> </blockquote> <p>If you already have an existing cluster running, then the answer is to either push your images to a common docker registry (either hub.docker.com, or ECR, or a locally hosted registry, or whatever), or to just cheat and <code>for h in $(cluster node addresses); do docker save my-kubernetes-image:my-tag | ssh $h docker load; done</code>, then after you have made the images available, update all the manifests to point to your new image.</p> <p>If you don't already have a cluster, then I would suspect any of the existing toys, even the repugnant kops, would get you a cluster, <em>then</em> you can swap out the images and do so as many times as is required to verify your fix.</p>
mdaniel
<p>Is there anything wrong with how I am trying to configure my Minikube cluster in a way the pods can access the PostgreSQL instance within the same machine?</p> <p>I've access the <code>/etc/hosts</code> within the Minikube cluster via <code>minikube ssh</code> and returns:</p> <pre><code>127.0.0.1 localhost 127.0.1.1 minikube 192.168.99.1 host.minikube.internal 192.168.99.110 control-plane.minikube.internal </code></pre> <p><strong>database-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: service-database spec: type: ExternalName externalName: host.minikube.internal ports: - port: 5432 targetPort: 5432 </code></pre> <p><strong>pod-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: ... template: ... spec: containers: - name: &lt;container_alias&gt; image: &lt;container_name&gt; env: - name: DB_URL value: &quot;jdbc:postgresql://service-database/&lt;database_name&gt;&quot; ports: - containerPort: 8080 </code></pre> <p><em>Note</em>: <code>DB_URL</code> environment variable points to the <code>spring.datasource.url</code> in the <code>application.properties</code> in SpringBoot.</p> <p>Then when I tried to get the logs printed, I am getting this exception:</p> <pre><code>Caused by: java.net.UnknownHostException: service-database </code></pre>
David B
<blockquote> <p>I've access the /etc/hosts within the Minikube cluster via minikube ssh and returns</p> </blockquote> <p>That may be true, but for the same reason kubernetes does not expose the <code>/etc/hosts</code> of its Nodes, nor will minikube do the same thing. Kubernetes has its own DNS resolver, and thus its own idea of what should be in <code>/etc/hosts</code> (docker does the same thing -- it similarly does not just expose the host's <code>/etc</code> but rather allows the user to customize that behavior on container launch)</p> <p>There is a formal mechanism to tell kubernetes that you wish to manage the DNS resolution <em>endpoints</em> manually -- that's what <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">a Headless Service</a> does, although usually the &quot;manually&quot; part is done by the <code>StatefulSet</code> controller, but there's nothing stopping other mechanisms from grooming that list:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-database spec: type: ClusterIP # yes, literally the word &quot;None&quot; clusterIP: None ports: - name: 5432-5432 port: 5432 targetPort: 5432 --- apiVersion: v1 kind: Endpoints metadata: name: service-database subsets: - addresses: - ip: 192.168.99.1 ports: - name: 5432-5432 port: 5432 protocol: TCP </code></pre> <p>and now the internal DNS will resolve <code>service-database</code> to be the answers <code>192.168.99.1</code> and also populate the SRV records just like normal</p>
mdaniel
<p>I setup a <strong>self-hosted</strong> registry on my machine to store the docker image files to test it thoroughly using <strong>minikube</strong> (lightweight Kubernetes implementation for local development).</p> <p>Though I'm able to successfully push &amp; pull repositories from local registry using <strong>docker push</strong> and <strong>docker pull</strong> commands but while trying to run a pod locally, facing below issue :</p> <p><strong>Error</strong> </p> <pre><code>Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused </code></pre> <p>Here's the list of events I noticed while inspecting the pod. </p> <p><strong>Pod Events</strong></p> <pre><code>Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 16m default-scheduler Successfully assigned custappdeployment-6c8ddcc5d8-2zdfn to minikube Normal SuccessfulMountVolume 16m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-s5nlw" Normal BackOff 16m (x2 over 16m) kubelet, minikube Back-off pulling image "localhost:5000/dev/customer:v1" Warning Failed 16m (x2 over 16m) kubelet, minikube Error: ImagePullBackOff Normal Pulling 15m (x3 over 16m) kubelet, minikube pulling image "localhost:5000/dev/customer:v1" Warning Failed 15m (x3 over 16m) kubelet, minikube Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused Warning Failed 15m (x3 over 16m) kubelet, minikube **Error: ErrImagePull** Please see below the docker pull command output. PS C:\Sunny\Projects\NodeApps\Nodejs-Apps\Customer&gt; docker pull localhost:5000/dev/customer:v1 v1: Pulling from dev/customer Digest: sha256:edf0b716728b1cc00f7d8eed76fb3bdceadf1a60a46b9e4f80d983a64389e95c Status: Image is up to date for localhost:5000/dev/customer:v1 </code></pre>
Sunny Goel
<p>Well, I wouldn't expect <strong>your</strong> <code>localhost</code> to be the same as <code>localhost</code> from <code>minikube</code>'s point-of-view. Actually, I wouldn't expect your localhost to be the same localhost from <em>anything</em> in the kubernetes world's point-of-view.</p> <p>So, some practical things you'll want to check:</p> <ul> <li><p>is port 5000 accessible from not-your-machine (meaning could the minikube virtual machine plausibly pull from port 5000 on your machine)</p> <p>this question likely has some intersection with the point right below, because your registry may very well be listening on one of the internal adapters, but that's not what your machine knows itself as, or the opposite</p></li> <li><p>can minikube resolve the hostname that your machine presents itself as (because I actually don't <em>think</em> you can use an IP address in a docker image reference); obviously if that assumption isn't true, then no need to worry about this part</p></li> <li><p>and be sure that docker either already trusts your registry's CA, or that you have already loaded the cert onto minikube and bounced docker</p></li> </ul> <p>You can always cheat, since minikube is one virtual machine (and not a whole fleet of Nodes), and <code>docker save $the_image_ref | minikube ssh docker load</code> will side-step the pull entirely (unless you have <code>imagePullPolicy: Always</code> but that's easily fixed).</p>
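<p>The "easily fixed" part is just making sure the pod spec doesn't insist on pulling every time, e.g.:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
- name: customer
  image: localhost:5000/dev/customer:v1
  # only reach out to the registry if the image isn't already present on the node
  imagePullPolicy: IfNotPresent
</code></pre>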
mdaniel
<p>What is the max size allowed for Environment variable (pod->container->Env) in kubernetes, assuming base ubuntu containers? I am unable to find the relevant documentation. Question might seem stupid, but, I do need the info to make my design robust. </p>
AdmiralThrawn
<p>So at bare minimum there is some 1,048,576 byte limitation imposed:</p> <blockquote> <p>The ConfigMap "too-big" is invalid: []: Too long: must have at most 1048576 characters</p> </blockquote> <p>which I generated as:</p> <pre class="lang-sh prettyprint-override"><code>cat &gt; too-big.yml&lt;&lt;FOO
apiVersion: v1
kind: ConfigMap
metadata:
  name: too-big
data:
  kaboom.txt: |
    $(python -c 'print("x" * 1024 * 1024)')
FOO
</code></pre> <p>And when I try that same stunt with a Pod, I'm met with a very similar outcome:</p> <pre><code>containers:
- image: ubuntu:18.10
  env:
  - name: TOO_BIG
    value: |
      $(python -c the same print)
</code></pre> <blockquote> <p>standard_init_linux.go:178: exec user process caused "argument list too long"</p> </blockquote> <p>So I would guess it's somewhere in between those two numbers: 0 and 1048576</p> <p>That said, as the <a href="https://stackoverflow.com/questions/1078031/what-is-the-maximum-size-of-an-environment-variable-value">practically duplicate question</a> answered, you are very, very likely solving the wrong problem. The very fact that you have to come to a community site to ask such a question means you are bringing risk to your project that it will work one way on Linux, another way on docker, another way on kubernetes, and a different way on macOS.</p>
mdaniel
<p>Curious as to how configMaps can be referenced in PODs without appropriate serviceAccount and asscoiated RBAC rules ?</p> <p><strong>Sample POD Yaml mounting configMap</strong></p> <pre><code> - mountPath: /kubernetes-vault name: kubernetes-vault ................. ................. volumes: - emptyDir: {} name: vault-token - configMap: defaultMode: 420 name: kubernetes-vault name: kubernetes-vault </code></pre> <p>But the <code>associated ServiceAccount and it's corresponding RBAC ( Role and RoleBinding )</code> does'nt have any rules specifying access rules for this <code>configMap (kubernetes-vault)</code></p> <p><strong>Role &amp; Rule for the POD</strong></p> <pre><code>rules: - apiGroups: - '*' resources: - services - pods - endpoints verbs: - get - list - watch </code></pre> <p><strong>Couple of Qs</strong></p> <ul> <li>does'nt access to configMap required appropriate ServiceAccount with access rules specified specifically for configMap access ?</li> <li>if yest which rule mentioned above governs configMap Access</li> <li>if not , what objects are governed by RBAC rules ?</li> </ul>
Shashi
<blockquote> <p>doesn't access to configMap required appropriate ServiceAccount with access rules specified specifically for configMap access ?</p> </blockquote> <p>It will when a <code>ServiceAccount</code> is performing that action, yes, but <code>volumes:</code> are performed by a mixture of <code>kube-apiserver</code>, <code>kube-controller</code>, and the calling credential that interacts with the apiserver. By the time the Pod's volumes mount, all those security checks are a done deal -- one can verify that behavior by running any Pod and suppressing its <code>ServiceAccount</code> and observe that the volume mounts still take place</p> <p>If one has objects which should only be accessed by a limited set of users, that should happen at the Role level to prevent the users from scheduling Pods that touch the sensitive items.</p> <blockquote> <p>if not , what objects are governed by RBAC rules ?</p> </blockquote> <p>As far as I know, <em>everything</em> is governed by RBAC rules, and even if they aren't to your satisfaction, the system offers <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">Validating Admission Controllers</a> which allow <strong>extremely</strong> fine-grained access rules</p>
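<p>So, as a sketch, the knob to turn is the Role governing the humans (or ServiceAccounts) who talk to the apiserver; the names below are made up:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: limited-configmap-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  # only this one ConfigMap may be read; resourceNames constrains "get"-style verbs
  resourceNames: ["kubernetes-vault"]
  verbs: ["get"]
</code></pre>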
mdaniel
<p>I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:</p> <ul> <li>I manually install all services like the database on the VM </li> <li>a bash script scheduled by crontab polls a remote git repository for changes every N minutes</li> <li>if there were changes, it would simply restart all services using supervisord (job queue, web, etc)</li> </ul> <p>Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to suck in analysis paralysis deciding between available options for production deployment - I looked at Kubernetes, Swarm, docker-compose, container registries and etc.</p> <p>I am looking for a recipe that will keep me productive with a single machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) is more important for now. I want to consider 2 options - when the VM already exists and when a new bare VM can be allocated specifically for this application.</p> <p>I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production and if so, how does the entire process look like from bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment or is it an overkill?</p>
kpax
<blockquote> <p>I wonder if docker-compose is a reasonable choice for a simple web application.</p> </blockquote> <p>It can be, sure, if the development time is best spent focused on the web application and <em>less</em> on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works ok with hot-reloads or port-forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the <strong>work</strong> of creating an application suitable for use in a clustered environment is the work of containerizing the application. So if the app already works under <code>docker-compose</code>, then it is with high likelihood that you can take the docker image that is constructed on behalf of <code>docker-compose</code> and roll it out to the cluster.</p> <blockquote> <p>Do people use it in production</p> </blockquote> <p>I hope not; I am sure there are people who use <code>docker-compose</code> to run in production, just like there are people that use Windows batch files to deploy, but don't be that person.</p> <blockquote> <p>Do people use Kubernetes or Swarm for a simple single-machine deployment or is it an overkill?</p> </blockquote> <p>Similarly, don't be a person that deploys the entire application on a single virtual machine or be mentally prepared for one failure to wipe out everything that you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application, web, queuing, and persistence all in one fell swoop.</p> <p>Now whether deploying kubernetes for your situation is "overkill" or not depends on whether you get benefit from the <em>other</em> things that kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, using a small number of Nodes for a large number of hosted applications (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config file or ports or whatever). I can keep going, because kubernetes is truly magical; but, as many people will point out, it is not zero human cost to successfully run a cluster.</p>
mdaniel
<p>I am fresh to Kubernetes. </p> <p>My understanding of <code>secret</code> is that it encodes information by <code>base64</code>. And from the resources I have seen, it is claimed that <code>secret</code> could protect sensitive information. I do not get this. </p> <p>Besides encoding information with <code>base64</code>, I do not see any real difference between <code>secret</code> and <code>configMap</code>. And we could decode <code>base64</code>-encoded information so easily. That means there is no protection at all... </p> <p>Is my understanding wrong?</p>
Quan Zhou
<p>The thing which protects a <code>Secret</code> is the fact that it is a distinct resource type in kubernetes, and thus can be subject to a different RBAC policy than a <code>ConfigMap</code>.</p> <p>If you are currently able to read <code>Secret</code>s in your cluster, that's because your <code>ClusterRoleBinding</code> (or <code>RoleBinding</code>) has a rule that specifically grants access to those resources. It can be due to you accessing the cluster through its "unauthenticated" port from one of the master Nodes, or due to the [<code>Cluster</code>]<code>RoleBinding</code> attaching your <code>Subject</code> to <code>cluster-admin</code>, which is probably pretty common in hello-world situations, but I would guess less common in production cluster setups.</p> <p>That's the pedantic answer, however, <em>really</em> guarding the secrets contained in a <code>Secret</code> is trickier, given that they are usually exposed to the <code>Pod</code>s through environment injection or a volume mount. That means anyone who has <code>exec</code> access to the <code>Pod</code> can very easily exfiltrate the secret values, so if the secrets are super important, and must be kept even from the team, you'll need to revoke <code>exec</code> access to your <code>Pod</code>s, too. A middle ground may be to grant the team access to <code>Secret</code>s in their own <code>Namespace</code>, but forbid it from other <code>Namespace</code>s. It's security, so there's almost no end to the permutations and special cases.</p>
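<p>That middle ground is just a namespaced <code>Role</code> plus <code>RoleBinding</code>; a sketch with made-up namespace and group names:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-secret-reader
  namespace: team-a
subjects:
- kind: Group
  name: team-a-developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>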
mdaniel
<p>When using <code>kubectl run -ti</code> with an interactive terminal, I would like to be able to pass a few commands in the <code>kubectl run</code> command to be run before the interactive terminal comes up, commands like <code>apt install zip</code> for example. In this way, I do not need to wait for the interactive terminal to come up and then run those common commands. Is there a way do so this?</p> <p>Thanks</p>
imriss
<p>You can use the shell's <code>exec</code> to hand control over from your initial &quot;outer&quot; bash, responsible for doing the initialization steps you want, over to a fresh one (fresh in the sense that it does not have <code>-c</code> and can optionally be a login shell) which runs after your pre-steps:</p> <pre class="lang-sh prettyprint-override"><code>kubectl run sample -it --image=ubuntu:20.04 -- \ bash -c &quot;apt update; apt install -y zip; exec bash -il&quot; </code></pre>
mdaniel
<p>A docker container with a small Python app inside it is deployed to a Kubernetes cluster that has a <code>redis master</code> and a <code>redis slave</code> service running in the cluster. The Python app inside the Docker container is not able to connect to the <code>redis</code> across the cluster because the Python app is not configured properly to find <code>redis</code> on the network. </p> <p>What specific changes need to be made to the code below in order for the Python app in <code>app.py</code> to be able to communicate successfully with the <code>redis</code> running in the same cluster?</p> <h2>PYTHON APP CODE</h2> <p>Here is <code>app.py</code> </p> <pre><code>from flask import Flask from redis import Redis, RedisError import os import socket # Connect to Redis redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2) app = Flask(__name__) @app.route("/") def hello(): try: visits = redis.incr("counter") except RedisError: visits = "&lt;i&gt;cannot connect to Redis, counter disabled&lt;/i&gt;" html = "&lt;h3&gt;Hello {name}!&lt;/h3&gt;" \ "&lt;b&gt;Hostname:&lt;/b&gt; {hostname}&lt;br/&gt;" \ "&lt;b&gt;Visits:&lt;/b&gt; {visits}" return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits) if __name__ == "__main__": app.run(host='0.0.0.0', port=80) </code></pre> <p><hr><strong>REDIS SERVICES IN THE SAME KUBERNETES CLUSTER</strong><hr> </p> <p>The <code>redis master</code> and <code>redis slave</code> running in the cluster are from public registries and are brought into the cluster by running <code>kubectl apply -f</code> with the following JSON: </p> <p>Redis Master replication controller <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-master-controller.json" rel="nofollow noreferrer">JSON from this link.</a><br> Redis Master service <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-master-service.json" rel="nofollow noreferrer">JSON from this link.</a><br> Redis Slave replication controller <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-slave-controller.json" rel="nofollow noreferrer">JSON from this link.</a><br> Redis Slave service <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-slave-service.json" rel="nofollow noreferrer">JSON from this link.</a> </p>
CodeMed
<blockquote> <p>What specific changes need to be made to the code below in order for the python app in app.py to be able to communicate successfully with the redis running in the same cluster?</p> </blockquote> <pre class="lang-py prettyprint-override"><code>redis = Redis(host="redis-master", db=0, socket_connect_timeout=2, socket_timeout=2) </code></pre> <p>because the <code>Service</code> you installed is named <a href="https://github.com/kubernetes/kubernetes/blob/v1.10.3/examples/guestbook-go/redis-master-service.json#L5" rel="nofollow noreferrer"><code>redis-master</code></a>, although that simple change I proposed above <em>assumes</em> that the flask app is running in the same kubernetes namespace as the <code>redis-master</code> <code>Service</code>. If that isn't true, you'll need to switch it to read:</p> <pre><code>redis = Redis(host="redis-master.whatever-namespace.svc.cluster.local", </code></pre> <p>and replace <code>whatever-namespace</code> with the actual, correct, namespace. If you don't remember or know, <code>kubectl get --all-namespaces=true svc | grep redis-master</code> will remind you.</p>
mdaniel
<p>I'm attempting to add a file to the /etc/ directory on an AWX task/web container in kubernetes. I'm fairly new to helm and I'm not sure what I'm doing wrong.</p> <p>The only thing I've added to my helm chart is krb5 key in configmap and an additional volume and volume mount to both task and web container. The krb5.conf file is in charts/mychart/files/</p> <p>ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ include "awx.fullname" . }}-application-config labels: app.kubernetes.io/name: {{ include "awx.name" . }} helm.sh/chart: {{ include "awx.chart" . }} app.kubernetes.io/instance: {{ .Release.Name }} app.kubernetes.io/managed-by: {{ .Release.Service }} data: krb5: |- {{ .Files.Get "krb5.conf"}} secret_key: {{ .Values.awx_secret_key }} awx_settings: | *some stuff* </code></pre> <p>Deployment: </p> <p>Volumes add to bottom of deployment.yaml</p> <pre><code>volumes: - name: {{ include "awx.fullname" . }}-application-config configMap: name: {{ include "awx.fullname" . }}-application-config items: - key: awx_settings path: settings.py - key: secret_key path: SECRET_KEY - name: {{ include "awx.fullname" . }}-application-config-krb5 configMap: name: {{ include "awx.fullname" . }}-application-config items: - key: krb5 path: krb5.conf </code></pre> <p>Volume Mounts add to both task/web container</p> <pre><code> volumeMounts: - mountPath: /etc/tower name: {{ include "awx.fullname" . }}-application-config - mountPath: /etc name: {{ include "awx.fullname" . }}-application-config-krb5 </code></pre> <p>I'm trying to mount a file to the containers in a kubernetes pod and am getting the following error:</p> <pre><code> Warning Failed 40s kubelet, aks-prdnode-18232119-1 Error: failed to start container "web": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/docker/containers/d66044fe204abbf9a4d3772370d0f8d4184e339e59ad9a018f046eade03b8418/resolv.conf\\\" to rootfs \\\"/var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged\\\" at \\\"/var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged/etc/resolv.conf\\\" caused \\\"open /var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged/etc/resolv.conf: read-only file system\\\"\"": unknown </code></pre>
Ian Clark
<p>You'll want to use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#volumemount-v1-core" rel="nofollow noreferrer"><code>subPath:</code></a> option to "reach into" that <code>-application-config-krb5</code> and mount only the one file:</p> <pre><code>- mountPath: /etc/krb5.conf name: {{ include "awx.fullname" . }}-application-config-krb5 subPath: krb5.conf </code></pre> <p>since, as the error correctly points out, you for sure don't want to blow away the <code>/etc</code> directory of almost <em>any</em> container environment (it'll nuke <code>/etc/passwd</code>, <code>/etc/hosts</code>, <code>resolv.conf</code>, and a bazillion other important files)</p>
mdaniel
<p>What happened: Add &quot;USER 999:999&quot; in Dockerfile to add default uid and gid into container image, then start the container in Pod , its UID is 999, but its GID is 0.</p> <p>In container started by Docker the ID is correct</p> <pre><code>docker run --entrypoint /bin/bash -it test bash-5.0$ id uid=9999 gid=9999 groups=9999 </code></pre> <p>But start as Pod, the gid is 0</p> <pre><code>kubectl exec -it test /bin/bash bash-5.0$ id uid=9999 gid=0(root) groups=0(root) bash-5.0$ bash-5.0$ cat /etc/passwd root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin daemon:x:2:2:daemon:/sbin:/sbin/nologin adm:x:3:4:adm:/var/adm:/sbin/nologin lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin sync:x:5:0:sync:/sbin:/bin/sync shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown halt:x:7:0:halt:/sbin:/sbin/halt mail:x:8:12:mail:/var/spool/mail:/sbin/nologin operator:x:11:0:operator:/root:/sbin/nologin games:x:12:100:games:/usr/games:/sbin/nologin ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin systemd-coredump:x:200:200:systemd Core Dumper:/:/sbin/nologin systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin dbus:x:81:81:System message bus:/:/sbin/nologin </code></pre> <p>If Dockerfile run extra &quot;useradd&quot; command , then it seems the gid is ok in Pod</p> <pre><code>RUN useradd -r -u 9999 -d /dev/null -s /sbin/nologin abc USER 9999:9999 </code></pre> <p>then the ID in container of Pod is the same as set in Dockerfile</p> <pre><code>bash-5.0$ id uid=9999(abc) gid=9999(abc) groups=9999(abc) </code></pre> <p>What you expected to happen: the GID of container in Pod should also 999</p> <p>How to reproduce it (as minimally and precisely as possible): Dockerfile add &quot;USER 999:999&quot; Then start the container in Pod</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: test imagePullPolicy: Never command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;trap : TERM INT; sleep infinity &amp; wait&quot;] </code></pre> <p>Environment:</p> <pre><code>Kubernetes version (use kubectl version): Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17&quot;, GitVersion:&quot;v1.17.3&quot;, GitCommit:&quot;06ad960bfd03b39c8310aaf92d1e7c12ce618213&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-02-11T18:14:22Z&quot;, GoVersion:&quot;go1.13.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17&quot;, GitVersion:&quot;v1.17.3&quot;, GitCommit:&quot;06ad960bfd03b39c8310aaf92d1e7c12ce618213&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-02-11T18:07:13Z&quot;, GoVersion:&quot;go1.13.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} OS (e.g: cat /etc/os-release): Fedora release 30 (Thirty) </code></pre> <p>docker version Client: Version: 18.09.9 API version: 1.39 Go version: go1.11.13 Git commit: 039a7df9ba Built: Wed Sep 4 16:52:09 2019 OS/Arch: linux/amd64 Experimental: false</p> <p>Server: Docker Engine - Community Engine: Version: 18.09.9 API version: 1.39 (minimum version 1.12) Go version: go1.11.13 Git commit: 039a7df Built: Wed Sep 4 16:22:32 2019 OS/Arch: linux/amd64 Experimental: false</p>
Huang Shujun
<p>I realize this isn't what you asked, but since I don't know why the <code>USER</code> directive isn't honored, I'll point out that you have explicit influence over the UID and GID used by your Pod via the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#podsecuritycontext-v1-core" rel="nofollow noreferrer"><code>securityContext:</code></a></p> <pre class="lang-yaml prettyprint-override"><code>spec: securityContext: runAsUser: 999 runAsGroup: 999 containers: - ... </code></pre>
mdaniel
<p>I'm new to Kubernetes and Rancher. I have builde node docker image with below commands:</p> <pre><code>FROM node:10 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY package.json /usr/src/app RUN npm cache clean RUN npm install COPY . /usr/src/app EXPOSE 3000 CMD ["npm","start"] </code></pre> <p>I have put docker image to my repo on docker hub. From Docker hub I'm pulling same image on Rancher/Kubernetes its showing as it as in Active state, as shown below:</p> <blockquote> <p>kubectl get svc -n nodejs</p> </blockquote> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE node-front-end ClusterIP 10.43.14.96 &lt;none&gt; 49160/TCP 21m node-front-end-nodeport NodePort 10.43.171.52 &lt;none&gt; 49160:31366/TCP 21m </code></pre> <p>But when I'm trying with above IP and Port it's giving message : "This site can’t be reached"</p> <p>So i'm not able to understand what I'm doing wrong here. </p> <p>Please guide.</p>
Vikas Banage
<blockquote> <p>But when I'm trying with above IP and Port it's giving message : "This site can’t be reached"</p> </blockquote> <p>Correct, those <code>ClusterIP</code>s are "virtual," in that they exist only inside the cluster. The address you will want to use is <em>any</em> of the <code>Node</code>'s IP addresses, and then the port <code>:31366</code> listed there in the <code>Service</code> of type <code>NodePort</code>.</p> <p>Just in case you don't already know them, one can usually find the IP address of the Nodes with <code>kubectl get -o wide nodes</code>.</p>
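<p>In other words (the address below is whichever one <code>kubectl get -o wide nodes</code> reports for your cluster):</p> <pre class="lang-sh prettyprint-override"><code># list the Node addresses
kubectl get -o wide nodes
# then hit the NodePort on any of them
curl http://&lt;any-node-ip&gt;:31366/
</code></pre>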
mdaniel
<p>I'm kinda new to the kubernetes technology, sorry if I'm asking something really dumb. I've been trying to install orangehrm with helm, with no major problems actually and the http works fine but when I try to acces through the https url, it shows me the error of bad request.</p> <p>It's been installed with a modify value.yaml for the db configuration and also for user and password to login. But the rest is just as the github repositoy is. Secret and login were set apart in my kubernetes configuration from this value.yaml file because the secret wasn't working.</p> <pre><code>image: registry: docker.io repository: bitnami/orangehrm tag: 4.3.1-0-debian-9-r8 pullPolicy: IfNotPresent orangehrmUsername: admin orangehrmPassword: admin externalDatabase: host: [REDACTED] user: [REDACTED] password: [REDACTED] database: [REDACTED] mariadb: enabled: false replication: enabled: true db: name: orangehrm user: [REDACTED] password: [REDACTED] master: persistence: enabled: true accessMode: ReadWriteOnce size: 8Gi service: type: NodePort port: 80 httpsPort: 443 nodePorts: http: "" https: "" externalTrafficPolicy: Cluster persistence: enabled: true orangehrm: storageClass: slow accessMode: ReadWriteOnce size: 8Gi apache: storageClass: slow accesMod: ReadWriteOnce size: 16Gi resources: requests: memory: 512Mi cpu: 300m podAnnotations: {} ingress: enabled: true certManager: false annotations: kubernetes.io/ingress.class: nginx hosts: - name: [REDACTED].com path: / tls: false tlsSecret: orangehrm-orangehrm secrets: metrics: enabled: false image: registry: docker.io repository: lusotycoon/apache-exporter tag: v0.5.0 pullPolicy: IfNotPresent podAnnotations: prometheus.io/scrape: "true" prometheus.io/port: "9117" </code></pre> <blockquote> <p>Bad Request</p> <p>Your browser sent a request that this server could not understand. Reason: >You're speaking plain HTTP to an SSL-enabled server port.</p> </blockquote> <p><strong>curl -v output</strong></p> <pre><code>* About to connect() to orangehrm.[REDACTED].com port 443 (#0) * Trying 192.168.20.250... 
* Connected to orangehrm.[REDACTED].com ([REDACTED]) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=orangehrm.[REDACTED].com,O=Internet Widgits Pty Ltd,ST=Some-State,C=AU * start date: Jun 07 13:01:54 2019 GMT * expire date: Jun 04 13:01:54 2029 GMT * common name: orangehrm.[REDACTED].com * issuer: O=[REDACTED],L=C.A.B.A.,ST=Buenos Aires,C=AR &gt; GET / HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: orangehrm.[REDACTED].com &gt; Accept: */* &gt; &lt; HTTP/1.1 400 Bad Request &lt; Server: nginx/1.15.8 &lt; Date: Wed, 12 Jun 2019 13:49:43 GMT &lt; Content-Type: text/html; charset=iso-8859-1 &lt; Content-Length: 362 &lt; Connection: keep-alive &lt; Strict-Transport-Security: max-age=15724800; includeSubDomains &lt; &lt;!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"&gt; &lt;html&gt;&lt;head&gt; &lt;title&gt;400 Bad Request&lt;/title&gt; &lt;/head&gt;&lt;body&gt; &lt;h1&gt;Bad Request&lt;/h1&gt; &lt;p&gt;Your browser sent a request that this server could not understand.&lt;br /&gt; Reason: You're speaking plain HTTP to an SSL-enabled server port.&lt;br /&gt; Instead use the HTTPS scheme to access this URL, please.&lt;br /&gt; &lt;/p&gt; &lt;/body&gt;&lt;/html&gt; * Connection #0 to host orangehrm.[REDACTED].com left intact </code></pre> <p><strong>kubectl get -o yaml pods -l chart output:</strong></p> <pre><code>apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: creationTimestamp: "2019-06-12T13:41:42Z" generateName: orangehrm-orangehrm-76dfdf78f4- labels: app: orangehrm-orangehrm chart: orangehrm-4.1.0 pod-template-hash: 76dfdf78f4 release: orangehrm name: orangehrm-orangehrm-76dfdf78f4-hdnj9 namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: orangehrm-orangehrm-76dfdf78f4 uid: d02765de-8d17-11e9-88b3-00155d00973f resourceVersion: "19055796" selfLink: /api/v1/namespaces/default/pods/orangehrm-orangehrm-76dfdf78f4-hdnj9 uid: d04480cd-8d17-11e9-88b3-00155d00973f spec: containers: - env: - name: ALLOW_EMPTY_PASSWORD value: "yes" - name: MARIADB_HOST value: 192.168.0.132 - name: MARIADB_PORT_NUMBER value: "3306" - name: ORANGEHRM_DATABASE_NAME value: orangehrm - name: ORANGEHRM_DATABASE_USER value: orangehrm_user - name: ORANGEHRM_DATABASE_PASSWORD valueFrom: secretKeyRef: key: db-password name: orangehrm-externaldb - name: ORANGEHRM_USERNAME value: admin - name: ORANGEHRM_PASSWORD valueFrom: secretKeyRef: key: orangehrm-password name: orangehrm-orangehrm - name: SMTP_HOST - name: SMTP_PORT - name: SMTP_USER - name: SMTP_PASSWORD valueFrom: secretKeyRef: key: smtp-password name: orangehrm-orangehrm - name: SMTP_PROTOCOL value: none image: docker.io/bitnami/orangehrm:4.3.0-0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /symfony/web/index.php port: http scheme: HTTP initialDelaySeconds: 120 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: orangehrm-orangehrm ports: - containerPort: 80 name: http protocol: TCP - containerPort: 443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /symfony/web/index.php port: http scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 300m memory: 512Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/orangehrm name: 
orangehrm-data - mountPath: /bitnami/apache name: apache-data - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-r2gbm readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostAliases: - hostnames: - status.localhost ip: 127.0.0.1 nodeName: l004 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: orangehrm-data persistentVolumeClaim: claimName: orangehrm-orangehrm-orangehrm - name: apache-data persistentVolumeClaim: claimName: orangehrm-orangehrm-apache - name: default-token-r2gbm secret: defaultMode: 420 secretName: default-token-r2gbm status: conditions: - lastProbeTime: null lastTransitionTime: "2019-06-12T13:41:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-06-12T13:42:52Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-06-12T13:42:52Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-06-12T13:41:42Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://725ddef8da29d353006996d95b248f4ee5cea0bed2542350fc7d63d4dfb0fecb image: bitnami/orangehrm:4.3.0-0 imageID: docker-pullable://bitnami/orangehrm@sha256:2f0bd90d975a22c7a6237c6fd86c7939df856cf74edd8dcf839df440a5c62606 lastState: {} name: orangehrm-orangehrm ready: true restartCount: 0 state: running: startedAt: "2019-06-12T13:41:50Z" hostIP: 192.168.0.137 phase: Running podIP: 10.40.0.65 qosClass: Burstable startTime: "2019-06-12T13:41:49Z" kind: List metadata: resourceVersion: "" selfLink: "" </code></pre> <p><strong>Pod startup log</strong></p> <pre><code>Welcome to the Bitnami orangehrm container Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-orangehrm Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-orangehrm/issues nami INFO Initializing apache apache INFO ==&gt; Patching httpoxy... apache INFO ==&gt; Configuring dummy certificates... nami INFO apache successfully initialized nami INFO Initializing php nami INFO php successfully initialized nami INFO Initializing mysql-client nami INFO mysql-client successfully initialized nami INFO Initializing libphp nami INFO libphp successfully initialized nami INFO Initializing orangehrm orangeh INFO Configuring permissions orangeh INFO Creating the database... mysql-c INFO Trying to connect to MySQL server mysql-c INFO Found MySQL server listening at 192.168.0.132:3306 mysql-c INFO MySQL server listening and working at 192.168.0.132:3306 orangeh INFO Preparing webserver environment... orangeh INFO Passing wizard, please be patient orangeh INFO Configuring SMTP... orangeh INFO Setting OrangeHRM version... 
orangeh INFO orangeh INFO ######################################################################## orangeh INFO Installation parameters for orangehrm: orangeh INFO Username: admin orangeh INFO Password: ********** orangeh INFO Site URL: http://127.0.0.1/ orangeh INFO (Passwords are not shown for security reasons) orangeh INFO ######################################################################## orangeh INFO nami INFO orangehrm successfully initialized </code></pre> <p>I have a nginx loadbalancer which Ingress is this:</p> <pre><code> apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-body-size: "0" name: https spec: rules: - host: orangehrm.[REDACTED].com http: paths: - backend: serviceName: orangehrm-orangehrm servicePort: 443 path: / # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - orangehrm.[REDACTED].com secretName: orangehrm-https </code></pre>
agustinlare
<p>As best I can tell, you are terminating TLS at the Ingress controller, which is then proxying upstream as HTTP but <em>on port 443</em>; so you'll want to update your Ingress to say <code>servicePort: 80</code> not <code>:443</code></p> <p>If you really want to connect TLS all the way through to the Pod, you'll need to either <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">enable SSL passthrough</a> or perhaps switch to use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">the HTTPS backend</a></p>
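<p>For illustration, the first option applied to the manifest from the question would look roughly like this — only <code>servicePort</code> changes, everything else is kept as you posted it:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  name: https
spec:
  rules:
  - host: orangehrm.[REDACTED].com
    http:
      paths:
      - backend:
          serviceName: orangehrm-orangehrm
          servicePort: 80   # plain HTTP upstream; TLS terminates at the Ingress
        path: /
  tls:
  - hosts:
    - orangehrm.[REDACTED].com
    secretName: orangehrm-https
</code></pre>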
mdaniel
<p>I understand that we can expose the serive as loadbalancer. </p> <pre><code>kubectl expose deployment hello-world --type=LoadBalancer --name=my-service </code></pre> <pre><code>kubectl get services my-service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s </code></pre> <pre><code> Namespace: default Labels: app.kubernetes.io/name=load-balancer-example Annotations: &lt;none&gt; Selector: app.kubernetes.io/name=load-balancer-example Type: LoadBalancer IP: 10.3.245.137 LoadBalancer Ingress: 104.198.205.71 </code></pre> <p>I have created a static IP.</p> <p>Is it possible to replace the <strong>LoadBalancer Ingress</strong> with static IP? </p>
Sunil Gajula
<p>tl;dr = yes, but trying to edit the IP in that <code>Service</code> resource won't do what you expect -- it's just reporting the current state of the world to you</p> <blockquote> <p>Is it possible to replace the LoadBalancer Ingress with static IP? </p> </blockquote> <p>First, the LoadBalancer is whatever your cloud provider created when kubernetes asked it to create one; you have <a href="https://kubernetes.io/docs/concepts/services-networking/service/#other-elb-annotations" rel="nofollow noreferrer">a lot of annotations</a> (that one is for AWS, but there should be ones for your cloud provider, too) that influence the creation, and it appears <a href="https://github.com/kubernetes/kubernetes/blob/v1.18.4/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L212" rel="nofollow noreferrer">EIPs for NLBs</a> is one of them, but I doubt that does what you're asking</p> <p>Second, the <code>type: LoadBalancer</code> is merely convenience -- it's not required to expose your <code>Service</code> outside of the cluster. It's a replacement for creating a <code>Service</code> of <code>type: NodePort</code>, then creating an external load balancer resource, associating all the Nodes in your cluster with that load balancer, pointing to the NodePort on the Node to get traffic from the outside world into the cluster. If you already have a static IP-ed load balancer, you can update its registration to point to the NodePort allocations for your existing <code>my-service</code> and you'll be back in business</p>
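<p>A minimal sketch of that second approach (the deployment name <code>hello-world</code> is taken from your example; the rest is plain kubectl):</p> <pre><code># expose the deployment on a NodePort instead of a LoadBalancer
kubectl expose deployment hello-world --type=NodePort --name=my-service

# find out which port was allocated on every Node
kubectl get service my-service -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre> <p>You would then point your existing static-IP load balancer at <code>&lt;node-ip&gt;:&lt;that-node-port&gt;</code> for each Node in the cluster.</p>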
mdaniel
<p>I have a stateful set for MQ, exposed two ports 1414 for TCP and 9443 for HTTPS and created service of type Loadbalancer. 1414 for TCP is working fine, able to telnet from other PODs in the same cluster using service name/cluster IP..also able to connect 1414 from outside GKE cluster. </p> <p>But the problem is port 9443 is not accessible from other POD in the cluster (service name/cluster IP) or outside the cluster (external IP). The telnet is working fine when exec to the POD and test locally.. telnet 127.0.01 9443</p> <p>Is there any configuration missing for HTTPS service.. </p> <p>Note: Port forward is working fine and able to connect to the API. kubectl port-forward svc/mq-qmdtest 9443:9443</p> <p>Service Definition </p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{.Values.name}} namespace: {{.Values.namespace}} annotations: cloud.google.com/load-balancer-type: "Internal" labels : run: {{.Values.name}} spec: type: LoadBalancer loadBalancerIP: {{.Values.loadBalancerIP}} ports: - name: webui port: 9443 protocol: TCP - name: mq port: 1414 protocol: TCP selector: run: {{.Values.name}} </code></pre> <p>Stateful Set – Container port configuration </p> <pre><code> ports: - containerPort: 9443 protocol: TCP name: webui - containerPort: 1414 protocol: TCP name: mq </code></pre>
Binix John
<blockquote> <p>The telnet is working fine when exec to the POD and test locally.. telnet 127.0.01 9443 ... Port forward is working fine and able to connect to the API. kubectl port-forward svc/mq-qmdtest 9443:9443</p> </blockquote> <p>Is almost certainly caused by the pod only listening on localhost; <code>port-forward</code> also engages with localhost, so the fact that you cannot reach it from other pods in the cluster but you can from itself and you can from port-forward means the service is only listening for <em>local</em> connections.</p> <p>Without knowing more about the software I can't offer you a "open this file, change this value" type instructions, but be on the lookout for "bind host" or any "listen" configuration that would accept both a host and a port, and in that case set the "bind host" to <code>0.0.0.0</code> or the "listen" configuration to <code>0.0.0.0:9443</code></p>
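<p>A quick way to confirm that theory from inside the Pod, assuming the image ships one of these tools:</p> <pre><code>kubectl exec -it &lt;pod-name&gt; -- ss -tln
# or, if ss is not available:
kubectl exec -it &lt;pod-name&gt; -- netstat -tln
</code></pre> <p>If 9443 shows up as <code>127.0.0.1:9443</code> rather than <code>0.0.0.0:9443</code> (or <code>*:9443</code>), the web UI is only bound to loopback, which matches the symptoms you describe.</p>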
mdaniel
<p>I follow <a href="https://medium.freecodecamp.org/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882" rel="nofollow noreferrer">this</a> tutorial about Kubernetes.</p> <p>I got to the part, which guide me to run:</p> <pre><code>minikube service sa-frontend-lb </code></pre> <p>(I used sudo to run it, because if I don't use sudo it ask me to use sudo).</p> <p>I get those following errors:</p> <pre><code>Opening kubernetes service default/sa-frontend-lb in default browser... No protocol specified No protocol specified (firefox:4538): Gtk-WARNING **: 22:07:38.395: cannot open display: :0 /usr/bin/xdg-open: line 881: x-www-browser: command not found No protocol specified No protocol specified (firefox:4633): Gtk-WARNING **: 22:07:39.112: cannot open display: :0 /usr/bin/xdg-open: line 881: iceweasel: command not found /usr/bin/xdg-open: line 881: seamonkey: command not found /usr/bin/xdg-open: line 881: mozilla: command not found No protocol specified Unable to init server: Could not connect: Connection refused Failed to parse arguments: Cannot open display: /usr/bin/xdg-open: line 881: konqueror: command not found /usr/bin/xdg-open: line 881: chromium: command not found [4749:4749:0805/220740.485576:ERROR:zygote_host_impl_linux.cc(88)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180. [4757:4757:0805/220740.725100:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180. /usr/bin/xdg-open: line 881: www-browser: command not found /usr/bin/xdg-open: line 881: links2: command not found /usr/bin/xdg-open: line 881: elinks: command not found /usr/bin/xdg-open: line 881: links: command not found </code></pre> <p>I installed chromium and xdg-utils, but neither works.</p> <p>How can I enter to the service, to see that it works?</p>
Yagel
<p>As one can see, it <em>is</em> attempting to launch a browser, but there are none installed that it recognizes, except for what I would <em>guess</em> is Chrome <em>(since one can see that "chromium" did not work out)</em>, and as the message indicates, it doesn't tolerate running as <code>root</code>.</p> <p>In that case, what you want is actually:</p> <pre><code>minikube service --url sa-frontend-lb </code></pre> <p>which causes <code>minikube</code> to <em>print</em> the URL rather than attempting to use <a href="https://github.com/pkg/browser/blob/master/browser_linux.go#L4" rel="noreferrer">xdg-open</a> to launch a browser.</p>
mdaniel
<p>I have created an HTTP server from the directory using this command:</p> <pre><code>python -c 'import BaseHTTPServer as bhs, SimpleHTTPServer as shs; bhs.HTTPServer(("0.0.0.0", 8089), shs.SimpleHTTPRequestHandler).serve_forever()' </code></pre> <p>I use Spark-K8s operator for the Spark application execution. As main file I want to use jar file stored in the directory (from which I create http server). However, I do not know to which host I should map my directory so it could be used by the Spark application running through the deployment. When I run it with a current host I get this connection error:</p> <pre><code>java.net.ConnectException: Connection refused </code></pre> <p>Basically, I have HTTP server which refers to specified host and port and I want to run this jar file using Spark on top of K8s. How can I define this host?</p> <p>For now application definition looks like this:</p> <pre><code>apiVersion: sparkoperator.k8s.io/v1alpha1 kind: SparkApplication metadata: name: spark-example namespace: default spec: type: Scala image: gcr.io/spark-operator/spark:v2.4.0 mainClass: org.apache.spark.examples.SparkExample mainApplicationFile: https://0.0.0.0:8089/spark_k8s_airflow.jar mode: cluster deps: {} driver: coreLimit: 1000m cores: 0.1 labels: version: 2.4.0 memory: 1024m serviceAccount: intended-mink-spark executor: cores: 1 instances: 1 labels: version: 2.4.0 memory: 1024m imagePullPolicy: Never </code></pre>
Cassie
<blockquote> <p>Basically, I have HTTP server which refers to specified host and port and I want to run this jar file using Spark on top of K8s. How can I define this host?</p> </blockquote> <p>The kubernetes way of doing that is via a <code>Service</code>, which by default creates a DNS entry of the form <code>service-name.service-namespace.svc.cluster.local</code> where <code>service-name</code> and <code>service-namespace</code> are not those literal words, but the other 3 are literally that. However, if you just want to play around, and creating a <code>Service</code> is too much work, then you can use the current IP of the Pod in which your <code>SimpleHTTPServer</code> is running.</p> <blockquote> <p><code>mainApplicationFile: https://0.0.0.0:8089/spark_k8s_airflow.jar</code></p> </blockquote> <p>Be aware that, at least as you have written the python example above, you cannot use <code>https:</code> since <code>SimpleHTTPServer</code> is just that <strong>HTTP</strong>. It's possible you can convince one of the built-in packages to serve https, but it'll be a lot more typing and arguably not worth the effort</p>
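<p>If you do go the <code>Service</code> route, a minimal sketch would be something like the following — the <code>app: jar-server</code> label is an assumption, so adjust the selector to whatever labels the Pod running your Python server actually carries:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: jar-server
spec:
  selector:
    app: jar-server
  ports:
  - port: 8089
    targetPort: 8089
</code></pre> <p>after which the SparkApplication could reference <code>mainApplicationFile: http://jar-server.default.svc.cluster.local:8089/spark_k8s_airflow.jar</code> (note <code>http:</code>, not <code>https:</code>).</p>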
mdaniel
<p>I am writing an automated test that will run everytime a cluster is created. As part of this test, I need to read the content of a file inside a pod and verify a specific string exists or not. How can I achieve this?</p> <p>Currently we are using Gem Train Kubernetes. It is capable of verifying that the pod exists or not. However, how can I verify the content inside a file in this pod?</p> <p>If there is another tool or test suite that I can use, I don't mind using that. I just need some pointers. Is this possible via inspec?</p>
Faraz
<p>Using the mechanisms provided by <code>kubectl</code> are going to be the least amount of drama, but I have no idea what Gem Train Kubernetes is in order to speak to that specifically</p> <p>You can grep for the content in the Pod, if it has a shell and grep available:</p> <pre><code>kubectl exec $the_pod_name -- sh -c 'grep TheStringYouWant /the/path/you/are/testing' </code></pre> <p>or you can copy the file off of the Pod if you need to do something more complex and the file isn't too big:</p> <pre><code>kubectl cp ${the_pod_name}:/the/path/in/the/pod ./to/local </code></pre>
mdaniel
<p>I setup a docker registry using the <a href="https://github.com/helm/charts/tree/master/stable/docker-registry" rel="nofollow noreferrer">official helm chart</a> on my k8s cluster. I tried configuring notifications for my registry as per <a href="https://docs.docker.com/registry/notifications/#configuration" rel="nofollow noreferrer">the docs</a>, as follows:</p> <pre><code>apiVersion: v1 data: config.yml: |- health: storagedriver: enabled: true interval: 10s threshold: 3 http: addr: :5000 headers: X-Content-Type-Options: - nosniff notifications: endpoints: - name: keel url: http://keel.example.com/v1/webhooks/registry headers: Content-Type: application/json timeout: 500ms threshold: 5 backoff: 1s log: fields: service: registry storage: cache: blobdescriptor: inmemory version: 0.1 kind: ConfigMap </code></pre> <p>After changing the config to include notifications, the registry fails to start as it does not recognize the configuration. I get this error:</p> <pre><code>configuration error: error parsing /etc/docker/registry/config.yml: yaml: unmarshal errors: line 16: cannot unmarshal !!str `applica...` into []string Usage: registry serve &lt;config&gt; [flags] Flags: -h, --help=false: help for serve Additional help topics: </code></pre>
Badri
<p>You missed the yaml character <code>[</code> in their docs (which I freely admit is a terrible example, given that <code>[</code> is often used in documentation as "placeholder goes here"), since in yaml it is the character that turns an item into a list -- just like in JSON, from which YAML draws its inspiration</p> <p>But, that aside, the <code>cannot unmarshal str into []string</code> should have been a dead giveaway that they were expecting an <em>array</em> of strings for the header:</p> <pre><code>headers: Content-Type: - application/json </code></pre> <p>or, using the syntax of their <strong>terrible</strong> example:</p> <pre><code>headers: Content-Type: [application/json] </code></pre> <p>To follow up, the <a href="https://docs.docker.com/registry/configuration/#endpoints" rel="nofollow noreferrer"><code>endpoints:</code> reference docs</a> <em>also</em> point out that:</p> <blockquote> <p>A list of static headers to add to each request. Each header's name is a key beneath headers, and each value is a <strong>list of payloads</strong> for that header name. <strong>Values must always be lists</strong>.</p> </blockquote> <p><em>(emphasis is mine)</em></p>
mdaniel
<p>I have installed kube v1.11; since heapster is deprecated I am using metrics-server. The <code>kubectl top node</code> command works.</p> <p>The Kubernetes dashboard is looking for the heapster service. What are the steps to configure the dashboard to use the metrics-server service?</p> <pre><code>2018/08/09 21:13:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. </code></pre> <p>Thanks SR</p>
sfgroups
<p>This must be the week for asking that question; it seems that whatever is deploying heapster is omitting the <code>Service</code>, which one can fix <a href="https://stackoverflow.com/questions/51641057/how-to-get-the-resource-usage-of-a-pod-in-kubernetes/51645461#comment90363934_51645461">as described here</a> -- or the tl;dr is just: create the <code>Service</code> named <code>heapster</code> and point it at your heapster pods.</p>
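<p>For reference, such a <code>Service</code> would look roughly like the sketch below — the <code>k8s-app: heapster</code> selector and the 8082 target port are assumptions based on the usual heapster deployment, so match them to whatever labels and port your heapster pods actually use:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: heapster
  namespace: kube-system
spec:
  selector:
    k8s-app: heapster
  ports:
  - port: 80
    targetPort: 8082
</code></pre>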
mdaniel
<p>Hello, I'm trying to create a containerized file-system management tool.</p> <p>Is there a way I can list files on my host node's file system from within my pod/container?</p>
A. Campagna
<p>Yes, if you have a <code>volumeMount</code> pointing to the <code>hostPath:</code> volume, although I can't say with high certainty it will do the right thing with regard to other bind mounts on the host:</p> <pre><code>spec: containers: - name: looker volumeMounts: - name: host mountPath: /host-fs volumes: - name: host hostPath: path: / </code></pre>
mdaniel
<p>When performing a new rollout via the RollingUpdate strategy, I have found that the main bottleneck in terms of duration is the creation of the new nodes (by the Cluster Autoscaler) needed to accommodate the simultaneous presence of old and new pods.</p> <p>Although tweaking the <code>.spec.strategy.rollingUpdate.maxUnavailable</code> and <code>.spec.strategy.rollingUpdate.maxSurge</code> values can mitigate the side effects somewhat, I see that if I proactively (and manually, for the time being) spin up new nodes, the duration of the rollout drops dramatically.</p> <p>Are there any off-the-shelf tools that perform this kind of task?</p> <p>If not, any recommended strategy to go about this would be highly appreciated.</p>
pkaramol
<p>If the cluster autoscaler is just acting on "simple" resource requests, such as CPU and memory, you could launch a couple of "warm-up" Pods at the beginning of your CI process, which will give the autoscaler a "head's up" that a new deployment is coming very soon.</p> <p>Something like <code>kubectl run expensive-sleep --image=busybox:latest --requests=memory=32Gi --restart=Never -- sleep 300</code> which would poke the autoscaler, then the placeholder Pod would exit, but the autoscaler usually does not scale Nodes down immediately, so you would have some freshly provisioned Nodes waiting for the actual rollout of your Deployment.</p> <p>If the autoscaler is making more complicated decisions, such as GPUs, taints/tolerations, availability zones and whathaveyou, then the trick may require more than just a <code>kubectl run</code>, but I believe the underlying idea is plausible</p>
mdaniel
<p>We have a setup, where Metricbeat is deployed as a DaemonSet on a Kubernetes cluster (specifically -- AWS EKS).</p> <p>All seems to be functioning properly, but the <strong>kubelet</strong> connection.</p> <p>To clarify, the following module:</p> <pre><code>- module: kubernetes enabled: true metricsets: - state_pod period: 10s hosts: ["kube-state-metrics.system:8080"] </code></pre> <p>works properly (the events flow into logstash/elastic).</p> <p>This module configuration, however, doesn't work in any variants of hosts value (<code>localhost</code>/<code>kubernetes.default</code>/whatever):</p> <pre><code>- module: kubernetes period: 10s metricsets: - pod hosts: ["localhost:10255"] enabled: true add_metadata: true in_cluster: true </code></pre> <blockquote> <p>NOTE: using cluster IP instead of localhost (so that it goes to control plane) also works (although doesn't retrieve the needed information, of course).</p> <p>The configuration above was taken directly from the Metricbeat documentation and immediately struck me as odd -- how does localhost get translated (from within Metricbeat docker) to corresponding kubelet?</p> </blockquote> <p>The error is, as one would expect, in light of the above:</p> <pre><code>error making http request: Get http://localhost:10255/stats/summary: dial tcp [::1]:10255: connect: cannot assign requested address </code></pre> <p>which indicates some sort of connectivity issue.</p> <p>However, when SSH-ing to any node Metricbeat is deployed on, <code>http://localhost:10255/stats/summary</code> provides the correct output:</p> <pre><code>{ "node": { "nodeName": "...", "systemContainers": [ { "name": "pods", "startTime": "2018-12-06T11:22:07Z", "cpu": { "time": "2018-12-23T06:54:06Z", ... }, "memory": { "time": "2018-12-23T06:54:06Z", "availableBytes": 17882275840, .... </code></pre> <p>I must be missing something very obvious. Any suggestion would do.</p> <p>NOTE: I cross-posted (and got no response for a couple of days) the same on <a href="https://discuss.elastic.co/t/metricbeat-kubernetes-module-cant-connect-to-kubelet/161939" rel="nofollow noreferrer">Elasticsearch Forums</a></p>
ZenMaster
<p>Inject the Pod's Node's IP via the <code>valueFrom</code> provider in the <code>env:</code> list:</p> <pre><code>env: - name: HOST_IP valueFrom: fieldRef: fieldPath: status.hostIP </code></pre> <p>and then update the metricbeat config file to use the host's IP:</p> <pre><code>hosts: ["${HOST_IP}:10255"] </code></pre> <p>which metricbeat will resolve via its <a href="https://www.elastic.co/guide/en/beats/libbeat/6.5/config-file-format-env-vars.html#config-file-format-env-vars" rel="noreferrer">environment variable config injection</a></p>
mdaniel
<p>I have setup Kubernetes 1.15.3 cluster on Centos 7 OS using systemd cgroupfs. on all my nodes syslog started logging this message frequently.</p> <p>How to fix this error message?</p> <p><code>kubelet: W0907 watcher.go:87 Error while processing event (&quot;/sys/fs/cgroup/memory/libcontainer_10010_systemd_test_default.slice&quot;: 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory</code></p> <p>Thanks</p>
sfgroups
<p>It's a <a href="https://github.com/kubernetes/kubernetes/issues/76531" rel="nofollow noreferrer">known issue</a> with a bad interaction with <code>runc</code>; someone observed it is actually <em>caused</em> by <a href="https://github.com/kubernetes/kubernetes/issues/76531#issuecomment-522286281" rel="nofollow noreferrer">a repeated etcd health check</a> but that wasn't my experience on Ubuntu, which exhibits that same behavior on <em>every</em> Node</p> <p>They allege that updating the <code>runc</code> binary on your hosts will make the problem go away, but I haven't tried that myself</p>
mdaniel
<p>I have a playbook with kubectl command, when I want to run this command it cannot avoid quotes and understand this directory not exist</p> <pre><code>--- - hosts: localhost vars_files: - vars/main.yaml tasks: - shell: cmd: | kubectl exec -it -n {{ namespace }} {{ pod_name }} -- bash -c \"clickhouse-client --query "INSERT INTO customer FORMAT CSV" --user=test --password=test &lt; /mnt/azure/azure/test/test.tbl\" register: output2 </code></pre> <p>Here is the error:</p> <pre><code>fatal: [127.0.0.1]: FAILED! =&gt; { "changed": true, "cmd": "kubectl exec -it -n ch-test04 chi-test-dashboard-sharded1-dashboard03-3-0-0 -- bash -c \\\"clickhouse-client --query \"INSERT INTO customer FORMAT CSV\" --user=test --password=test &lt; mnt/azure/azure/test/test.tbl\\\"\n", "delta": "0:00:00.002088", "end": "2020-04-23 13:30:00.456263", "invocation": { "module_args": { "_raw_params": "kubectl exec -it -n ch-test04 chi-test-dashboard-sharded1-dashboard03-3-0-0 -- bash -c \\\"clickhouse-client --query \"INSERT INTO customer FORMAT CSV\" --user=test --password=test &lt; mnt/azure/azure/test/test.tbl\\\"\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "msg": "non-zero return code", "rc": 2, "start": "2020-04-23 13:30:00.454175", "stderr": "/bin/sh: 1: cannot open mnt/azure/azure/test/test.tbl\": No such file", "stderr_lines": [ "/bin/sh: 1: cannot open mnt/azure/azure/test/test.tbl\": No such file" ], "stdout": "", "stdout_lines": [] } </code></pre> <p>So when I put this command in a python script ansible still manuplate quotes and got the same error. I already tried escape/quote but I think the problem is when I use '&lt;' character after query where from insert data and ansible cannot understand entire command not finished yet. But I am not sure how can I tell with the correct way.Thanks</p>
Bora Özkan
<p>You quoted the wrong characters; you want the <strong>interior</strong> quotes to be escaped, or sidestep that entire mess and use alternate characters for the outer from the inner:</p> <pre class="lang-yaml prettyprint-override"><code>- shell: | kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV" --user=test --password=test &lt; /mnt/azure/azure/test/test.tbl' </code></pre>
mdaniel
<p>I deployed a Kubernetes LoadBalancer service for nginx on Google Cloud.</p> <pre><code>$kubectl expose deployments nginx --port 80 --type LoadBalancer $kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.31.240.1 &lt;none&gt; 443/TCP 9m nginx LoadBalancer 10.31.253.32 35.188.14.5 80:30787/TCP 1m </code></pre> <p>Notice the nginx service shows the ports 80 and 30787. What do these two ports mean?</p>
drdot
<blockquote> <p>Notice the nginx port has 80 and 30787. What does these two ports mean?</p> </blockquote> <p>A <code>kubectl describe service nginx</code> would likely be more explanatory, but the tl;dr is that 80 is the port from inside the cluster, and 30787 is the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> pointing to port 80 of that Service. The <code>NodePort</code> is required because in order for whatever load balancer is running on <code>35.188.14.5</code> to connect into the cluster, it needs a TCP/IP port that it can use, since it (hopefully!) cannot use <code>10.31.253.32:80</code> to otherwise communicate with that Service the way things inside the CNI boundary do.</p>
mdaniel
<p>I'm trying to deploy postgres/postgis on GKE, but I continue to get the permission error: <code>initdb: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted</code>. I've tried various fixes that I've researched but I've yet to get passed this error. Below is my deployment yaml.</p> <p>What am I missing?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: app: postgres spec: terminationGracePeriodSeconds: 10 securityContext: runAsUser: 1000 fsGroup: 1000 containers: - name: postgres image: mdillon/postgis:10 imagePullPolicy: "IfNotPresent" ports: - containerPort: 5432 env: - name: POSTGRES_DB value: "database" - name: POSTGRES_USER value: "postgres" - name: POSTGRES_PASSWORD value: "postgres" volumeMounts: - name: postgredb mountPath: /var/lib/postgresql/data subPath: data volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pvc </code></pre>
Mike
<p>While it is not exactly the same question, this "use an <code>initContainer:</code> to <code>chmod</code>" answer will interest you: <a href="https://stackoverflow.com/questions/51200115/chown-changing-ownership-of-data-db-operation-not-permitted/51203031#51203031">chown: changing ownership of &#39;/data/db&#39;: Operation not permitted</a></p>
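<p>Adapted to the manifest in your question, a minimal sketch would look like the snippet below, placed alongside <code>containers:</code> in the pod spec. The <code>999:999</code> owner is an assumption based on the postgres user in the upstream image — verify it with <code>id postgres</code> inside a running container. The init container is given <code>runAsUser: 0</code> so the chown can succeed despite the pod-level <code>runAsUser: 1000</code>:</p> <pre><code>      initContainers:
      - name: fix-permissions
        image: busybox
        # assumption: uid/gid 999 is the postgres user in this image
        command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql/data
          subPath: data
</code></pre>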
mdaniel
<p>In Kubernetes, I am having a directory permission problem. I am testing with a pod to create a bare-bones elasticsearch instance, built off of an ElasticSearch provided docker image. </p> <p>If I use a basic .yaml file to define the container, everything starts up. The problem happens when I attempt to replace a directory created from the docker image with a directory created from mounting of the persistent volume. </p> <p>The original directory was</p> <pre><code>drwxrwxr-x 1 elasticsearch root 4096 Aug 30 19:25 data </code></pre> <p>and if I mount the persistent volume, it changes the owner and permissions to</p> <pre><code>drwxr-xr-x 2 root root 4096 Aug 30 19:53 data </code></pre> <p>Now with the elasticsearch process running a the elasticsearch user, this directory can longer be accessed.</p> <p>I have set the pod's security context's fsGroup to 1000, to match the group of the elasticsearch group. I have set the container's security context's runAsUser to 0. I have set various other combinations of users and group, but to no avail. </p> <p>Here is my pod, persistent volume claim, and persistent volume definitions.</p> <p>Any suggestions are welcome.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: elasticfirst labels: app: elasticsearch spec: securityContext: fsGroup: 1000 containers: - name: es01 image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1 securityContext: runAsUser: 0 resources: limits: memory: 2Gi cpu: 200m requests: memory: 1Gi cpu: 100m env: - name: node.name value: es01 - name: discovery.seed_hosts value: es01 - name: cluster.initial_master_nodes value: es01 - name: cluster.name value: elasticsearch-cluster - name: bootstrap.memory_lock value: "true" - name: ES_JAVA_OPTS value: "-Xms1g -Xmx2g" ports: - containerPort: 9200 volumeMounts: - mountPath: "/usr/share/elasticsearch/data" name: elastic-storage2 nodeSelector: type: compute volumes: - name: elastic-storage2 persistentVolumeClaim: claimName: elastic-storage2-pvc apiVersion: v1 kind: PersistentVolumeClaim metadata: name: elastic-storage2-pvc spec: storageClassName: local-storage accessModes: - ReadWriteOnce resources: requests: storage: 512Mi apiVersion: v1 kind: PersistentVolume metadata: name: elastic-storage2-pv spec: storageClassName: local-storage capacity: storage: 512Mi accessModes: - ReadWriteOnce hostPath: path: /var/tmp/pv </code></pre>
Scott S
<p>Your question is a tiny bit confusing about what is happening versus what you want to be happening, but in general that problem is a common one; that's why many setups use an <code>initContainer:</code> to change the ownership of freshly provisioned PersistentVolumes (<a href="https://github.com/bitnami/charts/blob/2ba572d7ec51d5ff9ecd47f5748aab308aef693e/bitnami/cassandra/templates/statefulset.yaml#L62-L72" rel="nofollow noreferrer">as in this example</a>)</p> <p>In such a setup, the <code>initContainer:</code> would run as root, but would also presumably be a very thin container whose job is only to <code>chown</code> and then exit, leaving your application container -- elasticsearch in your example -- free to run as an unprivileged user</p> <pre><code>spec: initContainers: - name: chown image: busybox command: - chown - -R - "1000:1000" - /the/data volumeMounts: - name: es-data mountPath: /the/data containers: - name: es # etc etc </code></pre>
mdaniel
<p>Having trouble figuring out what is wrong. I have a remote kubernetes cluster up and have copied the config locally. I know it is correct because I have gotten other commands to work for me.</p> <p>The one I can't get to work is a deployment patch. My code:</p> <pre class="lang-golang prettyprint-override"><code>const namespace = &quot;default&quot; var clientset *kubernetes.Clientset func init() { kubeconfig := &quot;/Users/$USER/go/k8s-api/config&quot; config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, kubeconfig) if err != nil { log.Fatal(err) } // create the clientset clientset, err = kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } } func main() { deploymentsClient := clientset.ExtensionsV1beta1().Deployments(&quot;default&quot;) patch := []byte(`[{&quot;spec&quot;:{&quot;template&quot;:{&quot;spec&quot;:{&quot;containers&quot;:[{&quot;name&quot;:&quot;my-deploy-test&quot;,&quot;image&quot;:&quot;$ORG/$REPO:my-deploy0.0.1&quot;}]}}}}]`) res, err := deploymentsClient.Patch(&quot;my-deploy&quot;, types.JSONPatchType, patch) if err != nil { panic(err) } fmt.Println(res) } </code></pre> <p>All I get back is: <code>panic: the server rejected our request due to an error in our request</code></p> <p>Any help appreciated, thanks!</p>
L. Norman
<p>You have mixed up <a href="https://godoc.org/k8s.io/apimachinery/pkg/types#PatchType" rel="nofollow noreferrer"><code>JSONPatchType with MergePatchType</code></a>; <code>JSONPatchType</code> wants the input to be <a href="https://www.rfc-editor.org/rfc/rfc6902#section-4.3" rel="nofollow noreferrer">RFC 6902</a> formatted &quot;commands&quot;, and in that case can be a JSON array, because there can be multiple commands applied in order to the input document</p> <p>However, your payload looks much closer to you wanting <code>MergePatchType</code>, in which case the input should <strong>not</strong> be a JSON array because the source document is not an array of <code>&quot;spec&quot;</code> objects.</p> <p>Thus, I'd bet just dropping the leading <code>[</code> and trailing <code>]</code>, changing the argument to be <code>types.MergePatchType</code> will get you much further along</p>
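<p>In other words, keeping the rest of your snippet exactly as it is, the patch would become something like this (same payload minus the surrounding brackets, and with the patch type switched):</p> <pre><code>patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"my-deploy-test","image":"$ORG/$REPO:my-deploy0.0.1"}]}}}}`)
res, err := deploymentsClient.Patch("my-deploy", types.MergePatchType, patch)
</code></pre>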
mdaniel
<p>I am using helm3 to deploy a PHP based application in kubernetes. I am using ingress controller with below version. I am getting below mentioned error. Even though there are secrets in desired namespace it is giving this error. When I do the deployment using "kubectl apply -f yaml", it works perfectly fine. Nginx controller support HTTPS backend with this annotation "nginx.ingress.kubernetes.io/backend-protocol: "HTTPS", but somehow it is not being recognized as shown in the error. Can someone help?</p> <pre><code>NGINX Ingress controller Release: 0.30.0 Build: git-7e65b90c4 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.17.8 </code></pre> <p><strong>Nginx controller error</strong></p> <pre><code>W0413 17:30:53.061666 6 main.go:60] Protocol "HTTPS" is not a valid value for the backend-protocol annotation. Using HTTP as protocol W0413 17:30:56.382073 6 controller.go:1105] Error getting SSL certificate "tls-test/php-tls-secret": local SSL certificate tls-test/php-tls-secret was not found. Using default certificate E0413 17:19:32.942187 6 annotations.go:200] error reading ProxySSL annotation in Ingress tls-test/abc-demo: Location denied, reason: invalid format (namespace/name) found in "abc-tls-secret </code></pre> <p>"</p> <p><strong>Values.yaml</strong></p> <pre><code> annotations: nginx.ingress.kubernetes.io/proxy-ssl-secret: | "tls-test/abc-tls-secret" nginx.ingress.kubernetes.io/auth-tls-secret: | "tls-test/php-tls-secret" nginx.ingress.kubernetes.io/backend-protocol: | "HTTPS" </code></pre>
user3847894
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: | "HTTPS" </code></pre> <p>Does not specify <code>HTTPS</code> as the <code>backend-protocol</code>, it specifies <code>"HTTPS"\n</code> as the <code>backend-protocol</code></p> <pre><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS </code></pre> <p>is the correct setting, not only because it removes the newline caused by the yaml pipe operator, but also the double quoting that is going on between the pipe and the literal <code>"</code> characters</p> <hr> <p>as for the error message, it couldn't be clearer: remove the namespace qualifier, since there is no outcome through which an Ingress resource would consult any namespace other than the one in which it is created</p>
mdaniel
<p>I am getting started with Airflow and trying to use the KubernetesPodOperator, but I am having trouble with downloading images from private registries. I did some research but I couldn't find an answer to my problem.</p> <p>Putting it simply: can I use private images from DockerHub with the KubernetesPodOperator?</p>
hscasn
<p>It looks like <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/kubernetes/pod_generator.py#L28" rel="nofollow noreferrer">pod_generator.PodGenerator</a> accepts some kind of object <code>kube_config</code> that <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/kubernetes/pod_generator.py#L142" rel="nofollow noreferrer">knows about imagePullSecrets</a>, but unfortunately <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/operators/kubernetes_pod_operator.py#L87" rel="nofollow noreferrer"><code>KubernetesPodOperator</code></a> doesn't provide any such <code>kube_config</code> to <code>PodGenerator</code></p> <p>As best I can tell, it's just an edge case that slipped through the cracks, although it looks like there is <a href="https://issues.apache.org/jira/browse/AIRFLOW-2854" rel="nofollow noreferrer">a Jira for that</a> which matches up with <a href="https://github.com/apache/incubator-airflow/pull/3697" rel="nofollow noreferrer">a corresponding GitHub PR</a>, but it isn't clear from looking at the changed files that it will 100% solve the problem you are describing. Perhaps weigh in on either the PR, or the Jira, or maybe even both, to ensure it is addressed.</p>
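<p>In the meantime, one workaround that does not require any Airflow support is to attach the registry credential to the <code>ServiceAccount</code> the operator's pods run under (often <code>default</code> in the target namespace), since kubernetes merges the service account's <code>imagePullSecrets</code> into every Pod it creates. A sketch, with placeholder credentials and namespace:</p> <pre><code>kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOUR_USER \
  --docker-password=YOUR_PASSWORD \
  --namespace=YOUR_AIRFLOW_POD_NAMESPACE

kubectl patch serviceaccount default \
  --namespace=YOUR_AIRFLOW_POD_NAMESPACE \
  -p '{"imagePullSecrets": [{"name": "dockerhub-cred"}]}'
</code></pre>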
mdaniel
<p>I have a kubernetes namespace that I want to leverage for Gitlab runners. I installed the runners following the <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html" rel="nofollow noreferrer">Helm Chart</a> instructions. The problem I am running into is that when the job container spins up, I get the following ERROR: </p> <p>Job failed: image pull failed: rpc error: code = Unknown desc = Get <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</p> <p>It's trying to connect to the public docker repo but my organization's firewall is blocking it. How would I go about having the instance go to our private repo? </p> <p>Any help would be greatly appreciated as I've been stuck on this issue for some time now :(</p>
mrspanishviking
<p>I would presume you'll need to specify a <code>values.yaml</code> to the <code>helm install</code> that points to your mirrored copy of the images it needs. So:</p> <ul> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L4" rel="nofollow noreferrer">gitlab/gitlab-runner</a></li> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L14" rel="nofollow noreferrer">busybox</a></li> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L87" rel="nofollow noreferrer">ubuntu</a></li> </ul> <p>or whatever ones you wish to use for the <code>init</code> and <code>runner: image:</code></p> <p>Since you already have the chart deployed, I am fairly certain you can just do an <a href="https://github.com/helm/helm/blob/v2.10.0/docs/using_helm.md#helm-upgrade-and-helm-rollback-upgrading-a-release-and-recovering-on-failure" rel="nofollow noreferrer">"helm upgrade"</a> that changes only those values:</p> <pre><code>helm upgrade --set "image=repo.example.com/gitlab/gitlab-runner" \ --set "init.image=repo.example.com/etc-etc" \ [and so forth] \ $release_name $chart_name </code></pre> <p><em>(substituting the release name and the name of the chart as your helm knows it, of course)</em></p>
mdaniel
<p>I have microK8S cluster, and expose the API server at my domain. The <code>server.crt</code> and <code>server.key</code> in <code>/var/snap/microk8s/1079/certs</code> need to be replaced with the ones that include my domain. Otherwise, as expected, i get the error: </p> <p><code>Unable to connect to the server: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not mydonaim.com</code></p> <p>With the help of cert-manager I have produced certificates and replaced them, my system works well. </p> <p><strong>Problem:</strong> every time server is restarted, <code>server.crt</code> and <code>server.key</code> are generated again in <code>/var/snap/microk8s/1079/certs</code>. My custom certs are deleted, making API server unreachable remotely. How can I stop the system from doing that all the time? </p> <p><strong>Workaround?</strong> Should I place my certificates elsewhere and edit config files like <code>/var/snap/microk8s/1079/args/kube-controller-manager</code> with the path to those certificates? Are those config files auto-replaced as well? </p> <h3>Cluster information:</h3> <ul> <li>Kubernetes version: 1.16.3 </li> <li>Cloud being used: Bare metal, single-node</li> <li>cluster Installation method: Ubuntu Server with Snaps </li> <li>Host OS: Ubuntu 18.04.3 LTS</li> </ul>
Vladimir Akopyan
<p>It looks like there is <a href="https://github.com/ubuntu/microk8s/issues/421#issuecomment-482105218" rel="nofollow noreferrer">an existing issue</a> that describes copying and modifying the <code>/var/snap/microk8s/current/certs/csr.conf.template</code> to include any extra IP or DNS entries for the generated certificates</p>
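<p>Conceptually the edit is just appending your own entries to the alternative-names section of the copied template; the exact section name and numbering depend on what is already in the file on your version, so treat this as a sketch and append rather than overwrite:</p> <pre><code>[ alt_names ]
DNS.1 = kubernetes
# ... existing entries, left untouched ...
DNS.6 = mydonaim.com
IP.3 = &lt;your public IP, if you also connect by IP&gt;
</code></pre> <p>After editing, the certificates regenerated on the next restart should be produced from that template and therefore include your domain.</p>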
mdaniel
<p>I've got a <strong>persistent volume</strong> currently bound to a <strong>deployment</strong>. I've set the <strong>replica count to 0</strong>, which I was hoping would unbind the volume so I could mount it on another pod, but it remains in a <code>Bound</code> status.</p> <p>How can I copy the data from this? </p> <p>I'd like to transfer it via scp to another location.</p>
Chris Stryczynski
<blockquote> <p>I've got a persistent volume currently bound to a deployment, I've set the replica count to 0 which I was hoping would unbound the volume - so I could mount it on another pod but it remains in a Bound status.</p> </blockquote> <p>"Bound" does not mean it is attached to a Node, nor to a Pod (which is pragmatically the same thing); <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="nofollow noreferrer">Bound</a> means that the cloud provider has created a Persistent Volume (out of thin air) in order to fulfill a Persistent Volume Claim for some/all of its allocatable storage. "Bound" relates to its <em>cloud</em> status, not its Pod status. That term exists because kubernetes supports <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy" rel="nofollow noreferrer">reclaiming</a> volumes, to avoid creating a new cloud storage object if there are existing ones that no one is claiming and fulfill the resource request boundaries.</p> <p>There's nothing, at least in your question, that prevents you from launching another Pod (in the same Namespace as the Deployment) with a <code>volumes:</code> that points to the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer"><code>persistentVolumeClaim</code></a> and it will launch that Pod with the volume just as it did in your Deployment. You can then do whatever you'd like in that Pod to egress the data.</p>
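<p>A minimal sketch of such a Pod — substitute your actual PVC name for the placeholder <code>your-pvc-name</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pv-inspector
spec:
  containers:
  - name: shell
    image: busybox
    # keep the pod alive long enough to copy the data off
    command: ["sleep", "86400"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: your-pvc-name
</code></pre> <p>Then <code>kubectl cp pv-inspector:/data ./local-copy</code> (or <code>kubectl exec</code> plus <code>tar</code>/<code>scp</code>) will get the contents wherever you need them.</p>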
mdaniel
<p>While I've been learning about Kubernetes and Terraform I've been building a Node.js microservices example.</p> <p>It's all been going well so far and with a few commands I can provision a Kubernetes cluster and deploy a couple of Node.js microservices to it.</p> <p>The full example is available on GitHub: <a href="https://github.com/ashleydavis/nodejs-microservices-example" rel="nofollow noreferrer">https://github.com/ashleydavis/nodejs-microservices-example</a></p> <p>You can see the full setup for the cluster and the pods in this file: <a href="https://github.com/ashleydavis/nodejs-microservices-example/blob/master/scripts/infrastructure/kubernetes/kubernetes.tf" rel="nofollow noreferrer">https://github.com/ashleydavis/nodejs-microservices-example/blob/master/scripts/infrastructure/kubernetes/kubernetes.tf</a></p> <p>For example one of the pods is defined like this:</p> <pre><code>resource "kubernetes_pod" "web" { metadata { name = "nodejs-micro-example-web" labels { name = "nodejs-micro-example-web" } } spec { container { image = "${var.docker_registry_name}.azurecr.io/web:${var.version}" name = "nodejs-micro-example-web" } } } </code></pre> <p>It all works great for the initial roll out, but I'm unable to get the system to update when I change the code and build new versions of the Docker images.</p> <p>When I do this I update the variable "version" that you can see in that previous snippet of code.</p> <p>When I subsequently run <code>terraform apply</code> it gives me the following error saying that the pod already exists: </p> <pre><code>kubernetes_pod.web: pods "nodejs-micro-example-web" already exists </code></pre> <p><strong>So my question</strong> is how do I use Kubernetes and Terraform to roll out code updates (i.e. updated Docker images) and have new pods be deployed to the cluster? (and at the same time have the old pods be cleaned up).</p>
Ashley Davis
<p>It's the following line that is incorrect:</p> <pre><code> name = "nodejs-micro-example-web" </code></pre> <p>because a Pod's name is unique within its namespace.</p> <p>You almost <strong>never</strong> want to deploy a standalone Pod, because kubernetes considers those as ephemeral. That's ordinarily not a problem because Pods are created <em>under the supervision</em> of a <code>Deployment</code> or <code>ReplicationController</code> (or a few others, but you hopefully get the idea). In your case, if^H^H when that Pod falls over, kubernetes will not restart it and then it's a pretty good bet that outcome will negate a lot of the value kubernetes brings to the situation.</p>
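<p>With the Terraform kubernetes provider that would look roughly like the sketch below; the block names here are from memory, so double-check them against the documentation for your provider version before relying on it:</p> <pre><code>resource "kubernetes_deployment" "web" {
  metadata {
    name = "nodejs-micro-example-web"
  }

  spec {
    replicas = 1

    selector {
      match_labels {
        name = "nodejs-micro-example-web"
      }
    }

    template {
      metadata {
        labels {
          name = "nodejs-micro-example-web"
        }
      }

      spec {
        container {
          image = "${var.docker_registry_name}.azurecr.io/web:${var.version}"
          name  = "nodejs-micro-example-web"
        }
      }
    }
  }
}
</code></pre> <p>Bumping <code>var.version</code> then becomes an in-place update to the Deployment's pod template, and kubernetes performs the rolling update for you instead of Terraform trying to recreate a uniquely named Pod.</p>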
mdaniel
<p>Does Kubernetes have any strategy for picking which older-version pods get deleted during a rolling update? I didn't find any existing documentation describing this.</p>
zero_yu
<p>Pods are supposed to be ephemeral and disposable, and thus if you <em>do</em> care about their ordering, you'd want to use a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps" rel="nofollow noreferrer"><code>StatefulSet</code> with <code>podManagementPolicy:</code></a> which allows you to control the order they are created and then destroyed.</p>
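<p>For example, the relevant part of a <code>StatefulSet</code> looks like this — with <code>OrderedReady</code> the pods are created in order 0..N-1 and removed in reverse; <code>Parallel</code> relaxes that ordering:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  podManagementPolicy: OrderedReady   # or "Parallel"
  serviceName: example
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx
</code></pre>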
mdaniel
<p>If I run <code>jx create cluster aws</code>, it creates the cluster on AWS without any issues, but if I want to specify some options like this:</p> <pre><code>jx create cluster aws --zones us-east-2b --nodes=2 --node-size=t2.micro --master-size=t2.micro </code></pre> <p>then it fails constantly, whatever I try to change, giving this kind of error for almost all options:</p> <pre><code>Error: unknown flag: - -node-size </code></pre> <p>and the same for the other options. The options were taken from here: https://jenkins-x.io/commands/jx_create_cluster_aws/</p> <p>Setting up the cluster with kops with whatever options works without any issues.</p>
Alexz
<p>I asked about this in a comment, but the actual <em>answer</em> appears to be that you are on a version of <code>jx</code> that doesn't match the documentation. Because this is my experience with a freshly downloaded binary:</p> <pre><code>$ ./jx create cluster aws --verbose=true --zones=us-west-2a,us-west-2b,us-west-2c --cluster-name=sample --node-size=5 --master-size=m5.large kops not found kubectl not found helm not found ? Missing required dependencies, deselect to avoid auto installing: [Use arrows to move, type to filter] ❯ ◉ kops ◉ kubectl ◉ helm ? nodes [? for help] (3) ^C $ ./jx --version 1.3.90 </code></pre>
mdaniel
<p>I have a k8s service/deployment in a minikube cluster (name <code>amq</code> in <code>default</code> namespace:</p> <pre><code>D20181472:argo-k8s gms$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argo argo-ui ClusterIP 10.97.242.57 &lt;none&gt; 80/TCP 5h19m default amq LoadBalancer 10.102.205.126 &lt;pending&gt; 61616:32514/TCP 4m4s default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5h23m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 5h23m </code></pre> <p>I spun up infoblox/dnstools, and tried <code>nslookup</code>, <code>dig</code> and <code>ping</code> of <code>amq.default</code> with the following results:</p> <pre><code>dnstools# nslookup amq.default Server: 10.96.0.10 Address: 10.96.0.10#53 Name: amq.default.svc.cluster.local Address: 10.102.205.126 dnstools# ping amq.default PING amq.default (10.102.205.126): 56 data bytes ^C --- amq.default ping statistics --- 28 packets transmitted, 0 packets received, 100% packet loss dnstools# dig amq.default ; &lt;&lt;&gt;&gt; DiG 9.11.3 &lt;&lt;&gt;&gt; amq.default ;; global options: +cmd ;; Got answer: ;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NXDOMAIN, id: 15104 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;amq.default. IN A ;; Query time: 32 msec ;; SERVER: 10.96.0.10#53(10.96.0.10) ;; WHEN: Sat Jan 26 01:58:13 UTC 2019 ;; MSG SIZE rcvd: 29 dnstools# ping amq.default PING amq.default (10.102.205.126): 56 data bytes ^C --- amq.default ping statistics --- 897 packets transmitted, 0 packets received, 100% packet loss </code></pre> <p>(NB: pinging the ip address directly gives the same result)</p> <p>I admittedly am not very knowledgable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it. </p>
horcle_buzz
<blockquote> <p>I admittedly am not very knowledgable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it.</p> </blockquote> <p>Because <code>Service</code> IP addresses are figments of your cluster's imagination, caused by either iptables or ipvs, and don't actually exist. You can see them with <code>iptables -t nat -L -n</code> on any Node that is running <code>kube-proxy</code> (or <code>ipvsadm -ln</code>), as is described by the helpful <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#iptables" rel="noreferrer">Debug[-ing] Services</a> page</p> <p>Since they are not real IPs bound to actual NICs, they don't respond to any traffic other than the port numbers registered in the <code>Service</code> resource. The correct way of testing connectivity against a service is with something like <code>curl</code> or <code>netcat</code> and using the port number upon which you are expecting application traffic to travel.</p>
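<p>So, from that same dnstools pod, a more meaningful connectivity test would be against the port the Service actually exposes (61616 in your case); assuming the image ships <code>nc</code>, something like:</p> <pre><code>dnstools# nc -vz amq.default 61616
# or, for an HTTP service, curl works the same way:
# curl -v http://some-svc.default:80/
</code></pre>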
mdaniel
<p>Silly question. I just set up a 4-node kubernetes cluster (one master node). The etcd pod is running in the kube-system namespace, but I cannot find etcdctl in the regular path.</p> <pre><code># find / -name etcdctl -print /var/lib/docker/overlay2/9c0f2f2ef864f3be6f4195cbcda59b712805fc1957204b97d36a9239e3adb1cf/diff/usr/local/bin/etcdctl /var/lib/docker/overlay2/bf2ff1f903bef67c6ed8ddf0b37cc6a90788974f61275df5d5fe3d9bdaca232c/merged/usr/local/bin/etcdctl </code></pre> <p>I used the etcdctl binary I found to take a snapshot, but it hung. Do I need to install etcdctl? Thanks!</p>
TJS
<p>If you are new to <code>docker</code>, what you have found is the volume mount for the etcd container, and inside that container is the <code>etcdctl</code> binary that is designed for use with the <code>etcd</code> binary inside the container.</p> <p>You will want to <code>docker exec</code> into the container so you will have access to all the environment variables, config files, hostnames, etc that would be required to interact with <code>etcd</code>. <a href="https://github.com/etcd-io/etcd/blob/v3.3.18/Documentation/op-guide/container.md#running-a-3-node-etcd-cluster-1" rel="nofollow noreferrer">Their documentation</a> shows an example of using <code>docker exec</code>.</p> <p>The <a href="https://github.com/etcd-io/etcd/blob/v3.3.18/Documentation/op-guide/recovery.md#snapshotting-the-keyspace" rel="nofollow noreferrer">fine manual</a> also describes that you will want to ensure the <code>ETCDCTL_API</code> environment variable is correctly set while using <code>etcdctl</code>.</p>
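<p>Since the etcd container usually also shows up as a static pod, the same idea via kubectl would look something like the following; the pod name and certificate paths are illustrative (they happen to match a kubeadm-style layout) and vary by distribution, so adjust them to whatever <code>kubectl -n kube-system get pods</code> and the pod's manifest actually show:</p> <pre><code>kubectl -n kube-system exec etcd-&lt;master-node-name&gt; -- sh -c \
  'ETCDCTL_API=3 etcdctl \
     --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     snapshot save /var/lib/etcd/snapshot.db'
</code></pre>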
mdaniel
<p>I'm trying to deploy GitLab on Kubernetes using minikube through <a href="https://docs.gitlab.com/ce/install/kubernetes/gitlab_chart.html#deployment-of-gitlab-to-kubernetes" rel="nofollow noreferrer">this</a> tutorial, but I don't know what values to put in the fields <code>global.hosts.domain</code>, <code>global.hosts.externalIP</code> and <code>certmanager-issuer.email</code>.</p> <p>The tutorial is very poor in explanations. I'm stuck in this step. Can someone tell me what are this fields and what should I put on them?</p>
Michael Pacheco
<blockquote> <p>I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields <code>global.hosts.domain</code>, <code>global.hosts.externalIP</code> and <code>certmanager-issuer.email</code>.</p> </blockquote> <p>For the domain, you can likely use whatever you'd like, just be aware that when gitlab generates links that are designed to point to itself they won't resolve. You can work around that with something like <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" rel="nofollow noreferrer"><code>dnsmasq</code></a> or editing <code>/etc/hosts</code>, if it's important to you</p> <p>For the externalIP, that will be what <code>minikube ip</code> emits, and is the IP through which you will communicate with gitlab (since you will not be able to use the Pod's IP addresses outside of minikube). If gitlab does not use a <code>Service</code> of type <code>NodePort</code>, you're in for some more hoop-jumping to expose those ports via minikube's IP</p> <p>The <code>certmanager-issuer.email</code> you can just forget about, because it 100% will not issue you a Let's Encrypt cert running on minikube unless they have fixed certmanager to use the dns01 protocol. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your gitlab instance, then issue the instance a self-signed cert and call it a draw.</p> <blockquote> <p>The tutorial is very poor in explanations.</p> </blockquote> <p>That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.</p>
mdaniel
<p>HI I know there's a way i can pull out a problematic node out of loadbalancer to troubleshoot. But how can i pull a pod out of service to troubleshoot. What tools or command can do it ?</p>
Gabriel Wu
<p>Change its labels so they no longer match the <code>selector:</code> in the <code>Service</code>; we used to do that all the time. You can even put it back into rotation if you want to test a hypothesis. I don't recall exactly how quickly it takes effect, but I would guess "real quick" is a good approximation. :-)</p> <pre class="lang-shell prettyprint-override"><code>## for example, remove the label entirely (trailing "-" deletes it): $ kubectl label pod $the_pod app.kubernetes.io/name- ## or, change it to something non-matching $ kubectl label --overwrite pod $the_pod app.kubernetes.io/name=i-am-debugging-this-pod </code></pre>
mdaniel
<p>I'm creating a pod through the <code>KubernetesPodOperator</code> in Airflow. This pod should mount a Google Cloud Storage to <code>/mnt/bucket</code> using <code>gcsfuse</code>. For this, the pod is required to be started with the <code>securityContext</code> parameter, such that it can become 'privileged'.</p> <p>It is <a href="https://stackoverflow.com/questions/52742455/airflow-kubernetespodoperator-pass-securitycontext-parameter">currently not possible</a> to pass the securityContext parameter through Airflow. Is there another way to work around this? Perhaps by setting a 'default' securityContext before the pods are even started? I've looked at creating a <code>PodSecurityPolicy</code>, but haven't managed to figure out a way.</p>
bartcode
<p>A mutating admission controller would allow you to do that: <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook</a></p> <p>The ibm-cloud team has a post about it, but I've never tried writing one: <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="nofollow noreferrer">https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74</a> and the folks at GiantSwarm have an end-to-end example using their Grumpy admission controller: <a href="https://docs.giantswarm.io/guides/creating-your-own-admission-controller/" rel="nofollow noreferrer">https://docs.giantswarm.io/guides/creating-your-own-admission-controller/</a></p> <p>I would use the labels, or annotations, or maybe even the image, to identify Pods launched by Airflow, and then mutate only them to set the <code>securityContext:</code> on the Pods to be the way you want.</p>
mdaniel
<p>Please help me. I'm following this guide: <a href="https://pleasereleaseme.net/deploy-a-dockerized-asp-net-core-application-to-kubernetes-on-azure-using-a-vsts-ci-cd-pipeline-part-1/" rel="nofollow noreferrer">https://pleasereleaseme.net/deploy-a-dockerized-asp-net-core-application-to-kubernetes-on-azure-using-a-vsts-ci-cd-pipeline-part-1/</a>. My docker images is successfully built and pushed to my private container registry and my CD pipeline is okay as well. But when I tried to check it using kubectl get pods the status is always pending, when I tried to use <code>kubectl describe pod k8s-aspnetcore-deployment-64648bb5ff-fxg2k</code> the message is:</p> <pre><code>Name: k8s-aspnetcore-deployment-64648bb5ff-fxg2k Namespace: default Node: &lt;none&gt; Labels: app=k8s-aspnetcore pod-template-hash=2020466199 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/k8s-aspnetcore-deployment-64648bb5ff Containers: k8s-aspnetcore: Image: mycontainerregistryb007.azurecr.io/k8saspnetcore:2033 Port: 80/TCP Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-tr892 (ro) Conditions: Type Status PodScheduled False Volumes: default-token-tr892: Type: Secret (a volume populated by a Secret) SecretName: default-token-tr892 Optional: false QoS Class: BestEffort Node-Selectors: beta.kubernetes.io/os=windows Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 3m (x57 over 19m) default-scheduler 0/1 nodes are available: 1 MatchNodeSelector. </code></pre> <p>Here is for kubectl describe deployment:</p> <pre><code>Name: k8s-aspnetcore-deployment Namespace: default CreationTimestamp: Sat, 21 Jul 2018 13:41:52 +0000 Labels: app=k8s-aspnetcore Annotations: deployment.kubernetes.io/revision=2 Selector: app=k8s-aspnetcore Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 5 RollingUpdateStrategy: 1 max unavailable, 1 max surge Pod Template: Labels: app=k8s-aspnetcore Containers: k8s-aspnetcore: Image: mycontainerregistryb007.azurecr.io/k8saspnetcore:2033 Port: 80/TCP Environment: &lt;none&gt; Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: k8s-aspnetcore-deployment-64648bb5ff (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set k8s-aspnetcore-deployment-7f756cc78c to 1 Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set k8s-aspnetcore-deployment-64648bb5ff to 1 Normal ScalingReplicaSet 26m deployment-controller Scaled down replica set k8s-aspnetcore-deployment-7f756cc78c to 0 </code></pre> <p>Here is for kubectl describe service:</p> <pre><code>Name: k8s-aspnetcore-service Namespace: default Labels: version=test Annotations: &lt;none&gt; Selector: app=k8s-aspnetcore Type: LoadBalancer IP: 10.0.26.188 LoadBalancer Ingress: 40.112.73.28 Port: &lt;unset&gt; 80/TCP TargetPort: 80/TCP NodePort: &lt;unset&gt; 30282/TCP Endpoints: &lt;none&gt; Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 26m service-controller Ensuring load balancer Normal EnsuredLoadBalancer 25m service-controller 
Ensured load balancer </code></pre> <p>I don't know if my setup for Kubernetes Cluster is wrong, but here is the script I used:</p> <pre><code>#!/bin/bash azureSubscriptionId="xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx" resourceGroup="k8sResourceGroup" clusterName="k8sCluster" location="northeurope" # Useful if you have more than one Aure subscription az account set --subscription $azureSubscriptionId # Resource group for cluster - only availble in certain regions at time of writing az group create --location $location --name $resourceGroup # Create actual cluster az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys # Creates a config file at ~/.kube on local machine to tell kubectl which cluster it should work with az aks get-credentials --resource-group $resourceGroup --name $clusterName </code></pre> <p>Here is my deployment.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: k8s-aspnetcore-deployment spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 template: metadata: labels: app: k8s-aspnetcore spec: containers: - name: k8s-aspnetcore image: mycontainerregistryb007.azurecr.io/k8saspnetcore ports: - containerPort: 80 imagePullSecrets: - name: k8ssecret nodeSelector: "beta.kubernetes.io/os": windows </code></pre> <p>Here is my service.yaml:</p> <pre><code>kind: Service metadata: name: k8s-aspnetcore-service labels: version: test spec: selector: app: k8s-aspnetcore ports: - port: 80 type: LoadBalancer </code></pre>
jravenger4
<p>The reason the Pod is stuck in Pending is described in the message at the bottom of the <code>describe pod</code> output: <code>MatchNodeSelector</code>. That means Kubernetes didn't find a Node in your cluster that could fulfill the Node selection criteria specified in the PodSpec.</p> <p>Specifically, it's very likely these lines:</p> <pre><code>nodeSelector: "beta.kubernetes.io/os": windows </code></pre> <p>Only a <code>kubectl describe nodes</code> will tell whether any Node in the cluster can possibly satisfy that selector.</p>
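<p>If you are unsure what labels your Nodes actually carry, a quick check is below; with the default <code>az aks create</code> node pool the nodes are Linux, so the fix is likely to either drop the <code>nodeSelector</code> entirely or point it at Linux. The label value shown is the usual well-known one, but verify it against your own cluster's output:</p>
<pre class="lang-sh prettyprint-override"><code># list the os label on every node to see what the scheduler can match
kubectl get nodes --show-labels | grep beta.kubernetes.io/os
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>nodeSelector:
  "beta.kubernetes.io/os": linux
</code></pre>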
mdaniel
<p>I want to pass a certificate to the helm chart. Currently I am passing it with <code>--set-file global.dbValues.dbcacertificate=./server.crt</code>, but instead I want to pass the file in the values file of the helm chart. The Values.yaml file reads</p> <pre><code>global: dbValues: dbcacertificate: &lt;Some Way to pass the .crt file&gt; </code></pre>
Darshil Shah
<p>According to <a href="https://helm.sh/docs/chart_template_guide/accessing_files/#lines" rel="noreferrer">the relevant documentation</a>, one must pre-process a file that is external to the chart into a means that can be provided via <code>--set</code> or <code>--values</code>, since <code>.Files.Get</code> cannot read file paths that are external to the chart bundle.</p> <p>So, given the following example template <code>templates/secret.yaml</code> containing:</p> <pre><code>apiVersion: v1 kind: Secret data: dbcacertificate: {{ .Values.dbcacertificate | b64enc }} </code></pre> <p>one can use shell interpolation as:</p> <pre><code>helm template --set dbcacertificate="$(cat ./server.crt)" . </code></pre> <p>or, if shell interpolation is not suitable for your circumstances, you can pre-process the certificate into a yaml compatible format and feed it in via <code>--values</code>:</p> <pre class="lang-shell prettyprint-override"><code>$ { echo "dbcacertificate: |"; sed -e 's/^/ /' server.crt; } &gt; ca-cert.yaml $ helm template --values ./ca-cert.yaml . </code></pre>
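<p>For reference, the <code>ca-cert.yaml</code> produced by that shell snippet ends up looking roughly like the following (the certificate body is a placeholder): a block scalar with every line of the PEM indented two spaces under the key.</p>
<pre class="lang-yaml prettyprint-override"><code>dbcacertificate: |
  -----BEGIN CERTIFICATE-----
  MIIDWTCCAkGgAwIBAgIJ...rest of the PEM...
  -----END CERTIFICATE-----
</code></pre>
<p>If you need to keep the nested <code>global.dbValues.dbcacertificate</code> path from your original values file, the same indentation trick applies; nest the key accordingly and reference <code>.Values.global.dbValues.dbcacertificate</code> in the template instead.</p>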
mdaniel
<p>I'm creating a pod through the <code>KubernetesPodOperator</code> in Airflow. This pod should mount a Google Cloud Storage to <code>/mnt/bucket</code> using <code>gcsfuse</code>. For this, the pod is required to be started with the <code>securityContext</code> parameter, such that it can become 'privileged'.</p> <p>It is <a href="https://stackoverflow.com/questions/52742455/airflow-kubernetespodoperator-pass-securitycontext-parameter">currently not possible</a> to pass the securityContext parameter through Airflow. Is there another way to work around this? Perhaps by setting a 'default' securityContext before the pods are even started? I've looked at creating a <code>PodSecurityPolicy</code>, but haven't managed to figure out a way.</p>
bartcode
<p>Separate from the Mutating Admission Controller, it's also possible to deploy a DaemonSet into your cluster that mounts <code>/mnt/bucket</code> onto the host filesystem, and then the Airflow pods would use <code>{"name": "bucket", "hostPath": {"path": "/mnt/bucket"}}</code> as their <code>volume:</code>. That approach -- assuming it works -- would be a <strong>boatload</strong> less typing, and it also does not run the very grave risk of a Mutating Admission Controller borking your cluster and causing Pods to be mysteriously mutated.</p>
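<p>A rough sketch of such a DaemonSet is below; the image and bucket names are placeholders, credential handling is omitted, and <code>mountPropagation: Bidirectional</code> (which is why the container is privileged) is what should make the fuse mount visible on the host, so treat it as a starting point rather than a drop-in manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gcsfuse-mounter
spec:
  selector:
    matchLabels:
      app: gcsfuse-mounter
  template:
    metadata:
      labels:
        app: gcsfuse-mounter
    spec:
      containers:
      - name: gcsfuse
        # placeholder: any image of yours that has gcsfuse installed
        image: my-registry/gcsfuse:latest
        securityContext:
          privileged: true
        # --foreground keeps the container (and thus the mount) alive;
        # credentials come from the node's service account or a mounted key
        command: ["gcsfuse", "--foreground", "my-gcs-bucket", "/mnt/bucket"]
        volumeMounts:
        - name: bucket
          mountPath: /mnt/bucket
          mountPropagation: Bidirectional
      volumes:
      - name: bucket
        hostPath:
          path: /mnt/bucket
          type: DirectoryOrCreate
</code></pre>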
mdaniel
<p>I'm trying to replicate what we have set up in our bare-metal infrastructure using AWS. I'm admittedly not well-versed with K8S, but a colleague of mine had started to use it in AWS so I've been piggybacking on some of the work he's done. He's been able to answer most of my questions, but this one seems to have us a little stumped.</p> <p>In our bare-metal setup, there are about 20k unique sites with individual SSL certificates. In doing a small scale test with a few sites and certs, it works. However, at full scale, it does not.</p> <p>I was able to script importing the certificates into K8S and create a voyager ingress configuration yaml file from it, but when I try to apply it, I get the following error message:</p> <pre><code>bash-5.0# kubectl apply -f generated-ingress.yaml The Ingress "generated-ingress" is invalid: metadata.annotations: Too long: must have at most 262144 characters </code></pre> <p>It seems that the ingress configuration is too large.</p> <p>The generated ingress looks something like this:</p> <pre><code>--- apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: generated-ingress namespace: default spec: tls: - secretName: ssl-cert-eample-domain.com hosts: - example-domain.com # Repeated for each domain using SSL rules: - host: '*' http: paths: - backend: serviceName: the-backend servicePort: '80' - host: example-domain.com http: paths: - backend: serviceName: the-backend servicePort: '80' headerRules: - X-HTTPS YES # Repeated for each domain using SSL </code></pre> <p>FYI the <code>X-HTTPS</code> header is needed for our backend to generate absolute urls correctly using <code>https</code> instead of <code>http</code>. It would be nice if this could be specified for all SSL connections instead of having to specify it for each individual domain, making the configuration more lean.</p> <p>In any case, is there a way to increase the limit for the configuration? Or is there a better way to configure the ingress for numerous SSL certificates?</p>
cue8chalk
<p>I believe you are getting bitten by <code>kubectl</code> trying to record the previous state of affairs in an <code>{"metadata":{"annotation":{"kubectl.kubernetes.io/last-applied-configuration": "GINORMOUS string"}}}</code> <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-create-objects" rel="nofollow noreferrer">as described here</a>. It's not the <code>Ingress</code> itself that is causing you problems.</p> <p>In your case, because the input object is so big, I believe you will have to use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#replace" rel="nofollow noreferrer"><code>kubectl replace</code></a> instead of <code>apply</code>, since (a) you appear to have the whole declaration in-hand anyway (b) I don't think <code>kubectl replace</code> tries to capture that annotation unless one explicitly provides <code>--save-config=true</code> to <code>replace</code></p> <hr> <p>Separate from your question, if you are not already aware, one need not declare <strong>every single <code>Ingress</code></strong> in one resource -- it is much, much, much, much, much easier to manage and reason about <code>Ingress</code> resources when they are distinct</p> <p>So your config of:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: generated-ingress namespace: default spec: tls: - secretName: ssl-cert-eample-domain.com hosts: - example-domain.com # Repeated for each domain using SSL rules: - host: '*' http: paths: - backend: serviceName: the-backend servicePort: '80' - host: example-domain.com http: paths: - backend: serviceName: the-backend servicePort: '80' headerRules: - X-HTTPS YES </code></pre> <p>would become:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: star-domain namespace: default spec: rules: - host: '*' http: paths: - backend: serviceName: the-backend servicePort: '80' </code></pre> <p>and then repeat this structure for every one of the domains:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: example-domain namespace: default spec: tls: - secretName: ssl-cert-example-domain.com hosts: - example-domain.com rules: - host: example-domain.com http: paths: - backend: serviceName: the-backend servicePort: '80' headerRules: - X-HTTPS YES </code></pre> <p><em>it's also redundantly redundant to put <code>ingress</code> in the <code>metadata: name:</code> of an <code>Ingress</code> resource, as when one is typing its name you will need to say <code>kubectl describe ingress/generated-ingress</code></em></p>
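<p>Concretely, assuming the <code>Ingress</code> already exists in the cluster, that would be:</p>
<pre><code>kubectl replace -f generated-ingress.yaml
</code></pre>
<p>and if it does not exist yet, <code>kubectl create -f generated-ingress.yaml</code> (without <code>--save-config</code>) will also avoid writing the giant annotation.</p>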
mdaniel
<p>Is there a way to access the underlying host/node's process from Kubernetes pod in the same way like we access the host/node's filesystem by using <code>hostPath</code> volume mount?</p> <p>PS: I am trying to monitor the node process with the help of <code>auditbeat</code> deployed as a pod on Kubernetes.</p>
Rajat Badjatya
<p>I believe what you are looking for is <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#podspec-v1-core" rel="nofollow noreferrer"><code>hostPID: true</code> in the PodSpec</a>:</p> <pre class="lang-yaml prettyprint-override"><code>spec: hostPID: true containers: - name: show-init command: - ps - -p - "1" </code></pre> <p>should, in theory, output "/sbin/init" since it is running in the host's PID namespace</p>
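<p>For the avoidance of doubt, a complete runnable sketch is below; the image is an arbitrary choice, anything that can read <code>/proc</code> will do:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: host-pid-demo
spec:
  hostPID: true
  restartPolicy: Never
  containers:
  - name: show-init
    image: busybox
    # with hostPID: true this prints the host's PID 1 command name
    # (typically "systemd"), not the container's own entrypoint
    command: ["cat", "/proc/1/comm"]
</code></pre>
<p>then <code>kubectl logs host-pid-demo</code> shows the result.</p>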
mdaniel
<p>Can we use the nginx ingress controller without a loadbalancer? If so, what measures need to be taken to set up the ingress controller?</p>
Shibily Shukoor
<blockquote> <p>Can we use the nginx ingress controller without a loadbalancer?</p> </blockquote> <p>Yes, of course</p> <blockquote> <p>If so, what measures need to be taken to set up the ingress controller?</p> </blockquote> <p>To set it up? <code>helm</code> install it (or whatever mechanism you want), and ensure its <code>Service</code> is <code>type: NodePort</code>. Then, <code>curl -H 'host: my-virtual-host.example.org' http://${node_ip_address}:${http_node_port}</code></p>
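<p>As a sketch with the stable chart, that is roughly the following (helm 3 syntax; the exact values key can differ between chart versions, so check <code>helm show values</code> for the one you install):</p>
<pre class="lang-sh prettyprint-override"><code># install the controller with a NodePort Service instead of a LoadBalancer
helm install my-ingress stable/nginx-ingress --set controller.service.type=NodePort

# find the node ports that were assigned, then curl any node's IP on the http one
kubectl get svc | grep ingress
</code></pre>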
mdaniel
<p>I'm trying to start a standard example SparkPi on a kubernetes cluster. Spark-submitt creates the pod and fails with error - "Error: Could not find or load main class org.apache.spark.examples.SparkPi".</p> <p>spark-submit</p> <pre><code>spark-submit \ --master k8s://https://k8s-cluster:6443 \ --deploy-mode cluster \ --name spark-pi \ --class org.apache.spark.examples.SparkPi \ --conf spark.kubernetes.namespace=ca-app \ --conf spark.executor.instances=5 \ --conf spark.kubernetes.container.image=gcr.io/cloud-solutions-images/spark:v2.3.0-gcs \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=default \ https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar </code></pre> <p>Kubernetes creates 2 containers in pod. spark-init in which writes, that examples jar is copied.</p> <pre><code>2018-07-22 15:13:35 INFO SparkPodInitContainer:54 - Downloading remote jars: Some(https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar,https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar) 2018-07-22 15:13:35 INFO SparkPodInitContainer:54 - Downloading remote files: None 2018-07-22 15:13:37 INFO Utils:54 - Fetching https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar to /var/spark-data/spark-jars/fetchFileTemp6219129583337519707.tmp 2018-07-22 15:13:37 INFO Utils:54 - Fetching https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar to /var/spark-data/spark-jars/fetchFileTemp8698641635325948552.tmp 2018-07-22 15:13:37 INFO SparkPodInitContainer:54 - Finished downloading application dependencies. </code></pre> <p>And spark-kubernetes-driver, throws me the error.</p> <pre><code>+ readarray -t SPARK_JAVA_OPTS + '[' -n /var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar ']' + SPARK_CLASSPATH=':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar' + '[' -n /var/spark-data/spark-files ']' + cp -R /var/spark-data/spark-files/. . 
+ case "$SPARK_K8S_CMD" in + CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS) + exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.app.id=spark-e032bc91fc884e568b777f404bfbdeae -Dspark.kubernetes.container.image=gcr.io/cloud-solutions-images/spark:v2.3.0-gcs -Dspark.kubernetes.namespace=ca-app -Dspark.jars=https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar,https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar -Dspark.driver.host=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-driver-svc.ca-app.svc -Dspark.master=k8s://https://k8s-cluster:6443 -Dspark.kubernetes.initContainer.configMapName=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-init-config -Dspark.kubernetes.authenticate.driver.serviceAccountName=default -Dspark.driver.port=7078 -Dspark.kubernetes.driver.pod.name=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-driver -Dspark.app.name=spark-pi -Dspark.kubernetes.executor.podNamePrefix=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0 -Dspark.driver.blockManager.port=7079 -Dspark.submit.deployMode=cluster -Dspark.executor.instances=5 -Dspark.kubernetes.initContainer.configMapKey=spark-init.properties -cp ':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.233.71.5 org.apache.spark.examples.SparkPi Error: Could not find or load main class org.apache.spark.examples.SparkPi </code></pre> <p>What am I doing wrong? Thanks for the tips.</p>
JDev
<p>I would suggest using <code>https://github.com/JWebDev/spark/raw/master/spark-examples_2.11-2.3.1.jar</code> since <code>/blob/</code> is the HTML view of an asset, whereas <code>/raw/</code> will 302-redirect to the actual storage URL for it</p>
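<p>In other words, re-run the same <code>spark-submit</code> with only the jar URL swapped for the raw one:</p>
<pre><code>spark-submit \
  --master k8s://https://k8s-cluster:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.namespace=ca-app \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=gcr.io/cloud-solutions-images/spark:v2.3.0-gcs \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=default \
  https://github.com/JWebDev/spark/raw/master/spark-examples_2.11-2.3.1.jar
</code></pre>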
mdaniel
<p>My goal is to create a Kubernetes role to limit kubectl exec access on pods under a specific deployment. I have a policy built below that successfully creates the role when the number &amp; names of my pods are all static. The problem is that my deployment horizontally autoscales, so if a new pod is created the role will not apply to that new pod (since each pod name is explicitly defined in the role) &amp; the new pod will have a random hash appended to its name.</p> <p>The format below is in Terraform, but it's the same high-level structure as a role defined in YAML.</p> <pre><code>resource kubernetes_cluster_role alb_exec_role { metadata { name = "alb-exec-role" } rule { api_groups = [""] resources = ["pods", "pods/log"] resource_names = [&lt;pod names&gt;] verbs = ["get", "list"] } rule { api_groups = [""] resources = ["pods/exec"] resource_names = [&lt;pod names&gt;] verbs = ["create"] } } </code></pre>
user3088470
<p>Foremost, why not remove <code>pods/exec</code> from all Pods in that <code>Role</code>, and then whitelist those which you <strong>do</strong> tolerate exec-ing into?</p> <p>That said, the thing you want is likely a custom controller which listens to Pod events in that Namespace and updates the RBAC <code>Role</code> when a new Pod is created or scheduled.</p>
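<p>A plain-YAML sketch of the first suggestion is below; the <code>resourceNames</code> entry is a placeholder, and note that the read rule deliberately has no <code>resourceNames</code> at all, so newly autoscaled pods are covered automatically (maintaining the exec whitelist as pods come and go is exactly what the custom controller would do):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: alb-exec-role
rules:
# read access to every pod; no resourceNames, so new pods are included
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
# exec stays whitelisted to the pods you explicitly tolerate
- apiGroups: [""]
  resources: ["pods/exec"]
  resourceNames: ["some-tolerated-pod"]   # placeholder
  verbs: ["create"]
</code></pre>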
mdaniel