<p>I have a Kubernetes cluster running on my laptop and I want to connect it to GitLab so that I can deploy my application to Kubernetes from GitLab pipelines. I was reading the official documentation but could not find anything useful.</p> <p>Can anyone shed some light on how to achieve this?</p>
<p>You can set up a pod in your local Kubernetes cluster as a GitLab Runner. More information on setting up a runner <a href="https://docs.gitlab.com/ee/ci/runners/README.html" rel="nofollow noreferrer">here</a> and <a href="https://github.com/teacheraio/DevSecOps-Studio/wiki/Lesson-one:-Setting-up-Git-Server-and-Gitlab-CI-Runner." rel="nofollow noreferrer">here</a>, and on the Kubernetes runner <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">here</a>.</p> <p>Note that it should work as long as egress traffic is allowed. In most cases egress is allowed, unless your environment has to follow some sort of compliance requirement (i.e. PCI, SOX, etc.).</p>
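<p>As a rough sketch, assuming you use the official GitLab Runner Helm chart, registering a runner in your local cluster could look like this (the GitLab URL and registration token are placeholders you would take from your project's CI/CD settings):</p> <pre><code># add the GitLab chart repository and install the runner into its own namespace
helm repo add gitlab https://charts.gitlab.io
helm install --namespace gitlab-runner --name gitlab-runner \
  --set gitlabUrl=https://gitlab.example.com/ \
  --set runnerRegistrationToken=&lt;your-registration-token&gt; \
  --set rbac.create=true \
  gitlab/gitlab-runner
</code></pre>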
<p>Can I make nested charts in Helm?</p> <p>Subcharts are defined in the Helm chart spec, e.g. the Wordpress chart includes Nginx and MySQL subcharts.</p> <p>Can I define more deeply nested charts? E.g. the Wordpress chart includes an HTTPd chart, which in turn includes Nginx, HAProxy and memcached subcharts.</p>
<p>Yes, charts can themselves depend on other charts, creating a tree-like structure of dependencies.</p> <pre><code>PrimaryApplication
|- Service1
   |- httpd
   |- Database
|- Service2
   |- haproxy
   |- common
</code></pre>
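<p>As a minimal sketch (assuming Helm 2-style charts and placeholder chart names/repositories), each chart in the tree declares its own dependencies in its <code>requirements.yaml</code>, and those subcharts can declare their own dependencies in turn:</p> <pre><code># primary-application/requirements.yaml
dependencies:
- name: service1
  version: 0.1.0
  repository: file://../service1
- name: service2
  version: 0.1.0
  repository: file://../service2

# service1/requirements.yaml (a subchart with its own subcharts)
dependencies:
- name: httpd
  version: 0.1.0
  repository: file://../httpd
- name: database
  version: 0.1.0
  repository: file://../database
</code></pre> <p>Running <code>helm dependency update</code> in the parent chart then pulls the whole tree into its <code>charts/</code> directory.</p>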
<p>I'm building an on-prem Kubernetes cluster and we are a Windows shop. If we have AD.com as our corporate AD DNS server, should we use the same DNS namespace for the Kubernetes cluster?</p> <p>My "hunch" is that I should create a separate domain for the cluster and then set up domain forwarding.</p> <p>Ex.</p> <ul> <li>AD.com --> set up conditional forwarding to Cluster.local</li> <li>Cluster.local --> use the default to inherit from the host, or set up a stub lookup</li> </ul> <p>Any best-practices advice?</p>
<p>You probably want to set them up separately. Setting up <code>cluster.local</code> in AD will probably lead to unpredictable results. </p> <p><code>cluster.local</code> is a domain for Kubernetes to resolve names 'internally' in a cluster.</p> <p>Hope it helps!</p>
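<p>On the cluster side, a rough sketch of what forwarding for the corporate domain could look like, assuming kube-dns and that <code>10.10.0.10</code> / <code>10.10.0.11</code> stand in for your AD DNS servers (on the AD side you would add the conditional forwarder for <code>cluster.local</code> pointing at the cluster's DNS service, as you sketched):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # send queries for ad.com from inside the cluster to the corporate DNS servers
  stubDomains: |
    {"ad.com": ["10.10.0.10", "10.10.0.11"]}
</code></pre>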
<p>I am planning to deploy an application in my Kubernetes cluster infrastructure. I pushed the image to a Docker Hub repo. How can I pull the image from Docker Hub?</p>
<p>One-line command to create a Docker registry secret:</p> <pre><code>kubectl create secret docker-registry regcred --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; -n &lt;your-namespace&gt;
</code></pre> <p>Then you can use it in your deployment file under <em>spec</em>:</p> <pre><code>spec:
  containers:
  - name: private-reg-container-name
    image: &lt;your-private-image&gt;
  imagePullSecrets:
  - name: regcred
</code></pre> <p>More details: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token</a></p>
<p>Could someone please help me understand the relationship between HorizontalPodAutoscaler and the number of nodes / pods. </p> <p>If the autoscaler adds more pods than a node can handle, will the new pods just go into a 'pending' state or will it automatically trigger a new node / vCPU on say... Google Kubernetes Engine to spin up?</p> <p>Cheers, Clement</p>
<p>An HPA will essentially create a minimum number of pods and will increase up to the maximum number of pods according to a defined metric, like CPU percentage on the pod. This is just at the Kubernetes level; it won't scale your nodes.</p> <p>To scale your nodes you need a different autoscaler, which is defined according to metrics on your servers. Examples of these are Amazon's Auto Scaling Groups (ASGs); there are equivalents in GCP (managed instance groups / GKE node pools) and Azure.</p> <p>So if you don't have your server autoscaler set up correctly, or you don't have enough capacity in your cluster, you can run into cases where the Kubernetes HPA(s) want to scale but can't find enough server capacity. If that happens, you'll have a lot of pods go into the "Pending" state.</p> <p>Keep in mind that Kubernetes also has the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Vertical Pod Autoscaler</a>, which takes care of autoscaling the resources inside the pods, such as memory and CPU.</p> <p>This <a href="https://banzaicloud.com/blog/k8s-horizontal-pod-autoscaler/" rel="nofollow noreferrer">link</a> has good information about all the autoscaling methods.</p> <p>Hope it helps.</p>
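<p>For reference, a minimal sketch of an HPA (the Deployment name and thresholds are placeholders):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  # scale up when average CPU utilization across the pods exceeds 70%
  targetCPUUtilizationPercentage: 70
</code></pre>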
<p>All is in the title: Is RKE (from Rancher) production ready? </p> <p>I tried to google it without success.</p> <p>And is there any best practices to set up a production cluster with it?</p> <p>Thanks for your help.</p>
<p>This question is rather hard to answer without knowing your acceptance criteria. What does production readiness mean from your point of view?</p> <ul> <li>High availability? Can be achieved</li> <li>(Enterprise) customer support? Can be requested</li> </ul> <p>Are you concerned about the conformance of the k8s deployment through RKE? Check <a href="https://www.cncf.io/certification/software-conformance/" rel="nofollow noreferrer">https://www.cncf.io/certification/software-conformance/</a> - Rancher is listed there. Maybe give <a href="https://github.com/heptio/sonobuoy" rel="nofollow noreferrer">https://github.com/heptio/sonobuoy</a> a try.</p> <p>Cheers</p>
<p>I am going to deploy a Python Flask Server with Docker on Kubernetes using Gunicorn and Gevent/Eventlet as asynchronous workers. The application will:</p> <ol> <li>Subscribe to around 20 different topics on Apache Kafka.</li> <li>Score some machine learning models with that data.</li> <li>Upload the results to a relational database.</li> </ol> <p>Each topic in Kafka will receive 1 message per minute, so the application needs to consume around 20 messages per minute from Kafka. For each message, the handling and execution take around 45 seconds. The question is how I can scale this in a good way? I know that I can add multiple workers in Gunicorn and use multiple replicas of the pod when I deploy to Kubernetes. But is that enough? Will the workload be automatically balanced between the available workers in the different pods? Or what can I do to ensure scalability?</p>
<p>I recommend you set up an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a> (Horizontal Pod Autoscaler) for your workers.</p> <p>It will require setting up support for the <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/" rel="nofollow noreferrer">metrics API</a>. For custom metrics on later versions of Kubernetes, <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">heapster</a> has been deprecated in favor of the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">metrics server</a>.</p> <p>If you are using a public cloud like AWS, GCP, or Azure, I'd also recommend setting up an autoscaling group so that you can scale your VMs or servers based on metrics like average CPU utilization.</p> <p>Hope it helps!</p>
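<p>As a quick sketch, assuming your workers run as a Deployment named <code>worker</code>, the simplest way to attach an HPA is:</p> <pre><code># scale between 2 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment worker --min=2 --max=10 --cpu-percent=70
</code></pre> <p>Note that for a Kafka consumer workload, CPU may not be the best signal; a custom metric such as consumer lag (exposed through the custom metrics API) is often a better scaling trigger.</p>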
<p>I have a cassandra stateful workload, and I would like to migrate it to a new node pool in the same GKE cluster. The persistent volume of each cassandra pod is backed by a GCEPersistentDisk resource.</p> <p>During workload(i.e. cassandra pods) migration, what would happen to its underlying persistent volume? Will the underlying persistent volume get automatically moved to the new node as well? I'm assuming that each persistent volume(or GCEPersistentDisk resource) is bound to a GKE node.</p> <p>Besides the regular migration commands(e.g. cordon old nodes, drain old nodes that run cassandra pods), are there any extra commands I should run to make sure that "data is not lost" during this pod migration?</p>
<p>Short answer: the GCEPersistentDisks will move with your Cassandra pods.</p> <p>So when a pod moves from one node to another, its GCEPersistentDisk is detached from the current node, and when the pod is scheduled on another node Kubernetes re-attaches the disk to that new node.</p> <p>In the event that your current node is shut down abruptly, the GCEPersistentDisk will be released (detached) and eventually Kubernetes will schedule your workload on a new node and re-attach the disk. This assumes that you have not selected the cloud provider option that deletes the volume when the instance is terminated.</p> <p>In summary, it should all work seamlessly given that Kubernetes talks to the cloud providers. More information <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider" rel="nofollow noreferrer">here</a>. Note that this is being deprecated in favor of the <a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="nofollow noreferrer">Kubernetes Cloud Controller Manager</a>.</p>
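<p>A minimal sketch of the migration itself (node names and the <code>app=cassandra</code> label are placeholders): the drain evicts the Cassandra pods, and the StatefulSet controller reschedules them onto the new pool, re-attaching each pod's disk.</p> <pre><code># stop new pods from landing on the old nodes
kubectl cordon &lt;old-node-name&gt;

# evict the pods from the old node
kubectl drain &lt;old-node-name&gt; --ignore-daemonsets --delete-local-data

# verify the pods came back on the new pool with their data
kubectl get pods -l app=cassandra -o wide
</code></pre>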
<p>When I try to run <code>kubectl get namespaces</code> or <code>kubectl get nodes</code> commands etc. I am getting this error (I am using Azure Kubernetes Service). I would appreciate any help with this issue.</p> <pre><code>Error from server (Forbidden): namespaces is forbidden: User "XXXXXXXXXXXXXX" cannot list namespaces at the cluster scope </code></pre>
<p>This is an authorization module error message: as explained in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes docs</a> and, for example, <a href="https://speakerdeck.com/mhausenblas/kubernetes-security-from-image-hygiene-to-network-policies?slide=29" rel="nofollow noreferrer">shown here</a> you need to have the permissions to carry out a certain action (in this case: list namespaces and nodes).</p> <p>Since you didn't share more background on how this cluster is set up or who is responsible for it, I can only suggest to either reach out to the cluster admin to give you the rights, or, if you've set up the cluster yourself, have a look at the <a href="https://learn.microsoft.com/en-us/azure/aks/aad-integration" rel="nofollow noreferrer">AD integration</a>, which may be of use here.</p>
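<p>If you manage the cluster yourself and just need to grant that user read access, a hedged sketch of a ClusterRoleBinding would look like this (the user name below is a placeholder for the identity shown in the error message):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-user-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view        # built-in read-only role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "XXXXXXXXXXXXXX"   # the user from the error message
</code></pre> <p>Applying it of course requires credentials that already have admin rights on the cluster (for AKS, for example, obtained via <code>az aks get-credentials --admin</code>).</p>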
<p>I have a Kubernetes cluster with 2 containers running in a single workload. </p> <p>One container is running a Flask server application and the other is running an angular application. I need to have this pod set up in a way where both applications can communicate with each other within the localhost. I need the angular container which is exposed in port 4200 to communicate with the unexposed flask server which is on port 5000. I am stuck when it comes to having these containers communicate within the pod. </p>
<p>Rather than localhost (<code>127.0.0.1</code>), make sure your flask server is reachable via <em>any</em> local IP, that is, <code>app.run(host='0.0.0.0')</code>.</p>
<p>I have a multizone (3 zones) GKE cluster (1.10.7-gke.1) of 6 nodes and want each zone to have at least one replica of my application.</p> <p>So I've tried preferred podAntiAffinity:</p> <pre><code>  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: component
              operator: In
              values:
              - app
          topologyKey: failure-domain.beta.kubernetes.io/zone
</code></pre> <p>Everything looks good the first time I install (scale from 1 to 3 replicas) my application. After the next rolling update, everything gets mixed up and I can have 3 copies of my application in one zone, since additional replicas are created and the old ones are terminated.</p> <p>When I try the same term with <em>requiredDuringSchedulingIgnoredDuringExecution</em>, everything looks good but rolling updates don't work because new replicas can't be scheduled (pods with "component" = "app" already exist in each zone).</p> <p>How do I configure my deployment to be sure I have a replica in each availability zone?</p> <p>UPDATED:</p> <p>My workaround now is to have hard anti-affinity and deny additional pods (more than 3) during the rolling update:</p> <pre><code>  replicaCount: 3

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: component
            operator: In
            values:
            - app
        topologyKey: failure-domain.beta.kubernetes.io/zone

  deploymentStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
</code></pre>
<p>I don't think the Kubernetes scheduler provides a way to guarantee pods in all availability zones. I believe it's a best-effort approach when it comes to that and there may be some limitations.</p> <p>I've opened an <a href="https://github.com/kubernetes/kubernetes/issues/68981" rel="nofollow noreferrer">issue</a> to check whether this can be supported either through NodeAffinity or PodAffiity/PodAntiAffinity.</p>
<p>I have a security pod that needs to run everywhere including master. I do not want, however, master to run any other (non kubernetes) pods. I know I can taint master node, and I know I can setup affinity for a pod. Yet (unless I am misunderstanding something) that isn't quite what I want.</p> <p>What I want is to setup affinity in a way that this security pod runs on every single node including master as a part of same daemon set. It is important that I only have a single definition due to how this security pod gets deployed. </p> <p>Can this be done?</p> <p>I am running Kubernetes 1.8</p>
<p>I think this is more or less a duplicate of this <a href="https://stackoverflow.com/questions/48495263/scheduler-is-not-scheduling-pod-for-daemonset-in-master-node/48496629#48496629">question</a>.</p> <p>What you need is a combination of two features (see the sketch below):</p> <ul> <li>A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a> will allow you to schedule a Pod to run on every node.</li> <li><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Tolerations</a> in the DaemonSet Pods will allow this workload to run even on the node which has the master taint.</li> </ul> <p>That way your security pods will run everywhere, even on the tainted master, because they can tolerate the taint. I think there is an example directly on the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset" rel="nofollow noreferrer">DaemonSet website</a>.</p> <p>But other pods without this toleration will not be scheduled on the master because they do not tolerate the taint.</p>
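<p>A minimal sketch of the relevant part of such a DaemonSet (for Kubernetes 1.8, where the master taint is <code>node-role.kubernetes.io/master:NoSchedule</code>; the image name is a placeholder):</p> <pre><code>apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: security-agent
spec:
  selector:
    matchLabels:
      name: security-agent
  template:
    metadata:
      labels:
        name: security-agent
    spec:
      # allow scheduling onto the tainted master node(s)
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: security-agent
        image: my-registry/security-agent:latest
</code></pre>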
<p>I have 2 different namespace: <strong>prod-01</strong> and <strong>prod-02</strong>, What I want to do is build a copy of my <strong>prod-01</strong> into <strong>prod-02</strong> namespace keeping the same names for its pvcs, so that I don't have to maintain 2 sets of charts for each different namespace. </p> <p>Here's how it looks like:</p> <pre><code>$ kubectl get ns | grep prod prod-01 Active 178d prod-02 Active 8d $ </code></pre> <p>As shown below, I have 2 pairs of pv's for each namespace:</p> <pre><code>$ kubectl get pv -o wide | grep prod prod-01-db-pv 50Gi RWX Retain Bound prod-01/app-db 164d prod-01-nosql-db-pv 5Gi RWX Retain Bound prod-01/app-nosql-db 149d prod-02-db-pv 50Gi RWX Retain Available prod-02/app-db 41m prod-02-nosql-db-pv 5Gi RWX Retain Available prod-02/app-nosql-db 19m $ </code></pre> <p>Here's how pvc's for <strong>prod-01</strong> are being displayed:</p> <pre><code>$ kubectl get pvc --namespace=prod-01 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE app-db Bound prod-01-db-pv 50Gi RWX 164d app-nosql-db Bound prod-01-nosql-db-pv 5Gi RWX 149d $ </code></pre> <p>And here's what I'm trying to accomplish:</p> <pre><code>$ kubectl get pvc --namespace=prod-02 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE app-db Pending prod-02-db-pv 0 2m app-nosql-db Pending prod-02-nosql-db-pv 0 24m $ </code></pre> <p>As shown above, the pvc's for <strong>prod-02</strong> namespace are stuck forever with <strong>Pending</strong> status. </p> <p>Them when I change the pvc names on <strong>prod-02</strong> to anything different, they bond as expected.</p> <p>Which leads me to think I can't use the same names on pvc's even when they are in different namespaces and pointing to different pv's ... However, when searching the documentation, I could not find any evidence to this issue, and was wondering if I could be missing something. 
</p> <p>So to put it simple, can I have multiple pvc's with the same name accross different namespaces (considering that they are using different pv's)?</p> <hr> <p><strong>Update:</strong> result of <code>kubectl describe pvc</code> </p> <pre><code>$ kubectl describe pvc app-db --namespace=prod-02 Name: app-db Namespace: prod-02 StorageClass: Status: Pending Volume: prod-02-db-pv Labels: &lt;none&gt; Annotations: &lt;none&gt; Finalizers: [] Capacity: 0 Access Modes: Events: &lt;none&gt; $ </code></pre> <p>Also here's the output of <code>kubectl get pvc</code>:</p> <pre><code>$ kubectl get pvc app-db --namespace=prod-02 -o yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: creationTimestamp: 2018-09-22T22:00:34Z name: app-db namespace: prod-02 resourceVersion: "43027607" selfLink: /api/v1/namespaces/prod-02/persistentvolumeclaims/app-db uid: ee81b951-beb2-11e8-b972-005056bbded7 spec: accessModes: - ReadWriteMany resources: requests: storage: 50Gi volumeName: prod-02-db-pv status: phase: Pending $ </code></pre> <p>And here are some details about the pv too:</p> <pre><code>$ kubectl get pv prod-02-db-pv --namespace=prod-02 -o yaml apiVersion: v1 kind: PersistentVolume metadata: creationTimestamp: 2018-09-22T21:15:19Z name: prod-02-db-pv resourceVersion: "43020171" selfLink: /api/v1/persistentvolumes/prod-02-db-pv uid: 9c07d7a6-beac-11e8-b972-005056bbded7 spec: accessModes: - ReadWriteMany capacity: storage: 50Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: app-db namespace: prod-02 nfs: path: /nfs_server/prod02/db server: 158.87.52.35 persistentVolumeReclaimPolicy: Retain status: phase: Available $ </code></pre> <hr> <p>Thanks in advance for the help!</p>
<p>PVC is a namespaced resource, but PV is not. I.e., you can have multiple PVCs with the same name across different namespaces.</p> <p>There might be an issue in the way you have configured the PV.</p> <p>Can you make sure you are using the right IP address in the PV configuration, just under the <code>nfs</code> attribute:</p> <pre><code>nfs:
  path: /nfs_server/prod01/db
  server: 158.87.52.35
</code></pre>
<p>I need to provide a list of hostnames under <code>kubernetes.io/hostname</code> nodeSelector. I tried giving comma separated list but kubernetes throws some validation error when I do <code>kubectl apply -f &lt;yaml file&gt;</code>.</p> <p>I also tried giving multiple hostnames by providing repeated sets of <code>kubernetes.io/hostname</code> , but kubernetes takes only the last value. Could not find any valid example when I browsed.</p> <p>Thanks</p> <pre><code>The DaemonSet "ssh-daemonset" is invalid: spec.template.spec.nodeSelector: Invalid value: "1z.t3.x52.y7,l0.1k7.1p0.2q0": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?') </code></pre>
<p>To constrain DaemonSet pods to run on specific nodes only, see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes</a></p> <p>Using the same label, label the nodes on which you want the DaemonSet pods to run (<code>kubectl label nodes &lt;node-name&gt; &lt;label-key&gt;=&lt;label-value&gt;</code>) and specify that label in <code>spec.template.spec.nodeSelector</code>.</p> <p>Example: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector</a></p>
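<p>A small sketch, assuming you label the chosen nodes with an arbitrary key/value such as <code>ssh-daemon=true</code>:</p> <pre><code>kubectl label nodes node-a ssh-daemon=true
kubectl label nodes node-b ssh-daemon=true
</code></pre> <p>and then in the DaemonSet manifest:</p> <pre><code>spec:
  template:
    spec:
      nodeSelector:
        ssh-daemon: "true"
</code></pre>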
<p>Note: My question is very similar to <br> <a href="https://stackoverflow.com/questions/25494761/file-location-if-target-path-not-specified-with-scp-command/25496316">File location if target path not specified with scp command</a> <br>But that question is asking what the target of this is:</p> <pre><code>scp -r /localdir/ root@ubuntu </code></pre> <p>My question is asking what the target of this is:</p> <pre><code>scp -r /localdir/ root@ubuntu: </code></pre> <p>Also if anyone's curious I found this syntax on the following webpage: <br> <a href="https://kubernetes.io/docs/setup/independent/high-availability/#copy-required-files-to-the-correct-locations" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/#copy-required-files-to-the-correct-locations</a></p> <p>It's a Bash Script:</p> <pre><code>USER=ubuntu # customizable CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" for host in ${CONTROL_PLANE_IPS}; do scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: scp /etc/kubernetes/pki/ca.key "${USER}"@$host: scp /etc/kubernetes/pki/sa.key "${USER}"@$host: scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: done </code></pre>
<p>In my test, I ran the command as follows to copy <code>test</code> folder from host <code>heron01</code> to host <code>heron02</code>:</p> <pre><code>yitian@heron01:~$ scp -r test/ yitian@heron02: </code></pre> <p>And the result shows that the <code>test</code> folder can be found at:</p> <pre><code>yitian@heron02:~$ pwd /home/yitian </code></pre> <p>So, the result of your question is: The target directory is the <code>home directory</code> of the target user you are using in the scp command. In my example, the target directory is: <code>/home/yitian</code> in host <code>heron02</code>.</p>
<p>I'm trying to run private stellar blockchain infrastructure on kubernetes (not to join to existing public or test stellar network) but my question can be generalized to the scenario of running any peer to peer services on kubernetes. Therefore, I will try to explain my problem in a generalized way (hoping that it can yield answers that are applicable to any similar topology running on the kubernetes).</p> <p>Here is the scenario:</p> <p>I want to run 3 peers (in kube terms: pods) which are able to communicate with each other in a decentralized way but the problem lies in the fact that each of these peers has a slightly different configuration. In general, configuration looks like this (this is an example for pod0):</p> <pre><code>NETWORK_PASSPHRASE="my private network" NODE_SEED=&lt;pod0_private_key&gt; KNOWN_PEERS=[ "stellar-0", "stellar-1", "stellar-2"] [QUORUM_SET] VALIDATORS=[ &lt;pod1_pub_key&gt;, &lt;pod2_pub_key&gt; ] </code></pre> <p>The problem lies in the fact that each pod would have different:</p> <ul> <li>NODE_SEED</li> <li>VALIDATORS list</li> </ul> <p>My first idea (before realizing this problem) was to:</p> <ul> <li>Create config map for this configuration</li> <li>Create statefulset (3 replicas) with headless service to enable stable reachability between pods (stellar-0, stellar-1, stellar-2...etc.)</li> </ul> <p>Another idea (after realizing this problem) would be to:</p> <ul> <li>Create separate config maps for each peer</li> <li>Create statefulset (1 replica) with service</li> </ul> <p>I'm wondering if there is any better solution/pattern that could be utilized for this purpose rather than running completely same services with slightly different configuration as separate entities (statefulset, deployment..) with their separate service through which these peers would be available (but this kind of defeats a purpose of using kubernetes high level resources which enable replication)?</p> <p>Thanks</p>
<p>So you can have a single <code>ConfigMap</code> with multiple keys each one uniquely meant for one of your replicas. You can also deploy your pods using a <code>StatefulSet</code> with an <code>initContainer</code> to setup the configs. This is just an example (You'll have to tweak it to your needs):</p> <p>ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: stellar labels: app: stellar data: stellar0.cnf: | NETWORK_PASSPHRASE="my private network" NODE_SEED=&lt;stellar0_private_key&gt; KNOWN_PEERS=[ "stellar-0", "stellar-1", "stellar-2"] [QUORUM_SET] VALIDATORS=[ &lt;stellar1_pub_key&gt;, &lt;stellar2_pub_key&gt; ] stellar1.cnf: | NETWORK_PASSPHRASE="my private network" NODE_SEED=&lt;stellar1_private_key&gt; KNOWN_PEERS=[ "stellar-0", "stellar-1", "stellar-2"] [QUORUM_SET] VALIDATORS=[ &lt;stellar0_pub_key&gt;, &lt;stellar2_pub_key&gt; ] stellar2.cnf: | NETWORK_PASSPHRASE="my private network" NODE_SEED=&lt;stellar2_private_key&gt; KNOWN_PEERS=[ "stellar-0", "stellar-1", "stellar-2"] [QUORUM_SET] VALIDATORS=[ &lt;stellar0_pub_key&gt;, &lt;stellar1_pub_key&gt; ] </code></pre> <p>StatefulSet:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: stellarblockchain spec: selector: matchLabels: app: stellar serviceName: stellar replicas: 3 template: metadata: labels: app: stellar spec: initContainers: - name: init-stellar image: stellar-image:version command: - bash - "-c" - | set -ex # Generate config from pod ordinal index. [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} # Copy appropriate conf.d files from config-map to emptyDir. if [[ $ordinal -eq 0 ]]; then cp /mnt/config-map/stellar0.cnf /mnt/conf.d/ elif [[ $ordinal -eq 1 ]]; then cp /mnt/config-map/stellar1.cnf /mnt/conf.d/ else cp /mnt/config-map/stellar2.cnf /mnt/conf.d/ fi volumeMounts: - name: conf mountPath: /mnt/conf.d - name: config-map mountPath: /mnt/config-map containers: - name: stellar image: stellar-image:version ports: - name: stellar containerPort: &lt;whatever port you need here&gt; volumeMounts: - name: conf mountPath: /etc/stellar/conf.d &lt;== wherever your config for stellar needs to be volumes: - name: conf emptyDir: {} - name: config-map configMap: name: stellar </code></pre> <p>Service (if you need to expose it)</p> <pre><code>apiVersion: v1 kind: Service metadata: name: stellar labels: app: stellar spec: ports: - name: stellar port: &lt;stellar-port&gt; clusterIP: None selector: app: stellar </code></pre> <p>Hope it helps!</p>
<p>I am trying to enable a deployment in the gateway namespace to send metrics to an external service at <code>engine-report.apollodata.com</code></p> <p>I have written the following service entry and virtual service rules, as per the <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">Istio documentation</a> yet no traffic is able to access the endpoint.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: apollo-engine-ext namespace: {{ .Release.Namespace }} labels: chart: {{ .Chart.Name }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: hosts: - '*.apollodata.com' ports: - number: 80 name: http protocol: HTTP - number: 443 name: https protocol: HTTPS resolution: DNS location: MESH_EXTERNAL --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: apollo-engine-ext namespace: {{ .Release.Namespace }} labels: chart: {{ .Chart.Name }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: hosts: - '*.apollodata.com' tls: - match: - port: 443 sni_hosts: - '*.apollodata.com' route: - destination: host: '*.apollodata.com' port: number: 443 weight: 100 </code></pre> <p>What might be causing this issue</p>
<p>Using the following configuration, the issue was resolved. I'm still unsure as to what was causing the issue. In clarification of some of the comments to my original question. Using the original configuration, it was possible to curl <a href="http://engine-report.apollodata.com" rel="nofollow noreferrer">http://engine-report.apollodata.com</a> and <a href="https://engine-report.apollodata.com" rel="nofollow noreferrer">https://engine-report.apollodata.com</a> endpoints, but as intended through the istio service mesh external endpoints like <a href="http://www.google.com" rel="nofollow noreferrer">http://www.google.com</a> were unavailable.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: "apollo-engine-ext" namespace: {{ .Release.Namespace }} labels: chart: {{ .Chart.Name }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: hosts: - apollodata.com - engine-report.apollodata.com - apollographql.com location: MESH_EXTERNAL ports: - number: 443 name: https protocol: HTTPS - number: 80 name: http protocol: HTTP resolution: DNS --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: "apollo-engine-ext" namespace: {{ .Release.Namespace }} labels: chart: {{ .Chart.Name }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: hosts: - apollodata.com - engine-report.apollodata.com - apollographql.com tls: - match: - port: 443 sniHosts: - apollodata.com route: - destination: host: apollodata.com - match: - port: 443 sniHosts: - engine-report.apollodata.com route: - destination: host: engine-report.apollodata.com - match: - port: 443 sniHosts: - apollographql.com route: - destination: host: apollographql.com </code></pre>
<p>I trying to setup kubernetes on my local environment using docker. I've built the necessary docker image with this Dockerfile:</p> <pre><code>FROM node:9.11.1 WORKDIR /usr/src/app COPY package.json /usr/src/app/ RUN npm install COPY . /usr/src/app/ EXPOSE 3002 CMD [ "npm", "start" ] </code></pre> <p>I then pushed this image to my private docker repo on the google cloud repository. Now i can confirm that i can push and pull the image from the cloud repo, so i then built a docker-compose using that repo as the image source file:</p> <pre><code>version: '3' services: redis: image: redis ports: - 6379:6379 networks: - my-network mongodb: image: mongo ports: - 27017:27017 volumes: - ./db:/data/db networks: - my-network my-test-app: tty: true image: gcr.io/my-test-app ports: - 3002:3002 depends_on: - redis - mongodb networks: - my-network volumes: - .:/usr/src/app environment: - REDIS_PORT=6379 - REDIS_HOST=redis - DB_URI=mongodb://mongodb:27017/ command: bash -c "ls &amp;&amp; npm install" networks: my-network: driver: bridge volumes: mongodb: </code></pre> <p>Then finally building off of that i use Kubernetes kompose to generate my deployment file which looks like this: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.12.0 () creationTimestamp: null labels: io.kompose.service: my-test-app name: my-test-app spec: replicas: 1 strategy: type: Recreate template: metadata: creationTimestamp: null labels: io.kompose.service: my-test-app spec: imagePullSecrets: - gcr-json-key containers: - args: - bash - -c - ls &amp;&amp; npm install env: - name: DB_URI value: mongodb://mongodb:27017/ - name: REDIS_HOST value: redis - name: REDIS_PORT value: "6379" image: gcr.io/my-test-app name: my-test-app ports: - containerPort: 3002 resources: {} tty: true volumeMounts: - mountPath: /usr/src/app name: my-test-app-claim0 restartPolicy: Always volumes: - name: my-test-app-claim0 persistentVolumeClaim: claimName: my-test-app-claim0 status: {} </code></pre> <p>As you can see in the args section of my yaml i am listing all the files in my directory <code>/usr/src/app</code> However it logs do the only file that appears is a single <code>package-lock.json</code> file which causes the following install command to fail. This error however does not occur when i use docker-compose to launch my app so for some reason only my kubernetes is having trouble. Also i can confirm that my image does contain a <code>package.json</code> file by running an interactive shell. I'm unsure on how to proceed so any help would be appreciated!</p>
<p>You are mounting something else over <code>/usr/src/app</code> where <code>package.json</code> is supposed to be located. That hides all the files in there. Remove the <code>volumes</code> and <code>volumeMounts</code> sections.</p>
<p>I can log into console from one of the pods (on kubernetes) and run this command:</p> <pre><code>RAILS_ENV=production bin/delayed_job start </code></pre> <p>The jobs are run correctly doing that. However when the pods are deleted or restarted, the jobs stop running. </p> <p>I also tried adding the command above in an initializer file (eg config/initializers/delayed_jobs_runner.rb), but I get a recursive loop when starting the app. </p> <p>Another thing I tried to do is create a new file called <code>my-jobs.yaml</code> with this</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: job spec: template: spec: containers: - name: job image: gcr.io/test-app-123/somename:latest command: ["/bin/bash", "-l", "-c"] args: ["RAILS_ENV=production bundle exec rake jobs:work"] restartPolicy: Never backoffLimit: 4 </code></pre> <p>I then do <code>kubectl apply -f my-jobs.yaml</code>, but the jobs aren't running.</p> <p>Any idea how to run delayed_jobs correctly in kubernetes?</p> <p>EDIT: Here's my Dockerfile:</p> <pre><code>FROM gcr.io/google_appengine/ruby # Install 2.5.1 if not already preinstalled by the base image RUN cd /rbenv/plugins/ruby-build &amp;&amp; \ git pull &amp;&amp; \ rbenv install -s 2.5.1 &amp;&amp; \ rbenv global 2.5.1 &amp;&amp; \ gem install -q --no-rdoc --no-ri bundler # --version 1.11.2 ENV RBENV_VERSION 2.5.1 # Copy the application files. COPY . /app/ # Install required gems. RUN bundle install --deployment &amp;&amp; rbenv rehash # Set environment variables. ENV RACK_ENV=production \ RAILS_ENV=production \ RAILS_SERVE_STATIC_FILES=true # Run asset pipeline. RUN bundle exec rake assets:precompile CMD ["setup.sh"] # Reset entrypoint to override base image. ENTRYPOINT ["/bin/bash"] ################### setup.sh ############################ cd /app &amp;&amp; RAILS_ENV=production bundle exec script/delayed_job -n 2 start bundle exec foreman start --formation "$FORMATION" ######################################################### </code></pre>
<p>Running multiple processes in one Docker container is problematic, as you cannot easily observe the lifetime of a particular process: every container needs one "main" process, and when it exits, the container exits too.</p> <p>Looking at GitHub (<a href="https://github.com/collectiveidea/delayed_job#user-content-running-jobs" rel="nofollow noreferrer">https://github.com/collectiveidea/delayed_job#user-content-running-jobs</a>), I would strongly suggest changing your start command slightly so that it runs in the foreground. Right now, when you start a Kubernetes Job with daemons, the Job ends immediately: the Docker container's lifetime is directly tied to the lifetime of its "main" foreground process, so when you only run a background process, the main process exits immediately and so does your container.</p> <p>Change your command to:</p> <pre><code>RAILS_ENV=production script/delayed_job run
</code></pre> <p>This starts the worker in the foreground, so your container won't exit. Please also note that Kubernetes Jobs are not intended for such open-ended tasks (a Job should have a start and an end), so I would suggest using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> for that.</p>
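<p>A hedged sketch of running the worker as its own long-lived workload, here as a Deployment (which manages a ReplicaSet for you), instead of a Job; the replica count is a placeholder and it assumes the foreground command above:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: delayed-job-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: delayed-job-worker
  template:
    metadata:
      labels:
        app: delayed-job-worker
    spec:
      containers:
      - name: worker
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        # run in the foreground so the container stays alive
        args: ["RAILS_ENV=production bundle exec script/delayed_job run"]
</code></pre>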
<p>I would like to perform a system test using multiple services. During test I would like to prevent deployments from happening for a given namespace in order to have stable environment. Is it possible to have such lock?</p> <p>My use case is CI/CD flow:</p> <p>I will have bitbucket / gitlab pipeline, deploy a service. Then I want to perform system tests among multiple services inside a namespace. During this system test I want to prevent deployments in order to make tests more stable.</p>
<p>You can use <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> with a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Role</a> limited to a single namespace.</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: user namespace: mynamespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: user-full-access namespace: mynamespace rules: - apiGroups: ["", "extensions", "apps"] resources: ["*"] verbs: ["*"] - apiGroups: ["batch"] resources: - jobs - cronjobs verbs: ["*"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: mynamespace-user-view namespace: mynamespace subjects: - kind: ServiceAccount name: user namespace: mynamespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: user-full-access </code></pre>
<p>I am using helm in my CI to upgrade deployments with newer versions of charts.</p> <pre><code>helm upgrade --wait --install . </code></pre> <p><strong>Expected behavior:</strong> the --wait flag should wait for the readinessProbe defined in the new chart. See also: <a href="https://docs.helm.sh/helm/#helm-upgrade" rel="nofollow noreferrer">https://docs.helm.sh/helm/#helm-upgrade</a></p> <p>However, it does not wait and simply deploys the new chart, even if the readinessProbe is failing.</p> <p>Which results in a failed new chart and a killed old chart.</p> <p>It has nothing to do with <a href="https://github.com/helm/helm/issues/3173" rel="nofollow noreferrer">https://github.com/helm/helm/issues/3173</a>, as the readinessProbe is properly executed and fails. But Helm does simply not wait for this.</p> <p>Did anybody face issues like that? Thanks!</p>
<p>The issue was fixed by setting the following Kubernetes YAML description inside the Deployment resource:</p> <pre><code>  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
</code></pre> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes Deployment Documentation:</a></p> <blockquote> <p>Note: The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 1 and .spec.replicas to 1 so if you haven’t cared about setting those parameters, your Deployment can have 100% unavailability by default! This will be fixed in Kubernetes in a future version.</p> </blockquote>
<p>I am using helm to install istio-1.0.0 version with <code>--set grafana.enabled=true</code>.</p> <p>To access the grafana dashboard, I have to do port forwarding using <code>kubectl</code> command. It works okay. However, i want to access it using public ip, hence I am using this gateway yaml file</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: agung-ns spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 15031 name: http-grafana protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana-global-route namespace: agung-ns spec: hosts: - "grafana.domain" gateways: - grafana-gateway - mesh http: - route: - destination: host: "grafana.istio-system" port: number: 3000 weight: 100 </code></pre> <p>I tried to <code>curl</code> it, but it returns 404 status, which means something wrong with routing logic and/or my configuration above.</p> <pre><code>curl -HHost:grafana.domain http://&lt;my-istioingressgateway-publicip&gt;:15031 -I HTTP/1.1 503 Service Unavailable date: Tue, 14 Aug 2018 13:04:27 GMT server: envoy transfer-encoding: chunked </code></pre> <p>Any idea?</p>
<p>I think the problem is that you refer to a service in a different namespace. You need to use the FQDN (grafana.istio-system.svc.cluster.local).</p> <p>If you need Istio, Grafana, Prometheus and Jaeger integrated, exposed through a gateway and with security enabled, you can check the project I am working on: <a href="https://github.com/kyma-project/kyma" rel="nofollow noreferrer">https://github.com/kyma-project/kyma</a></p>
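<p>In other words, the route in your VirtualService would look something like this (a sketch based on the manifest in the question):</p> <pre><code>  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
      weight: 100
</code></pre>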
<p>After installation three node cluster <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p> <p>I've entered into ApiServer container using</p> <pre><code>sudo docker exec --user root -it 1ea54fd4cd683 /bin/sh </code></pre> <p>and executed</p> <pre><code>kube-apiserver --enable-admission-plugins=DefaultStorageClass </code></pre> <p>but it writes</p> <blockquote> <p>I0923 14:37:58.270848 90 server.go:703] external host was not specified, using 192.168.41.29 W0923 14:37:58.271386 90 authentication.go:378] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer Error: --etcd-servers must be specified</p> </blockquote> <p>Could smb say why it happens and how fix it?</p>
<p>First of all, I'm pretty sure that's not the recommended way to add flags to the apiserver.<br> Those changes will not persist.<br> You probably want to edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> on the master, kill the kube-apiserver pod, and wait for it to respawn.</p> <p>I'm guessing here, but try adding <code>--anonymous-auth=false</code>?</p>
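<p>A rough sketch of what the relevant part of the static pod manifest looks like with a kubeadm setup (the existing flags are abbreviated; you append your admission plugins to the command list rather than running the binary by hand inside the container):</p> <pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379   # already set by kubeadm
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
    # ... other existing flags stay as they are
</code></pre>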
<p>I have a scenario where I have a Kubernetes cluster of three nodes. Now there is a service(say <code>testService</code>) running on the cluster(any one of the three nodes). I would like to know if there is any way through which I can get the IP of the node on which the service is running by using the kubernetes API.</p> <p>I looked through the documentation and all I could find was APIs to get the status/names of the services.</p> <p><strong>Edit</strong>: There is a UI page which targets the service. The problem is that on a new deployment the service can be scheduled on either of the three nodes.</p>
<p>You should be able to do it with the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes Java API</a>. </p> <p>You can basically get it with a <em>describe service </em> call and picking up the endpoints of your service. An example endpoint output would look like something this:</p> <pre><code>192.168.1.1:8888,192.168.2.2:8888 </code></pre>
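<p>If you just want to see what the API returns before coding it (or need it from the command line), a quick sketch with kubectl, assuming the pods behind <code>testService</code> carry the label <code>app=testService</code>:</p> <pre><code># node IP(s) of the pods backing the service
kubectl get pods -l app=testService -o jsonpath='{.items[*].status.hostIP}'

# or look at the endpoints and which nodes the pods run on
kubectl get endpoints testService
kubectl get pods -l app=testService -o wide
</code></pre>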
<p>Given the following json:</p> <pre><code> apiVersion: v1 kind: ConfigMap metadata: name: {{ template "something.server.fullname" . }} data: {{ (.Files.Glob "dashboards/*.json").AsConfig | indent 2 }} {{ (.Files.Glob "datasources/*.json").AsConfig | indent 2 }} </code></pre> <p>How can I check if the folder exists and is not empty?</p> <p>Currently, if the folder is missing or doesn't have any files, helm install will abort with this message:</p> <pre><code>Error: YAML parse error on domething/charts/grafana/templates/dashboards-configmap.yaml: error converting YAML to JSON: yaml: line 6821: could not find expected ':' </code></pre>
<p>You can pull your Globs out to variables, and then move everything within <code>if</code> blocks, e.g.:</p> <pre><code>{{- $globdash := .Files.Glob "dashboards/*.json" }} {{ if $globdash }} {{- $globdata := .Files.Glob "datasources/*.json" }} {{ if $globdata }} apiVersion: v1 kind: ConfigMap metadata: name: {{ template "something.server.fullname" . }} data: {{ ($globdash).AsConfig | indent 2 }} {{ ($globdata).AsConfig | indent 2 }} {{ end }} {{ end }} </code></pre>
<p>I have a kubernetes cluster and I am getting cgroup out of memory. I have resources declared in the YAML but I have no idea which apache2 needs more memory. It gives me a process id but how do I tell which pod is being killed?</p> <p>Thank you. </p>
<p>It is what it is. Your Apache process is using more memory than you are allowing in your pod/container definition.</p> <p>Reasons why it could need more memory:</p> <ol> <li>You have an increase in traffic and sessions being handled.</li> <li>Apache is forking more processes within the container and running into the memory limits.</li> <li>Apache is not reaping some lingering sessions because of a config issue.</li> </ol> <p>If you are running Docker for containers (which most people do), you can SSH into the node in your cluster and run:</p> <pre><code>docker ps -a
</code></pre> <p>You should see the <code>Exited</code> container where your Apache process(es) was running. Then you can run:</p> <pre><code>docker logs &lt;container-id&gt;
</code></pre> <p>And you might get details on what Apache was doing before it was killed. If you only see minimal info, I recommend increasing the verbosity of your Apache logs.</p> <p>Hope it helps.</p>
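<p>To map the killed process back to a pod without leaving kubectl, a rough sketch (pod and namespace names are placeholders): containers that were OOM-killed report it in their last state, so you can scan for it.</p> <pre><code># list pods and look for ones with restarts
kubectl get pods --all-namespaces

# the terminated container will show Reason: OOMKilled under "Last State"
kubectl describe pod &lt;pod-name&gt; -n &lt;namespace&gt;
</code></pre>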
<p>Is there any way to configure <code>nodeSelector</code> at the namespace level?</p> <p>I want to run a workload only on certain nodes for this namespace.</p>
<p>To achieve this you can use <code>PodNodeSelector</code> admission controller.</p> <p>First, you need to enable it in your <strong>kubernetes-apiserver</strong>:</p> <ul> <li>Edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>: <ul> <li>find <code>--enable-admission-plugins=</code></li> <li>add <code>PodNodeSelector</code> parameter</li> </ul> </li> </ul> <p>Now, you can specify <code>scheduler.alpha.kubernetes.io/node-selector</code> option in annotations for your namespace, example:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: your-namespace annotations: scheduler.alpha.kubernetes.io/node-selector: env=test spec: {} status: {} </code></pre> <p>After these steps, all the pods created in this namespace will have this section automatically added:</p> <pre><code>nodeSelector env: test </code></pre> <p>More information about the <code>PodNodeSelector</code> you can find in the official Kubernetes documentation: <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector</a></p> <hr /> <h1>kubeadm users</h1> <p>If you deployed your cluster using kubeadm and if you want to make this configuration persistent, you have to update your kubeadm config file:</p> <pre><code>kubectl edit cm -n kube-system kubeadm-config </code></pre> <p>specify <code>extraArgs</code> with custom values under <code>apiServer</code> section:</p> <pre><code>apiServer: extraArgs: enable-admission-plugins: NodeRestriction,PodNodeSelector </code></pre> <p>then update your kube-apiserver static manifest on all control-plane nodes:</p> <pre><code># Kubernetes 1.22 and forward: kubectl get configmap -n kube-system kubeadm-config -o=jsonpath=&quot;{.data}&quot; &gt; kubeadm-config.yaml # Before Kubernetes 1.22: # &quot;kubeadmin config view&quot; was deprecated in 1.19 and removed in 1.22 # Reference: https://github.com/kubernetes/kubeadm/issues/2203 kubeadm config view &gt; kubeadm-config.yaml # Update the manifest with the file generated by any of the above lines kubeadm init phase control-plane apiserver --config kubeadm-config.yaml </code></pre> <hr /> <h1>kubespray users</h1> <p>You can just use <code>kube_apiserver_enable_admission_plugins</code> variable for your api-server configuration variables:</p> <pre><code> kube_apiserver_enable_admission_plugins: - PodNodeSelector </code></pre>
<p>I am new to Kubernetes monitoring and self-healing. I wonder what kind of self-healing Kubernetes can provide, such as restart failed pod if necessary? anything else? what Kubernetes cannot provide.</p> <p>As for Kubernetes monitoring, what kind of metrics we need to monitor in order to operate on Kubernetes instead of Kubernetes self-healing?</p> <p>Any ideas welcomed. Thanks.</p>
<p>I'm afraid your question goes beyond what is possible to answer here on stackoverflow. </p> <p>Yes, k8s is able to restart/reschedule pods. If you are already a bit familiar with key concepts, maybe <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod-lifecycle</a> is a point to start. If you have little knowledge about k8s basics, I suggest you study <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSets</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> etc. because Monitoring in k8s relies heavily on them!</p> <p>You did not say what kind of metrics you are interested in. For system metrics like io/cpu time etc. you can start with e.g. <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">Kubernetes Metrics Server.</a> If you want to get insights into k8s metrics (how many services, uptime, etc.) have a look at <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> which is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.</p> <p>Have fun with k8s</p> <p>Cheers</p>
<p>I have a Kubernetes cluster on Google Kubernetes Engine. I want to assign a static IP for all outgoing traffic of a cluster.</p> <p>I already have reserved external IPs but I can't assign them to a cluster with the GCP console.</p> <p>I found a solution to do it with the cli : <a href="https://stackoverflow.com/questions/41133755/static-outgoing-ip-in-kubernetes">Static outgoing IP in Kubernetes</a></p> <p>but it targets the VM and I will need to set it each time I deploy. So it's not targeting the cluster.</p> <p>Can anybody provide any pointers? Thanks.</p>
<p>GKE currently doesn't have an option to create the cluster with all your nodes using a reserved public IP. All you get in advanced networking options is something like this:</p> <p><a href="https://i.stack.imgur.com/Dj4P9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dj4P9.png" alt="advanced network k8s"></a></p> <p>You will have to use the <code>gcloud</code> API that you mentioned which should be easy to put in a script.</p> <p>Or you can also use the UI by editing the instance(s) and going into 'Network Interfaces' like this:</p> <p><a href="https://i.stack.imgur.com/ZevJe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZevJe.png" alt="network"></a></p>
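<p>If you go the CLI route, a hedged sketch of what assigning a reserved address to an existing node VM looks like (instance, zone, and address are placeholders; note this has to be repeated or scripted whenever the node pool recreates VMs, which is exactly the limitation you mention):</p> <pre><code># remove the ephemeral external IP of the node VM
gcloud compute instances delete-access-config &lt;instance-name&gt; \
  --zone &lt;zone&gt; --access-config-name "external-nat"

# attach the reserved static IP instead
gcloud compute instances add-access-config &lt;instance-name&gt; \
  --zone &lt;zone&gt; --access-config-name "external-nat" --address &lt;reserved-ip&gt;
</code></pre> <p>For a cluster-wide egress IP that survives node churn, routing outbound traffic through a NAT gateway instance that owns the static address is the more common pattern.</p>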
<p>I have created Cassandra stateful/headless cluster on AWS and it's working fine inside the cluster. The only problem is I am not able to access it from outside cluster. I tried most of the things on the Kubernetes documentation or StackOverflow references, but still not able to solve it.</p> <p>I have a working security group from AWS. Here are my service and statefulset yaml files.</p> <pre> apiVersion: v1 kind: Service metadata: name: cassandra spec: externalTrafficPolicy: Local ports: - nodePort: 30000 port: 30000 protocol: TCP targetPort: 9042 selector: app: cassandra type: NodePort </pre> <pre> apiVersion: "apps/v1" kind: StatefulSet metadata: name: cassandra spec: serviceName: cassandra replicas: 2 selector: matchLabels: app: cassandra template: metadata: labels: role: cassandra app: cassandra spec: terminationGracePeriodSeconds: 10 containers: - env: - name: MAX_HEAP_SIZE value: 1024M - name: HEAP_NEWSIZE value: 1024M - name: CASSANDRA_SEEDS value: "cassandra-0.cassandra.default.svc.cluster.local" - name: CASSANDRA_CLUSTER_NAME value: "SetuCassandra" - name: CASSANDRA_DC value: "DC1-SetuCassandra" - name: CASSANDRA_RACK value: "Rack1-SetuCassandra" - name: CASSANDRA_SEED_PROVIDER value: io.k8s.cassandra.KubernetesSeedProvider - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: library/cassandra:3.11 name: cassandra volumeMounts: - mountPath: /cassandra-storage name: cassandra-storage ports: - containerPort: 9042 name: cql volumeClaimTemplates: - metadata: name: cassandra-storage spec: accessModes: - ReadWriteOnce resources: requests: storage: 320Gi </pre> <p>I appreciate any help on this.</p>
<p>There are not enough details on the AWS security groups. But my guess is that your security group(s) in your cluster are not allowing traffic from the security groups or IP addresses from the other cluster. Something like this:</p> <p><a href="https://i.stack.imgur.com/DuimM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DuimM.png" alt="enter image description here"></a></p>
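<p>For example, a hedged sketch of opening the NodePort from a specific source (the security group ID and CIDR are placeholders for your worker nodes' security group and the client's address range; the port matches the NodePort in your service):</p> <pre><code>aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000 \
  --cidr 203.0.113.0/24
</code></pre>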
<p>For faster builds I want to reuse my gradle-user-home in gitlab-runner. To achieve this I created a pvc and want to mount it on my build-containers. Can I do this with the default helm-chart?</p>
<p>So if you see <a href="https://gitlab.com/charts/gitlab-runner/blob/master/templates/deployment.yaml" rel="nofollow noreferrer">here</a>. There are no PVCs (Physical Volume Claims) or PVs (Physical Volumes) defined on the default Helm Chart.</p> <p>You'd have to have something like this:</p> <pre><code> volumeMounts: - name: gradle-user-home &lt;== add this mountPath: /path/to/home subPath: home - name: runner-secrets mountPath: /secrets - name: etc-gitlab-runner mountPath: /home/gitlab-runner/.gitlab-runner - name: scripts mountPath: /scripts {{- if .Values.certsSecretName }} - name: custom-certs readOnly: true mountPath: /home/gitlab-runner/.gitlab-runner/certs/ {{- end }} resources: {{ toYaml .Values.resources | indent 10 }} volumes: - name: runner-secrets emptyDir: medium: "Memory" - name: etc-gitlab-runner emptyDir: medium: "Memory" - name: init-runner-secrets projected: sources: {{- if .Values.runners.cache }} - secret: name: {{ template "gitlab-runner.cache.secret" . }} items: - key: accesskey path: s3-access-key - key: secretkey path: s3-secret-key {{- end }} - secret: name: {{ template "gitlab-runner.secret" . }} items: - key: runner-registration-token path: runner-registration-token - key: runner-token path: runner-token {{- if .Values.certsSecretName }} - name: custom-certs secret: secretName: {{ .Values.certsSecretName }} {{- end }} - name: scripts configMap: name: {{ template "gitlab-runner.fullname" . }} volumeClaimTemplates: &lt;== Add this - metadata: name: gradle-user-home spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 10Gi </code></pre> <p>Note the this is only a rough draft template. You'll have to convert to the appropriate Helm template if you want to make it a Helm chart. (This happens to be a Golang template)</p> <p>You don't have to modify the template if you don't want to. You can directly modify your deployment if you'd like with: <code>kubectl -n &lt;namespace&gt; edit deployment &lt;gitlab-runner-deployment&gt;</code></p> <p>Also if you are using PVCs (Physical Volume Claims), you'll have to define a default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Kubernetes Storage Class</a></p> <p>Hope it helps!</p>
<p>When I <a href="https://i.stack.imgur.com/ZkdAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZkdAP.png" alt="deploy"></a> a Docker image to <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Kubernetes</a> </p> <p>I get this permission denied error <a href="https://i.stack.imgur.com/qV0YH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qV0YH.png" alt="enter image description here"></a></p> <p>I'm using <a href="https://cloud.google.com/container-registry/" rel="nofollow noreferrer">Google Container-Registry/</a> to store docker image that was build with <a href="https://cloud.google.com/cloud-build/" rel="nofollow noreferrer">cloud build</a> <a href="https://i.stack.imgur.com/nBnJv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nBnJv.png" alt="enter image description here"></a></p>
<p>This totally looks like <code>backend</code> doesn't have 'execute' permissions in the container.</p> <p>On your Dockerfile just try adding it execute permissions. Something like this:</p> <pre><code>RUN chmod 755 /backend </code></pre> <p>Hope it helps.</p>
<p>I have installed a K8S cluster on laptop using Kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running to see the documentation as shown below.</p> <pre><code>praveensripati@praveen-ubuntu:~$ kubectl explain pods Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host </code></pre> <p>Any workaround for this?</p>
<p>So the rather sobering news is that AFAIK there's no out-of-the-box way to do it, though you could totally write a <code>kubectl</code> plugin (it has become rather trivial now in 1.12). But for now, the best I can offer is the following:</p> <pre><code># figure out which endpoint kubectl uses to retrieve docs: $ kubectl -v9 explain pods # from above I learn that in my case it's apparently # https://192.168.64.11:8443/openapi/v2 so let's curl that: $ curl -k https://192.168.64.11:8443/openapi/v2 &gt; resources-docs.json </code></pre> <p>From here you can, for example, <a href="http://andrew.gibiansky.com/blog/command-line/jq-primer/" rel="nofollow noreferrer">use jq</a> to query for the descriptions. It's not as nice as a proper explain, but it is a good enough workaround until someone writes an offline docs query kubectl plugin.</p>
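<p>For example, something like this could work against the dump above (the exact definition key is an assumption; it depends on the API versions your cluster exposes):</p> <pre><code># list the available definition keys to find the resource you care about
jq -r '.definitions | keys[]' resources-docs.json | grep -i 'core.v1.Pod'

# then pull the description for that key (key name assumed from a typical dump)
jq -r '.definitions["io.k8s.api.core.v1.Pod"].description' resources-docs.json
</code></pre>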
<p>I have installed a K8S cluster on laptop using Kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running to see the documentation as shown below.</p> <pre><code>praveensripati@praveen-ubuntu:~$ kubectl explain pods Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host </code></pre> <p>Any workaround for this?</p>
<p>See &quot;<a href="https://blog.heptio.com/kubectl-explain-heptioprotip-ee883992a243" rel="nofollow noreferrer">kubectl explain — #HeptioProTip</a>&quot;</p> <blockquote> <p>Behind the scenes, <strong><code>kubectl</code> just made an API request to my Kubernetes cluster</strong>, grabbed the current Swagger documentation of the API version running in the cluster, and output the documentation and object types.</p> </blockquote> <p>Try <code>kubectl help</code> as an offline alternative, but that won't be as complete (it is limited to kubectl itself).</p>
<p>I am developing database environment on Minikube. I'd like to persist MySQL data by PersistentVolume function of Kubernetes. However, an error will occur when starting MySQL server and will not start up, if hostPath specified /var/lib/mysql(MySQL data directory).</p> <p>kubernetes-config.yaml</p> <pre><code> apiVersion: v1 kind: PersistentVolume metadata: name: nfs001-pv labels: app: nfs001-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain mountOptions: - hard nfs: path: /share/mydata server: 192.168.99.1 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi storageClassName: "" selector: matchLabels: app: nfs001-pv --- apiVersion: apps/v1beta2 kind: Deployment metadata: name: sk-app labels: app: sk-app spec: replicas: 1 selector: matchLabels: app: sk-app template: metadata: labels: app: sk-app spec: containers: - name: sk-app image: mysql:5.7 imagePullPolicy: IfNotPresent env: - name: MYSQL_ROOT_PASSWORD value: password ports: - containerPort: 3306 volumeMounts: - mountPath: /var/lib/mysql name: mydata volumes: - name: mydata persistentVolumeClaim: claimName: nfs-claim --- apiVersion: v1 kind: Service metadata: name: sk-app labels: app: sk-app spec: type: NodePort ports: - port: 3306 nodePort: 30001 selector: app: sk-app </code></pre> <p>How can I launch it?</p> <p>-- Postscript --</p> <p>When I tried "kubectl logs", I got following error message.</p> <p><code>chown: changing ownership of '/var/lib/mysql/': Operation not permitted</code></p> <p>When I tried "kubectl describe xxx", I got following results.</p> <p>kubectl describe pv:</p> <pre><code>Name: nfs001-pv Labels: app=nfs001-pv Annotations: pv.kubernetes.io/bound-by-controller=yes StorageClass: Status: Bound Claim: default/nfs-claim Reclaim Policy: Retain Access Modes: RWX Capacity: 1Gi Message: Source: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.99.1 Path: /share/mydata ReadOnly: false Events: &lt;none&gt; </code></pre> <p>kubectl describe pvc:</p> <pre><code>Name: nfs-claim Namespace: default StorageClass: Status: Bound Volume: nfs001-pv Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bind-completed=yes pv.kubernetes.io/bound-by-controller=yes Capacity: 1Gi Access Modes: RWX Events: &lt;none&gt; </code></pre> <p>kubectl describe deployment:</p> <pre><code>Name: sk-app Namespace: default CreationTimestamp: Tue, 25 Sep 2018 14:22:34 +0900 Labels: app=sk-app Annotations: deployment.kubernetes.io/revision=1 Selector: app=sk-app Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=sk-app Containers: sk-app: Image: mysql:5.7 Port: 3306/TCP Environment: MYSQL_ROOT_PASSWORD: password Mounts: /var/lib/mysql from mydata (rw) Volumes: mydata: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nfs-claim ReadOnly: false Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: &lt;none&gt; NewReplicaSet: sk-app-d58dddfb (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 23s deployment-controller Scaled up replica set sk-app-d58dddfb to 1 </code></pre>
<p>Volumes look good, so it looks like you just have a permission issue on the root of your nfs volume that gets mounted as <code>/var/lib/mysql</code> on your container.</p> <p>You can:</p> <p>1) Mount that nfs volume using nfs mount commands and run:</p> <pre><code>chmod 777 . # This gives rwx to anybody, so be mindful. </code></pre> <p>2) Run an initContainer in your deployment, similar to this:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: sk-app labels: app: sk-app spec: replicas: 1 selector: matchLabels: app: sk-app template: metadata: labels: app: sk-app spec: initContainers: - name: init-mysql image: busybox command: ['sh', '-c', 'chmod 777 /var/lib/mysql'] volumeMounts: - mountPath: /var/lib/mysql name: mydata containers: - name: sk-app image: mysql:5.7 imagePullPolicy: IfNotPresent env: - name: MYSQL_ROOT_PASSWORD value: password ports: - containerPort: 3306 volumeMounts: - mountPath: /var/lib/mysql name: mydata volumes: - name: mydata persistentVolumeClaim: claimName: nfs-claim </code></pre>
<p>I'm trying to setup a Prometheus monitoring on a dedicated Linux VM to get Kubernetes cluster metrics.</p> <p>Whatever I try, I'm always blocked with the nice message Unauthorized...</p> <p>This is my scrape config for pods: </p> <pre><code>- job_name: 'Kubernetes' scheme: https tls_config: insecure_skip_verify: true bearer_token: %VeryLongLine% kubernetes_sd_configs: - api_server: https://%ClusterName%.hcp.westeurope.azmk8s.io tls_config: insecure_skip_verify: true role: node namespaces: names: [default] </code></pre> <p>The token is the correct one because a Invoke-WebRequest from PowerShell works just fine.</p> <p>Does anyone have an idea? </p> <p>Thank you</p>
<pre><code>- job_name: 'Kubernetes' scheme: https tls_config: insecure_skip_verify: true kubernetes_sd_configs: - api_server: https://%ClusterName%.hcp.westeurope.azmk8s.io bearer_token: %VeryLongLine% role: node namespaces: names: [default] </code></pre> <p>The bearer token goes in the right place here: inside the <code>kubernetes_sd_configs</code> entry (which authenticates the service discovery calls to the API server), not at the job level.</p>
<p>I'm looking for an easy way to find what version of my images I have deployed in my kubernetes environment. </p> <p>The closest thing I can find to what I want is <code>helm get values &lt;namespace&gt; -a</code> <br>(but this gets values and dumps all (computed) values)</p> <p>Is there an easier/clean way to get a list of images and versions deployed??</p> <p>Thanks in advance</p>
<p>You can use <code>kubectl</code> to get all images form all pods running in the namespace/cluster. See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/" rel="nofollow noreferrer">List All Container Images Running in a Cluster</a>.</p> <p>For one namespace:</p> <pre><code>kubectl get pods -n &lt;namespace&gt; -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c </code></pre> <p>For the whole cluster:</p> <pre><code>kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c </code></pre>
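<p>If you'd rather see which image each deployment is running (instead of a flat list of images), a <code>custom-columns</code> query is another option, for example:</p> <pre><code>kubectl get deployments --all-namespaces \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,IMAGES:.spec.template.spec.containers[*].image
</code></pre>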
<p>So we are using <code>RDD</code> and do a <code>flatMap</code> on a set of data. Then each element we are transforming with a map operation.</p> <pre class="lang-scala prettyprint-override"><code>val elementsRDD: RDD[Element] = ... val result = elements.map(processData); </code></pre> <p>On a fixed set of elements we see that on each run, if some executors die during a map operation, Spark spins up new executors but doesn't provide the data to the operation, and as a result we are losing data. Our expectation is that Spark should provide the data or at least re-run the stage from scratch.</p> <p>We use the newest Kubernetes feature of Spark 2.4 (which is still in development).</p> <p>UPDATE: The documentation says this is an impossible situation, but our logging from executors shows that we lose different pieces of data during processing on a fixed set of data. Moreover, if we do not kill any executors during a run, we do not lose any data. </p>
<p>No. The data processed by the dead executor will be lost, but when the driver notices the failure of an executor, it redistributes the dead executor's jobs among the executors that are still alive. Spark won't mark the application as successful until every job has been successfully completed.</p> <p>You can read some notes on Spark's high availability <a href="https://gist.github.com/aseigneurin/3af6b228490a8deab519c6aea2c209bc" rel="nofollow noreferrer">here</a></p> <h2>Update:</h2> <p>As @user6910411 pointed out, there is a case in which you could lose data: if the data source being used by your Spark application is not persistent or it provides temporal data. In those cases, the modification of the data used by the Spark application may lead to loss of data. </p>
<p>I am deploying some NodeJS code into Kubernetes. It used to be that you needed to run either PM2 or the NodeJS cluster module in order to take full advantage of multi-core hardware. </p> <p>Now that we have Kubernetes, it is unclear if one must use one or the other, to get the full benefit of multiple cores.</p> <p>Should a person specify the number of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units" rel="noreferrer">CPU units</a> in their pod YAML configuration? </p> <p>Or is there simply no need to account for multiple cores with NodeJS in Kubernetes?</p>
<p>You'll achieve utilization of multiple cores either way; the difference being that with the nodejs cluster module approach, you'd have to "request" more resources from Kubernetes (i.e., multiple cores), which might be more difficult for Kubernetes to schedule than a few different containers requesting one core (or less...) each (which it can, in turn, schedule on multiple nodes, and not necessarily look for one node with enough available cores).</p>
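<p>As a rough sketch of that trade-off (the CPU values and replica counts below are placeholders, not recommendations), the two approaches mainly differ in what each pod asks the scheduler for:</p> <pre><code># Option A (hypothetical): one pod running the Node.js cluster module,
# so it must request several cores on a single node
resources:
  requests:
    cpu: "4"

# Option B (hypothetical): scale out with single-core pods instead
# (replicas: 4 in the Deployment), which the scheduler can spread across nodes
resources:
  requests:
    cpu: "1"
</code></pre>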
<p>How can I issue a <code>kubectl run</code> that pulls an environment var from a k8s secret configmap?</p> <p>Currently I have:</p> <p><code>kubectl run oneoff -i --rm NAME --image=IMAGE --env SECRET=foo </code></p>
<p>Look into the <code>overrides</code> flag of the <code>run</code> command... it reads as:</p> <blockquote> <p>An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run</a></p> <p>So in your case I guess it would be something like:</p> <pre><code>kubectl run oneoff -i --rm --overrides=' { "spec": { "containers": [ { "name": "oneoff", "image": "IMAGE", "env": [ { "name": "ENV_NAME", "valueFrom": { "secretKeyRef": { "name": "SECRET_NAME", "key": "SECRET_KEY" } } } ] } ] } } ' --image=IMAGE </code></pre>
<p>I'm new to Kubernetes.</p> <p>I have a question about Kubernetes deployment replicas. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">kubernetes official web site</a></p> <p>If I set replicas on a deployment yaml, does it mean the replica pods all work at the same time on the cluster? Or does only one pod offer services while the other replicated pods act like waiting players and replace the working pod when it shuts down accidentally? </p> <p>For example, if I declare 3 replicas on a deployment yaml, does only 1 pod really work on the cluster while the other 2 pods wait to replace the working pod when it shuts down accidentally?</p>
<p>The moment you create the Deployment resource, all of the replicas defined in the deployment YAML are brought up and running, and they all serve the functions exposed by the pod at the same time.</p>
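<p>For example (the deployment name and label below are placeholders), scaling to 3 replicas immediately gives you 3 pods in the <code>Running</code> state, and a Service selecting them load-balances across all of them:</p> <pre><code>kubectl scale deployment myapp --replicas=3   # hypothetical deployment name
kubectl get pods -l app=myapp                 # expect 3 pods, all Running
</code></pre>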
<p>I am writing a YAML file to use Kubernetes and I wondering how to pass variables to args field.</p> <p>I need to do something like this :</p> <pre><code>args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11'] </code></pre> <p>But I don't want to hard code those values for <code>--arg1</code> and <code>--arg2</code>, instead it should be like,</p> <pre><code>args: ['--arg1='$HOST1,'--arg2='$HOST2] </code></pre> <p>How should I do this?</p>
<p>You have two options that are quite different and really depend on your use-case, but both are worth knowing:</p> <p>1) <a href="https://github.com/helm/helm" rel="nofollow noreferrer">Helm</a> would allow you to create <em>templates</em> of Kubernetes definitions, that can use variables.<br> Variables are supplied when you install the Helm chart, and before the resulting manifests are deployed to Kubernetes.<br> You can change the variables later on, but what it does is regenerate the YAML and re-deploy "static" versions of the result (template+variables=YAML that's sent to Kubernetes)<br></p> <p>2) <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMaps</a> allow you to separate a configuration from the pod manifest, and share this configuration across several pods/deployments.<br> You can later reference the ConfigMap from your pod/deployment manifests.</p> <p>Hope this helps!</p>
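<p>A minimal sketch of the Helm route (the file layout and value names here are assumptions, not a complete chart): the template reads the hosts from <code>values.yaml</code>, and you can override them with <code>--set</code> or a custom values file when you install or upgrade the chart.</p> <pre><code># values.yaml (hypothetical defaults)
host1: http://12.12.12.12:8080
host2: 11.11.11.11

# templates/deployment.yaml (container fragment)
      containers:
      - name: myapp                    # placeholder name
        image: myimage:latest          # placeholder image
        args:
        - "--arg1={{ .Values.host1 }}"
        - "--arg2={{ .Values.host2 }}"
</code></pre>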
<p>Kubernetes kubeflow scaling is not working.</p> <p>I have installed kubernetes, kubectl and ksonnet as suggested.</p> <p>I have created the kubeflow namespace and deployed the kubeflow core components.</p> <p>Then, I have created the ksonnet app and namespace and the h2o3-scaling component.</p> <p>Then, I have tried to run some examples. Everything is working fine. </p> <p>I have followed all the steps provided by this url <a href="https://github.com/h2oai/h2o-kubeflow" rel="nofollow noreferrer">https://github.com/h2oai/h2o-kubeflow</a></p> <p>But horizontal scaling is not working as expected.</p> <p>Thanks in advance. Can anyone please help solve this problem?</p>
<p>I'm not sure about H2O3, but Kubeflow itself doesn't really support autoscaling. There are a few components:</p> <ol> <li>Tf-operator - it doesn't run training itself, it runs pods that run training, and you specify the number of replicas in the TFJob definition, so no autoscaling.</li> <li>Tf-serving - potentially could do autoscaling, but we don't do it right now; again, you specify replicas.</li> <li>Jupyterhub - same as tf-operator: it spawns pods and doesn't autoscale.</li> </ol> <p>What is the exact use case you're aiming for?</p>
<p>We are using Kubernetes 1.10.1 </p> <p>I'm having a statefulset which has a single volumeClaim attached.</p> <p>I would like to add an additional volumeClaim since resizing of an existing PV is still in beta and also not available in Kubernetes 1.10.</p> <p>My change would basically just add an additional PVC (and of course also mount it which is not mentioned here):</p> <pre><code>*** 1,10 **** --- 1,19 ---- volumeClaimTemplates: - metadata: name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi + - metadata: + name: data2 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi </code></pre> <p>As result I get an error:</p> <p><code>Error: UPGRADE FAILED: StatefulSet.apps "my-app" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.</code></p> <p>Shouldn't at least adding additional PVCs be supported as long the existing ones are not being touched? Am I missing something here? I haven't found anything related to this in the documentation.</p>
<p>I don't know if there's any rationale on the design but the only reason why you would not want to update the storage template on a StatefulSet would be the stability of your application overall.</p> <p>In any case, I agree that this should be allowed since you are adding an extra storage, I would argue that resizing your volumes should be allowed too when it's available. Opened <a href="https://github.com/kubernetes/kubernetes/issues/69041" rel="nofollow noreferrer">this</a> feature request.</p>
<p>When I deploy a docker image to <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Kubernetes Engine</a>,</p> <p><a href="https://i.stack.imgur.com/LtwaW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtwaW.png" alt="enter image description here"></a> the pods can't be initialize, pods are just making a simple get request to <a href="https://jsonplaceholder.typicode.com/" rel="nofollow noreferrer">https://jsonplaceholder.typicode.com/</a> <a href="https://i.stack.imgur.com/qtxIt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtxIt.png" alt="code"></a></p> <p>I get an error message <code>certificate signed by unknown authority</code></p> <p><a href="https://i.stack.imgur.com/jeX7D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeX7D.png" alt="enter image description here"></a></p>
<p>From the comments in your question, I expect you are running up against the common problem of Alpine base images not being populated with the <code>ca-certificates</code> package, which contains a number of root CA certificates to anchor your root of trust.</p> <p>Add the following command to your <code>Dockerfile</code> to ensure these are installed in the produced image:</p> <pre><code>RUN apk update &amp;&amp; apk add ca-certificates &amp;&amp; rm -rf /var/cache/apk/* </code></pre> <p>(we run multiple operations in a single <code>RUN</code> step to <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run" rel="noreferrer">avoid introducing unnecessary bloat</a> in the layers of your final image).</p> <p>Base images which include the CA certificates package are also available in the container registry (although with this statement I make no claims as to their suitability or provenance).</p>
<p>I am new to configuring Ingress rules for my Kubernetes cluster.</p> <p>My Kubernetes cluster is deployed on Bare Metal. No cloud.</p> <p>I followed this <a href="https://kubernetes.github.io/ingress-nginx/deploy/#generic-deployment" rel="nofollow noreferrer">link</a> to set up my nginx-controller with RBAC in my cluster.</p> <p>This is what I have deployed :</p> <pre><code># kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/default-http-backend-7c5bc89cc9-ks6kd 1/1 Running 0 2h pod/nginx-ingress-controller-5b6864749-8xbhf 1/1 Running 0 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-http-backend ClusterIP 10.233.15.56 &lt;none&gt; 80/TCP 2h service/ingress-nginx NodePort 10.233.38.84 &lt;none&gt; 80:31118/TCP,443:32003/TCP 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-http-backend 1 1 1 1 2h deployment.apps/nginx-ingress-controller 1 1 1 1 2h NAME DESIRED CURRENT READY AGE replicaset.apps/default-http-backend-7c5bc89cc9 1 1 1 2h replicaset.apps/nginx-ingress-controller-5b6864749 1 1 1 2h </code></pre> <p>Given that I have my setup, I want to access my grafana dashboard using a URL.</p> <p>My grafana setup is working perfectly fine.</p> <pre><code># kubectl get all -n default NAME READY STATUS RESTARTS AGE pod/grafana-67c6585fbd-4jl7p 1/1 Running 0 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grafana NodePort 10.233.5.111 &lt;none&gt; 3000:32093/TCP 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/grafana 1 1 1 1 2h NAME DESIRED CURRENT READY AGE replicaset.apps/grafana-67c6585fbd 1 1 1 2h </code></pre> <p>I can access the dashboard using <code>http://10.27.239.145:32093</code> which is the IP of one of my K8S worker nodes.</p> <p>Now rather than accessing via IP:NodePort, I want to access via URL e.g. <code>grafana.test.mydomain.com</code></p> <p>So the ingress rule that I configured in my default namespace is :</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx creationTimestamp: 2018-09-25T20:32:24Z generation: 5 name: grafana namespace: default resourceVersion: "28485" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins-tls uid: 1c51cece-c102-11e8-bf0f-02000a1bef39 spec: rules: - host: grafana.test.mydomain.com http: paths: - backend: serviceName: grafana servicePort: 3000 path: / </code></pre> <p>On my local laptop from where I am testing, Ive added to my <code>/etc/hosts</code> the following entry :</p> <pre><code>10.27.239.145 grafana.test.mydomain.com </code></pre> <p>And in my browser, I am trying to access <code>http://grafana.test.mydomain.com</code> but I only get <code>This site can’t be reached grafana.test.mydomain.com refused to connect.</code></p> <p>I have a strong feeling that I am missing out on something but can't figure it out.</p> <p>I changed the NodePort to ClusterIP but no luck.</p> <p>I know that my ingress controller is working since everytime I make a change to my ingress rules, I get logs from my ingress controller.</p> <pre><code>I0925 21:00:19.041440 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grafana", UID:"1c51cece-c102-11e8-bf0f-02000a1bef39", APIVersion:"extensions/v1beta1", ResourceVersion:"28485", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grafana I0925 21:00:19.041732 9 controller.go:171] Configuration changes detected, backend reload required. I0925 21:00:19.216044 9 controller.go:187] Backend successfully reloaded. 
I0925 21:00:19.217645 9 controller.go:204] Dynamic reconfiguration succeeded. </code></pre> <p>Any help will strongly be appreciated regarding what might I have missed.</p>
<p>From what I see, you need to set <code>grafana.test.mydomain.com</code> to point to <code>10.233.38.84</code>. </p> <p>Basically, your nginx controller service is directing the traffic to your ingress and then your ingress forwards it to the backend on the nodePort (this is implicit in the ingress). It works for me, but I'm using an AWS ELB, I basically set <code>grafana.test.mydomain.com</code> to point to <code>aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com</code></p> <pre><code>$ kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/default-http-backend-6586bc58b6-snxbv 1/1 Running 0 1h pod/grafana-5b969bb7f9-tsv5k 1/1 Running 0 52m pod/nginx-ingress-controller-6bd7c597cb-lfwcf 1/1 Running 0 1h pod/prometheus-server-5dbf9f4fc9-mnwn4 1/1 Running 0 53m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-http-backend ClusterIP 10.x.x.x &lt;none&gt; 80/TCP 1h service/grafana NodePort 10.x.x.x &lt;none&gt; 3000:30073/TCP 52m service/ingress-nginx LoadBalancer 10.x.x.x aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com 80:30276/TCP,443:32011/TCP 1h service/prometheus-server NodePort 10.x.x.x &lt;none&gt; 9090:32419/TCP 53m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-http-backend 1 1 1 1 1h deployment.apps/grafana 1 1 1 1 52m deployment.apps/nginx-ingress-controller 1 1 1 1 1h deployment.apps/prometheus-server 1 1 1 1 53m NAME DESIRED CURRENT READY AGE replicaset.apps/default-http-backend-6586bc58b6 1 1 1 1h replicaset.apps/grafana-5b969bb7f9 1 1 1 52m replicaset.apps/nginx-ingress-controller-6bd7c597cb 1 1 1 1h replicaset.apps/prometheus-server-5dbf9f4fc9 1 1 1 53m $ kubectl describe ingress grafana-ingress -n ingress-nginx Name: grafana-ingress Namespace: ingress-nginx Address: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- grafana.test.mydomain.com / grafana:3000 (&lt;none&gt;) Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"grafana-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000},"path":"/"}]}}]}} nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 40m nginx-ingress-controller Ingress ingress-nginx/grafana-ingress Normal UPDATE 22m (x2 over 40m) nginx-ingress-controller Ingress ingress-nginx/grafana-ingress </code></pre>
<p>I am new to configuring Ingress rules for my Kubernetes cluster.</p> <p>My Kubernetes cluster is deployed on Bare Metal. No cloud.</p> <p>I followed this <a href="https://kubernetes.github.io/ingress-nginx/deploy/#generic-deployment" rel="nofollow noreferrer">link</a> to set up my nginx-controller with RBAC in my cluster.</p> <p>This is what I have deployed :</p> <pre><code># kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/default-http-backend-7c5bc89cc9-ks6kd 1/1 Running 0 2h pod/nginx-ingress-controller-5b6864749-8xbhf 1/1 Running 0 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-http-backend ClusterIP 10.233.15.56 &lt;none&gt; 80/TCP 2h service/ingress-nginx NodePort 10.233.38.84 &lt;none&gt; 80:31118/TCP,443:32003/TCP 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-http-backend 1 1 1 1 2h deployment.apps/nginx-ingress-controller 1 1 1 1 2h NAME DESIRED CURRENT READY AGE replicaset.apps/default-http-backend-7c5bc89cc9 1 1 1 2h replicaset.apps/nginx-ingress-controller-5b6864749 1 1 1 2h </code></pre> <p>Given that I have my setup, I want to access my grafana dashboard using a URL.</p> <p>My grafana setup is working perfectly fine.</p> <pre><code># kubectl get all -n default NAME READY STATUS RESTARTS AGE pod/grafana-67c6585fbd-4jl7p 1/1 Running 0 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grafana NodePort 10.233.5.111 &lt;none&gt; 3000:32093/TCP 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/grafana 1 1 1 1 2h NAME DESIRED CURRENT READY AGE replicaset.apps/grafana-67c6585fbd 1 1 1 2h </code></pre> <p>I can access the dashboard using <code>http://10.27.239.145:32093</code> which is the IP of one of my K8S worker nodes.</p> <p>Now rather than accessing via IP:NodePort, I want to access via URL e.g. <code>grafana.test.mydomain.com</code></p> <p>So the ingress rule that I configured in my default namespace is :</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx creationTimestamp: 2018-09-25T20:32:24Z generation: 5 name: grafana namespace: default resourceVersion: "28485" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins-tls uid: 1c51cece-c102-11e8-bf0f-02000a1bef39 spec: rules: - host: grafana.test.mydomain.com http: paths: - backend: serviceName: grafana servicePort: 3000 path: / </code></pre> <p>On my local laptop from where I am testing, Ive added to my <code>/etc/hosts</code> the following entry :</p> <pre><code>10.27.239.145 grafana.test.mydomain.com </code></pre> <p>And in my browser, I am trying to access <code>http://grafana.test.mydomain.com</code> but I only get <code>This site can’t be reached grafana.test.mydomain.com refused to connect.</code></p> <p>I have a strong feeling that I am missing out on something but can't figure it out.</p> <p>I changed the NodePort to ClusterIP but no luck.</p> <p>I know that my ingress controller is working since everytime I make a change to my ingress rules, I get logs from my ingress controller.</p> <pre><code>I0925 21:00:19.041440 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grafana", UID:"1c51cece-c102-11e8-bf0f-02000a1bef39", APIVersion:"extensions/v1beta1", ResourceVersion:"28485", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grafana I0925 21:00:19.041732 9 controller.go:171] Configuration changes detected, backend reload required. I0925 21:00:19.216044 9 controller.go:187] Backend successfully reloaded. 
I0925 21:00:19.217645 9 controller.go:204] Dynamic reconfiguration succeeded. </code></pre> <p>Any help will strongly be appreciated regarding what might I have missed.</p>
<p>As far as I can see, you only have a NodePort Service on port 32093.</p> <p>Your NodePort publishes the port <code>3000</code> to <code>32093</code> to any external node address as you have already proven, but you configured Ingress to contact port <code>3000</code> on the <code>grafana</code> service.</p> <p>Either add the <code>targetPort</code>, <code>port</code> and <code>nodePort</code> to the service for your Grafana instance and point <code>targetPort</code> and <code>port</code> to 3000 and leave nodePort empty/set it to 32093. Then the ingress should work as you posted. Snippet:</p> <pre><code>nodePort: 32093 port: 3000 protocol: TCP targetPort: 3000 </code></pre> <p>Or try changing the <code>servicePort</code> in your ingress configuration from <code>3000</code> to <code>32093</code>. <strong>Warning</strong>: I never tested this. I do not know if Ingress supports that. According to the documentation it should, as NodePort is a superset of ClusterIP:</p> <blockquote> <p>NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting &lt;NodeIP&gt;:&lt;NodePort&gt;.</p> </blockquote> <p><strong>Edit</strong> Btw: <a href="http://grafana.test.mydomain.com:32093" rel="nofollow noreferrer">http://grafana.test.mydomain.com:32093</a> should then already work with your configuration (NodePort).</p>
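<p>Put together, the whole Grafana Service would look roughly like this (a sketch; the <code>app: grafana</code> selector is assumed from your deployment's labels):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana        # assumed pod label
  ports:
  - protocol: TCP
    port: 3000          # what the Ingress' servicePort 3000 talks to
    targetPort: 3000    # container port
    nodePort: 32093     # optional; must fall in the default 30000-32767 range
</code></pre>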
<p>I have kubernetes Cluster v1.10 Over Centos 7 </p> <p>I installed kubernetes by hard-way </p> <p>I have installed Kong ingress controller using helm </p> <pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com helm install stable/kong </code></pre> <p>and this output </p> <pre><code>NOTES: 1. Kong Admin can be accessed inside the cluster using: DNS=guiding-wombat-kong-admin.default.svc.cluster.local PORT=8444 To connect from outside the K8s cluster: HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}') PORT=$(kubectl get svc --namespace default guiding-wombat-kong-admin -o jsonpath='{.spec.ports[0].nodePort}') 2. Kong Proxy can be accessed inside the cluster using: DNS=guiding-wombat-kong-proxy.default.svc.cluster.local PORT=8443 To connect from outside the K8s cluster: HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}') PORT=$(kubectl get svc --namespace default guiding-wombat-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}') </code></pre> <p>and I deployed dummy file </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: http-svc spec: replicas: 1 selector: matchLabels: app: http-svc template: metadata: labels: app: http-svc spec: containers: - name: http-svc image: gcr.io/google_containers/echoserver:1.8 ports: - containerPort: 8080 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP --- apiVersion: v1 kind: Service metadata: name: http-svc labels: app: http-svc spec: type: NodePort ports: - port: 80 targetPort: 8080 protocol: TCP name: http selector: app: http-svc --- </code></pre> <p>and I deployed ingress.yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foo-bar spec: rules: - host: foo.bar http: paths: - path: / backend: serviceName: http-svc servicePort: 80 </code></pre> <p>and when I run : </p> <pre><code>kubectl get ing NAME HOSTS ADDRESS PORTS AGE foo-bar foo.bar 80 1m </code></pre> <p>and when I browse </p> <pre><code>https://node-IP:controller-admin {"next":null,"data":[]} </code></pre> <p>How can I troubleshoot this issue and find the solution? </p> <p>Thank you :D </p>
<p>I recommend installing it using <a href="https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/minikube.md" rel="nofollow noreferrer">this guide</a>, just without minikube.</p> <p>It works for me on AWS:</p> <pre><code>$ curl -H 'Host: foo.bar' http://35.162.32.30 Hostname: http-svc-66ffffc458-jkxsl Pod Information: node name: ip-x-x-x-x.us-west-2.compute.internal pod name: http-svc-66ffffc458-jkxsl pod namespace: default pod IP: 192.168.x.x Server values: server_version=nginx: 1.13.3 - lua: 10008 Request Information: client_address=192.168.x.x method=GET real path=/ query= request_version=1.1 request_uri=http://192.168.x.x:8080/ Request Headers: accept=*/* connection=keep-alive host=192.168.x.x:8080 user-agent=curl/7.58.0 x-forwarded-for=172.x.x.x x-forwarded-host=foo.bar x-forwarded-port=8000 x-forwarded-proto=http x-real-ip=172.x.x.x Request Body: -no body in request- </code></pre>
<p>I have had a running k8s cluster for 2 days and then it has started behaving strangely. </p> <p>My specific question is on kube-proxy. kube-proxy is not updating iptables.</p> <p>From kube-proxy logs, I can see it failed to connect to kubernetes-apiserver (in my case connection is kube-prxy --> Haproxy --> k8s API server). But the pod is shown as RUNNING.</p> <p>Question: I am expecting kube-proxy pod to be down if it is not able to register with apiserver for events. </p> <p>How do I achieve this behavior via liveness probes?</p> <p>Note: After killing the pod, kube-proxy works fine.</p> <h2>kube-proxy logs</h2> <pre><code>sudo docker logs 1de375c94fd4 -f W0910 15:18:22.091902 1 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. I0910 15:18:22.091962 1 feature_gate.go:226] feature gates: &amp;{{} map[]} time="2018-09-10T15:18:22Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-33-generic/modules.dep.bin'\nmodprobe: WARNING: Module ip_vs not found in directory /lib/modules/4.15.0-33-generic`, error: exit status 1" time="2018-09-10T15:18:22Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed." I0910 15:18:22.185086 1 server.go:409] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config. I0910 15:18:22.186885 1 server_others.go:140] Using iptables Proxier. W0910 15:18:22.438408 1 server.go:601] Failed to retrieve node info: nodes "$(node_name)" not found W0910 15:18:22.438494 1 proxier.go:306] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP I0910 15:18:22.438595 1 server_others.go:174] Tearing down inactive rules. 
I0910 15:18:22.861478 1 server.go:444] Version: v1.10.2 I0910 15:18:22.867003 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 2883584 I0910 15:18:22.867046 1 conntrack.go:52] Setting nf_conntrack_max to 2883584 I0910 15:18:22.867267 1 conntrack.go:83] Setting conntrack hashsize to 720896 I0910 15:18:22.893396 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0910 15:18:22.893505 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0910 15:18:22.893737 1 config.go:102] Starting endpoints config controller I0910 15:18:22.893749 1 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller I0910 15:18:22.893742 1 config.go:202] Starting service config controller I0910 15:18:22.893765 1 controller_utils.go:1019] Waiting for caches to sync for service config controller I0910 15:18:22.993904 1 controller_utils.go:1026] Caches are synced for endpoints config controller I0910 15:18:22.993921 1 controller_utils.go:1026] Caches are synced for service config controller W0910 16:13:28.276082 1 reflector.go:341] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: watch of *core.Endpoints ended with: very short watch: k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Unexpected watch close - watch lasted less than a second and no items received W0910 16:13:28.276083 1 reflector.go:341] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: watch of *core.Service ended with: very short watch: k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Unexpected watch close - watch lasted less than a second and no items received E0910 16:13:29.276678 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:29.276677 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:30.277201 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:30.278009 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:31.277723 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:31.278574 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: 
getsockopt: connection refused E0910 16:13:32.278197 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:32.279134 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:33.278684 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused E0910 16:13:33.279587 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused </code></pre>
<blockquote> <p>Question: I am expecting kube-proxy pod to be down if it is not able to register with apiserver for events.</p> </blockquote> <p>The kube-proxy is not supposed to go down. It listens for events on the kube-apiserver and performs whatever it needs to do when a change/deployment happens. The rationale that I can think of is that it may be caching information to keep the iptables on your system consistent. Kubernetes is designed in such a way that if your master/kube-apiserver or other master components go down, then traffic should still be flowing to the nodes with no downtime.</p> <blockquote> <p>How do I achieve this behavior via liveness probes?</p> </blockquote> <p>You can always add liveness probes to the <code>kube-proxy</code> DaemonSet, but it's not a recommended practice:</p> <pre><code>spec: containers: - command: - /usr/local/bin/kube-proxy - --config=/var/lib/kube-proxy/config.conf image: k8s.gcr.io/kube-proxy-amd64:v1.11.2 imagePullPolicy: IfNotPresent name: kube-proxy resources: {} securityContext: privileged: true livenessProbe: exec: command: - curl localhost:10256/healthz initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>Make sure that the healthz endpoint is enabled on kube-proxy itself (the <code>--healthz-port</code> flag, port 10256 by default); the probe above checks kube-proxy's own health rather than the apiserver.</p>
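<p>If <code>curl</code> isn't available inside the kube-proxy image, an <code>httpGet</code> probe against the same healthz port is an alternative sketch (assuming the default <code>0.0.0.0:10256</code> bind address):</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 10256
  initialDelaySeconds: 5
  periodSeconds: 5
</code></pre>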
<p>I want to try and configure a Filter in Envoy Proxy to block ingress and egress to the service based on some IP's, hostname, routing table, etc.</p> <p>I have searched for the documentation and see it's possible. But didn't get any examples, of its usage. </p> <p>Can someone point out some example of how It can be done?</p> <ul> <li><p>One configuration example is present on this page: <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto</a></p> <ul> <li>But this is for a service account, like in Kubernetes.</li> </ul></li> <li><p>The closest to what I want, I can see here in this page: <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics</a><br/></p> <ul> <li>Mentioned as, <em>"The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."</em> </li> <li>But it doesn't show how to do it.</li> </ul></li> </ul> <p>I have figured out something like this:</p> <pre><code>network_filters: - name: service-access config: rules: action: ALLOW policies: "service-access": principals: source_ip: 192.168.135.211 permissions: - destination_ip: 0.0.0.0 - destination_port: 443 </code></pre> <p><strong>But I am not able to apply this network filter. All the configurations give me configuration error.</strong></p>
<p>I would recommend <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a>. You can set up a <code>Rule</code> that will deny all traffic not originating from <code>192.168.0.1</code> IP. </p> <pre><code>apiVersion: "config.istio.io/v1alpha2" kind: denier metadata: name: denyreviewsv3handler spec: status: code: 7 message: Not allowed --- apiVersion: "config.istio.io/v1alpha2" kind: checknothing metadata: name: denyreviewsv3request spec: --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: denyreviewsv3 spec: match: source.ip != ip("192.168.0.1") actions: - handler: denyreviewsv3handler.denier instances: [ denyreviewsv3request.checknothing ] </code></pre> <p>You can match other attributes specified in <a href="https://istio.io/docs/reference/config/policy-and-telemetry/attribute-vocabulary/" rel="nofollow noreferrer">Attribute Vocabulary</a>, for example, block <code>curl</code> command <code>match: match(request.headers["user-agent"], "curl*")</code></p> <p>More about <a href="https://istio.io/docs/concepts/traffic-management/#ingress-and-egress" rel="nofollow noreferrer">Traffic Management</a> and <a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/" rel="nofollow noreferrer">Denials and White/Black Listing</a> can be found in <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> documentation. I can also recommend you this <a href="https://github.com/szihai/istio-workshop" rel="nofollow noreferrer">istio-workshop</a> published by <a href="https://github.com/szihai" rel="nofollow noreferrer">szihai</a>.</p>
<p>When a pod is being scheduled, I dynamically (and transparently) mount some shared library folders into the client containers through Kubernetes DevicePlugins. Now, inside the container I want to append these dynamically mounted shared library folders to the LD_LIBRARY_PATH environment variable.</p> <p>Inside the container: This can be achieved by running the following command in bash: "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/some/new/directory"</p> <p>From the Host: I can add the export command to the pod.yaml file under pod.spec.command and args. </p> <p>But I wanted to do it transparently, without the client/admin specifying it in the yaml file, using Kubernetes DevicePlugins or Extended-Schedulers.</p> <p>I am looking for a method/hack by which I can append/extend the LD_LIBRARY_PATH inside the container using only Kubernetes source code.</p> <p>Thanks.</p>
<p>You can just bake into your Dockerfile and create an image that you use in Kubernetes for that. No need to hack the Kubernetes source code.</p> <p>In your Dockerfile in some line:</p> <pre><code>ENV LD_LIBRARY_PATH /extra/path:$LD_LIBRARY_PATH </code></pre> <p>Then:</p> <pre><code>docker build -t &lt;your-image-tag&gt; . docker push &lt;your-image-tag&gt; </code></pre> <p>Then, update your pod or deployment definition and deploy to Kubernetes.</p> <p>Hope it helps.</p>
<p>I created a pod with <code>kubectl create -f pod.xml</code> and <code>kubectl apply -f pod.xml</code> using the below yaml and I don't see any difference, a pod gets created with both the commands. The <a href="https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/" rel="noreferrer">K8S document</a>, mentions imperative and declarative commands. But, still the create and apply behave the same way.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: busybox command: ['sh', '-c', 'echo Hello Kubernetes! &amp;&amp; sleep 3600'] </code></pre> <p>What's the difference? Also, how is <code>kubectl apply</code> declarative and <code>kubectl create</code> imperative? Both of them take one or multiple yaml files with the object details in it.</p>
<p>There is a subtle difference between <code>kubectl create</code> and <code>kubectl apply</code> commands.</p> <p>1) The <code>kubectl create</code> command creates a new resource. So, if the command is run again it will throw an error as resource names should be unique in a namespace.</p> <pre><code>kubectl get pods No resources found. kubectl create -f pod.xml pod/myapp-pod created kubectl create -f pod.xml Error from server (AlreadyExists): error when creating "pod.xml": pods "myapp-pod" already exists </code></pre> <p>2) The <code>kubectl apply</code> command applies the configuration to a resource. If the resource is not there then it will be created. The <code>kubectl apply</code> command can be run the second time as it simply applies the configuration as shown below. In this case, the configuration hasn't changed. So, the pod hasn't changed.</p> <pre><code>kubectl delete pod/myapp-pod pod "myapp-pod" deleted kubectl apply -f pod.xml pod/myapp-pod created kubectl apply -f pod.xml pod/myapp-pod unchanged </code></pre> <p>In the <code>kubectl create</code>, we specify a certain action, in this case <code>create</code> and so it is <strong>imperative</strong>. In the <code>kubectl apply</code> command we specify the target state of the system and don't specify a certain action and so <strong>declarative</strong>. We let the system decide what action to take. If the resource is not there it will create it, if the resource is there then it will apply the configuration to the existing resource.</p> <p>From an execution perspective, there is no difference when a resource is created for the first time between <code>kubectl create</code> and <code>kubectl apply</code> as shown above. But, the second time the <code>kubectl create</code> will throw an error.</p> <p>It took me some time to get around it, but it makes sense now.</p>
<p>I'm currently running in to this exception when I run several tests in parallel against a selenium grid that is deployed using K8s. I have deployed clusters in both AWS and Azure and received the same error. The error occurs when I try to run more tests than there are nodes, I can run a few tests successfully, then after a short amount of time, it will fail all of the remaining tests with this error.</p> <blockquote> <p>OpenQA.Selenium.WebDriverException : A exception with a null response was thrown sending an HTTP request to the remote WebDriver server for URL xxxxxx.xxx.xxx/wd/hub/session. The status of the exception was ConnectionClosed, and the message was: The underlying connection was closed: The connection was closed unexpectedly.</p> </blockquote> <p>I have adjusted timeouts on the selenium hub(browser timeout, timeout, newsesssiontimeout) as well as the command timeouts from the remotewebdriver and nothing changes. I also do not get the error when I test locally. </p> <p>Here is my current stack.</p> <ul> <li>RemoteWebdriver: 3.14.0</li> <li>Selenium Hub : 3.14.0</li> <li>Selenium Node : 3.14.0</li> <li>Chrome : 69</li> <li>C#/Nunit</li> <li>AKS/EKS </li> </ul> <p>Code:</p> <pre><code> [TestCaseSource(typeof(MyFactoryClass), nameof(MyFactoryClass.TestCases))] public void ZaleniumTest(int x) { var caps = new DesiredCapabilities(); caps.SetCapability("browserName", "chrome"); var driver = new RemoteWebDriver(new Uri(Url), caps, TimeSpan.FromSeconds(1200)) {Url = "http://www.google.com"}; driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(1200); var query = driver.FindElement(By.Name("q")); query.SendKeys("Cheese"); query.Submit(); System.Threading.Thread.Sleep(3000); Assert.IsTrue(driver.Title.Contains("Google")); driver.Quit(); } </code></pre> <p>Here are the commands that I run to deploy the grid:</p> <pre><code> kubectl run selenium-hub --image selenium/hub:latest --port 4444 kubectl expose deployment selenium-hub --type=LoadBalancer kubectl run selenium-node-chrome --image selenium/node-chrome:latest --env="HUB_PORT_4444_TCP_ADDR=selenium-hub" --env="HUB_PORT_4444_TCP_PORT=4444" </code></pre> <p>With this simple grid set up(1 hub 1 chrome node) I try to run 20 tests with the expectation that the tests will be queued up. After about 10 passing tests, the test run will fail and produce that error.</p> <p>I am looking for the correct places to add a wait or a time out so that we can handle the test queue correctly. </p> <p>Thanks in advance. </p>
<p>I was able to locate the issue. It turns out there is an idle timeout setting for the load balancers both in AWS and Azure. It was set to 60 seconds in my case on AWS. I raised the timeout to 3600 seconds and that seemed to fix the problem. </p>
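<p>For reference, on AWS the idle timeout can also be set from the Kubernetes side with a Service annotation instead of editing the load balancer by hand (a sketch; the selector assumes the <code>run: selenium-hub</code> label that <code>kubectl run</code> adds):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  annotations:
    # idle timeout (in seconds) applied to the ELB created for this Service
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
spec:
  type: LoadBalancer
  selector:
    run: selenium-hub   # label added by kubectl run (assumed)
  ports:
  - port: 4444
    targetPort: 4444
</code></pre>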
<p>I have Windows 10 Pro with Docker for Windows v18.06.1-ce with kubernetes enabled.</p> <p>Using <code>kubectl create -f</code>, I've created rc.yml:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: hello-rc spec: replicas: 9 selector: app: hello-world template: metadata: labels: app: hello-world spec: containers: - name: hello-ctr image: nigelpoulton/pluralsight-docker-ci:latest ports: - containerPort: 8080 </code></pre> <p>svc.yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: hello-svc labels: app: hello-world spec: type: NodePort ports: - port: 8080 nodePort: 30001 protocol: TCP selector: app: hello-world </code></pre> <h1>How do I access the website behind the service?</h1> <p>I would expect localhost:8080 to be working, but it isn't, nor is 10.108.96.27:8080</p> <pre><code>&gt; kubectl describe service/hello-svc Name: hello-svc Namespace: default Labels: app=hello-world Annotations: &lt;none&gt; Selector: app=hello-world Type: NodePort IP: 10.108.96.27 LoadBalancer Ingress: localhost Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP NodePort: &lt;unset&gt; 30001/TCP Endpoints: 10.1.0.10:8080,10.1.0.11:8080,10.1.0.12:8080 + 6 more... Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre>
<p>There are two ways to expose a service to the outer world from a Kubernetes cluster:</p> <ol> <li><p>type: <strong>LoadBalancer</strong>. However, it works only with cloud providers.</p></li> <li><p>type: <strong>NodePort</strong>. As you used in this case. Now, to access the service from outside the Kubernetes cluster, you need to use the IP address of one of your Nodes and the port from the <code>nodePort</code> field. For example, <code>12.34.56.78:30001</code></p></li> </ol> <p>For more information, look through the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">official documentation</a>.</p>
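<p>For example (the IP below is a placeholder), you can look up a node address and then hit the NodePort directly:</p> <pre><code># list node addresses (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide

# then reach the service through any node on the nodePort
curl http://12.34.56.78:30001
</code></pre>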
<p>I have a Kubernetes cluster v1.10 over CentOS 7.</p> <p>When I deploy a service of type LoadBalancer, it stays in pending status.</p> <p>Do I need to know how the LoadBalancer type works?</p>
<p>I suggest you read the docs on the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> abstraction. </p> <p>Depending on the cloud or your infrastructure provider, this could be several things.</p> <p>Generally, this means that the kube-controller-manager can't create the load balancer in your cloud provider and the kubelet on your nodes can't add the routes to the service. </p> <p>And this could be because you don't have the right credentials, you don't have the right cloud provider configured on Kubernetes, you are running a completely bare-metal server, you are running something like <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">minikube</a>, or simply you don't have connectivity to your cloud provider.</p>
<p>I have create a pod with the below yaml definition.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: praveensripati/docker-demo:1.2 ports: - containerPort: 3000 </code></pre> <p>And now I expose the pod, which creates a service.</p> <pre><code>kubectl expose pod myapp-pod --type=NodePort </code></pre> <p>The port 3000 on the container is exposed to port 31728 on the nodes. And I am able to do access the page using curl on port 31728.</p> <pre><code>kubectl get service myapp-pod NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE myapp-pod NodePort 10.107.254.254 &lt;none&gt; 3000:31728/TCP 5s </code></pre> <p>This time I wanted to expose the service not a random port, but on port 80. And so I specify the port number as 80, by using --port. The service details are a bit odd. It says that port 80 on the container is exposed to port 31316 on the nodes. Also, I am able to access the page using curl on the random port (31316 in this case) and not port 80.</p> <pre><code>kubectl expose pod myapp-pod --type=NodePort --target-port=3000 --port=80 kubectl get service myapp-pod NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE myapp-pod NodePort 10.105.123.73 &lt;none&gt; 80:31316/TCP 12s </code></pre> <p>I am not able to expose a service on a specific port and not on a random port. I tried a few combinations and read the k8s documentation, but no success.</p> <p>How do I expose a service on a specific port instead of a random port?</p>
<p>Your question is about exposing the NodePort type of service on a specific port. For that you need to specify the <code>nodePort</code> field under <code>ports</code> in your service definition.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: myapp ports: - protocol: TCP port: 3000 nodePort: 32321 type: NodePort </code></pre> <p>Note that it has to be within a given range provided in the configs. Which defaults to <code>30000-32767</code>. This range can be specified in the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">kube-apiserver</a> configs using the <code>--service-node-port-range</code> option.</p>
<p>I have my microservice (springboot+actuator in Kubernetes) exposing <code>/metrics</code> endpoint.</p> <p>From a pod I can retrieve metrics by using a simple <code>curl svc:8080/metrics</code></p> <p>But in Prometheus target I get this error: <code>INVALID is not a valid start token</code></p> <p><a href="https://i.stack.imgur.com/U1Lsd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U1Lsd.png" alt="enter image description here"></a></p> <p>My <code>prometheus.yml</code> is the following:</p> <pre><code> global: scrape_interval: 60s scrape_timeout: 10s evaluation_interval: 10s alerting: alertmanagers: - static_configs: - targets: - alertmanager:9093 rule_files: - "/etc/prometheus-rules/usage.rules" - "/etc/prometheus-rules/availability.rules" scrape_configs: - job_name: 'prometheus' static_configs: - targets: - prometheus:9090 - job_name: 'kubernetes-apiservers' kubernetes_sd_configs: - role: endpoints scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] action: keep regex: default;kubernetes;https - job_name: 'kubernetes-nodes' kubernetes_sd_configs: - role: node relabel_configs: - action: labelmap regex: __meta_kubernetes_node_label_(.+) - source_labels: [__address__] regex: '(.*):10250' replacement: '${1}:10255' target_label: __address__ - job_name: 'kubernetes-endpoints' kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ regex: (https?) 
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] action: replace target_label: __address__ regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] action: replace target_label: kubernetes_name - job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: ${1}:${2} target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name - source_labels: [__meta_kubernetes_pod_container_port_number] action: keep regex: 9\d{3} - job_name: 'kubernetes-services' kubernetes_sd_configs: - role: service metrics_path: /health params: module: [http_2xx] relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe] action: keep regex: true - source_labels: [__address__] target_label: __param_target - target_label: __address__ replacement: blackbox - source_labels: [__param_target] target_label: instance - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] target_label: kubernetes_name </code></pre> <p>Any suggestions, please?</p>
<p>Sure, you can manually reach <code>&lt;svc&gt;:&lt;port&gt;/metrics</code>, but you never told Prometheus to scrape it ;-)</p> <p>See an example here, and take a close look at lines #86 &amp; #129: <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml</a></p> <p>If you run into other issues with Prometheus, or to quickly solve your current one, reconsider the method you used to set up Prometheus on Kubernetes in the first place (your YAML is probably only one symptom) and re-install from the Helm chart: <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus</a></p>
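<p>Concretely, with the <code>kubernetes-endpoints</code>/<code>kubernetes-pods</code> jobs in your config, Prometheus only scrapes objects that carry the <code>prometheus.io/*</code> annotations. A minimal sketch for your service (the name is an assumption; the port matches the 8080 you curl) would be:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-springboot-svc          # assumption: your actual service name
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
spec:
  # ...your existing spec...
</code></pre> <p>Also, the error <code>INVALID is not a valid start token</code> usually means the endpoint is returning JSON rather than the Prometheus text format; for Spring Boot you typically need the Prometheus-format endpoint (for example <code>/prometheus</code> or <code>/actuator/prometheus</code> via Micrometer) instead of the plain actuator <code>/metrics</code>.</p>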
<p>I already found a way using the <code>gcloud</code> CLI.</p> <pre><code>gcloud auth activate-service-account --key-file=serviceaccount.json gcloud container clusters get-credentials $clusterName \ --zone=$zone --project=$project kubectl config view --minify --flatten </code></pre> <p>However, to eliminate the dependency on the gcloud CLI, is there any programmatic way to achieve a similar result? Preferably using the API exposed in Google's Python client library.</p> <p>My expected result is a portable config file that can be passed to any <code>kubectl --kubeconfig=...</code> command.</p> <hr> <p><strong>update</strong>: I have found that the commands shown above result in a kube config file that still depends on the gcloud CLI as an auth helper, probably to automatically handle token expiration. So, any workarounds are welcome.</p>
<p>I wrote a shell script which basically does exactly what you are expecting.</p> <p><a href="https://gitlab.com/workshop21/open-source/rbac" rel="nofollow noreferrer">https://gitlab.com/workshop21/open-source/rbac</a></p>
<p>How do I connect to a <code>graphql</code> API which is on a private network and accessible only through a private IP address? My frontend server and the API are on the same <code>VNET</code>.</p> <pre><code>import { ApolloClient } from 'apollo-client' import { InMemoryCache } from 'apollo-cache-inmemory' import { createUploadLink } from 'apollo-upload-client' const uploadLink = createUploadLink({ uri: 'http://10.0.0.10:3000'+'/api' }) const client = new ApolloClient({ link: uploadLink, cache: new InMemoryCache() }) export default client </code></pre> <p>Both applications are running on the same <code>kubernetes</code> cluster, in different pods. Private services are accessible within the cluster, and when I <code>exec</code> into the frontend pod I am able to access the <code>graphql</code> endpoint via the private IP address.</p> <p>But in the browser it does not connect and gives this error: <code>ERR_CONNECTION_REFUSED</code></p> <blockquote> <p>frontend (public ip) --> graphql (private ip)</p> </blockquote>
<p>The 3 main methods for accessing an internal kubernetes service from outside are: <strong>NodePort</strong>, <strong>LoadBalancer</strong>, and <strong>Ingress</strong>.</p> <p>You can read about some of the main differences between them here <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p> <hr /> <h1>NodePort</h1> <h2>Map a randomly or manually selected high port from a certain range to a service on a 1 to 1 basis.</h2> <p>Either allow kubernetes to randomly select a high port, or manually define a high port from a predefined range which is by default 30000–32767 (but can be changed), and map it to an internal service port on a 1 to 1 basis.</p> <p>Warning: Although it is possible to manually define a NodePort port number per service, it is generally not recommended due to possible issues such as port conflicts. So in most cases, you should let the cluster randomly select a NodePort port number for you.</p> <p>From official docs: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p> <blockquote> <p>If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service.</p> </blockquote> <hr /> <h1>LoadBalancer</h1> <h2>Attach a service to an external ip provided by an IP Provider service such as a cloud provider Public IP Service.</h2> <p>The functionality of this service type depends on external drivers/plugins. Most modern clouds offer support to supply public IPs for LoadBalancer definitions. But if you are spinning a custom cluster with no means to assign public IPs (such as with Rancher with no IP provider plugins), the best you can probably do with this is assign an IP of a host machine to a single service.</p> <p>From the official docs: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a></p> <blockquote> <p>On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service’s .status.loadBalancer field.</p> </blockquote> <hr /> <h1>Ingress</h1> <h2>Run a central application router service which receives all traffic on a certain port (or ports) and routes it to services based on parameters like the requested domain and path.</h2> <p>To install it you must create an application router service (such as nginx) which runs in your cluster and analyzes every new resource of type Ingress that is created. 
Then you create Ingress resources that define the routing rules you would like, such as which DNS request to listen to and which service to forward the request to.</p> <p>Although multiple solutions exist for this purpose, I recommend Nginx Ingress</p> <p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a> <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> <p>Official Docs:</p> <blockquote> <p>What is Ingress? Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:</p> <pre><code>internet | ------------ [ Services ] </code></pre> <p>An Ingress is a collection of rules that allow inbound connections to reach the cluster services.</p> <pre><code>internet | [ Ingress ] --|-----|-- [ Services ] </code></pre> <p>It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.</p> </blockquote>
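<p>A minimal sketch of an Ingress routing to the GraphQL backend could look like the following (the host, service name and port are assumptions; adapt them to your cluster and controller):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: graphql-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # assumption: nginx ingress controller
spec:
  rules:
  - host: api.example.com                # assumption: the public hostname you want
    http:
      paths:
      - path: /api
        backend:
          serviceName: graphql-service   # assumption: your GraphQL ClusterIP service
          servicePort: 3000
</code></pre> <p>With something like this in place, the browser talks to the public hostname instead of the pod's private IP.</p>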
<p>I am trying to do log monitoring of a Kubernetes cluster using EFK. I got the Kibana dashboard up, but it doesn't show any logs from the Kubernetes cluster. </p> <p>Here is the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">link</a> which I followed in my task. By default my dashboard looks like this:</p> <p><a href="https://i.stack.imgur.com/nFgTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nFgTs.png" alt="enter image description here"></a></p> <p>After that I changed the index pattern in the dashboard to:</p> <p><a href="https://i.stack.imgur.com/2M5El.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2M5El.png" alt="enter image description here"></a> </p> <p>Then it showed:</p> <p><a href="https://i.stack.imgur.com/BWeAY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BWeAY.png" alt="enter image description here"></a></p> <p>My doubt is: how can I view the logs of each and every pod in the Kubernetes cluster?</p> <p>Could anybody suggest how to do log monitoring of a Kubernetes cluster using EFK? </p>
<blockquote> <p>Note: in order for Fluentd to work, every Kubernetes node must be labeled with beta.kubernetes.io/fluentd-ds-ready=true, as otherwise the Fluentd DaemonSet will ignore them.</p> </blockquote> <p>Have you made sure to address this?</p>
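<p>If that is the missing piece, labeling the nodes is a one-liner (the node name is a placeholder), after which the fluentd DaemonSet pods should appear:</p> <pre><code>kubectl label node &lt;node-name&gt; beta.kubernetes.io/fluentd-ds-ready=true
kubectl get pods -n kube-system | grep fluentd
</code></pre>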
<p>On one of my Kubernetes nodes, I see the kubelet listening on port 38641. How do I map this port number to the actual pod it is pointing to?</p> <pre><code># netstat -alpn |grep 38641 tcp 0 0 127.0.0.1:38641 0.0.0.0:* LISTEN 9832/kubelet tcp 0 0 127.0.0.1:45230 127.0.0.1:38641 CLOSE_WAIT 9832/kubelet tcp 0 0 127.0.0.1:39922 127.0.0.1:38641 CLOSE_WAIT 9832/kubelet tcp 0 0 127.0.0.1:39238 127.0.0.1:38641 ESTABLISHED 9832/kubelet </code></pre>
<p>I assume you have exposed a service in kubernetes using the NodePort type which would account for the high port number. If this is the case, you can simply get a list of all services running in the cluster and search then for the port you want. ie. </p> <p><code>kubectl get svc --all-namespaces | grep 38641</code></p> <p>Once you have the service which exposes this port, you can then inspect the service and see which pods are being selected by the service</p> <pre><code>PORT=38641 SERVICE_OUTPUT=$(kubectl get svc --all-namespaces | grep $PORT) NAMESPACE=$(echo $SERVICE_OUTPUT | awk '{ print $1 }') SERVICE=$(echo $SERVICE_OUTPUT | awk '{ print $2 }') kubectl describe -n $NAMESPACE svc $SERVICE </code></pre> <p>Now let's get the selector:</p> <pre><code>SELECTOR=$(kubectl describe -n $NAMESPACE svc $SERVICE | grep Selector | awk '{ print $2}') </code></pre> <p>Ok, now that we know what pods are being selected we can find these pods easily:</p> <p><code>kubectl get po --selector $SELECTOR -n $NAMESPACE</code></p> <p>We can capture the pod names for further inspection, such as for determining the replicaset they are from:</p> <pre><code>PODS=$(kubectl get po --selector $SELECTOR -n $NAMESPACE --no-headers=true | awk '{ print $1}') echo $PODS </code></pre> <p>If you need to you can trace the pod origin:</p> <pre><code>POD=full-pod-name RS=$(kubectl -n $NAMESPACE describe po $POD | grep -hoe ReplicaSet.*); echo rs: $RS; DEPLOYMENT=$(kubectl -n $NAMESPACE describe $RS | grep -hoe Deployment.*); echo deployment: $DEPLOYMENT; </code></pre>
<p>I want to run a <code>kubernetes</code> deployment along the lines of the following:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment namespace: my-namespace spec: replicas: 1 template: spec: containers: - name: my-app image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG} env: - name: FOO value: "BAR" </code></pre> <p>This will be delivered to the developers so that they can perform on-demand deployments using the image tag of their preference.</p> <p>What is the best way / recommended pattern to pass the tag variable?</p> <p>Performing an export on the command line to make it available as an env var in the shell from which the <code>kubectl</code> command will run?</p>
<p>Unfortunately, it's impossible via native kubernetes tools. From <a href="https://github.com/kubernetes/kubernetes/issues/52787" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>kubectl will never support variable substitution.</p> </blockquote> <p>But that issue also has some good workarounds. The best way is to deploy your apps via <code>Helm</code> <a href="https://docs.helm.sh/chart_template_guide/" rel="nofollow noreferrer">charts using templates</a>.</p>
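<p>If a full Helm chart is more than you need, one common lightweight workaround (a sketch, assuming <code>envsubst</code> from gettext is available and the manifest file name is yours) is to substitute the variable in the shell before piping the manifest to kubectl:</p> <pre><code>export IMAGE_TAG=1.2.3
envsubst &lt; my-deployment.yaml | kubectl apply -f -
</code></pre> <p>Helm templates remain the more robust option because they also give you releases, value files and rollbacks.</p>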
<p>I would like to install Kubernetes on my Debian machine:</p> <pre><code>Distributor ID: Debian Description: Debian GNU/Linux 9.5 (stretch) Release: 9.5 Codename: stretch </code></pre> <p>Looking into the Google deb package archive I only find the package for "kubectl", nothing else:</p> <p><a href="https://packages.cloud.google.com/apt/dists/kubernetes-stretch/main/binary-amd64/Packages" rel="noreferrer">https://packages.cloud.google.com/apt/dists/kubernetes-stretch/main/binary-amd64/Packages</a></p> <p>Compared to ubuntu xenial, many packages are missing. Could someone be so kind as to give me more information on how to deal with this? Is it possible to install kubeadm and kubelet on Debian Stretch too?</p> <p><a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl" rel="noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl</a></p> <p>Thank you very much in advance !</p>
<blockquote> <p>Is it possible to install kubeadm and kubelet on debian stretch too ?</p> </blockquote> <p>Yes it is! Please refer to the helpful guide that you linked at the end of your question: <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl</a></p> <p>Here is the relevant part:</p> <pre><code>apt-get update &amp;&amp; apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl apt-mark hold kubelet kubeadm kubectl </code></pre> <p>Note that the repository used above is kubernetes-xenial, not kubernetes-stretch; the same packages work on Debian Stretch.</p> <p>Do not forget to install Docker first; for how to do that, see the same linked page above. </p>
<p>I am facing this issue when connecting to an ingress for a WebSocket service: failed: Error during WebSocket handshake: Unexpected response code: 400</p> <p><strong>Ingress YAML</strong></p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: websocket-producer-cdph spec: rules: host: some.domain.com http: paths: path: "/" backend: serviceName: websocket-producer-cdph servicePort: 8183 status: loadBalancer: ingress: {} </code></pre> <p><strong>Service YAML</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: name: websocket-producer-cdph spec: ports: name: ws protocol: TCP port: 8183 targetPort: 8183 selector: app: websocket-producer-cdph clusterIP: 10.100.254.99 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre> <p>When I try to connect to ws://some.domain.com/ws it shows: Error during WebSocket handshake: Unexpected response code: 400</p> <p><em>/ws is the path</em></p> <p>But if I change the spec type in the service to LoadBalancer, it generates an IP (192.168.1.17:8183), and connecting to ws://192.168.1.17:8183/ws works. However, I need to expose the URL using ingress so it can be used outside of the network.</p> <p>I am using the following image for the ingress controller:</p> <p><strong>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0</strong></p> <p>How can I create an ingress for a WebSocket service?</p>
<p>As stated in the <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/websocket#websocket-support" rel="noreferrer">nginx-ingress documentation</a>, to proxy WebSocket traffic you should use annotation with the name of websocket service. Don't forget to use quotes:</p> <pre><code>nginx.org/websocket-services: "service1[,service2,...]" </code></pre> <p>In this example from the <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/websocket#websocket-support" rel="noreferrer">documentation</a>, WebSocket is enabled only for one of three services (<code>ws-svc</code>):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cafe-ingress annotations: nginx.org/websocket-services: "ws-svc" spec: rules: - host: cafe.example.com http: paths: - path: /tea backend: serviceName: tea-svc servicePort: 80 - path: /coffee backend: serviceName: coffee-svc servicePort: 80 - path: /ws backend: serviceName: ws-svc servicePort: 8008 </code></pre>
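<p>One thing to double-check: the <code>nginx.org/*</code> annotations above belong to the NGINX Inc. controller (nginxinc/kubernetes-ingress). The image you listed (<code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller</code>) is the community ingress-nginx controller, which proxies WebSockets out of the box and usually only needs longer proxy timeouts so idle sockets are not cut off, for example:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
</code></pre>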
<p>I am just shifting to container orchestration and would like to know if a mid-size SharePoint 2016 farm can be provisioned inside a highly available container orchestration platform, Azure Kubernetes Service (AKS). Can we use a Kubernetes cluster instead of VMs created on the latest Windows Server 2016/2012 to provision a SharePoint farm? Since SharePoint is a highly resource-consuming server application, scaling decisions are key to keeping the production farm up and running within the containers. Containers are a better solution than highly scaled VMs, but will this work for SharePoint? Any pointers would be appreciated.</p> <p>Regards, Manish</p>
<p>No, this is not possible. You cannot run sharepoint in docker.</p> <p><a href="https://sharepoint.stackexchange.com/questions/212904/can-an-on-prem-solution-of-sharepoint-2013-or-2016-be-hosted-in-a-docker-windows">https://sharepoint.stackexchange.com/questions/212904/can-an-on-prem-solution-of-sharepoint-2013-or-2016-be-hosted-in-a-docker-windows</a><br> <a href="https://stackoverflow.com/questions/38677018/active-directory-accounts-inside-a-windows-container-server-2016-tp5">Active directory accounts inside a windows container (server 2016 TP5)</a></p>
<p>I am attempting, and failing, to use NFS storage for Kubernetes volumes. </p> <p>The <em>persistentvolumeclaim</em> is unable to bind to the already created <em>persistentvolume</em>, see below.</p> <p><strong>Creating the persistentvolume</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: nfs spec: capacity: storage: 1Gi accessModes: - ReadWriteMany nfs: server: 192.0.2.100 path: "/nfsshare" </code></pre> <p><strong>Creating the persistentvolumeclaim</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre> <p>The event log for the persistentvolumeclaim shows</p> <pre><code>kubectl get events: "provisionable volume plugin matched" </code></pre> <p>Any ideas what might be stopping k8s from binding the two?</p>
<p>Delete any default storage class (nfs) and try creating the PV &amp; PVC again:</p> <pre><code> kubectl get sc kubectl delete sc &lt;storage-class-name&gt; </code></pre>
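<p>The usual reason a PVC ignores a manually created PV is that a default StorageClass kicks in and tries to dynamically provision a volume instead of binding to the existing one. Besides deleting the default class, you can pin the claim to pre-provisioned volumes by setting an empty storage class, for example:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # disables dynamic provisioning so the claim binds to the existing PV
  resources:
    requests:
      storage: 1Gi
</code></pre>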
<p>I am trying to create a service on <code>kubernetes</code> but although no error is printed, I am unable to list the service;</p> <p>working locally with <code>minikube</code></p> <pre><code>*$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 4h /home/pkara/Workspace/gitlab/my-minikube *$ kubectl apply -f mydb.yaml service/mydatabase unchanged /home/pkara/Workspace/gitlab/my-minikube *$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 4h </code></pre> <p>Here is the manifest used:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: mydatabase namespace: esa-local-dev labels: app: mydatabase spec: ports: - name: mydatabase-port port: 3306 targetPort: 3306 selector: app: mydatabase clusterIP: None --- apiVersion: apps/v1 kind: Deployment metadata: name: mydatabase-deployment labels: app: mydatabase spec: selector: matchLabels: app: mydatabase strategy: type: Recreate template: metadata: labels: app: mydatabase spec: containers: - name: mysql image: mysql:5.7 env: - name: MYSQL_ROOT_PASSWORD value: "root" - name: MYSQL_DATABASE value: "mydatabase" - name: MYSQL_USER value: "mydatabase" - name: MYSQL_PASSWORD value: "mydatabase" ports: - containerPort: 3306 name: mysql restartPolicy: Always </code></pre>
<p>The Service is created on <code>esa-local-dev</code> namespace. In your yaml, you have:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mydatabase namespace: esa-local-dev ... </code></pre> <p>You can see the service by running,</p> <pre><code>$ kubectl get services -n esa-local-dev </code></pre>
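<p>If you do not want to pass <code>-n esa-local-dev</code> on every command, you can also set that namespace on your current context:</p> <pre><code>kubectl config set-context $(kubectl config current-context) --namespace=esa-local-dev
kubectl get services
</code></pre>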
<p>The istio-pilot pod on my minikube kubernetes cluster is always in Pending state. I increased CPU to 4 and memory to 8GB, but the status of the istio-pilot pod is still Pending. </p> <p>Is any specific change required to run istio on minikube other than the ones mentioned in the documentation?</p>
<p>Resolved the issue. I'm running minikube with VirtualBox, and starting minikube with higher memory and CPU does not take effect until minikube is deleted and started again with the new parameters. Without this, it kept failing with insufficient memory.</p>
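<p>For reference, the sequence that applies the new resources looks roughly like this (flag values are examples):</p> <pre><code>minikube delete
minikube start --vm-driver=virtualbox --cpus 4 --memory 8192
</code></pre>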
<p>Is it possible to use Litmus to test Kafka? Basically, some tests to figure out what happens in various scenarios. Initial thoughts are:</p> <ul> <li>deploying the Confluent helm chart for Kafka and checking that the brokers get their storage bound successfully</li> <li>kill a broker and ensure it comes back</li> <li>kill the zookeeper pod and ensure it comes back</li> <li>kill consumer pods (my own apps) and ensure they come back and don't miss messages</li> <li>kill a producer pod and ensure it comes back and all messages get sent.</li> </ul>
<p>I assume that you mean a Litmus test as in a mini chaos engineering test, and I'm also assuming that you mean to do it in Kubernetes. Yes, you should be able to, as long as you have all your pods defined in a higher-level abstraction like a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> (which inherently creates a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>), a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a>, or a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p> <p>In the Kubernetes world, to force this test you can just delete the pod where your Kafka/Zookeeper workload is running, and it should be brought back up by the controller on top of your pods.</p>
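<p>For the broker/zookeeper scenarios in your list, the simplest manual chaos step is to delete the pod and watch the controller recreate it (the pod name and namespace below are assumptions based on a typical Kafka chart):</p> <pre><code>kubectl delete pod kafka-0 -n kafka
kubectl get pods -n kafka -w
</code></pre>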
<p>I've tested the guide <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes the hard way</a> and the adaptation for AWS <a href="https://github.com/slawekzachcial/kubernetes-the-hard-way-aws" rel="nofollow noreferrer">Kubernetes The Hard Way - AWS</a>.</p> <p>Everything runs fine with the DNS addon and even the dashboard as explained <a href="https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html" rel="nofollow noreferrer">here</a>.</p> <p>But if I create a LoadBalancer service, it doesn't work as cloud-controller-manager isn't deployed (either as master component nor daemonset).</p> <p>I read this <a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/</a> to get some information on how to deploy it but if I apply the changes needed (on kubelet : --cloud-provider=external) and deploy the daemonset : </p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: cloud-controller-manager name: cloud-controller-manager namespace: kube-system spec: selector: matchLabels: k8s-app: cloud-controller-manager template: metadata: labels: k8s-app: cloud-controller-manager spec: serviceAccountName: cloud-controller-manager containers: - name: cloud-controller-manager image: k8s.gcr.io/cloud-controller-manager:v1.8.0 command: - /usr/local/bin/cloud-controller-manager - --cloud-provider=aws - --leader-elect=true - --use-service-account-credentials - --allocate-node-cidrs=true - --configure-cloud-routes=true - --cluster-cidr=${CLUSTERCIRD} tolerations: - key: node.cloudprovider.kubernetes.io/uninitialized value: "true" effect: NoSchedule - key: node-role.kubernetes.io/master effect: NoSchedule nodeSelector: node-role.kubernetes.io/master: "" </code></pre> <p>The instances (controllers and workers) have all the right roles.</p> <p>I can't even create a pod, the status stays "Pending"...</p> <p>Do you know how to deploy cloud-controller-manager as daemonset or master component (without using kops, kubeadm,...) on a AWS cluster? </p> <p>Do you know a guide that could help me with that? </p> <p>Would you give a example of cloud-controller-manager daemonset configuration?</p> <p>Thanks in advance</p> <p><strong>UPDATE</strong></p> <p>When executing, <code>kubectl get nodes</code> I get a <code>No resources found</code>.</p> <p>And when describing a launched pod, I get only one event : <code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 28s (x2 over 28s) default-scheduler no nodes available to schedule pods </code></p> <p>The question should be now : How to get nodes ready with cloud-controller-manager deployed for aws?</p>
<p>As samhain1138 mentioned, your cluster does not look healthy to install anything. In simple cases, it could be fixed, but sometimes it is better to reinstall everything.</p> <p>Let's try to investigate the problem.<br> First of all, check your master node state. Usually, it means that you should have a <code>kubelet</code> service running.<br> Check the kubelet log for errors:</p> <pre><code>$ journalctl -u kubelet </code></pre> <p>Next, check the state of your static pods. You can find a list of them in the <code>/etc/kubernetes/manifets</code> directory:</p> <pre><code>$ ls /etc/kubernetes/manifests etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5cbdc1c13c25 8a7739f672b4 "/sidecar --v=2 --..." 2 weeks ago Up 2 weeks k8s_sidecar_kube-dns-86c47599bd-l7d6m_kube-system_... bd96ffafdfa6 6816817d9dce "/dnsmasq-nanny -v..." 2 weeks ago Up 2 weeks k8s_dnsmasq_kube-dns-86c47599bd-l7d6m_kube-system_... 69931b5b4cf9 55ffe31ac578 "/kube-dns --domai..." 2 weeks ago Up 2 weeks k8s_kubedns_kube-dns-86c47599bd-l7d6m_kube-system_... 60885aeffc05 k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-dns-86c47599bd-l7d6m_kube-system_... 93144593660c 9f355e076ea7 "/install-cni.sh" 2 weeks ago Up 2 weeks k8s_install-cni_calico-node-nxljq_kube-system_... b55f57529671 7eca10056c8e "start_runit" 2 weeks ago Up 2 weeks k8s_calico-node_calico-node-nxljq_kube-system_... d8767b9c07c8 46a3cd725628 "/usr/local/bin/ku..." 2 weeks ago Up 2 weeks k8s_kube-proxy_kube-proxy-lf8gd_kube-system_... f924cefb953f k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_calico-node-nxljq_kube-system_... 09ceddabdeb9 k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-proxy-lf8gd_kube-system_... 9fc90839bb6f 821507941e9c "kube-apiserver --..." 2 weeks ago Up 2 weeks k8s_kube-apiserver_kube-apiserver-kube-master_kube-system_... 8ea410ce00a6 b8df3b177be2 "etcd --advertise-..." 2 weeks ago Up 2 weeks k8s_etcd_etcd-kube-master_kube-system_... dd7f9b381e4f 38521457c799 "kube-controller-m..." 2 weeks ago Up 2 weeks k8s_kube-controller-manager_kube-controller-manager-kube-master_kube-system_... f6681365bea8 37a1403e6c1a "kube-scheduler --..." 2 weeks ago Up 2 weeks k8s_kube-scheduler_kube-scheduler-kube-master_kube-system_... 0638e47ec57e k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_etcd-kube-master_kube-system_... 5bbe35abb3a3 k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-controller-manager-kube-master_kube-system_... 2dc6ee716bb4 k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-scheduler-kube-master_kube-system_... b15dfc9f089a k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-apiserver-kube-master_kube-system_... </code></pre> <p>You can see the detailed description of any pod’s container using the command:</p> <pre><code>$ docker inspect &lt;container_id&gt; </code></pre> <p>Or check the logs:</p> <pre><code>$ docker logs &lt;container_id&gt; </code></pre> <p>This should be enough to understand what to do next, either try to fix the cluster or tear down everything and start from the beginning.</p> <p>To simplify the process of provisioning Kubernetes cluster, you could use <code>kubeadm</code> as follows:</p> <pre><code># This instruction is for ubuntu VMs, if you use CentOS, the commands will be # slightly different. 
### These steps are the same for the master and the worker nodes # become root $ sudo su # add repository and keys $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - $ cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF # install components $ apt-get update $ apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl # adjust sysctl settings $ cat &lt;&lt;EOF &gt;&gt;/etc/ufw/sysctl.conf net/ipv4/ip_forward = 1 net/bridge/bridge-nf-call-ip6tables = 1 net/bridge/bridge-nf-call-iptables = 1 net/bridge/bridge-nf-call-arptables = 1 EOF $ sysctl --system ### Next steps are for the master node only. # Create Kubernetes cluster $ kubeadm init --pod-network-cidr=192.168.0.0/16 or if you want to use older KubeDNS instead of CoreDNS: $ kubeadm init --pod-network-cidr=192.168.0.0/16 --feature-gates=CoreDNS=false # Configure kubectl $ mkdir -p $HOME/.kube $ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ chown $(id -u):$(id -g) $HOME/.kube/config # install Calico network $ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml # or install Flannel (not both) $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # Untaint master or/and join other nodes: $ kubectl taint nodes --all node-role.kubernetes.io/master- # run on master if you forgot the join command: $ kubeadm token create --print-join-command # run command printed on the previous step on the worker node to join it to the existing cluster. # At this point you should have ready to user Kubernetes cluster. $ kubectl get nodes -o wide $ kubectl get pods,svc,deployments,daemonsets --all-namespaces </code></pre> <p>After recovering the cluster, could you try to install <code>cloud-controller-manager</code> again and share the results?</p>
<p>We are trying to deploy our model on Kubernetes using TensorFlow Serving. Earlier we deployed our model (<strong>SSD + Inception</strong>) on K8S with our <strong>own base image</strong> for Docker that we built using bazel. The K8S configuration was: cluster size - 2 nodes; per-node config - 20 GB memory, 2 GPUs, 8 vCPUs.</p> <p>Now we have changed our model and are using <strong>RetinaNet with ResNet50</strong>. This time we are using the base image for Docker from TensorFlow's Docker Hub (<strong>tensorflow/serving:latest-devel-gpu</strong>) with the same K8S configuration.</p> <p>Earlier we were able to get predictions for 500 images per batch, and we could send these batches from multiple workers (unlimited). In the new deployment, however, we are not able to send more than 100 images per batch. We are getting an OOM error as follows:</p> <pre><code>{'error': 'OOM when allocating tensor with shape[150,256,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\n\t [[Node: FeatureExtractor/resnet_v1_50/resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/resnet_v1_50/resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/Relu6, FeatureExtractor/resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/weights)]]\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info </code></pre> <p>We checked K8S memory utilization as well and it wasn't fully utilized (maximum 30%). Can anyone tell us why we are getting this Out of Memory error, and which memory TensorFlow is referring to here?</p> <p>TIA</p>
<p>The problem is that you are OOMing the GPU. Given the error message you posted, that single tensor alone needs <code>150 * 256 * 160 * 160 * 4 bytes ≈ 3.7 GiB</code> of GPU memory, on top of the model weights and the other activations already resident on the GPU. Tesla cards come with either 12/16 GB of vRAM and some (probably not yet available in any cloud as they are very new) with 32 like the GV100, but that's a Quadro card. So with a batch of that size the total working set simply no longer fits.</p> <p>So, you have two options. Either decrease the batch size or any other dimension of that huge tensor you are trying to allocate. Or find the specific operation in your graph and force it to run on the main memory with a</p> <pre><code>with tf.device('cpu:0'): # operation goes here </code></pre> <p>However, this second method will just alleviate the problem and you will OOM in some other part. Plus, by running the operation on the CPU, you'll have a huge performance decrease without even counting the back and forth transfers of data between main memory and GPU memory.</p> <p>So, summarizing, you should definitely consider decreasing one of the dimensions of that tensor, being the batch size, one of the image sizes (or both), or the number of channels.</p> <p>The model you used before was probably not using so many output channels in its convolutional layers.</p>
<p>So I am setting up an entire stack on Google Cloud and I have several components that need to talk with each other, so I came up with the following flow:</p> <p>Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)</p> <p>The Ingress divides the requests nicely among my 2 Apache instances, but the Apache instances don't divide the requests nicely among my 2 App instances.</p> <p>The services (Apache and App) are in both cases NodePort services.</p> <p>What I am trying to achieve is that the services (Apache and App) load-balance the requests they receive among their linked pods, but I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.</p> <p>App service yaml looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: preprocessor-service labels: app: preprocessor spec: type: NodePort selector: app: preprocessor ports: - port: 80 targetPort: 8081 </code></pre>
<p>If you are going through the <strong>clusterIP</strong> and are using the default proxy mode to be <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables</a>, then the <code>NodePort</code> service will do a random approach (Kubernetes 1.1 or later), this is called <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables proxy mode</a>. For earlier Kubernetes 1.0 the default was <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer">userspace proxy mode</a> which does round robin. If you want to control this behavior you can use the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">ipvs proxy mode</a>.</p> <p>When I say <strong>clusterIP</strong> I mean the IP address that is only understood by the cluster such as the one below:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc NodePort 10.109.87.179 &lt;none&gt; 80:30723/TCP 5d kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 69d </code></pre> <p>When you specify <code>NodePort</code> it should also be a mesh across all of your cluster nodes. In other words, all the nodes in your cluster will listen on their external IP on that particular port, however, you'll get a response from your application or pod if it happens to run on that particular node. So you can potentially set up an external load balancer that points its backend that specific <code>NodePort</code> and traffic would be forwarded according to a healthcheck on the port.</p> <p>I'm not sure in your case, is it possible that you are not using the <code>clusterIP</code>?</p>
<p>According to the <a href="https://doc.traefik.io/traefik/v1.7/configuration/backends/kubernetes/" rel="nofollow noreferrer">Traefik 1.7 documentation</a> you should be able to have Traefik perform a 302 redirect using:</p> <ul> <li><code>traefik.ingress.kubernetes.io/redirect-regex</code></li> <li><code>traefik.ingress.kubernetes.io/redirect-replacement</code></li> </ul> <p>My goal is to simply remove the <code>www.</code> from the address.</p> <p>This is what I've tried, but I get a 404 service not found.</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: www-redirect namespace: public annotations: kubernetes.io/ingress.class: traefik traefik.ingress.kubernetes.io/redirect-regex: ^https?://www.example.com/(.*) traefik.ingress.kubernetes.io/redirect-replacement: https://example.com/$1 spec: rules: - host: www.example.com </code></pre> <p>Unfortunately the documentation isn't explicit on <em>how</em> to use them. At the time of writing the only google hit on this is the documentation (above).</p> <p>My current work around (assuming it'll help explain the question) is to route <code>www.</code> traffic to nginx which returns a 302.</p> <pre><code>server { listen 80; server_name www.example.com; return 302 https://example.com$request_uri; } </code></pre> <p>This seems like overkill.</p>
<p>I was having the same issue and ended up making it work with:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: www-redirect namespace: public annotations: kubernetes.io/ingress.class: traefik traefik.ingress.kubernetes.io/preserve-host: "true" traefik.ingress.kubernetes.io/redirect-permanent: "true" traefik.ingress.kubernetes.io/redirect-regex: "^https://www.(.*)" traefik.ingress.kubernetes.io/redirect-replacement: "https://$1" spec: tls: - hosts: - "example.com" - "www.example.com" secretName: example-tls rules: - host: example.com - host: www.example.com </code></pre> <p>Basically I needed both rules.</p> <p>As a side note, I also start the trafik pod with the following flags:</p> <pre><code>args: - --api - --kubernetes - --logLevel=INFO - --entryPoints=Name:https Address::443 TLS - --entrypoints=Name:http Address::80 Redirect.EntryPoint:https - --defaultentrypoints=https,http </code></pre>
<p>I already have some services in my k8s cluster and want to maintain them separately. Examples:</p> <ul> <li>grafana with custom dashboards and a custom dockerfile</li> <li>prometheus-operator instead of basic prometheus</li> <li>jaeger pointing to elasticsearch as internal storage</li> <li>certmanager in my own namespace (I also use it for nginx-ingress legacy routing)</li> </ul> <p>Is it possible to use the existing instances instead of creating istio-specific ones? Can istio communicate with them, or is it hardcoded?</p>
<p>Yes - it is possible to use external services with istio. You can disable grafana and prometheus just by setting the proper flags in the values.yaml of the istio helm chart (grafana.enabled=false, etc). You can check the <a href="https://github.com/kyma-project" rel="nofollow noreferrer">kyma-project</a> to see how istio is integrated with prometheus-operator, a grafana deployment with custom dashboards, and a custom jaeger deployment. From your list only certmanager is missing.</p>
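<p>For example, when installing the Istio chart you can switch the bundled addons off and keep your own deployments (the flag names below come from the Istio 1.0 helm chart and may differ between versions):</p> <pre><code>helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set grafana.enabled=false \
  --set prometheus.enabled=false \
  --set tracing.enabled=false
</code></pre>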
<p>I am trying to run the helm chart for mongodb and I am running into readiness / liveness probe issue. The Mongodb helm chart from the stable repository installs the pods and the services. But the pods crash because of the readiness probe. I can try removing them, but I want to know why this is happening. </p> <pre><code>helm install --name mongodb stable/mongodb kubectl describe pod mongodb-5ff464b5b9-nnz5x Containers: mongodb: Container ID: docker://055f12c14992dc111ee8f0baff7faceafae6a16e518b27a174a7997408235afa Image: docker.io/bitnami/mongodb:4.0.2-debian-9 Image ID: docker-pullable://bitnami/mongodb@sha256:6c050bf9063976e12aabb7bafcc7b2616df1e1dbbf19ac41b4799826abec92a3 Port: 27017/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 137 Started: Thu, 27 Sep 2018 17:54:01 -0400 Finished: Thu, 27 Sep 2018 17:55:50 -0400 Ready: False Restart Count: 6 Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6 Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6 Environment: MONGODB_ROOT_PASSWORD: &lt;set to the key 'mongodb-root-password' in secret 'mongodb'&gt; Optional: false MONGODB_USERNAME: MONGODB_DATABASE: MONGODB_ENABLE_IPV6: yes MONGODB_EXTRA_FLAGS: Mounts: /bitnami/mongodb from data (rw) </code></pre> <p>In the errors section, I see the following. Do I need to do anything to open the container ports?</p> <pre><code>connecting to: mongodb://127.0.0.1:27017 2018-09-27T21:43:04.664+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed Warning Unhealthy 6m kubelet, vm-b768f1df-3df6-4a79-50e5-9af633c6d9b9 Readiness probe failed: MongoDB shell version v4.0.2 connecting to: mongodb://127.0.0.1:27017 2018-09-27T21:43:14.670+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed Warning Unhealthy 6m kubelet, vm-b768f1df-3df6-4a79-50e5-9af633c6d9b9 Readiness probe failed: MongoDB shell version v4.0.2 connecting to: mongodb://127.0.0.1:27017 2018-09-27T21:43:24.657+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed </code></pre> <h2>Edit 1</h2> <p>Logs from the pod I checked the logs, nothing noteworthy </p> <pre><code>`kubectl logs mongodb-5ff464b5b9-nnz5x -p` ------ Welcome to the Bitnami mongodb container Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues nami INFO Initializing mongodb mongodb INFO ==&gt; Deploying MongoDB from scratch... mongodb INFO ==&gt; No injected configuration files found. Creating default config files... 
</code></pre> <h2>Edit 2</h2> <p>Ran the same helm chart on GKE and I ran into the exact same issue</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 1m (x4 over 1m) default-scheduler PersistentVolumeClaim is not bound: "mongodb-bitnami" (repeated 3 times) Normal Scheduled 1m default-scheduler Successfully assigned mongodb-bitnami-7b65895fd7-xr6gg to gke-cluster-default-pool-bec82955-xmhz Normal SuccessfulMountVolume 1m kubelet, gke-cluster-default-pool-bec82955-xmhz MountVolume.SetUp succeeded for volume "default-token-whl5w" Normal SuccessfulMountVolume 1m kubelet, gke-cluster-default-pool-bec82955-xmhz MountVolume.SetUp succeeded for volume "pvc-ddda94dd-c2c1-11e8-b2f0-42010af00119" Normal Pulling 1m kubelet, gke-cluster-default-pool-bec82955-xmhz pulling image "docker.io/bitnami/mongodb:4.0.2-debian-9" Normal Pulled 39s kubelet, gke-cluster-default-pool-bec82955-xmhz Successfully pulled image "docker.io/bitnami/mongodb:4.0.2-debian-9" Normal Created 38s kubelet, gke-cluster-default-pool-bec82955-xmhz Created container Normal Started 37s kubelet, gke-cluster-default-pool-bec82955-xmhz Started container Warning Unhealthy 28s kubelet, gke-cluster-default-pool-bec82955-xmhz Readiness probe failed: MongoDB shell version v4.0.2 connecting to: mongodb://127.0.0.1:27017 2018-09-28T01:58:44.936+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed Warning Unhealthy 20s kubelet, gke-cluster-default-pool-bec82955-xmhz Readiness probe failed: MongoDB shell version v4.0.2 connecting to: mongodb://127.0.0.1:27017 2018-09-28T01:58:52.114+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed </code></pre>
<p>It's not related to whether ports are open or not; it looks more like mongo itself is crashing and the readiness probes are failing.</p> <p>You can find more insights by looking at the logs:</p> <pre><code>kubectl logs &lt;pod-crashing-name&gt; (-c &lt;container-name-optionally) </code></pre> <p>You can also ssh into the nodes where your pods are running and get the docker logs:</p> <pre><code>docker ps -a | grep mongo docker logs &lt;container-id-that-crashed&gt; </code></pre>
<p>Following this document step by step:</p> <p><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true</a></p> <p>I created the EKS cluster using the aws cli instead of the UI, so I got the following output:</p> <pre><code>proxy-kube$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.100.0.1 &lt;none&gt; 443/TCP 18h </code></pre> <p>But when I follow the getting started guide and associate worker nodes with the cluster, I get: </p> <pre><code>proxy-kube$ kubectl get nodes No resources found. </code></pre> <p>I can see 3 EC2 instances created and running in the AWS console (UI). But I am unable to deploy and run even the Guestbook application. When I deploy the application, I get the following: </p> <pre><code>~$ kubectl get services -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR guestbook LoadBalancer 10.100.46.244 a08e89122c10311e88fdd0e3fbea8df8-1146802048.us-east-1.elb.amazonaws.com 3000:32758/TCP 17s app=guestbook kubernetes ClusterIP 10.100.0.1 &lt;none&gt; 443/TCP 21h &lt;none&gt; redis-master ClusterIP 10.100.208.141 &lt;none&gt; 6379/TCP 1m app=redis,role=master redis-slave ClusterIP 10.100.226.147 &lt;none&gt; </code></pre> <p>But if I try to access the EXTERNAL-IP, the browser shows </p> <blockquote> <p>server is not reachable</p> </blockquote> <p>I also tried to get the Kubernetes dashboard, but it failed to show anything at 127.0.0.1:8001.</p> <p>Does anyone know what might be going wrong? Any help on this is appreciated. </p> <p>Thanks</p>
<p>It looks like your kubelets (your nodes) are not registering with the master. If you don't have any nodes, you basically can't run anything.</p> <p>You can ssh into one of the nodes and check the kubelet logs with something like this:</p> <pre><code>journalctl -xeu kubelet </code></pre> <p>Also, it would help to post the output of <code>kubectl describe deployment &lt;deployment-name&gt;</code> and <code>kubectl get pods</code></p>
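<p>With EKS specifically, a very common reason for <code>No resources found</code> from <code>kubectl get nodes</code> is that the <code>aws-auth</code> ConfigMap from the getting-started guide was never applied, so the worker instances are not allowed to register. A sketch of that ConfigMap (the role ARN is a placeholder for your NodeInstanceRole):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/YourWorkerNodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre> <p>Apply it and watch the nodes join:</p> <pre><code>kubectl apply -f aws-auth-cm.yaml
kubectl get nodes --watch
</code></pre>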
<p>I have kubernetes Cluster v1.10 over centos 7 , bare-metal </p> <pre><code>helm version Client: &amp;version.Version{SemVer:"v2.11.0-rc.3", GitCommit:"28d295be2a94115b786ee277dffcc2b5483bde47", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.11.0-rc.3", GitCommit:"28d295be2a94115b786ee277dffcc2b5483bde47", GitTreeState:"clean"} </code></pre> <p>I am trying to install nginx using helm </p> <pre><code>helm install stable/nginx-ingress --name nginx </code></pre> <p>It returns </p> <pre><code>Error: release nginx failed: clusterroles.rbac.authorization.k8s.io "nginx-nginx-ingress" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[""], Resources:["configmaps"], Verbs: ["list"]} PolicyRule{APIGroups:[""], Resources:["configmaps"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources: ["endpoints"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["endpoints"], Verbs:["watch"]} PolicyRule{APIGroups: [""], Resources:["nodes"], Verbs:["list"]} PolicyRule{APIGroups: [""], Resources:["nodes"], Verbs:["watch"]} PolicyRule{APIGroups: [""], Resources:["pods"], Verbs:["list"]} PolicyRule{APIGroups: [""], Resources:["pods"], Verbs:["watch"]} PolicyRule{APIGroups: [""], Resources:["secrets"], Verbs:["list"]} PolicyRule{APIGroups: [""], Resources:["secrets"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs: ["update"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["watch"]} PolicyRule{APIGroups:["extensions"], Resources: ["ingresses"], Verbs:["get"]} PolicyRule{APIGroups:["extensions"], Resources:["ingresses"], Verbs:["list"]} PolicyRule{APIGroups: ["extensions"], Resources:["ingresses"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["create"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["patch"]} PolicyRule{APIGroups:["extensions"], Resources: ["ingresses/status"], Verbs:["update"]}] user=&amp; {system:serviceaccount:kube-system:default 8f248058-b684-11e8- b781-daf0a0c10949 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[] </code></pre> <p>How can i solve this issue ? </p> <p>Thank You :D </p>
<p>While <code>kubectl apply -f</code> is used to create or update resources, Helm, on the other hand, can be thought of as the Kubernetes package manager. The solution you have posted is a workaround, as the question is about Helm and not about how to create resources using <code>kubectl apply</code>. Adding <code>--set rbac.create=false</code> indicates that you probably deployed Tiller with no RBAC support. The question is, was it done intentionally? If you don't have RBAC authorization turned on, you can keep using <code>--set rbac.create=false</code>; if you want RBAC on, you have to grant the Tiller service account the <code>cluster-admin</code> role. </p> <p>You can find how to configure it properly <a href="https://github.com/helm/helm/blob/master/docs/rbac.md" rel="nofollow noreferrer">here.</a></p> <p>More information about installation and configuration of Helm and Tiller is available <a href="https://docs.helm.sh/using_helm/" rel="nofollow noreferrer">here</a>, and a detailed explanation of a similar case is in this Github <a href="https://github.com/jetstack/cert-manager/issues/256" rel="nofollow noreferrer">issue</a>.</p>
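<p>A minimal way to give Tiller the required permissions (cluster-admin, which is broad but matches the linked docs) looks like this:</p> <pre><code>kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade --service-account tiller
</code></pre>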
<p>I can't seem to find any clear information on this anywhere, but is it possible for a Helm chart to require a third-party chart, such as <code>stable/jenkins</code>, and specify configuration values for it? </p> <p>All the examples I see run the <code>helm install</code> command directly, but I would like to be able to configure it as part of my application.</p>
<p>In answer, @desaintmartin referred me to these documents in Slack:</p> <ul> <li><a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer" title="Subcharts and Globals">Subcharts and Globals</a></li> <li><a href="https://helm.sh/docs/topics/chart_best_practices/dependencies/" rel="nofollow noreferrer">Requirements</a></li> <li><a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">Helm Dependencies</a></li> </ul> <p>This led me to find the <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">specific part I was looking for</a>, where the parent chart can override sub-chart values by specifying the chart name as a key in the parent <code>values.yaml</code>.</p> <p>In the application chart's <code>requirements.yaml</code>:</p> <pre><code>dependencies: - name: jenkins # Can be found with &quot;helm search jenkins&quot; version: '0.18.0' # This is the binaries repository, as documented in the GitHub repo repository: 'https://kubernetes-charts.storage.googleapis.com/' </code></pre> <p>Run:</p> <pre><code>helm dependency update </code></pre> <p>In the application chart's <code>values.yaml</code>:</p> <pre><code># ...other normal config values # Name matches the sub-chart jenkins: # This will override &quot;someJenkinsConfig&quot; in the &quot;jenkins&quot; sub-chart someJenkinsConfig: value </code></pre>
<p>I'm running a containerized Java application in Kubernetes.</p> <p>In order to make the JVM reserve memory according to the container specifications, the flags <code>-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap</code> must be set.</p> <p>If both those flags are set along with the Xms and Xmx flags, what would the behavior of the JVM be? Does one flag overwrite the other? </p> <p>For example, if we had <code>java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Xms2500M -Xmx2500M -jar myjar.jar</code> in a pod with container resources of 4Gi for requests and 4Gi for limits, on a host machine that has 128Gi of memory, how much memory would the JVM reserve?</p>
<p>The -Xmx flag overwrites the -XX:+UseCGroupMemoryLimitForHeap flag.</p> <p>The flag -XX:+UseCGroupMemoryLimitForHeap lets the JVM detect what the max heap size in a container should be.</p> <p>The -Xmx flag sets the max heap size to a fixed size.</p> <p>To answer your example, the JVM would reserve 2500M of heap space. There will be some additional memory usage for non-heap areas and other JVM overhead.</p> <p>To further tune your memory usage in a container you could use the -XX:MaxRAMFraction flag. See this article: <a href="https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed/" rel="noreferrer">https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed/</a></p>
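<p>As a rough sketch (the flag values here are assumptions you would tune for your own workload), dropping the fixed -Xms/-Xmx and letting the JVM derive the heap from the cgroup limit could look like this:</p> <pre><code># heap is capped at (container memory limit) / MaxRAMFraction
java -XX:+UnlockExperimentalVMOptions \
     -XX:+UseCGroupMemoryLimitForHeap \
     -XX:MaxRAMFraction=2 \
     -jar myjar.jar
</code></pre> <p>With a 4Gi container limit and MaxRAMFraction=2, the JVM would cap the heap at roughly half of 4Gi, leaving headroom for metaspace, threads and other non-heap memory.</p>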
<p>I created a ReplicationController using the below command.</p> <blockquote> <p>kubectl run nginx --image=nginx -r=2 --generator=run/v1</p> </blockquote> <p>Now I tried upgrading the image to version 1.7.1.</p> <blockquote> <p>kubectl set image rc/nginx nginx=nginx:1.7.1</p> </blockquote> <p>But, the image doesn't seem to update.</p> <blockquote> <p>watch -n1 "kubectl describe pods | grep "Image:""</p> </blockquote> <p>Also tried <code>kubectl edit ....</code> and the <code>kubectl apply -f ....</code> command, but the image is not getting updated.</p> <p>How do I update an image in K8S ReplicationController?</p>
<p>The documentation describes how to perform a rolling update on replication controllers: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#rolling-updates" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#rolling-updates</a> </p> <p>Note that the image actually is updated in the replication controller, but the replication controller will not kill the existing pods and spawn new ones with the new image. To make the change take effect you need one of these two options:</p> <ol> <li>Manually delete the pods, so the RC recreates them with the new image</li> <li>Scale your RC to 0 to kill the pods and then back to the desired number of replicas, e.g. <code>kubectl scale --replicas=0 rc/nginx</code> followed by <code>kubectl scale --replicas=3 rc/nginx</code></li> </ol>
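<p>There was also a dedicated (now deprecated) command for replication controllers that creates a replacement RC and migrates the pods one at a time; a sketch, assuming your RC is named <code>nginx</code>:</p> <pre><code>kubectl rolling-update nginx --image=nginx:1.7.1
</code></pre> <p>For new workloads a Deployment is usually preferred over a ReplicationController, since Deployments handle rolling updates natively via <code>kubectl set image</code>.</p>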
<p>I am trying to run a simple image on a specific namespace to debug some issues</p> <pre><code>kubectl run busy --image busybox --namespace my-local-dev deployment.apps/busy created </code></pre> <p>However for some reason the container keeps restarting</p> <pre><code>busy-67b577b945-ng2lt 0/1 CrashLoopBackOff 5 3m </code></pre> <p>and I am unable to get any logs, even with the <code>--previous</code> flag</p> <pre><code>$ kubectl logs -f --namespace my-local-dev busy-67b577b945-ng2lt --previous Unable to retrieve container logs for docker://c8b9fce066686b3be01df1ed3343be5ec65607cb203e054fd9365511f77bd4af/home/pkara/Desktop $ kubectl logs -f --namespace my-local-dev busy-67b577b945-ng2lt $ _ </code></pre> <p>Any suggestions?</p>
<p><code>CrashLoopBackOff</code> means that your pod keeps crashing and being restarted, only to crash again.</p> <p>Depending on where it crashes, for example right at startup or later during the execution of your app, you may or may not see logs.</p> <p>In this case (no logs shown) it's likely that some resource your pod requested is not available, a secret or a volume for example.</p> <p>A good way to investigate is to watch the Kubernetes events: <code>kubectl get events</code></p> <p>Alternatively, describe the resource and read its events: <code>kubectl describe pod &lt;pod_name&gt;</code>; the last section of the output is dedicated to events for that resource.</p>
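<p>A side note that may or may not apply here (it is an assumption about your intent): a bare <code>busybox</code> image runs <code>sh</code>, which exits immediately when no terminal is attached, gets restarted, and so also produces <code>CrashLoopBackOff</code> with empty logs. If you only need a long-lived container for debugging, giving it a command keeps it running:</p> <pre><code># keep the container alive, then exec into it
kubectl run busy --image=busybox --namespace my-local-dev -- sleep 3600
kubectl exec -it -n my-local-dev &lt;pod-name&gt; -- sh
</code></pre>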
<p>Is there any way of viewing cache data in a Kubernetes cluster? Does it get stored in a folder somewhere, or can it be shown via a monitoring tool (Prometheus)? And how do you clean the cache data in a k8s cluster?</p>
<p>There is a cache directory in Kubernetes; however, it is held on the master node. As shown in this <a href="https://github.com/kubernetes/kubernetes/issues/59356" rel="nofollow noreferrer">GitHub issue</a>, the issue was found when debugging the kube-controller, which runs on the master node. </p> <p>I am not familiar with Amazon's cloud service; however, within GKE, the master node is a managed service and customers do not have access to it. So to answer your question: yes, there is a cache directory in Kubernetes, but you cannot view it within GKE.</p>
<p>I have three versions of the app that I need to serve to the user based on selection (for now assume it's a drop-down)</p> <p>I have 3 containers running v1, v2, v3 of the app.</p> <p>How do we route the app to the right container in Kubernetes based on the selection? How do I separate my versions?</p>
<p>Create 3 different <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes deployments</a> with pods using containers v1, v2, and v3 respectively. Also, create 3 <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes services</a> to forward traffic to those 3 deployments.</p>
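<p>A minimal sketch for one of the versions (names, labels and image tags below are assumptions; repeat the pair for v2 and v3 and point your router or drop-down logic at the matching service):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: myapp:v1      # hypothetical image tag
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-v1
spec:
  selector:
    app: myapp
    version: v1
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>An Ingress (or the drop-down handler in your frontend) can then route to <code>myapp-v1</code>, <code>myapp-v2</code> or <code>myapp-v3</code> based on the user's selection.</p>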
<p><strong>Use case / Problem</strong></p> <p>I am in charge of maintaining a Kubernetes cluster with 40 nodes (split across 2 zones). We have roughly 100 microservices and platform stuff like Kafka brokers running in this cluster. All microservices have defined resource requests &amp; limits. Most of them, however, are burstable and don't have guaranteed RAM. Developers who deploy their services in our cluster defined limits far greater than the request (see example below), which eventually caused a lot of evicted pods on various nodes. We still want to use burstable resources in our services though, as we can save money using burstable resources. Therefore I need better monitoring of all pods running on each node, containing this information:</p> <ul> <li>Node name &amp; CPU / RAM capacity</li> <li>All pod names plus <ul> <li>pod's resource requests &amp; limits</li> <li>pod's current cpu &amp; ram usage</li> </ul></li> </ul> <p>This way I could easily identify two problematic kinds of services:</p> <p><strong>Case A:</strong> The microservice which just sets huge resource limits, because the developer was just testing stuff or is too lazy to bench/monitor his service</p> <pre><code>resources: requests: cpu: 100m ram: 500Mi limits: cpu: 6 ram: 20Gi </code></pre> <p><strong>Case B:</strong> Too many services on the same node which have set inaccurate resource limits (e. g. 500Mi, but the service constantly uses 1.5Gi RAM). This case happened to us because the Java developers didn't notice that the Java garbage collector only starts to clean up when 75% of the available RAM has been used.</p> <p><strong>My question:</strong></p> <p>How could I properly monitor this and therefore identify misconfigured microservices in order to prevent such eviction problems? At a smaller scale I could simply run <code>kubectl describe nodes</code> and <code>kubectl top pods</code> to figure it out manually, but at this scale that doesn't work anymore.</p> <p><em>Note:</em> I couldn't find any existing solution for this problem (including prometheus + grafana boards using kube metrics and similar). I thought it was possible, but visualizing this stuff in Grafana is really hard.</p>
<p>This is a known issue; there is still an open <a href="https://github.com/kubernetes/kubernetes/issues/17512" rel="nofollow noreferrer">GitHub issue</a> in which the community is asking the developers to create a new command that would show total CPU and memory usage per pod/container. Please check that link, as it contains some ideas and workarounds provided by the community which look like they could be useful for your case. </p> <p>Have you already tried the available metrics without being able to see the required information? <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">Here</a> is a list of pod metrics, and I think some of them would be useful for your use case. </p> <p>Even though there is no fully functional solution to this issue, thanks to the community and some other resources there are a couple of ways of achieving your goal. As advised in this <a href="https://dzone.com/articles/kubernetes-resource-usage-how-do-you-manage-and-mo" rel="nofollow noreferrer">article</a>:</p> <pre><code>kubectl get nodes --no-headers | awk '{print $1}' | xargs -I {} sh -c 'echo {}; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo' </code></pre> <p>The author of that article also recommends <a href="https://www.coscale.com/" rel="nofollow noreferrer">CoScale</a>. I haven't used it, but it seems worth a try if other solutions fail. </p> <p>Another point is that you might never be in control if your developers keep allocating far greater resources than they need. The solution recommended by <a href="https://stackoverflow.com/users/2718151/nicola-ben">Nicola Ben</a> would help you mitigate issues like this. </p>
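<p>If you already run Prometheus with kube-state-metrics and cAdvisor, a rough sketch of queries for comparing actual memory usage with the configured limits per container could look like the following (metric and label names vary between kube-state-metrics/cAdvisor versions, e.g. <code>pod</code> vs. <code>pod_name</code>, so treat these as assumptions to adapt):</p> <pre><code># memory actually in use, per container (cAdvisor)
sum(container_memory_working_set_bytes{container_name!=""}) by (namespace, pod_name, container_name)

# configured memory limit, per container (kube-state-metrics)
kube_pod_container_resource_limits_memory_bytes
</code></pre> <p>Plotting both on the same Grafana panel makes Case A (limit far above usage) and Case B (usage far above request) visible at a glance.</p>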
<p>I have a Traefik on Kubernetes that acts as ingress and load balancer. I have a ingress setup like this:</p> <pre><code>{ "apiVersion": "extensions/v1beta1", "kind": "Ingress", "metadata": { "name": "nppl-ingress", "annotations": { "kubernetes.io/ingress.class": "traefik", "traefik.frontend.rule.type": "PathPrefixStrip" } }, "spec": { "rules": [ { "host": "" , "http": { "paths": [ { "path": "/discover-service", "backend": { "serviceName": "discover-service", "servicePort": "discover-port" } }, { "path": "/config-server", "backend": { "serviceName": "config-server", "servicePort": "config-port" } } ] } } ] } } </code></pre> <p>When, for example, I hit "<a href="http://IP/config-server" rel="nofollow noreferrer">http://IP/config-server</a>", because I have a basic auth on that service, the service does a redirect over "<a href="http://IP/login" rel="nofollow noreferrer">http://IP/login</a>" so I get a 404 but I was expecting "<a href="http://IP/config-server/login" rel="nofollow noreferrer">http://IP/config-server/login</a>". What is wrong with my configuration? I also tried "PathPrefix" and "Path" under rule type but it doesn't work. I need to keep my base path when my services perform redirects. </p>
<p>The issue here is that <code>PathPrefixStrip</code> removes <code>/config-server</code>, so the request is initially forwarded to your backend with just <code>/</code>. However, when your backend redirects, that redirect is a new request, and the Ingress doesn't know what to do with <code>/</code> or <code>/login</code>. You could create two Ingresses, one per backend, each with a single backend and a path of <code>/</code>. But this would mean adding a new ingress class, and <a href="https://github.com/containous/traefik/pull/3516" rel="nofollow noreferrer">the good news is that <code>Traefik</code> supports it</a>. You could also try <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="nofollow noreferrer">an nginx</a> ingress, which has the <code>--ingress-class</code> option.</p> <p>You can find more information on how to create multiple ingress controllers <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/multiple-ingress-controllers" rel="nofollow noreferrer">here</a>.</p> <p>If you want a single Ingress it will also be tricky, since you might have to add logic in your app to understand the <code>/config-server</code> path, for example.</p>
<p>I want to mount a volume with <code>kubectl</code> and get a shell in the environment.</p> <p>I've tried this:</p> <pre><code>kubectl run -i --rm --tty alpine --overrides=' { "apiVersion": "v1", "spec": { "template": { "spec": { "containers": [ { "name": "alpine", "image": "alpine:latest", "args": [ "sh" ], "stdin": true, "stdinOnce": true, "tty": true, "volumeMounts": [{ "mountPath": "/home/store", "name": "store" }] } ], "volumes": [{ "name":"store", "emptyDir":{} }] } } } } ' --image=alpine:latest --restart=Never -- sh </code></pre> <p>I'm not getting any errors but the volume is not present at the mount path <code>/home/store</code>:</p> <pre><code>~ # ls -lah /home/ total 8 drwxr-xr-x 2 root root 4.0K Sep 11 20:23 . drwxr-xr-x 1 root root 4.0K Sep 29 09:47 .. </code></pre> <p>I'm looking for the most direct way to use a volume with <code>kubectl run</code> for debugging purposes.</p>
<p><em>TL;DR I don't know what the issue was, but I ended up solving this by making the API request logging very verbose.</em></p> <p>I ended up solving this by setting the verbosity to maximum (<code>-v=9</code>) and noticing that my volume mount was completely ignored by <code>kubectl</code> and not present in the request to the API:</p> <pre><code>I0929 13:31:22.429307 14616 request.go:897] Request Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"alpine","creationTimestamp":null,"labels":{"run":"alpine"}},"spec":{"volumes":[{"name":"store","emptyDir":{}}],"containers":[{"name":"alpine","image":"alpine:latest","args":["sh"],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","stdin":true,"stdinOnce":true,"tty":true}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"},"status":{}} </code></pre> <p>I copy-pasted that request, edited it to add the same volume mount as above, and it worked:</p> <pre><code>kubectl run -i --rm --tty alpine --overrides=' { "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "alpine", "creationTimestamp": null, "labels": { "run": "alpine" } }, "spec": { "containers": [{ "name": "alpine", "image": "alpine:latest", "args": ["sh"], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent", "stdin": true, "stdinOnce": true, "tty": true, "volumeMounts": [{ "mountPath": "/home/store", "name": "store" }] }], "volumes": [{ "name":"store", "emptyDir":{} }], "restartPolicy": "Never", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "securityContext": {}, "schedulerName": "default-scheduler" }, "status": {} } ' --image=alpine:latest -v=9 --restart=Never -- sh </code></pre>
<p>I have deployed the <strong>Kong ingress controller</strong> using <strong>Helm</strong>.</p> <p>I have a Kubernetes cluster v1.10 on CentOS 7.</p> <p>I am using a dedicated server from the OVH provider.</p> <p>When I create an Ingress: </p> <pre><code>cat ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: jenkins spec: backend: serviceName: jenkins servicePort: 8080 </code></pre> <hr> <pre><code>kubectl get ing NAME HOSTS ADDRESS PORTS AGE jenkins * 80 3s </code></pre> <hr> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins ClusterIP 10.254.104.80 &lt;none&gt; 8080/TCP 1d </code></pre> <p>Now I cannot access this Ingress from outside because I am using an OVH server.</p> <p>Is there a solution? </p>
<p>OVH is not <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">officially supported by Kubernetes</a>. If it were supported, you would generally create a Service <code>jenkins</code> of type <code>LoadBalancer</code>, and that would be your externally facing endpoint with a public IP.</p> <p>Since it's not supported, the next best thing is to create a <code>NodePort</code> service. That will create a service that listens on a specific port on all the Kubernetes nodes and forwards the requests to your Pods (only where they are running). So, in this case, you will have to create an <a href="https://www.ovh.com/world/solutions/load-balancer/" rel="nofollow noreferrer">OVH Load Balancer</a> with a public IP and point the backend of that load balancer to the <code>NodePort</code> of the service that your <code>Ingress</code> controller is listening on.</p>
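<p>A minimal sketch of such a NodePort service in front of the ingress controller (the namespace, selector and ports are assumptions: they must match whatever labels and container ports the Kong proxy pods actually use in your Helm release, and nodePort must be inside the cluster's node-port range, 30000-32767 by default):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-nodeport
  namespace: kong          # assumes Kong was installed into this namespace
spec:
  type: NodePort
  selector:
    app: kong              # hypothetical label; check with kubectl get pods --show-labels
  ports:
  - name: proxy
    port: 80
    targetPort: 8000       # Kong's proxy port often defaults to 8000; verify in your chart values
    nodePort: 30080
</code></pre> <p>You would then point an OVH load balancer (or simply the server's public IP plus firewall rules) at port 30080 on the nodes.</p>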
<p>I am creating a pod with the Kubernetes client-go and setting up a watch to get notified when the pod has completed, so that I can read its logs. The watch interface doesn't seem to provide any events on the channel. Here is the code; how would I get notified that the pod status is now Completed and the logs are ready to be read?</p> <pre class="lang-golang prettyprint-override"><code>func readLogs(clientset *kubernetes.Clientset) { // namespace := &quot;default&quot; // label := &quot;cithu&quot; var ( pod *v1.Pod // watchface watch.Interface err error ) // returns a pod after creation pod, err = createPod(clientset) fmt.Println(pod.Name, pod.Status, err) if watchface, err = clientset.CoreV1().Pods(namespace).Watch(metav1.ListOptions{ LabelSelector: pod.Name, }); err != nil { log.Fatalf(err.Error()) } // How do I get notified when the pod.Status == completed } </code></pre>
<p>The events can be listed using the following snippet; note that the watch uses one of the pod's label keys as its <code>LabelSelector</code>. You can then process the pod events as needed: the pod has finished once an event's object reports <code>p.Status.Phase == v1.PodSucceeded</code> (or <code>v1.PodFailed</code> on error), which is the point at which it is safe to read the logs.</p> <pre class="lang-golang prettyprint-override"><code>label := &quot;&quot; for k := range pod.GetLabels() { label = k break } watch, err := clientset.CoreV1().Pods(namespace).Watch(metav1.ListOptions{ LabelSelector: label, }) if err != nil { log.Fatal(err.Error()) } go func() { for event := range watch.ResultChan() { fmt.Printf(&quot;Type: %v\n&quot;, event.Type) p, ok := event.Object.(*v1.Pod) if !ok { log.Fatal(&quot;unexpected type&quot;) } fmt.Println(p.Status.ContainerStatuses) fmt.Println(p.Status.Phase) } }() time.Sleep(5 * time.Second) </code></pre>
<p>Creating the pod first: </p> <pre><code>if pod, err = clientset.CoreV1().Pods(namespace).Create(&amp;v1.Pod{ TypeMeta: metav1.TypeMeta{ Kind: "Deployment", APIVersion: "apps/v1", }, ObjectMeta: metav1.ObjectMeta{ Name: "mithu", Namespace: namespace, }, Spec: v1.PodSpec{ Containers: []v1.Container{ v1.Container{ Name: "mithu", Image: "test-app", ImagePullPolicy: "IfNotPresent", Ports: []v1.ContainerPort{ v1.ContainerPort{ Name: "local", HostPort: 8081, ContainerPort: 8081, Protocol: v1.ProtocolTCP, }, }, Env: []v1.EnvVar{ v1.EnvVar{ Name: "ASD", Value: "", }, }, }, }, RestartPolicy: v1.RestartPolicyOnFailure, }, }); err == nil { fmt.Println("Pod created succesfuly with status", pod.Name) } else { fmt.Println("Error1: ", err) } time.Sleep(5000) </code></pre> <p>Now that the pod has been deployed, I am reading its logs:</p> <pre><code>req := clientset.RESTClient().Get(). Namespace(namespace). Name(pod.Name). Resource("pods"). SubResource("log"). Param("follow", ""). Param("container", ""). Param("previous", ""). Param("timestamps", "") readCloser, err := req.Stream() if err != nil { fmt.Println("Error2: ", err) } else { buf := new(bytes.Buffer) _, err = io.Copy(buf, readCloser) fmt.Println("log : ", buf.String()) } </code></pre> <p>I followed the approach for reading logs from <a href="https://stackoverflow.com/questions/32983228/kubernetes-go-client-api-for-log-of-a-particular-pod">here</a>. The pod is created successfully, but the client can't seem to find the resource, giving the error <code>Error2: the server could not find the requested resource (get pods.meta.k8s.io mithu)</code></p> <p>If I try to get the logs from the command line, there are no problems. I think the major issue is between <code>pods.meta.k8s.io</code> and the command line's <code>pods</code> resource.</p>
<p>This worked for me, using the typed <code>GetLogs</code> helper instead of the generic REST client (remember to close the stream when you are done):</p> <pre><code>req := clientset.CoreV1().Pods(namespace).GetLogs( pod.Name, &amp;v1.PodLogOptions{}, ) readCloser, err := req.Stream() if err != nil { fmt.Println("Error2: ", err) } else { defer readCloser.Close() buf := new(bytes.Buffer) _, err = io.Copy(buf, readCloser) fmt.Println("log : ", buf.String()) } </code></pre>
<p>I am using RBAC so that the user <code>john</code> can work and create resources only in their namespace <code>test-namespace</code>.</p> <p>Here is what I've done:</p> <p>1) Generate certificates for a user and create <code>set-context</code></p> <pre><code>kubectl config set-credentials john --client-certificate=/home/john/.certs/employee.crt --client-key=/home/john/.certs/employee.key kubectl config set-context john-context --cluster=minikube --namespace=test-namespace --user=john </code></pre> <p>2) Create a Role to manage deployments in the namespace <code>test-namespace</code>.</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: namespace: test-namespace name: deployment-authority rules: - apiGroups: ["", "extensions", "apps"] resources: ["deployments", "replicasets", "pods"] verbs: ["get", "create", "update", "patch", "delete"] </code></pre> <p>3) Create a RoleBinding.</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: deployment-authority-binding-john namespace: test-namespace subjects: - kind: User name: john apiGroup: "" roleRef: kind: Role name: deployment-authority apiGroup: "" </code></pre> <p>Now all of the above works and I can run the command <code>kubectl --context=john-context run --image busybox busybox</code> without any issue. </p> <p>Now I also have a Unix user on my Kubernetes master node whose user name is john. My aim is to make sure that when that user is logged in, he (john) can run the commands allowed to him in his context <code>john-context</code>. I'm unable to link the UNIX user to the Kubernetes user. </p> <p>Something like :</p> <pre><code>john@kubernet:/$ id uid=1002(john) gid=1002(john) groups=1002(john) john@kubernet:/$ kubectl get po -n test-namespace NAME READY STATUS RESTARTS AGE grafana-67c6585fbd-tlr4n 1/1 Running 2 23h </code></pre> <p>But if I switch over to another user on my Unix machine, he/she should not be able to see anything or do anything in namespace <code>test-namespace</code>.</p> <pre><code>su - tom tom@kubernet:/$ id uid=1004(tom) gid=1004(tom) groups=1004(tom) tom@kubernet:/$ kubectl get po -n test-namespace You are not allowed to view resources in this namespace </code></pre> <p>Any suggestions will be much appreciated. </p>
<p>You can just manage this through your <code>~/.kube/config</code> file on your users' home directories. Say in <code>/home/john/.kube/config</code> you would have something like this:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0xxxxxxxxo= server: https://172.1.1.1:6443 name: kubernetes contexts: - context: cluster: kubernetes namespace: default user: john name: john@kubernetes current-context: john@kubernetes kind: Config preferences: {} users: - name: john user: client-certificate-data: LS0txxxxo= client-key-data: LS0xxxxx== </code></pre> <p>And then you'd make sure that file only has <code>600</code> permissions.</p>
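<p>For example (assuming the config lives in each user's home directory):</p> <pre><code>sudo chown john:john /home/john/.kube/config
sudo chmod 600 /home/john/.kube/config
</code></pre> <p>Since the client certificate and key embedded in john's kubeconfig are what Kubernetes maps to the <code>john</code> user, tom simply has no credentials for that namespace unless you hand him a kubeconfig of his own.</p>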
<p>I am trying to find a DB (object storage) for my application. The application is really a wrapper over the Istio network routing API; basically it simplifies the Istio configuration for my network. A Kubernetes (k8s) Custom Resource Definition (CRD) seems to fit my requirements. I also love the watch and REST API capability provided by CRDs.</p> <p>DB requirements</p> <ul> <li>A few 100 MBs of data - worst case</li> <li>Ability to watch objects</li> <li>REST API support for objects</li> <li>Persistence</li> <li>Around 2k writes/sec and similar/more reads/sec. Although I do have my application acting as a proxy for the CRD where things can be cached.</li> </ul> <p>Would using a CRD be a good or a bad idea? Are there any performance implications of using CRDs? <a href="https://stackoverflow.com/questions/41063238/using-etcd-as-primary-store-database">This 2016 Stack Overflow answer</a> suggests that etcd data is not in RAM, whereas the etcd project suggests that etcd can do <a href="https://github.com/etcd-io/etcd" rel="nofollow noreferrer">10k writes/sec</a> (so even if things are not in RAM and purely on disk, who cares). </p> <p>I am seeing multiple applications using k8s CRDs.</p> <ul> <li>Helm uses CRDs to store releases</li> <li>Istio uses CRDs to store their networking routing API objects</li> </ul>
<p>Considering that (<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">CRD page</a>)</p> <ul> <li><p>A resource is an endpoint in the <strong>Kubernetes API</strong> that stores a collection of API objects of a certain kind. For example, the built-in pods resource contains a collection of Pod objects.</p></li> <li><p>A custom resource is an extension of the <strong>Kubernetes API</strong> that is not necessarily available on every Kubernetes cluster. In other words, it represents a customization of a particular Kubernetes installation.</p></li> </ul> <p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow noreferrer">A <strong>CRD</strong></a> is for extending Kubernetes itself, it is not for application data.</p> <p>The <a href="https://github.com/bitnami-labs/helm-crd/blob/7dea271427769f46cd710d5d9a57ae6af811ae23/examples/mariadb.yaml" rel="nofollow noreferrer">helm-crd/examples/mariadb.yaml</a> is about lightweight metadata which will enable Kubernetes to download the right release and through Helm install it.</p> <p>It is not to store data for a random application which could exist <em>without</em> Kubernetes (as opposed to Helm releases, which make sense only in a Kubernetes deployment scenario)</p> <p>Similarly, <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">Istio CRD</a> makes sense only in a Kubernetes context:</p> <blockquote> <p>Kubernetes currently implements the Istio configuration on Custom Resource Definitions (CRDs). These CRDs correspond to namespace-scope and cluster-scope CRDs and automatically inherit access protection via the Kubernetes RBAC. </p> </blockquote> <p>That approach (using etcd to store any application data) would not scale.</p>
<p>Is it possible to create a volume that is shared between all pods in a deployment but impossible to mount for any other pod?</p> <p>Alternatively that is read/write from one deployment and just read from any other pod?</p>
<p>That could be addressed in <a href="https://kubernetes.io/blog/2018/09/27/kubernetes-1.12-kubelet-tls-bootstrap-and-azure-virtual-machine-scale-sets-vmss-move-to-general-availability/#additional-notable-feature-updates" rel="nofollow noreferrer">Kubernetes 1.12</a> (Q3 2018) with <strong>Topology aware dynamic provisioning</strong>, which is now in beta.<br> That means storage resources can now understand where they live.<br> This also includes beta support for AWS EBS and GCE PD.</p> <p>See <a href="https://github.com/kubernetes/features/issues/561" rel="nofollow noreferrer">kubernetes/feature 561</a> and its <a href="https://github.com/kubernetes/website/pull/9939" rel="nofollow noreferrer">doc PR 9939</a> (<a href="https://github.com/kubernetes/website/commit/e1e655507d91f2eeb882bef6d5091b7f434f867c" rel="nofollow noreferrer">commit e1e6555</a>)</p> <p>See <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode" rel="nofollow noreferrer">Storage / Storage Classes / Volume Binding Mode</a> (beta in K8s 1.12)</p> <blockquote> <p>By default, the <code>Immediate</code> mode indicates that volume binding and dynamic provisioning occurs once the <code>PersistentVolumeClaim</code> is created. For storage backends that are topology-constrained and not globally accessible from all Nodes in the cluster, <code>PersistentVolumes</code> will be bound or provisioned without knowledge of the Pod’s scheduling requirements. This may result in unschedulable Pods.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies" rel="nofollow noreferrer">Allowed Topologies</a> is how to restrict the topology of provisioned volumes to specific zones.</p> <pre><code>allowedTopologies: - matchLabelExpressions: - key: failure-domain.beta.kubernetes.io/zone values: - us-central1-a - us-central1-b </code></pre>
<p>On this page in the Kubernetes docs <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide/pods.md" rel="noreferrer">Pods</a>, it states</p> <blockquote> <p>The context of the pod can be defined as the conjunction of several Linux namespaces:</p> <p>PID namespace (applications within the pod can see each other's processes) network namespace (applications within the pod have access to the same IP and port space)</p> <p>IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)</p> <p>UTS namespace (applications within the pod share a hostname)</p> </blockquote> <p>However, it then says that</p> <blockquote> <p>In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. PID namespace sharing is not yet implemented with Docker.</p> </blockquote> <p>So does this mean that pods cannot see processes in other containers or perform any kind of IPC between containers running in the same pod? How would I send a signal to a process running in another pod?</p>
<blockquote> <p>does this mean that pods cannot see processes in other containers or perform any kind of IPC between containers running in the same pod?</p> </blockquote> <p>Recent <a href="https://kubernetes.io/blog/2018/09/27/kubernetes-1.12-kubelet-tls-bootstrap-and-azure-virtual-machine-scale-sets-vmss-move-to-general-availability/#additional-notable-feature-updates" rel="nofollow noreferrer">Kubernetes 1.12 (Q3 2018) announcements</a> do include:</p> <blockquote> <p>Configurable pod process namespace sharing is moving to beta, meaning <strong>users can configure containers within a pod to share a common PID namespace by setting an option in the PodSpec</strong>.</p> </blockquote> <p>See <a href="https://github.com/kubernetes/features/issues/495" rel="nofollow noreferrer">kubernetes/feature 495</a> "Configurable Pod Process Namespace Sharing" (and its <a href="https://github.com/kubernetes/kubernetes/pull/66507" rel="nofollow noreferrer">PR 66507</a>, <a href="https://github.com/kubernetes/kubernetes/commit/8ebc84e71d7eb1c45dc296945dc232e811b2b820" rel="nofollow noreferrer">commit 8ebc84e</a>), and <a href="https://github.com/kubernetes/website/pull/9996" rel="nofollow noreferrer">its documentation</a>: "<a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer"><strong>Share Process Namespace between Containers in a Pod</strong></a>".</p> <p>Warning, with this:</p> <blockquote> <ol> <li><p><strong>The container process no longer has PID 1</strong>. Some container images refuse to start without PID 1 (for example, containers using systemd) or run commands like <code>kill -HUP 1</code> to signal the container process. In pods with a shared process namespace, <code>kill -HUP 1</code> will signal the pod sandbox.</p></li> <li><p><strong>Processes are visible to other containers in the pod</strong>. This includes all information visible in <code>/proc</code>, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.</p></li> <li><p><strong>Container filesystems are visible to other containers in the pod through the <code>/proc/$pid/root</code> link</strong>. This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.</p></li> </ol> </blockquote>
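<p>A minimal sketch of enabling it (requires a Kubernetes version where the feature is available, v1.12+ for beta; image and names below are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true
  containers:
  - name: main
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    # from this container, running ps will now also show the nginx processes,
    # and signals can be sent to them by PID
</code></pre>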
<p>I'm trying to run my custom Jenkins on Openshift. I'm trying to run dockerized pipelines using privileged containers and scc to be able to run docker using my Jenkins. So far, I managed to run the job and it is creating a new Docker container successfully. But, since my new docker is created by Jenkins it doesn't have access to Nexus service on my project. How can I fix this? I was thinking the solution should be for the Jenkins to run docker in the same namespace as my Jenkins.</p>
<p>I'm assuming that you want to run your container in Kubernetes.</p> <p>On your <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> I would advise using either a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> or if you want to keep in encrypted in the cluster you can use a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a> to store your Nexus credentials.</p> <p>Then you can mount your ConfigMap or Secret under <code>~/.ivy2/.credentials</code> for example.</p>
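<p>A rough sketch of that approach (all names, keys and the exact credentials path are assumptions to adapt to your setup):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: nexus-credentials
type: Opaque
stringData:
  .credentials: |
    realm=Sonatype Nexus Repository Manager
    host=nexus.example.com
    user=jenkins
    password=changeme
---
# then, in the pod/deployment spec of the build container:
#   volumes:
#   - name: nexus-credentials
#     secret:
#       secretName: nexus-credentials
#   containers:
#   - name: build
#     volumeMounts:
#     - name: nexus-credentials
#       mountPath: /root/.ivy2/.credentials
#       subPath: .credentials
</code></pre> <p>Whether you mount it under <code>~/.ivy2</code>, <code>~/.m2</code> or somewhere else depends entirely on the build tool your pipeline container uses.</p>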
<p>I have the following <strong>YAML</strong> file for creating a postgres server instance.</p> <pre><code>kind: Deployment apiVersion: apps/v1beta1 metadata: name: spring-demo-db labels: app: spring-demo-application spec: replicas: 1 selector: matchLabels: app: spring-demo-db template: metadata: creationTimestamp: null labels: app: spring-demo-db spec: containers: - name: spring-demo-db image: postgres:10.4 ports: - name: spring-demo-db containerPort: 5432 protocol: TCP env: - name: POSTGRES_PASSWORD value: "springdemo" - name: POSTGRES_USER value: "springdemo" - name: POSTGRES_DB value: "springdemo" volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-storage terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent volumes: - name: "postgres-storage" persistentVolumeClaim: claimName: spring-demo-pv-claim restartPolicy: Always </code></pre> <p>But when I ssh into the container, the user <strong>springdemo</strong> has not been created. I have been struggling all day. What could be the problem? </p> <p>Can anyone help me?</p>
<p>You didn't mention what command you're running and what error you're getting, so I'm guessing here, but try this:</p> <pre><code>kind: Deployment apiVersion: apps/v1beta1 metadata: name: spring-demo-db labels: app: spring-demo-application spec: replicas: 1 selector: matchLabels: app: spring-demo-db template: metadata: creationTimestamp: null labels: app: spring-demo-db spec: containers: - name: spring-demo-db image: postgres:10.4 ports: - name: spring-demo-db containerPort: 5432 protocol: TCP env: - name: POSTGRES_USER value: "springdemo" - name: POSTGRES_DB value: "springdemo" - name: POSTGRES_PASSWORD value: "springdemo" volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-storage terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent volumes: - name: "postgres-storage" persistentVolumeClaim: claimName: spring-demo-pv-claim restartPolicy: Always </code></pre> <p>But if it doesn't work, just use the <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">Helm chart</a>, because, among other issues, you are passing the password in an insecure way, which is a bad idea.</p>
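<p>For reference, a sketch of pulling the password from a Secret instead of a plain-text value (the secret name and key are assumptions):</p> <pre><code>        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: spring-demo-db-secret
              key: postgres-password
</code></pre> <p>with a matching secret created via <code>kubectl create secret generic spring-demo-db-secret --from-literal=postgres-password=springdemo</code>.</p>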
<h1>Dockerfile</h1> <pre><code>FROM ubuntu MAINTAINER [email protected] RUN apt-get update RUN apt-get install -y openjdk-8-jdk ADD build/libs/micro-service-gradle-0.0.1-SNAPSHOT.jar /var/local/ ENTRYPOINT exec java $JAVA_OPTS \ -jar /var/local/micro-service-gradle-0.0.1-SNAPSHOT.jar EXPOSE 8080 </code></pre> <h1>deployment.yaml</h1> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: micro-service-gradle labels: app: micro-service-gradle spec: replicas: 1 selector: matchLabels: app: micro-service-gradle template: metadata: labels: app: micro-service-gradle spec: containers: - name: micro-service-gradle image: micro-service-gradle:latest ports: - containerPort: 8080 </code></pre> <p>I am deploying a Spring Boot application in Kubernetes. The pod is not getting created. When I check <code>kubectl get pods</code>, it says CrashLoopBackOff. </p> <pre><code>NAME READY STATUS RESTARTS AGE micro-service-gradle-fc97c97b-8hwhg 0/1 CrashLoopBackOff 6 6m23s </code></pre> <p>I tried to check the logs for the same container. The logs are empty:</p> <pre><code>kubectl logs -p micro-service-gradle-fc97c97b-8hwhg </code></pre> <p>I created the container manually using docker run. There are no issues with the image and the container works fine.</p> <p>How do I verify the logs to see why the pod is in a crash status?</p>
<p>You need to use</p> <pre><code>kubectl describe pod micro-service-gradle-fc97c97b-8hwhg </code></pre> <p>to get the relevant logs. This should guide you to your problem.</p>
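<p>If the describe output is not conclusive, a couple of other commands usually help with CrashLoopBackOff (pod name taken from your example):</p> <pre><code># logs of the previous, crashed container instance
kubectl logs micro-service-gradle-fc97c97b-8hwhg --previous

# recent cluster events, often reveals image pull or OOM problems
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>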
<p>I'm looking to perform an automatic rolling update of a Kubernetes cluster in a way that's ignorant of the details of the applications running on the cluster. In principle, PodDisruptionBudget should facilitate this.</p> <p>Here's the snag: there's an Elasticsearch cluster running on this Kubernetes cluster, and I can't find a way to correctly express the "OK to evict an ES Pod" signal. Specifically, this seems to be a case where <strong>the "this Pod can receive traffic" and "this Pod can be evicted" signals can't both be represented by <code>readinessProbe</code>.</strong></p> <p>This ES cluster's indices have <code>number_of_replicas: 1</code>, and there's a PDB with <code>maxUnavailable: 1</code>. The ES Pods each specify a readiness probe that requests <code>/_cluster/health?wait_for_status=yellow</code>.</p> <p>As-is, if we evict an ES Pod, the replacement Pod will join the ES cluster, start up, and return to ready status while the ES cluster <em>as a whole</em> is still yellow and replicating shards (and thus it's still unsafe to evict any additional ES Pods).</p> <p>Has anyone worked around this successfully? Am I misunderstanding the semantics of probes/PDBs?</p> <hr> <p>Some options we've considered:</p> <ul> <li>Using <code>wait_for_status=green</code> in the readiness probe would mean that <em>all</em> ES Pods become unready when the ES cluster health is yellow.</li> <li>Increasing the ES indices' <code>number_of_replicas</code> to <code>2</code> only slightly reduces the probability of a rolling update damaging the ES cluster (assume these shards are slow to replicate).</li> <li>Ditto with setting a large <code>initialDelaySeconds</code> on the <code>readinessProbe</code>. It's possible for that to undershoot the time for shard replication to complete.</li> <li>Ditto with using a <code>preStop</code> hook (<a href="https://github.com/helm/charts/blob/0a5ddac35d7e1a8820f3edb5198a648f706146ef/incubator/elasticsearch/templates/configmap.yaml#L122-L143" rel="nofollow noreferrer">this is the approach the community Helm chart appears to take</a>) and a long grace period.</li> <li>Decreasing the PDB's <code>maxUnavailable</code> to <code>0</code> means that the rolling update has to be run by a human who can remove the PDB, evaluate the status of the ES cluster, etc.</li> <li>A hypothetical, um, <code>evictablenessProbe</code> that checked <code>wait_for_status=green</code> would work, but no such API exists.</li> </ul>
<p>First of all, by all means, save yourself a ton of time and trouble and use the Helm chart: <a href="https://github.com/helm/charts/tree/master/incubator/elasticsearch" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/elasticsearch</a></p> <p>But just in case you can't, or in case it helps others, I think what you're looking for is <code>/_cluster/health?local=true</code>, e.g.:</p> <pre><code> readinessProbe: httpGet: path: /_cluster/health?local=true port: 9200 </code></pre> <p>Hope this helps!</p>
<p>I'm currently testing Traefik in Kubernetes to request and manage over 2,000 Let's Encrypt certificates. </p> <p>I created 2 test environments:</p> <ol> <li><p>Traefik with acme.json for certificate storage</p></li> <li><p>Traefik HA cluster with a custom-built Consul (to store more than 512k per KV entry) as the KV store</p></li> </ol> <p>Both setups work really well with up to 400 certificates, and I find it really convenient to use instead of CertBot. After 400 certificates Traefik starts to consume a lot of CPU (100%) and many times it just hangs when processing any new certificates until the pods are restarted. Also, with any new certificate request, it takes up to 5 minutes to process it and during this process it becomes slower (sometimes ~20sec to resolve SSL).</p> <p>I'm also considering using multiple ingress classes in Kubernetes and limiting the number of certificates per class (maybe ~300/class), but this will add some complexity to my final setup and I prefer to avoid it. </p> <p>Does anyone have experience using Traefik with a large number of certificates? I would greatly appreciate any suggestion.</p>
<p>400 certificates is a pretty large number for a single Kubernetes cluster. I would recommend having another entity, other than Traefik, manage your certificates, such as the <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">cert-manager</a></p>
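<p>With cert-manager each certificate becomes its own Kubernetes resource, so the ACME load is taken off the ingress controller. A rough sketch of a single certificate (the apiVersion and issuer name depend on the cert-manager version and issuer you install, so treat them as assumptions):</p> <pre><code>apiVersion: cert-manager.io/v1    # adjust to your cert-manager release
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod        # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
</code></pre> <p>Traefik (or any other ingress controller) then just consumes the resulting TLS secrets instead of managing the ACME flow for thousands of domains itself.</p>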
<p>As far as I know, Kubernetes is an orchestration framework to deploy and manage containers. On the other hand, Kubernetes Engine is responsible for scaling clusters up and down, and also for managing container images.</p> <p>From the above definitions, they both seem to be the same thing, or at least very similar. What are their differences?</p>
<ul> <li><p><a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a>: The container orchestration tool that can be run on a Linux and now a Windows server too.</p></li> <li><p>Kubernetes Engine: Name given by cloud providers like GCP <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">(GKE)</a>, AWS <a href="https://aws.amazon.com/eks/" rel="noreferrer">(EKS)</a>, and Azure <a href="https://azure.microsoft.com/en-us/free/kubernetes-service/" rel="noreferrer">(AKS)</a> to their fully manager Kubernetes service.</p></li> </ul> <p>The main difference here is that the cloud providers manage the main aspects of your Kubernetes clusters, such as installing it, scaling the nodes, displaying pods running on their cloud console, provide monitoring, etc.</p>
<p>I cannot access my Cassandra database, deployed in the same namespace in Kubernetes.</p> <p>My service has no cluster IP but an internal endpoint cassandra.hosting:9042; whenever I try to connect from an internal Spring application using </p> <pre><code>spring.data.cassandra.contact-points=cassandra.hosting </code></pre> <p>it fails with the error <code>All host(s) tried for query failed</code></p>
<p>How did you configure your endpoint? Generally, all services and pods in a Kubernetes cluster are discoverable through a standard <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS notation</a>. It looks like this:</p> <pre><code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local # or &lt;pod-name&gt;.&lt;namespace&gt;.svc.cluster.local # or &lt;pod-name&gt;.&lt;subdomain&gt;.&lt;namespace&gt;.svc.cluster.local </code></pre> <p>If you are within the same namespace this would work too:</p> <pre><code>&lt;service-name&gt; &lt;pod-name&gt; &lt;pod-name&gt;.&lt;subdomain&gt; </code></pre> <p>I would also check either <code>core-dns</code> or <code>kube-dns</code> are running and ready:</p> <pre><code>kubectl -n kube-system get pods | grep dns </code></pre>
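<p>Assuming the service is actually named <code>cassandra</code> in the <code>hosting</code> namespace (as the <code>cassandra.hosting:9042</code> endpoint suggests), a fully qualified configuration would look something like this:</p> <pre><code>spring.data.cassandra.contact-points=cassandra.hosting.svc.cluster.local
spring.data.cassandra.port=9042
</code></pre> <p>or just <code>cassandra</code> if the Spring application really runs in the same <code>hosting</code> namespace.</p>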