<p>I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.</p> <p>I tried this command to get the token and entered it in:</p> <pre><code>gcloud container clusters get-credentials mycluster kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}' kubectl proxy </code></pre> <p>And it shows some things, but not others (services are missing, says it's forbidden).</p> <p>How do I use kubectl proxy or show that dashboard with GKE?</p>
<p>Provided you are authenticated with <code>gcloud auth login</code> and the current project and k8s cluster is configured to the one you need, authenticate <code>kubectl</code> to the cluster (this will write <code>~/.kube/config</code>):</p> <pre><code>gcloud container clusters get-credentials &lt;cluster name&gt; --zone &lt;zone&gt; --project &lt;project&gt; </code></pre> <p>retrieve the auth token that the kubectl itself uses to authenticate as you</p> <pre><code>gcloud config config-helper --format=json | jq -r '.credential.access_token' </code></pre> <p>run</p> <pre><code>kubectl proxy </code></pre> <p>Then open a local machine web browser on</p> <p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy</a> (This will only work if you checked the checkbox Deploy Dashboard in GCP console)</p> <p>and use the token from the second command to log in with your Google Account's permissions.</p>
<p>How to attach a secret to running pod?</p> <p>I have a pod running and i want to attach a secret.</p> <p>I don't want to terminate the running pod instance.</p> <p>I know pod are meant to run as stateless.</p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod.</p> <p>This is an example of a pod that mounts a secret in a volume:</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: "/etc/foo" readOnly: true volumes: - name: foo secret: secretName: mysecret </code></pre> <blockquote> <p><strong>Mounted Secrets are updated automatically</strong></p> <p>When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted secret is fresh on every periodic sync. However, it is using its local cache for getting the current value of the Secret. The type of the cache is configurable using the (ConfigMapAndSecretChangeDetectionStrategy field in KubeletConfiguration struct). It can be either propagated via watch (default), ttl-based, or simply redirecting all requests to directly kube-apiserver. As a result, the total delay from the moment when the Secret is updated to the moment when new keys are projected to the Pod can be as long as kubelet sync period + cache propagation delay, where cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero corespondingly).</p> <p>Note: A container using a Secret as a subPath volume mount will not receive Secret updates.</p> <p>This is an example of a pod that uses secrets from environment variables:</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: secret-env-pod spec: containers: - name: mycontainer image: redis env: - name: SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: mysecret key: password restartPolicy: Never </code></pre> <p>For both cases you need to change pod specification. You can do it by editing Pod or Deployment with kubectl edit:</p> <pre><code>$ kubectl edit pod &lt;pod_name&gt; -n &lt;namespace_name&gt; $ kubectl edit deployment &lt;deployment_name&gt; -n &lt;namespace_name&gt; </code></pre> <p>Alternatively you can make changes in YAML file and apply it:</p> <pre><code>$ vi MyPod.yaml $ kubectl apply -f MyPod.yaml </code></pre> <p>The most important thing you need to know, if you change the Pod specification, your Pod will be restarted to apply changes. In case of Deployment rolling update will happen. In most cases it is okay. If you need to save state of your application, the best way is to store valuable information using Volumes.</p> <p>If you still want to add secrets without Pod restarts, you can use shared storage like NFS. When you change the content of NFS volume that already mounted into the Pod, the changes will be visible inside the pod instantly. 
In certain cases you can exec shell inside the pod and mount NFS volume manually.</p> <p>Alternatively, you can export the content of the secret to the file using the <a href="https://github.com/ashleyschuett/kubernetes-secret-decode" rel="nofollow noreferrer">ksd</a> program<br> (or <code>base64 -d</code>) to decode base64 encoded values in the Secret:</p> <pre><code>kubectl get secret mysecret -o yaml | ksd &gt; filename.yaml </code></pre> <p>and <a href="https://medium.com/@nnilesh7756/copy-directories-and-files-to-and-from-kubernetes-container-pod-19612fa74660" rel="nofollow noreferrer">copy</a> it to the pod using the following command:</p> <pre><code>kubectl cp filename.yaml &lt;some-namespace&gt;/&lt;some-pod&gt;:/tmp/secret.yaml </code></pre>
<p>I have a production AKS kubernetes cluster that hosted in uk-south that has become unstable and unresponsive: </p> <p><a href="https://i.stack.imgur.com/z37QV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z37QV.png" alt="image 1"></a></p> <p>From the image, you can see that I have several pods in varying states of unready ie terminating/unknown, and the ones the report to be running are inaccessible.</p> <p>I can see from the insights grid that the issue starts at around 9.50pm last night </p> <p><a href="https://i.stack.imgur.com/JQHSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JQHSM.png" alt="image 2"></a></p> <p>I've scoured through the logs in the AKS service itself and through the Kibana logs for the apps running on the cluster around the time of the failure but I am struggling to see anything that looks to have caused this.</p> <p>Luckily I have two clusters serving production under a traffic manager so have routed all traffic to the healthy one but my worry is that I need to understand what caused this, especially if the same happens on the other one as there will be production downtime while I spin up a new cluster.</p> <p>My question is am I missing any obvious places to look for information on what caused the issue? any event logs that may point to what the problem is?</p>
<p>I would suggest examining K8s event log around the time your nodes went "not ready". </p> <p>Try open "Insights" Nodes tab and choose timeframe up top around the time when things went wrong. See what node statuses are. Any pressures? You can see that in the property panel to the right of the node list. Property panel also contains a link to event logs for that timeframe... Note though, link to event logs on the node's property panel constructs a complicated query to show only events tagged with that node. </p> <p>You can get this information with simpler queries (and run more fun queries as well) in the Logs. Open "Logs" tab in the left menu on the cluster and execute query similar to this one (change the time interval to the one you need):</p> <pre><code>let startDateTime = datetime('2019-01-01T13:45:00.000Z'); let endDateTime = datetime('2019-01-02T13:45:00.000Z'); KubeEvents_CL | where TimeGenerated &gt;= startDateTime and TimeGenerated &lt; endDateTime | order by TimeGenerated desc </code></pre> <p>See if you have events indicating what went wrong. Also of interest you can look at node inventory on your cluster. Nodes report K8s status. It was "Ready" prior to the problem... Then something went wrong - what is the status? Out of Disk by chance?</p> <pre><code>let startDateTime = datetime('2019-01-01T13:45:00.000Z'); let endDateTime = datetime('2019-01-02T13:45:00.000Z'); KubeNodeInventory | where TimeGenerated &gt;= startDateTime and TimeGenerated &lt; endDateTime | order by TimeGenerated desc </code></pre>
<p>I want to deploy a single Pod on a Node to host my service (GitLab, for example). The problem is that the Pod will not be re-created after a Node failure (such as a reboot). The solution is to use a StatefulSet, ReplicaSet or DaemonSet to ensure the Pod is recreated after a Node failure. But which one is best for this case?</p> <p>This Pod is stateful (I am using a <code>hostPath</code> volume to keep the data) and is deployed using <code>nodeSelector</code> to keep it always on the same Node.</p> <p>Here is a simple YAML file for the example: <a href="https://pastebin.com/WNDYTqSG" rel="nofollow noreferrer">https://pastebin.com/WNDYTqSG</a></p> <p>It creates 3 Pods (one for each <code>Set</code>) with a volume to keep the data stateful. In practice, any of these solutions can fit my needs, but I don't know if there are best practices for this case.</p> <p>Can you help me choose between these solutions to deploy a single stateful Pod?</p>
<p>A Deployment is the most common option for managing a Pod or set of Pods. Deployments are normally used instead of ReplicaSets, as they are more flexible and creating a Deployment results in a ReplicaSet anyway - see <a href="https://www.mirantis.com/blog/kubernetes-replication-controller-replica-set-and-deployments-understanding-replication-options/" rel="noreferrer">https://www.mirantis.com/blog/kubernetes-replication-controller-replica-set-and-deployments-understanding-replication-options/</a></p> <p>You would only need a StatefulSet if you had multiple Pods and needed dedicated persistence per Pod, or if you had multiple Pods that need stable individual names because they relate to each other (e.g. one is a leader) - <a href="https://stackoverflow.com/a/48006210/9705485">https://stackoverflow.com/a/48006210/9705485</a></p> <p>A DaemonSet would be used when you want one Pod/replica per Node. For a single stateful Pod pinned to one Node with <code>nodeSelector</code>, a Deployment with <code>replicas: 1</code> is the simplest fit.</p>
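<p>As a rough illustration (not taken from your pastebin - the names, node label, image and paths are placeholders), a single-replica Deployment pinned to one Node with a <code>hostPath</code> volume could look something like this:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  strategy:
    type: Recreate        # avoid two Pods using the same hostPath during a rollout
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      nodeSelector:
        kubernetes.io/hostname: my-node   # pin to the Node that holds the data
      containers:
      - name: gitlab
        image: gitlab/gitlab-ce:latest
        volumeMounts:
        - name: data
          mountPath: /var/opt/gitlab
      volumes:
      - name: data
        hostPath:
          path: /srv/gitlab
</code></pre> <p>If the Node itself goes down, the Pod cannot be usefully rescheduled anywhere else because the data only exists on that Node, which is the usual argument for moving to a PersistentVolume-backed setup later on.</p>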
<p>I am trying to run ActiveMQ in Kubernetes. I want to keep the queues even after the pod is terminated and recreated. So far I got the queues to stay even after pod deletion and recreation. But, there is a catch, it seems to be storing the list of queues one previous. </p> <p>Ex: I create 3 queues a, b, and c. I delete the pod and its recreated. The queue list is empty. I then go ahead and create queues x and y. When I delete and the pod gets recreated, it loads queues a, b, and c. If I add a queue d to it and pod is recreated, it shows x and y.</p> <p>I have created a configMap like below and I'm using the config map in my YAML file as well. </p> <pre><code>kubectl create configmap amq-config-map --from-file=/opt/apache-activemq- 5.15.6/data apiVersion: apps/v1 kind: Deployment metadata: name: activemq-deployment-local labels: app: activemq spec: replicas: 1 selector: matchLabels: app: activemq template: metadata: labels: app: activemq spec: containers: - name: activemq image: activemq:1.0 ports: - containerPort: 8161 volumeMounts: - name: activemq-data-local mountPath: /opt/apache-activemq-5.15.6/data readOnly: false volumes: - name: activemq-data-local persistentVolumeClaim: claimName: amq-pv-claim-local - name: config-vol configMap: name: amq-config-map --- apiVersion: v1 kind: Service metadata: name: my-service-local spec: selector: app: activemq ports: - port: 8161 targetPort: 8161 type: NodePort --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: amq-pv-claim-local spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 2Gi --- kind: PersistentVolume apiVersion: v1 metadata: name: amq-pv-claim-local labels: type: local spec: storageClassName: manual capacity: storage: 3Gi accessModes: - ReadWriteOnce hostPath: path: /tmp </code></pre> <p>When the pod is recreated, I want the queues to stay the same. I'm almost there, but I need some help. </p>
<p>You might be missing a setting in your PersistentVolume:</p> <pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
</code></pre> <p>Also, there is still a good chance that this does not work because of the use of <code>hostPath</code>: a hostPath volume lives on whichever node the Pod happened to start on. It does not migrate when the Pod is rescheduled to another node, which can lead to very odd behaviour in a PV. Look at using NFS, Gluster, or any other clustered file system to store your data in a path that is accessible from every node.</p> <p>If you use a cloud provider, you can also have Kubernetes provision and mount disks automatically, so you can let GCP, AWS, Azure, etc. provide the storage for you and have it mounted by Kubernetes wherever it schedules the Pod.</p>
<p>I'm trying to deploy Elastic and Kibana in a Kubernetes cluster.</p> <p>I have installed Elastic using Helm chart :</p> <pre><code>helm repo add elastic https://helm.elastic.co helm repo update helm install stable/elasticsearch --namespace elastic --name elasticsearch --set imageTag=6.5.4 </code></pre> <p>And Kibana using Helm chart :</p> <pre><code>helm install elastic/kibana --namespace elastic --name kibana --set imageTag=6.5.4,elasticsearchURL=http://elasticsearch-client.elastic.svc.cluster.local:9200 </code></pre> <p>I've checked from my Kibana pod, and this URL is reachable and produce the following result</p> <pre><code>curl -v http://elasticsearch-client:9200 * About to connect() to elasticsearch-client port 9200 (#0) * Trying 10.19.251.82... * Connected to elasticsearch-client (10.19.251.82) port 9200 (#0) &gt; GET / HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: elasticsearch-client:9200 &gt; Accept: */* &gt; &lt; HTTP/1.1 200 OK &lt; content-type: application/json; charset=UTF-8 &lt; content-length: 519 &lt; { "name" : "elasticsearch-client-8666954ffb-kthcx", "cluster_name" : "elasticsearch", "cluster_uuid" : "-MT_zbKySiad0jDJVc1ViQ", "version" : { "number" : "6.5.4", "build_flavor" : "oss", "build_type" : "tar", "build_hash" : "d2ef93d", "build_date" : "2018-12-17T21:17:40.758843Z", "build_snapshot" : false, "lucene_version" : "7.5.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search" } </code></pre> <p>The command line used in the Kibana pod to start (generated by the helm chart) is</p> <pre><code>/usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli --cpu.cgroup.path.override=/ --cpuacct.cgroup.path.override=/ --elasticsearch.url=http://elasticsearch-client:9200 </code></pre> <p>So it seems the Elastic cluster url is the right one, and reachable.</p> <p>However, when I show the UI in my browser, I get the following page</p> <p><a href="https://i.stack.imgur.com/xc28H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xc28H.png" alt="My Kibana UI seems to indicates an error and an invalid version"></a></p> <p>So to sum up, both versions are identical :</p> <ul> <li>docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.4</li> <li>docker.elastic.co/kibana/kibana:6.5.4</li> </ul> <p>ElasticSearch url is correct, but Kibana don't want to access ElasticSearch</p>
<p>I tried this myself, and there's something in the Kibana docker image and/or Helm chart about how the parameter is passed into Kibana. Basically, the command line shows:</p> <pre><code>--elasticsearch.url=http://elasticsearch-client.elastic.svc.cluster.local:9200
</code></pre> <p>But if you shell into the container/pod, you see that the Kibana command line expects something different for the Elasticsearch URL (<code>-e, --elasticsearch &lt;uri&gt;</code>):</p> <pre><code>$ /usr/share/kibana/bin/kibana --help

  Usage: bin/kibana [command=serve] [options]

  Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.

  Commands:
    serve  [options]  Run the kibana server
    help  &lt;command&gt;   Get the help for a specific command

  "serve" Options:

    -h, --help                 output usage information
    -e, --elasticsearch &lt;uri&gt;  Elasticsearch instance
    -c, --config &lt;path&gt;        Path to the config file, can be changed with the CONFIG_PATH environment variable as well. Use multiple --config args to include multiple config files.
    -p, --port &lt;port&gt;          The port to bind to
    -q, --quiet                Prevent all logging except errors
    -Q, --silent               Prevent all logging
    --verbose                  Turns on verbose logging
    -H, --host &lt;host&gt;          The host to bind to
    -l, --log-file &lt;path&gt;      The file to log to
    --plugin-dir &lt;path&gt;        A path to scan for plugins, this can be specified multiple times to specify multiple directories
    --plugin-path &lt;path&gt;       A path to a plugin which should be included by the server, this can be specified multiple times to specify multiple paths
    --plugins &lt;path&gt;           an alias for --plugin-dir
    --optimize                 Optimize and then stop the server
</code></pre> <p>So, something is not translating the Elasticsearch URL correctly.</p> <p>It seems like the default is <code>localhost:9200</code>, so you could try a sidecar container in your Kibana deployment that forwards everything from <code>localhost:9200</code> to <code>elasticsearch-client.elastic.svc.cluster.local:9200</code>. Perhaps following <a href="https://serverfault.com/questions/547288/is-it-possible-to-redirect-bounce-tcp-traffic-to-an-external-destination-based">this</a>.</p>
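<p>A sketch of that sidecar idea (the <code>alpine/socat</code> image is an assumption - any small image containing <code>socat</code> would do - and the service DNS name is taken from your question; adjust both to your release) is an extra container in the Kibana pod that listens on 9200 locally and forwards to the Elasticsearch client service:</p> <pre><code># extra container added next to the existing kibana container in the pod spec
- name: es-proxy
  image: alpine/socat            # assumption: image with socat on the path
  command:
    - socat
    - TCP-LISTEN:9200,fork,reuseaddr
    - TCP:elasticsearch-client.elastic.svc.cluster.local:9200
</code></pre> <p>Because containers in a Pod share the same network namespace, Kibana's default of <code>localhost:9200</code> would then reach Elasticsearch through this proxy.</p>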
<p>Hope you can help me with this!</p> <p>What is the best approach to determine and set resource requests and limits per pod?</p> <p>I was thinking of estimating the expected traffic and writing some load tests, then starting a single pod with some "low limits" and running the load test until it gets OOMed, then tuning memory again (something like overclocking) until I find a bottleneck, then attacking CPU until everything is "stable", and so on. I would then use that "limit" as the "request" value and use double the "request" value as the "limit" (or a safe value based on the results). Finally I would scale out to a fixed number of pods for the average traffic and set pod autoscaling rules for peak production values.</p> <p>Is this a good approach? What tools and metrics do you recommend? I'm using prometheus-operator for monitoring and vegeta for load testing.</p> <p>What about vertical pod autoscaling? Have you used it? Is it production ready?</p> <p>BTW: I'm using the AWS managed solution deployed with a Terraform module.</p> <p>Thanks for reading</p>
<p>I usually start my pods with neither requests nor limits set. Then I leave them running for a while under normal load to collect metrics on resource consumption.</p> <p>I then set the memory and CPU requests to the maximum consumption I saw in the test period plus 10%, and the limits to the requests plus 25%.</p> <p>This is just an example strategy, as there is no one-size-fits-all approach for this.</p>
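<p>As a concrete, made-up example: if a pod peaked at about 400Mi of memory and 200m of CPU during the observation period, that strategy would give roughly:</p> <pre><code>resources:
  requests:
    memory: "440Mi"   # 400Mi + 10%
    cpu: "220m"       # 200m + 10%
  limits:
    memory: "550Mi"   # requests + 25%
    cpu: "275m"       # requests + 25%
</code></pre> <p>For the observation numbers, <code>container_memory_working_set_bytes</code> and <code>rate(container_cpu_usage_seconds_total[...])</code> per pod are the usual Prometheus metrics to base this on.</p>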
<p>I am trying to set up a hadoop single node on kubernetes. The odd thing is that, when i login into the pod via <code>kubectl exec -it &lt;pod&gt; /bin/bash</code> i can happily access e.g. the name node on port 9000.</p> <pre><code>root@hadoop-5dcf94b54d-7fgfq:/hadoop/hadoop-2.8.5# telnet localhost 9000 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. </code></pre> <p>I can also <code>bin/hdfs dfs -put</code> files and such, so the cluster seems to be working fine. I can also access the ui via <code>kubectl port-forward &lt;podname&gt; 50070:50070</code> and i see a data node up and running. So the cluster (setup is 'pseudo-distributed' as described <a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html" rel="nofollow noreferrer">here</a>.) seems to be working fine.</p> <p>However, when i want to access my service via kubernetes dns, i get a <code>Connection refused</code>.</p> <pre><code>telnet hadoop.aca534.svc.cluster.local 9000 Trying 10.32.89.21... telnet: Unable to connect to remote host: Connection refused </code></pre> <p><em>What is the difference when accessing a port via k8s-dns?</em></p> <p>The port must be open, i also can see that hadoop name node is listening on 9000.</p> <pre><code>lsof -i :9000 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java 2518 root 227u IPv4 144574393 0t0 TCP localhost:9000 (LISTEN) java 2518 root 237u IPv4 144586825 0t0 TCP localhost:9000-&gt;localhost:58480 (ESTABLISHED) java 2660 root 384u IPv4 144584032 0t0 TCP localhost:58480-&gt;localhost:9000 (ESTABLISHED) </code></pre> <p>For complete reference here is my kubernetes <code>yml</code> service and deployment specification. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: service: hadoop name: hadoop spec: selector: matchLabels: service: hadoop replicas: 1 template: metadata: labels: service: hadoop run: hadoop track: stable spec: containers: - name: hadoop image: falcowinkler/hadoop:2.8.5 imagePullPolicy: Never ports: # HDFS Ports - containerPort: 50010 - containerPort: 50020 - containerPort: 50070 - containerPort: 50075 - containerPort: 50090 - containerPort: 8020 - containerPort: 9000 # Map Reduce Ports - containerPort: 19888 # YARN Ports - containerPort: 8030 - containerPort: 8031 - containerPort: 8032 - containerPort: 8033 - containerPort: 8040 - containerPort: 8042 - containerPort: 8088 - containerPort: 22 # Other Ports - containerPort: 49707 - containerPort: 2122 --- apiVersion: v1 kind: Service metadata: labels: service: hadoop name: hadoop spec: ports: - name: hadoop port: 9000 - name: ssh port: 22 - name: hadoop-ui port: 50070 selector: service: hadoop type: ClusterIP </code></pre>
<blockquote> <p>What is the difference when accessing a port via k8s-dns?</p> </blockquote> <p>When you call a Pod IP address, you connect directly to a pod, not to the service.</p> <p>When you call the DNS name of your service, it resolves to a Service IP address, which forwards your request to the actual pods, using the selectors as a filter to find a destination - so they are two different ways of reaching the pods.</p> <p>Also, you can call the Service IP address directly instead of using DNS and it will work the same way. Moreover, the Service IP address, unlike Pod IPs, is static, so you can use it all the time if you want.</p> <p>For in-cluster communication you are using the <a href="https://kubernetes.io/docs/tutorials/services/#source-ip-for-services-with-type-clusterip" rel="noreferrer">ClusterIP</a> service type, which is the default and is what you set, so everything is OK here.</p> <p>You can get the current endpoints your service forwards requests to with <code>kubectl get endpoints $servicename</code> (or in the "Endpoints" field of <code>kubectl describe service $servicename</code>).</p> <p>As for your current connection problems, I can recommend that you:</p> <ul> <li><p>Check the endpoints of your service (there should be one or more pod IP addresses),</p></li> <li><p>Set the <code>targetPort</code> parameter for each of the service ports, e.g.:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    service: hadoop
  name: hadoop
spec:
  ports:
  - name: hadoop
    port: 9000
    targetPort: 9000  # here
  - name: ssh
    port: 22
    targetPort: 22    # here
  - name: hadoop-ui
    port: 50070
    targetPort: 50070 # here
  selector:
    service: hadoop
  type: ClusterIP
</code></pre></li> </ul> <p>P.S. <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82" rel="noreferrer">Here</a> is a nice article explaining how Services work. You can also check the official <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">documentation</a>.</p>
<p>My vanilla kubernetes cluster running on 'Docker for Mac' was running fine without any real load. Now, I deployed a few services and istio. Now, I am getting this error:</p> <pre><code>$ kubectl get pods --all-namespaces Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>Where can I see the kubectl logs?</p> <p>I am on Mac OS High Sierra. Thank you for reading my post.</p>
<p><a href="https://i.stack.imgur.com/mBCRI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mBCRI.png" alt="enter image description here"></a></p> <p>I increased the RAM to 8GB, CPUs to 4 and swap space to 4GB, restarted Docker For Mac. kubectl works fine now.</p>
<p>I would like to track a session across multiple web applications and multiple microservices. All of my web applications are static files, and the microservices run in Node.js containers on Kubernetes.</p> <p>I have tracking set up across the separate web applications and the separate microservices, but it is too cumbersome to merge everything and view it in a single place.</p> <p>Is there any approach to view all of them under a single session?</p>
<p>Kubernetes itself does not provide any request tracing, but you can use <strong>Istio</strong> together with Kubernetes, which has a <a href="https://istio.io/docs/tasks/telemetry/distributed-tracing/" rel="nofollow noreferrer">Distributed Tracing</a> feature.</p> <p>In short, your application needs to forward a few tracing headers on its outgoing requests; Istio detects them, collects information from all your services and shows you the trace of each request.</p> <p>Also, as a service mesh it can, theoretically, make your application a bit faster and more secure (because of its network rules).</p> <p>You can read about Istio <a href="http://istio.io" rel="nofollow noreferrer">here</a>, about tracing <a href="https://istio.io/docs/tasks/telemetry/distributed-tracing/" rel="nofollow noreferrer">here</a>, and <a href="https://istio.io/docs/setup/kubernetes/" rel="nofollow noreferrer">here</a> are instructions on how to set it up in Kubernetes.</p>
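<p>For reference, these are the headers Istio expects your services to copy from each incoming request onto any outgoing requests so the individual spans get stitched into one trace (the list comes from the Istio distributed-tracing docs; double-check it against the Istio version you deploy):</p> <pre><code>x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
</code></pre>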
<h2>problem statement</h2> <p>we are planning to use azure api management service as a reverse proxy for our AKS . I took reference of following URL for configuring azure api manager with AKS. Although it gives information about node port but same can be applied through internal load balancer IP address.</p> <p><a href="https://fizzylogic.nl/2017/06/16/how-to-connect-azure-api-management-to-your-kubernetes-cluster/" rel="nofollow noreferrer">https://fizzylogic.nl/2017/06/16/how-to-connect-azure-api-management-to-your-kubernetes-cluster/</a></p> <p>we are currently having multiple environments such as dev1,dev2, dev3, dev, uat,stage, prod. we are trying to automate this configuration step and dont need to bind to specific IP but need to point to dns name associated with internal load balancer fro k8s.</p>
<p>Part of the problem is answered by @Ben. I would be cautious about using the open-source external-dns project, as you may not want to create a dependency on it for such an important function, and it requires you to grant additional permissions.</p> <p>You will need a virtual private IP, which is achieved with the internal load balancer annotation, and that works. I recently documented end-to-end TLS/SSL with an internal load balancer; you can find it at <a href="https://blogs.aspnet4you.com/2019/01/06/end-to-end-tlsssl-offloading-with-application-gateway-and-kubernetes-ingress/" rel="nofollow noreferrer">https://blogs.aspnet4you.com/2019/01/06/end-to-end-tlsssl-offloading-with-application-gateway-and-kubernetes-ingress/</a>.</p> <p>Keep in mind, my solution worked great until I removed the HTTP application routing add-on. Why? The add-on came with an Azure DNS zone (public) and a public load balancer. Both of them were removed for good when I removed the add-on, but the removal also broke the DNS entry associated with the VIP of the internal load balancer. I didn't intend to remove the DNS zone. My attempt to create a new DNS zone and add an A record with the private IP didn't work - Kubernetes can't resolve the FQDN. I tried Azure Private DNS, but it's not able to resolve it either. My attempt to use a ConfigMap with kube-dns didn't work, and it breaks DNS resolution of other things if I include an upstream. So the investigation continues!</p> <p>I would love to hear how you solved the FQDN problem.</p> <p>On an optimistic note, I think a VM-based custom DNS server can be a good option, and you would likely have one anyway for a hybrid solution.</p>
<p>I'm struggling to understand why my Java application is slowly consuming all memory available to the pod causing Kubernetes to mark the pod as out of memory. The JVM (OpenJDK 8) is started with the following arguments:</p> <pre><code>-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 </code></pre> <p>I'm monitoring the memory used by the pod and also the JVM memory and was expecting to see some correlation e.g. after major garbage collection the pod memory used would also fall. However I don't see this. I've attached some graphs below:</p> <p>Pod memory: <a href="https://i.stack.imgur.com/YMw2q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMw2q.png" alt="enter image description here"></a> Total JVM memory <a href="https://i.stack.imgur.com/Qz2K4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qz2K4.png" alt="enter image description here"></a> Detailed Breakdown of JVM (sorry for all the colours looking the same...thanks Kibana) <a href="https://i.stack.imgur.com/aiOqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aiOqv.png" alt="enter image description here"></a></p> <p>What I'm struggling with is why when there is a significant reduction in heap memory just before 16:00 does the pods memory not also fall?</p>
<p>It looks like you are creating a pod with a resource limit of 1GB of memory. You are setting <code>-XX:MaxRAMFraction=2</code>, which means you are allocating 50% of the available memory to the JVM heap, and that seems to match what you are graphing as <code>Memory Limit</code>.</p> <p>The JVM then reserves around 80% of that, which is what you are graphing as <code>Memory Consumed</code>.</p> <p>When you look at <code>Memory Consumed</code> you will not see internal garbage collection (as in your second graph), because that GC'd memory is released back to the JVM but still stays reserved by the process.</p> <p>Is it possible that there is a memory leak in your Java application? It could be causing more and more memory to be reserved over time, until the JVM limit (512MB) is reached and your pod gets OOM killed.</p>
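<p>If you want to confirm what the JVM actually computes as its maximum heap inside the container, one way (assuming you can exec into the pod; the pod name here is a placeholder) is:</p> <pre><code># prints the MaxHeapSize the JVM derives from the cgroup memory limit
kubectl exec my-java-pod -- java -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 \
    -XX:+PrintFlagsFinal -version | grep -i maxheapsize
</code></pre> <p>With a 1GiB limit and <code>MaxRAMFraction=2</code> you should see roughly 512MiB. Everything the process uses outside the heap (metaspace, thread stacks, direct buffers, other native allocations) comes on top of that and still counts against the pod's memory limit.</p>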
<p>I'm getting this error <code>Error: UPGRADE FAILED: ConfigMap "my-service.v130" is invalid: data: Too long: must have at most 1048576 characters</code> when running the command:</p> <pre><code>$ KUBECONFIG=/Users/tomcaflisch/.kube/config.dev helm upgrade --wait --recreate-pods --install my-service --version v0.1.0-gedc3d28-dirty -f ./values.yaml -f ./secrets.yaml -f ./vars/dev/db.yaml -f ./vars/dev/epic.yaml -f ./vars/dev/ingress.yaml -f ./vars/dev/services.yaml -f ./vars/dev/tkn.yaml --set image.tag=v0.1.0-gedc3d28-dirty . </code></pre> <p>I can't imagine my generated configmap is even close to that limit. How can I go about debugging this?</p>
<p>That is a known problem; you can find several issues about it - <a href="https://github.com/helm/helm/issues/1996" rel="noreferrer">that one</a>, for example.</p> <p>ConfigMap objects in Kubernetes have a 1MB size limit and unfortunately (or maybe fortunately) you cannot change it.</p> <p>In any case, increasing the limit would be a bad idea, because Kubernetes stores ConfigMaps in etcd, which does not cope well with large objects.</p> <p>Helm stores each release in a ConfigMap, including the chart files, and that is likely your problem.</p> <p>Try adding everything in the chart directory that is not part of the chart itself (such as the <code>.git</code> directory) to the <code>.helmignore</code> file and push the release one more time.</p>
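<p>For reference, the <code>.helmignore</code> file sits at the top of the chart directory and takes one glob pattern per line; a sketch (adjust the patterns to whatever large files actually live next to your chart):</p> <pre><code># .helmignore - files and directories that should not be packaged into the release
.git/
.gitignore
*.swp
*.bak
*.tmp
.DS_Store
# large local artifacts that ended up under the chart directory, e.g.:
# dumps/
# vendor/
</code></pre> <p>You can check what actually gets packaged (and its size) by running <code>helm package .</code> in the chart directory and inspecting the resulting <code>.tgz</code>.</p>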
<p>I am using DNS based service discovery to discover services in K8s cluster. From <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">this</a> link it is very clear that to discover a service named my-service we can do name lookup "my-service.my-ns" and the pod should be able to find the services.</p> <p>However in case of port discovery for the service the solution is to use is "_http._tcp.my-service.my-ns" where</p> <p>_http refers to the port named http in my-service. </p> <p>But even after using _http._tcp.my-service it doesn't resolve port number. Below are the details.</p> <p><strong>my-service which needs to be discovered</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app: my-service ports: - name: http protocol: TCP port: 5000 targetPort: 5000 </code></pre> <p><strong>client-service yaml snippet trying to discover my-service and its port.</strong></p> <pre><code>spec: containers: - name: client-service image: client-service imagePullPolicy: Always ports: - containerPort: 7799 resources: limits: cpu: "100m" memory: "500Mi" env: - name: HOST value: my-service - name: PORT value: _http._tcp.my-service </code></pre> <p>Now when I make a request it fails and logs following request which is clearly incorrect as it doesn't discover port number.</p> <pre><code> http://my-service:_http._tcp.my-service </code></pre> <p>I am not sure what wrong I am doing here, but I am following the same instructions mentioned in document.</p> <p>Can somebody suggest what is wrong here and how we can discover port using DNS based service discovery? Is my understanding wrong here that it will return the literal value of port?</p> <p><strong>Cluster details</strong></p> <p>K8s cluster version is 1.11.5-gke.5 and Kube-dns is running</p> <p><strong>Additional details trying to discover service from busybox and its not able to discover port value 5000</strong></p> <pre><code>kubectl exec busybox -- nslookup my-service Server: 10.51.240.10 Address: 10.51.240.10:53 Name: my-service.default.svc.cluster.local Address: 10.51.253.236 *** Can't find my-service.svc.cluster.local: No answer *** Can't find my-service.cluster.local: No answer *** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer *** Can't find my-service.c.gdic-infinity-dev.internal: No answer *** Can't find my-service.google.internal: No answer *** Can't find my-service.default.svc.cluster.local: No answer *** Can't find my-service.svc.cluster.local: No answer *** Can't find my-service.cluster.local: No answer *** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer *** Can't find my-service.c.gdic-infinity-dev.internal: No answer *** Can't find my-service.google.internal: No answer kubectl exec busybox -- nslookup _http._tcp.my-service Server: 10.51.240.10 Address: 10.51.240.10:53 ** server can't find _http._tcp.my-service: NXDOMAIN *** Can't find _http._tcp.my-service: No answer </code></pre>
<p>Since Services come with their own (Kubernetes-internal) IP addresses, the easy answer here is to not pick arbitrary ports for Services. Change to <code>port: 80</code> in your Service definition, and clients will be able to reach it using the default HTTP port. When you set the environment variable, set</p> <pre><code>- name: PORT value: "80" </code></pre> <p>DNS supports several different record types; for example, an A record translates a host name to its IPv4 address, and AAAA to an IPv6 address. The Kubernetes Service documentation you cite notes (emphasis mine)</p> <blockquote> <p>you can do a <strong>DNS SRV query</strong> ... to discover the port number for <code>"http"</code>.</p> </blockquote> <p>While <a href="https://en.wikipedia.org/wiki/SRV_record" rel="nofollow noreferrer">SRV records</a> seems like they solve both halves of this problem (they provide a port and a host name for a service) in practice they seem to get fairly little use. The linked Wikipedia page has a list of services that use it, but "connect to the thing this SRV record points at" isn't an option in mainstream TCP clients that I know of.</p> <p>You should be able to verify this with a command like (running <a href="https://hub.docker.com/r/giantswarm/tiny-tools" rel="nofollow noreferrer">this debugging image</a>)</p> <pre><code>kubectl run debug --rm -it --image giantswarm/tiny-tools sh # dig -t srv _http._tcp.my-service </code></pre> <p>(But notice the <code>-t srv</code> argument; it is not the default record type.)</p> <p>Most things that expect a <code>PORT</code> environment variable or similar expect a number, or if not, a name they can find in an <code>/etc/services</code> file. The syntax you're trying to use here and trying to provide a DNS SRV name instead probably just won't work, unless you know the specific software supports it.</p>
<p>I used NFS for to mount a ReadWriteMany storage on a deployment on Google Kubernetes Engine as described in the following link-</p> <p><a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="nofollow noreferrer">https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266</a></p> <p>However my particular use case(elasticsearch production cluster- for snapshots) requires mounting the ReadWriteMany volume on a stateful set. On using the NFS volume created previously for stateful sets, the volumes are not provisioned for the different replicas of the stateful set.</p> <p>Is there any way to overcome this or any other approach I can use?</p>
<p>The guide makes a small mistake depending on how you follow it. The [ClusterIP] defined in the persistent volume should be "nfs-server.default..." instead of "nfs-service.default...". "nfs-server" is what is used in the service definition.</p> <p>Below is a very minimal setup I used for a statefulset. I deployed the first 3 files from the tutorial to create the PV &amp; PVC, then used the below yaml in place of the busybox bonus yaml the author included. This deployed successfully. Let me know if you have troubles.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: stateful-service
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: thestate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: thestate
  labels:
    app: thestate
spec:
  serviceName: stateful-service
  replicas: 3
  selector:
    matchLabels:
      app: thestate
  template:
    metadata:
      labels:
        app: thestate
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        volumeMounts:
        - name: my-pvc-nfs
          mountPath: /mnt
        ports:
        - containerPort: 80
          name: web
      volumes:
      - name: my-pvc-nfs
        persistentVolumeClaim:
          claimName: nfs
</code></pre>
<p>using a standard istio deployment in a kubernetes cluster I am trying to add an initContainer to my pod deployment, which does additional database setup.</p> <p>Using the cluster IP of the database doesn't work either. But I can connect to the database from my computer using port-forwarding.</p> <p>This container is fairly simple:</p> <pre><code> spec: initContainers: - name: create-database image: tmaier/postgresql-client args: - sh - -c - | psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0" psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE ROLE user WITH LOGIN PASSWORD 'password';" psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "GRANT ALL PRIVILEGES ON DATABASE fusionauth TO user; ALTER DATABASE fusionauth OWNER TO user;" </code></pre> <p>This kubernetes initContainer according to what I can see runs before the "istio-init" container. Is that the reason why it cannot resolve the db-host:5432 to the ip of the pod running the postgres service?</p> <p>The error message in the init-container is:</p> <pre><code>psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? </code></pre> <p>The same command from fully initialized pod works just fine.</p>
<p>You can't access services inside the mesh without the Envoy sidecar, your init container runs alone with no sidecars. In order to reach the DB service from an init container you need to expose the DB with a ClusterIP service that has a different name to the Istio Virtual Service of that DB. </p> <p>You could create a service named <code>db-direct</code> like:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: db-direct labels: app: db spec: type: ClusterIP selector: app: db ports: - name: db port: 5432 protocol: TCP targetPort: 5432 </code></pre> <p>And in your init container use <code>db-direct:5432</code>.</p>
<p>I'm familiar with the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">Kubernetes quotas for CPU and memory usage</a>. However, I'm envisaging a scenario in which certain containers are guaranteed to use <em>a lot</em> of network bandwidth, and I can't see any way of warning Kubernetes of this. (e.g. don't put two on the same machine, don't put too much else on this machine even if the other quotas are fine). How can I effect this sort of behaviour? For the purposes of the question I have full control of the cluster and am prepared to write code if necessary.</p>
<p>The only thing you can do is to apply <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/#support-traffic-shaping" rel="noreferrer">traffic shaping</a> using the <code>kubernetes.io/ingress-bandwidth</code> and <code>kubernetes.io/egress-bandwidth</code> annotations. They can only be applied to your Pods.</p> <p>Example:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
  ..
</code></pre> <p>The official Kubernetes documentation also links to the <a href="https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth" rel="noreferrer">bandwidth plugin</a> that implements these annotations; try applying it for your needs.</p> <p>Also see the <a href="https://github.com/kubernetes/kubernetes/issues/2856" rel="noreferrer">related GitHub issue</a>.</p>
<p>I am going to keep this simple and ask: is there a way to see which pods have an active connection to an endpoint such as a database endpoint?</p> <p>My cluster contains a few hundred namespaces, and my database provider just told me that the maximum number of connections is almost reached, so I want to pinpoint the pod(s) holding multiple connections to our database endpoint at the same time.</p> <p>I can see from my database cluster that the connections come from my cluster nodes' IPs... but that doesn't say which pods... and I have quite a lot of pods...</p> <p>Thanks for the help</p>
<p>Each container uses its own network namespace, so to check the network connections inside a container you need to run the command inside that namespace.</p> <p>Luckily, all containers in a Pod share the same network namespace, so you can add a small sidecar container to the Pod that logs the open connections.</p> <p>Alternatively, you can run the <code>netstat</code> command inside the pods (if they have it on their filesystem):</p> <pre><code>kubectl get pods | grep Running | awk '{ print $1 }' | xargs -I % sh -c 'echo == Pod %; kubectl exec -ti % -- netstat -tunaple' &gt;netstat.txt
# or
kubectl get pods | grep Running | awk '{ print $1 }' | xargs -I % sh -c 'echo == Pod %; kubectl exec -ti % -- netstat -tunaple | grep ESTABLISHED' &gt;netstat.txt
</code></pre> <p>After that you'll have a file on your disk (<code>netstat.txt</code>) with the information about the connections in all the pods.</p> <p>The third way is the most <a href="https://platform9.com/blog/container-namespaces-deep-dive-container-networking/" rel="nofollow noreferrer">complex</a>. You need to find the container ID using <code>docker ps</code> on the node and run the following command to get its PID:</p> <pre><code>pid="$(docker inspect -f '{{.State.Pid}}' "container_name_or_id")"
</code></pre> <p>Then you create a named network namespace (you can use any name you want, e.g. the container name, ID or Pod name in place of <code>namespace_name</code>):</p> <pre><code>sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net "/var/run/netns/namespace_name"
</code></pre> <p>Now you can run commands in that namespace:</p> <pre><code>sudo ip netns exec "namespace_name" netstat -tunaple | grep ESTABLISHED
</code></pre> <p>You need to do that for each pod on each node, so it is useful for troubleshooting particular containers, but it needs some more automation for your task.</p> <p>It might also be helpful to install <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> in your cluster. It has several interesting features mentioned in this <a href="https://stackoverflow.com/a/54275422/9521610">answer</a>.</p>
<p>I have deployed Kafka using Helm and Minikube. I need to build a producer in Scala for that broker IP-address and host are required. I have defined NodePort service to expose Kafka to the outside world. I set up broker as minkube-ip:service-node-port, however, I get connection exception. What is wrong with the configuration I defined? With a docker-compose file, the application works fine.</p> <p>Error stack trace:</p> <pre><code>Exception in thread "main" org.apache.kafka.common.errors.SerializationException: Error serializing Avro message Caused by: java.net.ConnectException: Connection refused (Connection refused) </code></pre> <p>Kafka configurations look like this:</p> <pre><code> val brokers = "192.168.99.100:32400" val props = new Properties() props.put("bootstrap.servers", brokers) props.put("client.id", "AvroKafkaProducer") props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer") props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer") props.put("schema.registry.url", "http://0.0.0.0:8081") </code></pre> <p>Kafka NodePort service definition where labels match Kafka pods labeled produced by Helm:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: kafka-service spec: selector: app: cp-kafka release: my-confluent-oss ports: - protocol: TCP targetPort: 9092 port: 32400 nodePort: 32400 type: NodePort </code></pre> <p>This is the list of all the created services:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-confluent-oss-cp-kafka ClusterIP 10.96.241.37 &lt;none&gt; 9092/TCP 6h25m my-confluent-oss-cp-kafka-connect ClusterIP 10.105.148.181 &lt;none&gt; 8083/TCP 6h25m my-confluent-oss-cp-kafka-headless ClusterIP None &lt;none&gt; 9092/TCP 6h25m my-confluent-oss-cp-kafka-rest ClusterIP 10.99.154.76 &lt;none&gt; 8082/TCP 6h25m my-confluent-oss-cp-ksql-server ClusterIP 10.108.41.220 &lt;none&gt; 8088/TCP 6h25m my-confluent-oss-cp-schema-registry ClusterIP 10.108.182.212 &lt;none&gt; 8081/TCP 6h25m my-confluent-oss-cp-zookeeper ClusterIP 10.97.148.103 &lt;none&gt; 2181/TCP 6h25m my-confluent-oss-cp-zookeeper-headless ClusterIP None &lt;none&gt; 2888/TCP,3888/TCP 6h25m </code></pre>
<p>The error is from the serializer trying to connect to the Schema Registry (the <code>KafkaAvroSerializer</code> has to register/fetch the Avro schema before it can send the message).</p> <p><code>props.put("schema.registry.url", "http://0.0.0.0:8081")</code></p> <p>should read</p> <p><code>props.put("schema.registry.url", "http://&lt;hostname of the Schema Registry, resolvable from where the producer runs&gt;:8081")</code></p>
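<p>Since the producer runs outside Minikube, one simple option (a sketch - the service name is taken from your <code>kubectl get svc</code> output; keep the port-forward running in a separate terminal) is to forward the Schema Registry port to your machine:</p> <pre><code># forward the in-cluster Schema Registry service to localhost:8081
kubectl port-forward svc/my-confluent-oss-cp-schema-registry 8081:8081
</code></pre> <p>and then point the producer at it with <code>props.put("schema.registry.url", "http://localhost:8081")</code>. Alternatively, expose the Schema Registry with its own NodePort service, the same way you exposed Kafka.</p>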
<p>I'm new to helm and Kubernetes world. I'm working on a project using Docker, Kubernetes and helm in which I'm trying to deploy a simple Nodejs application using helm chart on Kubernetes.</p> <p>Here's what I have tried:</p> <p><strong>From <code>Dockerfile</code>:</strong></p> <pre><code>FROM node:6.9.2 EXPOSE 30000 COPY server.js . CMD node server.js </code></pre> <p>I have build the image, tag it and push it to the docker hub repository at: <code>MY_USERNAME/myhello:0.2</code></p> <p>Then I run the simple commad to create a helm chart as: <code>helm create mychart</code> It created a mychart directory witll all the helm components.</p> <p>Then i have edited the <code>values.yaml</code> file as:</p> <pre><code>replicaCount: 1 image: repository: MY_USERNAME/myhello tag: 0.2 pullPolicy: IfNotPresent nameOverride: "" fullnameOverride: "" service: type: NodePort port: 80 externalPort: 30000 ingress: enabled: false annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" paths: [] hosts: - chart-example.local tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. # limits: # cpu: 100m # memory: 128Mi # requests: # cpu: 100m # memory: 128Mi nodeSelector: {} tolerations: [] affinity: {} </code></pre> <p>After that I have installed the chart as: <code>helm install --name myhelmdep01 mychart</code></p> <p>and when run <code>kubectl get pods</code> it shows the <code>ErrImagePull</code></p> <p>I have tried with by mentioning the image name as : <code>docker.io/arycloud/myhello</code> in this case the image pulled successfully but there's another error comes up as:</p> <blockquote> <p>Liveness probe failed: Get <a href="http://172.17.0.5:80/" rel="nofollow noreferrer">http://172.17.0.5:80/</a>: dial tcp 172.17.0.5:80: connect: connection refused</p> </blockquote>
<p>Run <code>kubectl describe pod &lt;yourpod&gt;</code> soon after the error occurs and there should be an event near the bottom of the output that tells you exactly what the image pull problem is.</p> <p>Off the top of my head it could be one of these options:</p> <ul> <li>It's a private repo and you haven't provided the service account for the pod/deployment with the proper imagePullSecret</li> <li>Your backend isn't docker or does not assume that non prefixed images are on hub.docker.com. Try this instead: <code>registry-1.docker.io/arycloud/myhello</code></li> </ul> <p>If you can find that error it should be pretty straight forward.</p>
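<p>If it does turn out to be a private repository, a typical fix (a sketch - substitute your own registry credentials; your deployment already references an <code>imagePullSecrets</code> entry named <code>regcred</code>) is to create that secret in the same namespace as the deployment:</p> <pre><code># create the registry credential secret that imagePullSecrets refers to
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=MY_USERNAME \
  --docker-password=MY_PASSWORD \
  --docker-email=me@example.com
</code></pre> <p>After that, delete the failing pod (or roll the deployment) so a fresh pull is attempted, and re-check the Events section of <code>kubectl describe pod &lt;yourpod&gt;</code>.</p>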
<p>Is it possible to import environment variables from a different .yml file into the deployment file. My container requires environment variables.</p> <p><em>deployment.yml</em></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: api-deployment spec: replicas: 1 template: metadata: labels: app: api spec: containers: - name: api image: &lt;removed&gt; imagePullPolicy: Always env: - name: NODE_ENV value: "TEST" ports: - containerPort: 8080 imagePullSecrets: - name: regcred </code></pre> <p><em>vars.yml</em></p> <pre><code>NODE_ENV: TEST </code></pre> <p>What i'd like is to declare my variables in a seperate file and simply import them into the deployment.</p>
<p>What you describe sounds like a <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> use case. If your deployment were part of a helm chart/template then you could have different values files (which are yaml) and inject the values from them into the template based on your parameters at install time. Helm is a common choice for helping to <a href="https://stackoverflow.com/a/43378990/9705485">manage env-specific config</a>.</p> <p>But note that if you just want to inject an environment variable in your yaml rather than taking it from another yaml then a popular way to do <a href="https://serverfault.com/questions/791715/using-environment-variables-in-kubernetes-deployment-spec">that is <code>envsubst</code></a>. </p>
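<p>A minimal sketch of the <code>envsubst</code> approach (the file name is made up, and it assumes <code>envsubst</code> from GNU gettext is installed locally):</p> <pre><code># deployment.tmpl.yml contains, in the container spec:
#   env:
#   - name: NODE_ENV
#     value: "${NODE_ENV}"

# set the variable, render the template, and apply it in one go
export NODE_ENV=TEST
envsubst &lt; deployment.tmpl.yml | kubectl apply -f -
</code></pre> <p>With Helm you would instead put <code>value: {{ .Values.nodeEnv | quote }}</code> in the template and keep <code>nodeEnv: TEST</code> in a per-environment values file passed with <code>-f</code>.</p>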
<p>I have a few quite large UTF-8 data files that pods need to load into memory on start up - from a couple of hundred KBs to around 50 MB. </p> <p>The project (including helm chart) is open source but some of these files are not - otherwise I would probably just include them in the images. My initial thinking was to create configmaps but my understanding is that 50 MB is more than configmaps were intended for, so that might end up being a problem in some circumstances. I think secrets would also be overkill - they aren't secret, they just shouldn't be put on the open internet.</p> <p>For performance reasons I'd rather have a copy in memory in each pod rather than going for a shared cache but I might be wrong on that. At the very least that will likely add more complexity than it's worth.</p> <p>Are configmaps the way to go?</p>
<p>From my point of view, the best solution would be using <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init container</a> to download the files from a secured storage (as it was mentioned by <a href="https://stackoverflow.com/users/284111/andrew-savinykh">Andrew Savinykh</a> in the comments), to the pod's volume and then use it in the pod's main container.</p> <p>Please see the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container" rel="noreferrer">example</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: init-demo spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: workdir mountPath: /usr/share/nginx/html # These containers are run during pod initialization initContainers: - name: install image: busybox command: - wget - "-O" - "/work-dir/index.html" - http://kubernetes.io volumeMounts: - name: workdir mountPath: "/work-dir" dnsPolicy: Default volumes: - name: workdir emptyDir: {} </code></pre>
<p>Is it possible to know the progress of file transfer with <code>kubectl cp</code> for <strong>Google Cloud</strong>?</p>
<p>No, this doesn't appear to be possible.</p> <p><code>kubectl cp</code> <a href="https://github.com/kubernetes/kubernetes/blob/release-1.13/pkg/kubectl/cmd/cp/cp.go#L292" rel="noreferrer">appears to be implemented</a> by doing the equivalent of</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec podname -c containername \ tar cf - /whatever/path \ | tar xf - </code></pre> <p>This means two things:</p> <ol> <li><p><strong>tar</strong>(1) doesn't print any useful progress information. (You could in principle add a <code>v</code> flag to print out each file name as it goes by to stderr, but that won't tell you how many files in total there are or how large they are.) So <code>kubectl cp</code> as implemented doesn't have any way to get this out.</p></li> <li><p>There's not a richer native Kubernetes API to copy files.</p></li> </ol> <p>If moving files in and out of containers is a key use case for you, it will probably be easier to build, test, and run by adding a simple HTTP service. You can then rely on things like the HTTP <code>Content-Length:</code> header for progress metering.</p>
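<p>Not a real progress bar, but if you only need a byte counter you can run an equivalent <code>tar</code> pipeline yourself and put <code>pv</code> in the middle (a sketch - it assumes <code>pv</code> is installed locally, <code>tar</code> exists in the container, and the pod/path names are placeholders):</p> <pre><code>kubectl exec podname -c containername -- tar cf - /whatever/path \
  | pv \
  | tar xf - -C /local/destination
</code></pre> <p><code>pv</code> reports bytes transferred and throughput on stderr while the copy runs; if you know the approximate total size you can add <code>-s &lt;size&gt;</code> to get a percentage.</p>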
<p>I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running on 2 replicas on 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to change to a single-instance hosting solution on a container or to deploy the app on Docker Swarm etc.</p> <p>Using a node's IP seemed like a hack, and I thought that it might expose some security issues inside the cluster. Therefore I configured a Træfik ingress and an ingress controller to get around Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer - or I'm missing something.</p> <p>I hope I'm missing something, since at this rate ($16 a month) I cannot justify using Kubernetes from the start for this app.</p> <p>Is there a way to use GKE without using Google's load balancer?</p>
<p>An <code>Ingress</code> is just a set of rules that tell the cluster how to route to your services, and a <code>Service</code> is another set of rules to reach and load-balance across a set of pods, based on the selector. A service can use 3 different routing types:</p> <ul> <li><code>ClusterIP</code> - this gives the service an IP that's only available inside the cluster which routes to the pods.</li> <li><code>NodePort</code> - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.</li> <li><code>LoadBalancer</code> - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.</li> </ul> <p>These different types of services are not mutually exclusive but actually build on each other, and it explains why anything public must be using a NodePort. Think about it - how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes and points to one of the NodePort ports. If you don't want a GKE load balancer then you can already skip it and access those ports directly.</p> <p>The downside is that the ports are limited between 30000-32767. If you need standard HTTP port 80/443 then you can't accomplish this with a <code>Service</code> and instead must specify the port directly in your <code>Deployment</code>. Use the <code>hostPort</code> setting to bind the containers directly to port 80 on the node:</p> <pre><code>containers: - name: yourapp image: yourimage ports: - name: http containerPort: 80 hostPort: 80 ### this will bind to port 80 on the actual node </code></pre> <p>This might work for you and routes traffic directly to the container without any load-balancing, but if a node has problems or the app stops running on a node then it will be unavailable.</p> <p>If you still want load-balancing then you can run a <code>DaemonSet</code> (so that it's available on every node) with Nginx (or any other proxy) exposed via <code>hostPort</code> and then that will route to your internal services. An easy way to run this is with the standard <code>nginx-ingress</code> package, but skip creating the LoadBalancer service for it and use the <code>hostPort</code> setting. The Helm chart can be configured for this:</p> <p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a></p>
<p>I have several Java projects running in Docker containers managed with Kubernetes. I want to enable the Horizontal Pod Autoscaler (HPA) based on CPU as provided by Kubernetes, but I find it hard to deal with the initial CPU spikes caused by the JVM when initialising the container. </p> <p>I currently have not set a CPU limit in the Kubernetes yaml files for any of the projects, which basically means that I let the pods take as much CPU from the environment as they can (I know it's a bad practice, but it lets me boot JVM pods in less than 30 seconds).<br> The problem this creates is that during pod creation, in the first 3-4 minutes, the CPU usage spikes so much that if I have an autoscale rule set, it will trigger. The autoscaled pod will spin up, cause the same spike and re-trigger the autoscaler until the maximum number of pods is reached and things settle down.<br> I tried setting a CPU limit in the Kubernetes yaml file, but the amount of CPU that my projects need is not that big, so setting this to a non-overkill amount makes my pods spin up in more than 5 minutes, which is unacceptable.<br> I could also increase the autoscale delay to more than 10 minutes, but it's a global rule that would also affect deployments which I need to scale very fast, so that is also not a viable option for me.</p> <p>This is an example CPU and memory configuration for one of my pods:</p> <pre><code>env:
resources:
  requests:
    memory: "1300Mi"
    cpu: "250m"
  limits:
    memory: "1536Mi"
</code></pre> <p>I also migrated to Java 10 recently, which is supposed to be optimised for containerisation. Any advice or comment will be much appreciated. Thanks in advance.</p> <p>Edit:<br> I could also set up HPA based on custom Prometheus metrics like http_requests, but that option would be harder to maintain, since there are lots of variables that can affect the number of requests a pod can handle.</p>
<p>It depends on your Kubernetes version.</p> <p><code>&lt; 1.12</code>:<br> In these versions you have, as you explain, only the <code>--horizontal-pod-autoscaler-upscale-delay</code> flag for the kube-controller-manager, or the custom metrics in HPA v2. <a href="https://v1-11.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://v1-11.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p> <p><code>&gt;= 1.12</code>:<br> Here we got a new HPA algorithm, which discards <code>unready</code> pods in its calculation, leading to fewer spurious scale-ups. </p> <p><a href="https://github.com/kubernetes/kubernetes/pull/68068" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/68068</a></p> <blockquote> <p>Change CPU sample sanitization in HPA. Ignore samples if:<br> - Pod is being initialized - 5 minutes from start defined by flag<br> - pod is unready<br> - pod is ready but full window of metric hasn't been collected since transition<br> - Pod is initialized - 5 minutes from start defined by flag:<br> - Pod has never been ready after initial readiness period.</p> </blockquote> <p>This should help you here. </p>
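<p>As a sketch, assuming you are on 1.12+ and run your own control plane (on managed offerings you usually cannot change these), the initialization window that the new algorithm uses is tunable via kube-controller-manager flags; the values below are examples only:</p> <pre><code># kube-controller-manager flags (1.12+), example values
--horizontal-pod-autoscaler-cpu-initialization-period=5m
--horizontal-pod-autoscaler-initial-readiness-delay=30s
--horizontal-pod-autoscaler-downscale-stabilization=5m
</code></pre> <p>Raising the CPU initialization period per cluster avoids touching the global upscale delay, so deployments that need to scale quickly are unaffected once their pods report ready.</p>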
<p>I have the following Ingress definition that works well (I use docker-for-mac): </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: zwoop-ing annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: localhost http: paths: - path: / backend: serviceName: posts-api-svc servicePort: 8083 </code></pre> <p>Where I'm confused is how I would deal with multiple api microservices that I want to expose. </p> <p>The options that I had in mind: </p> <ul> <li>Multiple ingresses </li> <li>Single ingress with different paths </li> <li>Single ingress with different subdomains (when on the Cloud) </li> </ul> <p>I assume that multiple ingresses would cost more (?).<br> For some reason, I have problems using a subpath segment (ingress-nginx). </p> <p>When I define: <code>- path: /api</code> in the ingress resource, I receive a 404 on GET request.<br> It is unclear how to define a subpath (here I use /api, but that would be posts-api, users-api etc). </p> <p>For a single posts-api, I currently have the following setup: </p> <pre><code>apiVersion: v1 kind: Service metadata: name: posts-api-svc # namespace: nginx-ingress labels: app: posts-api #rel: beta #env: dev spec: type: ClusterIP selector: app: posts-api # rel: beta # env: dev ports: - protocol: TCP port: 8083 </code></pre> <p>With a deployment: </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: posts-api-deployment # namespace: nginx-ingress spec: replicas: 1 selector: matchLabels: app: posts-api template: metadata: labels: app: posts-api # env: dev # rel: beta spec: containers: - name: posts-api image: kimgysen/posts-api:latest ports: - containerPort: 8083 livenessProbe: httpGet: path: /api/v1/posts/health port: 8083 initialDelaySeconds: 120 timeoutSeconds: 1 </code></pre> <p>The health check on the pod works fine for endpoint: /api/v1/posts/health</p>
<blockquote> <p>I assume that multiple ingresses would cost more (?).</p> </blockquote> <ul> <li>Multiple ingress controllers like <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx-ingress</a>: Yes, it would cost more if you are using an external <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="noreferrer">load balancer</a> and a cloud provider like AWS, GCP or Azure because you will be using as many load balancers as ingress controller. It would not cost more if you are using just a ClusterIP (accessing within the cluster) and it will vary if you are using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a> service to expose it.</li> <li>Multiple <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a> Kubernetes resources: No it would not cost more if you are using the same ingress controller.</li> </ul> <blockquote> <p>When I define: - path: /api in the ingress resource, I receive a 404 on GET request.</p> </blockquote> <p>This means it's going to the default backend and likely because of this annotation <code>nginx.ingress.kubernetes.io/rewrite-target: /</code>. Essentially, that's stripping the <code>/api</code> from your request that is going to your backend. If you want to preserve the path, I suggest you remove the annotation.</p> <p>You can always check the nginx ingress controller <code>nginx.conf</code> file with something like:</p> <pre><code>$ kubectl cp &lt;pod-where-nginx-controller-is-running&gt;:nginx.conf . $ cat nginx.conf </code></pre>
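<p>If you want each API exposed under its own prefix while preserving the path, a minimal sketch would be a single Ingress without the rewrite annotation. The <code>posts-api-svc</code> name and port 8083 come from your manifests; <code>users-api-svc</code> and port 8084 are hypothetical placeholders for your other services:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zwoop-ing
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /api/v1/posts
        backend:
          serviceName: posts-api-svc
          servicePort: 8083
      - path: /api/v1/users
        backend:
          serviceName: users-api-svc
          servicePort: 8084
</code></pre> <p>Because there is no <code>rewrite-target</code>, a request to <code>/api/v1/posts/health</code> reaches the backend with its full path, matching your liveness probe path.</p>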
<p>I am on GKE using Istio version 1.0.3 . I try to get my express.js with socket.io (and uws engine) backend working with websockets and had this backend running before on a 'non kubernetes server' with websockets without problems. </p> <p>When I simply enter the external_gke_ip as url I get my backend html page, so http works. But when my client-app makes socketio authentication calls from my client-app I get 503 errors in the browser console:</p> <pre><code>WebSocket connection to 'ws://external_gke_ip/socket.io/?EIO=3&amp;transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 503 </code></pre> <p>And when I enter the external_gke_ip as url while socket calls are made I get: <code>no healthy upstream</code> in the browser. And the pod gives: <code>CrashLoopBackOff</code>.</p> <p>I find somewhere: 'in node.js land, socket.io typically does a few non-websocket Handshakes to the Server before eventually upgrading to Websockets. If you don't have sticky-sessions, the upgrade never works.' So maybe I need sticky sessions? Or not... as I just have one replica of my app? It seems to be done by setting <code>sessionAffinity: ClientIP</code>, but with istio I do not know how to do this and in the GUI I can edit some values of the loadbalancers, but Session affinity shows 'none' and I can not edit it.</p> <p>Other settings that <a href="https://github.com/kubernetes/kubernetes/issues/53886" rel="nofollow noreferrer">might be relevant</a> and that I am not sure of (how to set using istio) are:</p> <ul> <li>externalTrafficPolicy=Local </li> <li>Ttl</li> </ul> <p>My manifest config file:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myapp labels: app: myapp spec: selector: app: myapp ports: - port: 8089 targetPort: 8089 protocol: TCP name: http --- apiVersion: apps/v1 kind: Deployment metadata: name: myapp labels: app: myapp spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: app image: gcr.io/myproject/firstapp:v1 imagePullPolicy: Always ports: - containerPort: 8089 env: - name: POSTGRES_DB_HOST value: 127.0.0.1:5432 - name: POSTGRES_DB_USER valueFrom: secretKeyRef: name: mysecret key: username - name: POSTGRES_DB_PASSWORD valueFrom: secretKeyRef: name: mysecret key: password readinessProbe: httpGet: path: /healthz scheme: HTTP port: 8089 initialDelaySeconds: 10 timeoutSeconds: 5 - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=myproject:europe-west4:osm=tcp:5432", "-credential_file=/secrets/cloudsql/credentials.json"] securityContext: runAsUser: 2 allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: myapp-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: myapp spec: hosts: - "*" gateways: - myapp-gateway http: - match: - uri: prefix: / route: - destination: host: myapp weight: 100 websocketUpgrade: true --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google-apis spec: hosts: - "*.googleapis.com" ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL --- apiVersion: networking.istio.io/v1alpha3 kind: 
ServiceEntry metadata: name: cloud-sql-instance spec: hosts: - 35.204.XXX.XX # ip of cloudsql database ports: - name: tcp number: 3307 protocol: TCP location: MESH_EXTERNAL </code></pre> <p>Various output (while making socket calls, when I stop these the deployment restarts and READY returns to 3/3):</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE myapp-8888 2/3 CrashLoopBackOff 11 1h </code></pre> <p><code>$ kubectl describe pod/myapp-8888</code> gives:</p> <pre><code>Name: myapp-8888 Namespace: default Node: gke-standard-cluster-1-default-pool-888888-9vtk/10.164.0.36 Start Time: Sat, 19 Jan 2019 14:33:11 +0100 Labels: app=myapp pod-template-hash=207157 Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy sidecar.istio.io/status: {"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]... Status: Running IP: 10.44.0.5 Controlled By: ReplicaSet/myapp-64c59c94dc Init Containers: istio-init: Container ID: docker://a417695f99509707d0f4bfa45d7d491501228031996b603c22aaf398551d1e45 Image: gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0 Image ID: docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: -p 15001 -u 1337 -m REDIRECT -i * -x -b 8089, -d State: Terminated Reason: Completed Exit Code: 0 Started: Sat, 19 Jan 2019 14:33:19 +0100 Finished: Sat, 19 Jan 2019 14:33:19 +0100 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: &lt;none&gt; Containers: app: Container ID: docker://888888888888888888888888 Image: gcr.io/myproject/firstapp:v1 Image ID: docker-pullable://gcr.io/myproject/firstapp@sha256:8888888888888888888888888 Port: 8089/TCP Host Port: 0/TCP State: Terminated Reason: Completed Exit Code: 0 Started: Sat, 19 Jan 2019 14:40:14 +0100 Finished: Sat, 19 Jan 2019 14:40:37 +0100 Last State: Terminated Reason: Completed Exit Code: 0 Started: Sat, 19 Jan 2019 14:39:28 +0100 Finished: Sat, 19 Jan 2019 14:39:46 +0100 Ready: False Restart Count: 3 Requests: cpu: 100m Readiness: http-get http://:8089/healthz delay=10s timeout=5s period=10s #success=1 #failure=3 Environment: POSTGRES_DB_HOST: 127.0.0.1:5432 POSTGRES_DB_USER: &lt;set to the key 'username' in secret 'mysecret'&gt; Optional: false POSTGRES_DB_PASSWORD: &lt;set to the key 'password' in secret 'mysecret'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-rclsf (ro) cloudsql-proxy: Container ID: docker://788888888888888888888888888 Image: gcr.io/cloudsql-docker/gce-proxy:1.11 Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /cloud_sql_proxy -instances=myproject:europe-west4:osm=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json State: Running Started: Sat, 19 Jan 2019 14:33:40 +0100 Ready: True Restart Count: 0 Requests: cpu: 100m Environment: &lt;none&gt; Mounts: /secrets/cloudsql from cloudsql-instance-credentials (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-rclsf (ro) istio-proxy: Container ID: docker://f3873d0f69afde23e85d6d6f85b1f Image: gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0 Image ID: docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846 Port: &lt;none&gt; Host Port: 
&lt;none&gt; Args: proxy sidecar --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster myapp --drainDuration 45s --parentShutdownDuration 1m0s --discoveryAddress istio-pilot.istio-system:15007 --discoveryRefreshDelay 1s --zipkinAddress zipkin.istio-system:9411 --connectTimeout 10s --statsdUdpAddress istio-statsd-prom-bridge.istio-system:9125 --proxyAdminPort 15000 --controlPlaneAuthPolicy NONE State: Running Started: Sat, 19 Jan 2019 14:33:54 +0100 Ready: True Restart Count: 0 Requests: cpu: 10m Environment: POD_NAME: myapp-64c59c94dc-8888 (v1:metadata.name) POD_NAMESPACE: default (v1:metadata.namespace) INSTANCE_IP: (v1:status.podIP) ISTIO_META_POD_NAME: myapp-64c59c94dc-8888 (v1:metadata.name) ISTIO_META_INTERCEPTION_MODE: REDIRECT Mounts: /etc/certs/ from istio-certs (ro) /etc/istio/proxy from istio-envoy (rw) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: cloudsql-instance-credentials: Type: Secret (a volume populated by a Secret) SecretName: cloudsql-instance-credentials Optional: false default-token-rclsf: Type: Secret (a volume populated by a Secret) SecretName: default-token-rclsf Optional: false istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory istio-certs: Type: Secret (a volume populated by a Secret) SecretName: istio.default Optional: true QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 7m31s default-scheduler Successfully assigned myapp-64c59c94dc-tdb9c to gke-standard-cluster-1-default-pool-65b9e650-9vtk Normal SuccessfulMountVolume 7m31s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk MountVolume.SetUp succeeded for volume "istio-envoy" Normal SuccessfulMountVolume 7m31s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials" Normal SuccessfulMountVolume 7m31s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk MountVolume.SetUp succeeded for volume "default-token-rclsf" Normal SuccessfulMountVolume 7m31s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk MountVolume.SetUp succeeded for volume "istio-certs" Normal Pulling 7m30s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk pulling image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" Normal Pulled 7m25s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Successfully pulled image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" Normal Created 7m24s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Created container Normal Started 7m23s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Started container Normal Pulling 7m4s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk pulling image "gcr.io/cloudsql-docker/gce-proxy:1.11" Normal Pulled 7m3s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Successfully pulled image "gcr.io/cloudsql-docker/gce-proxy:1.11" Normal Started 7m2s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Started container Normal Pulling 7m2s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk pulling image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" Normal Created 7m2s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Created container Normal Pulled 6m54s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk 
Successfully pulled image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" Normal Created 6m51s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Created container Normal Started 6m48s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Started container Normal Pulling 111s (x2 over 7m22s) kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk pulling image "gcr.io/myproject/firstapp:v3" Normal Created 110s (x2 over 7m4s) kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Created container Normal Started 110s (x2 over 7m4s) kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Started container Normal Pulled 110s (x2 over 7m7s) kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Successfully pulled image "gcr.io/myproject/firstapp:v3" Warning Unhealthy 99s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Readiness probe failed: HTTP probe failed with statuscode: 503 Warning BackOff 85s kubelet, gke-standard-cluster-1-default-pool-65b9e650-9vtk Back-off restarting failed container </code></pre> <p>And:</p> <pre><code>$ kubectl logs myapp-8888 myapp &gt; [email protected] start /usr/src/app &gt; node src/ info: Feathers application started on http://localhost:8089 </code></pre> <p>And the database logs (which looks ok, as some 'startup script entries' from app can be retrieved using psql):</p> <pre><code> $ kubectl logs myapp-8888 cloudsql-proxy 2019/01/19 13:33:40 using credential file for authentication; [email protected] 2019/01/19 13:33:40 Listening on 127.0.0.1:5432 for myproject:europe-west4:osm 2019/01/19 13:33:40 Ready for new connections 2019/01/19 13:33:54 New connection for "myproject:europe-west4:osm" 2019/01/19 13:33:55 couldn't connect to "myproject:europe-west4:osm": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/osm/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp 74.125.143.95:443: getsockopt: connection refused 2019/01/19 13:39:06 New connection for "myproject:europe-west4:osm" 2019/01/19 13:39:06 New connection for "myproject:europe-west4:osm" 2019/01/19 13:39:06 Client closed local connection on 127.0.0.1:5432 2019/01/19 13:39:13 New connection for "myproject:europe-west4:osm" 2019/01/19 13:39:14 New connection for "myproject:europe-west4:osm" 2019/01/19 13:39:14 New connection for "myproject:europe-west4:osm" 2019/01/19 13:39:14 New connection for "myproject:europe-west4:osm" </code></pre> <p>EDIT: Here is the serverside log of the 503 of websocket calls to my app:</p> <pre><code>{ insertId: "465nu9g3xcn5hf" jsonPayload: { apiClaims: "" apiKey: "" clientTraceId: "" connection_security_policy: "unknown" destinationApp: "myapp" destinationIp: "10.44.XX.XX" destinationName: "myapp-888888-88888" destinationNamespace: "default" destinationOwner: "kubernetes://apis/extensions/v1beta1/namespaces/default/deployments/myapp" destinationPrincipal: "" destinationServiceHost: "myapp.default.svc.cluster.local" destinationWorkload: "myapp" httpAuthority: "35.204.XXX.XXX" instance: "accesslog.logentry.istio-system" latency: "1.508885ms" level: "info" method: "GET" protocol: "http" receivedBytes: 787 referer: "" reporter: "source" requestId: "bb31d922-8f5d-946b-95c9-83e4c022d955" requestSize: 0 requestedServerName: "" responseCode: 503 responseSize: 57 responseTimestamp: "2019-01-18T20:53:03.966513Z" sentBytes: 164 sourceApp: "istio-ingressgateway" sourceIp: "10.44.X.X" sourceName: "istio-ingressgateway-8888888-88888" sourceNamespace: "istio-system" 
sourceOwner: "kubernetes://apis/extensions/v1beta1/namespaces/istio-system/deployments/istio-ingressgateway" sourcePrincipal: "" sourceWorkload: "istio-ingressgateway" url: "/socket.io/?EIO=3&amp;transport=websocket" userAgent: "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1" xForwardedFor: "10.44.X.X" } logName: "projects/myproject/logs/stdout" metadata: { systemLabels: { container_image: "gcr.io/gke-release/istio/mixer:1.0.2-gke.0" container_image_id: "docker-pullable://gcr.io/gke-release/istio/mixer@sha256:888888888888888888888888888888" name: "mixer" node_name: "gke-standard-cluster-1-default-pool-88888888888-8887" provider_instance_id: "888888888888" provider_resource_type: "gce_instance" provider_zone: "europe-west4-a" service_name: [ 0: "istio-telemetry" ] top_level_controller_name: "istio-telemetry" top_level_controller_type: "Deployment" } userLabels: { app: "telemetry" istio: "mixer" istio-mixer-type: "telemetry" pod-template-hash: "88888888888" } } receiveTimestamp: "2019-01-18T20:53:08.135805255Z" resource: { labels: { cluster_name: "standard-cluster-1" container_name: "mixer" location: "europe-west4-a" namespace_name: "istio-system" pod_name: "istio-telemetry-8888888-8888888" project_id: "myproject" } type: "k8s_container" } severity: "INFO" timestamp: "2019-01-18T20:53:03.965100Z" } </code></pre> <p>In the browser at first it properly seems to switch protocol but then causes a repeated 503 response and subsequent health issues cause a repeating restart. The protocol switch websocket call:</p> <p>General:</p> <pre><code>Request URL: ws://localhost:8080/sockjs-node/842/s4888/websocket Request Method: GET Status Code: 101 Switching Protocols [GREEN] </code></pre> <p>Response headers:</p> <pre><code>Connection: Upgrade Sec-WebSocket-Accept: NS8888888888888888888 Upgrade: websocket </code></pre> <p>Request headers:</p> <pre><code>Accept-Encoding: gzip, deflate, br Accept-Language: nl-NL,nl;q=0.9,en-US;q=0.8,en;q=0.7 Cache-Control: no-cache Connection: Upgrade Cookie: _ga=GA1.1.1118102238.18888888; hblid=nSNQ2mS8888888888888; olfsk=ol8888888888 Host: localhost:8080 Origin: http://localhost:8080 Pragma: no-cache Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits Sec-WebSocket-Key: b8zkVaXlEySHasCkD4aUiw== Sec-WebSocket-Version: 13 Upgrade: websocket User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1 </code></pre> <p>Its frames: <a href="https://i.stack.imgur.com/xkbMQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xkbMQ.png" alt="enter image description here"></a></p> <p>Following the above I get multiple of these:</p> <p>Chrome output regarding websocket call:</p> <p>general:</p> <pre><code>Request URL: ws://35.204.210.134/socket.io/?EIO=3&amp;transport=websocket Request Method: GET Status Code: 503 Service Unavailable </code></pre> <p>response headers:</p> <pre><code>connection: close content-length: 19 content-type: text/plain date: Sat, 19 Jan 2019 14:06:39 GMT server: envoy </code></pre> <p>request headers:</p> <pre><code>Accept-Encoding: gzip, deflate Accept-Language: nl-NL,nl;q=0.9,en-US;q=0.8,en;q=0.7 Cache-Control: no-cache Connection: Upgrade Host: 35.204.210.134 Origin: http://localhost:8080 Pragma: no-cache Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits Sec-WebSocket-Key: VtKS5xKF+GZ4u3uGih2fig== Sec-WebSocket-Version: 13 
Upgrade: websocket User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1 </code></pre> <p>The frames:</p> <pre><code>Data: (Opcode -1) Length: 63 Time: 15:06:44.412 </code></pre>
<p>Using uws (uWebSockets) as the websocket engine causes these errors. When, in my backend app, I swap this code:</p> <pre><code>app.configure(socketio({
  wsEngine: 'uws',
  timeout: 120000,
  reconnect: true
}))
</code></pre> <p>for this:</p> <pre><code>app.configure(socketio())
</code></pre> <p>everything works as expected. </p> <p>EDIT: Now it also works with uws. I was using an Alpine docker container based on Node 10, which does not work with uws. After switching to a container based on Node 8 it works.</p>
<p>I'd like to understand when it's better to favor a Custom Initializer Controller vs a Mutating Webhook.</p> <p>From what I can gather, webhooks are:</p> <ol> <li>More powerful (can trigger on any action).</li> <li>More performant (only persist to etcd once).</li> <li>Easier to write (subjective, but production grade controllers aren’t trivial).</li> <li>Less likely to break during complete redeploy (there seems to be a chicken-and-egg problem requiring the deployment to exist <em>before</em> the initializer is in place, or the initializer will block the deployment).</li> </ol> <p>When would I want an initializer instead? …or are they being abandoned in favor of webhooks?</p>
<p>Always favor webhooks. Initializers are unlikely to ever graduate from alpha, and will probably be removed as the apimachinery team dislikes the approach. They might remain in a few specialized cases like Namespaces, but not in general.</p>
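<p>If you do go the webhook route, a minimal hedged sketch of the registration object looks roughly like this - the service name, namespace, path and target resources are placeholders, and the webhook server plus its serving certificate must already exist:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook
webhooks:
- name: mutate.example.com
  failurePolicy: Ignore
  clientConfig:
    service:
      name: example-webhook-svc   # your webhook Service
      namespace: default
      path: /mutate
    caBundle: &lt;base64-encoded-CA-cert&gt;
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
</code></pre> <p>Unlike an initializer, nothing sits in a pending state: the API server calls the webhook synchronously during admission and persists the mutated object once.</p>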
<p>There are a set of proxy environment variables (http_proxy, HTTP_PROXY, https_proxy, HTTPS_PROXY, no_proxy, NO_PROXY) in my OpenShift pods that I did not explicitly include and I do not want them there.</p> <p>For example</p> <pre><code>$ oc run netshoot -it --image docker-registry.default.svc:5000/default/netshoot -- bash If you don't see a command prompt, try pressing enter. bash-4.4$ env | grep -i proxy | sort HTTPS_PROXY=http://xx.xx.xx.xx:8081/ HTTP_PROXY=http://xx.xx.xx.xx:8081/ NO_PROXY=.cluster.local,.mydomain.nz,.localdomain.com,.svc,10.xx.xx.xx,127.0.0.1,172.30.0.1,app01.mydomain.nz,app02.mydomain.nz,inf01.mydomain.nz,inf02.mydomain.nz,mst01.mydomain.nz,localaddress,localhost,.edpay.nz http_proxy=xx.xx.xx.xx:8081 https_proxy=xx.xx.xx.xx:8081 no_proxy=.cluster.local,.mydomain.nz,.localdomain.com,.svc,10.xx.xx.xx,127.0.0.1,172.30.0.1,app01.mydomain.nz,app02.mydomain.nz,inf01.mydomain.nz,inf02.mydomain.nz,mst01.mydomain.nz,localaddress,localhost,.edpay.nz </code></pre> <p>I have yet to track down how those env vars are getting into my pods.</p> <p>I am not <a href="https://docs.openshift.com/container-platform/3.9/install_config/http_proxies.html#setting-environment-variables-in-pods" rel="nofollow noreferrer">Setting Proxy Environment Variables in Pods</a>.</p> <pre><code>$ oc get pod netshoot-1-hjp2p -o yaml | grep -A 10 env [no output] $ oc get deploymentconfig netshoot -o yaml | grep -A 10 env [no output] </code></pre> <p>I am not <a href="https://docs.openshift.com/container-platform/3.9/dev_guide/pod_preset.html#dev-guide-pod-presets-create" rel="nofollow noreferrer">Creating Pod Presets</a></p> <pre><code>$ oc get podpresets --all-namespaces No resources found. </code></pre> <p>Docker on my master/app nodes have no proxy env vars.</p> <pre><code>$ grep -i proxy /etc/sysconfig/docker [no output] </code></pre> <p>Kubelet (openshift-node) on my master/app nodes have no proxy env vars.</p> <pre><code>$ grep -i proxy /etc/sysconfig/atomic-openshift-node [no output] </code></pre> <p>Master components on my master nodes have no proxy env vars.</p> <pre><code>$ grep -i proxy /etc/sysconfig/atomic-openshift-master [no output] $ grep -i proxy /etc/sysconfig/atomic-openshift-master-api [no output] $ grep -i proxy /etc/sysconfig/atomic-openshift-master-controllers [no output] </code></pre> <p>Contents of sysconfig files (not including comments)</p> <pre><code>$ cat /etc/sysconfig/atomic-openshift-master OPTIONS="--loglevel=0" CONFIG_FILE=/etc/origin/master/master-config.yaml $ cat /etc/sysconfig/atomic-openshift-node OPTIONS=--loglevel=2 CONFIG_FILE=/etc/origin/node/node-config.yaml IMAGE_VERSION=v3.9.51 $ cat /etc/sysconfig/docker OPTIONS=' --selinux-enabled --signature-verification=False --insecure-registry 172.30.0.0/16' if [ -z "${DOCKER_CERT_PATH}" ]; then DOCKER_CERT_PATH=/etc/docker fi ADD_REGISTRY='--add-registry registry.access.redhat.com' $ cat /etc/sysconfig/atomic-openshift-master-api OPTIONS=--loglevel=2 --listen=https://0.0.0.0:8443 --master=https://mst01.mydomain.nz:8443 CONFIG_FILE=/etc/origin/master/master-config.yaml OPENSHIFT_DEFAULT_REGISTRY=docker-registry.default.svc:5000 $ cat /etc/sysconfig/atomic-openshift-master-controllers OPTIONS=--loglevel=2 --listen=https://0.0.0.0:8444 CONFIG_FILE=/etc/origin/master/master-config.yaml OPENSHIFT_DEFAULT_REGISTRY=docker-registry.default.svc:5000 </code></pre> <p>I'm at a loss as to how those proxy env vars are getting into my pods. </p> <p>Versions:</p> <ul> <li>OpenShift v3.9.51</li> </ul>
<p>We finally figured this out. We had <code>openshift_http_proxy</code>, <code>openshift_https_proxy</code>, and <code>openshift_no_proxy</code> set in our installer inventory variables as per <a href="https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#advanced-install-configuring-global-proxy" rel="nofollow noreferrer">Configuring Global Proxy Options</a>.</p> <p>We knew that this meant it also implicitly set the <code>openshift_builddefaults_http_proxy</code>, <code>openshift_builddefaults_https_proxy</code>, and <code>openshift_builddefaults_no_proxy</code> installer inventory variables and according to the docs</p> <blockquote> <p>This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_http_proxy value is used. Set the openshift_builddefaults_http_proxy value to False to disable default http proxy for builds regardless of the openshift_http_proxy value.</p> </blockquote> <p>What we did <em>not</em> know (and I would argue is not at all clear from the description above), is that setting those installer inventory variables sets the <code>HTTP_PROXY</code>, <code>HTTPS_PROXY</code>, and <code>NO_PROXY</code> env vars permanently within your images.</p> <p>It's painfully apparent now when we look back on the build logs and see lines like this</p> <pre><code>... Step 2/19 : ENV "HTTP_PROXY" "xxx.xxx.xxx.xxx" "HTTPS_PROXY" "xxx.xxx.xxx.xxx" "NO_PROXY" "127.0.0.1,localhost,172.30.0.1,.svc,.cluster.local" "http_proxy" "xxx.xxx.xxx.xxx" "https_proxy" "xxx.xxx.xxx.xxx" "no_proxy" "127.0.0.1,localhost,172.30.0.1,.svc,.cluster.local" ... </code></pre> <p>We couldn't exclude proxy env vars from the pods because those env vars were set at build time.</p>
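<p>If you hit the same thing, the documentation quoted above says builds can be opted out explicitly. A sketch of the relevant inventory variables (proxy addresses and the no_proxy list are placeholders for your environment):</p> <pre><code># keep the cluster-wide proxy, but do not bake proxy env vars into builds/images
openshift_http_proxy=http://proxy.example.com:8081
openshift_https_proxy=http://proxy.example.com:8081
openshift_no_proxy=.cluster.local,.svc,localhost,127.0.0.1
openshift_builddefaults_http_proxy=False
openshift_builddefaults_https_proxy=False
</code></pre> <p>Images already built with the proxy env vars baked in need to be rebuilt afterwards for the change to take effect.</p>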
<p>I'm trying to setup AWS IAM Authenticator for my k8s cluster. I have two AWS account: A and B.</p> <p>The k8s account runs in the B account.</p> <p>I have created in the A account the following resources:</p> <p><strong>Policy</strong></p> <pre><code>Description: Grants permissions to assume the kubernetes-admin role Policy: Statement: - Action: sts:* Effect: Allow Resource: arn:aws:iam::&lt;AccountID-B&gt;:role/kubernetes-admin Sid: KubernetesAdmin Version: 2012-10-17 </code></pre> <p>The policy is associated to a group and I add my IAM user to the group.</p> <p>in the B account I have created the following role:</p> <pre><code>AssumeRolePolicyDocument: Statement: - Action: sts:AssumeRole Effect: Allow Principal: AWS: arn:aws:iam::&lt;AccountID-A&gt;:root Version: 2012-10-17 </code></pre> <p>This is the <code>ConfigMap</code> to configure aws-iam-authenticator:</p> <pre><code>apiVersion: v1 data: config.yaml: | # a unique-per-cluster identifier to prevent replay attacks # (good choices are a random token or a domain name that will be unique to your cluster) clusterID: k8s.mycluster.net server: # each mapRoles entry maps an IAM role to a username and set of groups # Each username and group can optionally contain template parameters: # "{{AccountID}}" is the 12 digit AWS ID. # "{{SessionName}}" is the role session name. mapRoles: - roleARN: arn:aws:iam::&lt;AccountID-B&gt;:role/kubernetes-admin username: kubernetes-admin:{{AccountID}}:{{SessionName}} groups: - system:masters kind: ConfigMap metadata: creationTimestamp: 2018-12-13T19:41:39Z labels: k8s-app: aws-iam-authenticator name: aws-iam-authenticator namespace: kube-system resourceVersion: "87401" selfLink: /api/v1/namespaces/kube-system/configmaps/aws-iam-authenticator uid: 1bc39653-ff0f-11e8-a580-02b4590539ba </code></pre> <p>The kubeconfig is:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: &lt;certificate&gt; server: https://api.k8s.mycluster.net name: k8s.mycluster.net contexts: - context: cluster: k8s.mycluster.net namespace: kube-system user: k8s.mycluster.net name: k8s.mycluster.net current-context: k8s.mycluster.net kind: Config preferences: {} users: - name: k8s.mycluster.net user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 command: aws-iam-authenticator env: - name: "AWS_PROFILE" value: "myaccount" args: - "token" - "-i" - "k8s.mycluster.net" - "-r" - "arn:aws:iam::&lt;AccountID-B&gt;:role/kubernetes-admin" </code></pre> <p>The result is:</p> <pre><code>could not get token: AccessDenied: Access denied status code: 403, request id: 6ceac161-ff2f-11e8-b263-2b0e32831969 Unable to connect to the server: getting token: exec: exit status 1 </code></pre> <p>Any idea? I don't get what i'm missing.</p>
<p>To add to this - my solution was to do the following:</p> <p>In the ~/.kube directory:</p> <pre><code>aws eks update-kubeconfig --name eks-dev-cluster --role-arn=XXXXXXXXXXXX
</code></pre> <p>This creates a file config-my-eks-cluster.</p> <pre><code>vi config-my-eks-cluster
</code></pre> <p>Comment out the two role lines (<code>-r</code> and the role ARN), as shown below:</p> <pre><code>apiVersion: client.authentication.k8s.io/v1alpha1
args:
  - token
  - -i
  - eks-dev-cluster
  #- -r
  #- arn:aws:iam::XXXXXXXXX:role/eks-dev-role (the role you made for eks)
command: aws-iam-authenticator
</code></pre> <p>Then make sure you export your user profile with:</p> <p>export AWS_PROFILE=XXXXXXXXX (the user you used to create the cluster in the console or through the CLI)</p> <p>Then run:</p> <pre><code>kubectl get svc --v=10
</code></pre> <p>This will put the output into verbose mode and give you details on any errors that creep up.</p>
<p>I created a Kubernetes cluster using ansible-playbook command below</p> <pre><code>ansible-playbook kubectl.yaml --extra-vars "kubernetes_api_endpoint=&lt;Path to aws load balancer server&gt;" </code></pre> <p>Now I have deleted the cluster using command</p> <pre><code>kubectl config delete-cluster &lt;Name of cluster&gt; </code></pre> <p>But still EC2 nodes are running, I tried to manually stop them but they start again automatically (expected because they are running in a cluster)</p> <p>Is there any way by which I can detach the nodes from the cluster or delete the cluster in total?</p> <p>Kubectl config view shows below message</p> <blockquote> <p>apiVersion: v1 clusters: [] contexts: - context: cluster: "" user: "" name: default-context current-context: default-context kind: Config preferences: {} users: - name: cc3.k8s.local user: token: cc3.k8s.local</p> </blockquote> <p>This means there is no cluster. I want to delete the cluster in total and start fresh.</p>
<p>The <code>delete-cluster</code> command does this:</p> <blockquote> <p>delete-cluster Delete the specified cluster from the kubeconfig</p> </blockquote> <p>It only removes the cluster entry from your local <code>~/.kube/config</code> file; it does not delete the actual cluster.</p> <p>You will need to write a different script for that, or go into the AWS console and delete the underlying AWS resources (the nodes) yourself.</p>
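<p>Since your instances keep coming back after being stopped, they are most likely managed by an Auto Scaling group. A hedged sketch of cleaning that up with the AWS CLI - the group and instance names are placeholders, so double-check what you delete:</p> <pre><code># find the auto scaling group(s) backing the cluster nodes
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].AutoScalingGroupName'

# delete the group together with its instances
aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name &lt;asg-name&gt; --force-delete

# terminate any remaining standalone instances directly
aws ec2 terminate-instances --instance-ids &lt;instance-id&gt;
</code></pre> <p>Also remove any load balancers, security groups and the master instance(s) that the provisioning playbook created, otherwise they will keep incurring costs.</p>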
<p>I've installed my kubernetes cluster (two nodes) with <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a>. Now I have added a third node, and I get this error from the kubelet on the new node:</p> <blockquote> <p>Failed to list *v1.Service: Get <a href="https://94.130.25.248:6443/api/v1/services?limit=500&amp;resourceVersion=0" rel="nofollow noreferrer">https://94.130.25.248:6443/api/v1/services?limit=500&amp;resourceVersion=0</a>: x509: certificate is valid for 10.233.0.1, 94.130.25.247, 94.130.25.247, 10.233.0.1, 127.0.0.1, 94.130.25.247, 144.76.14.131, not 94.130.25.248</p> </blockquote> <p>The IP 94.130.25.248 is the IP of the new node. </p> <p>I've found <a href="https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master">this post</a>, which describes recreating the API server certificate, but the new version of kubeadm (v1.13.1) doesn't have this option. </p> <p>I've also tried to renew the certificates with the command:</p> <pre><code>kubeadm alpha certs renew all --config /etc/kubernetes/kubeadm-config.yaml
</code></pre> <p>This command regenerates the certificates, but with the same IPs and DNS names. </p> <p>My kubeadm-config.yaml (certSANs):</p> <pre><code>  certSANs:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 10.233.0.1
  - localhost
  - 127.0.0.1
  - heku1
  - heku4
  - heku2
  - 94.130.24.247
  - 144.76.14.131
  - 94.130.24.248
</code></pre> <p>Can someone tell me how I can add the IP to the API server certificate?</p>
<p>Hm... I've removed the apiserver.* and apiserver-kubelet-client.* certificates and recreated them with the following commands:</p> <pre><code>kubeadm init phase certs apiserver --config=/etc/kubernetes/kubeadm-config.yaml
kubeadm init phase certs apiserver-kubelet-client --config=/etc/kubernetes/kubeadm-config.yaml
systemctl stop kubelet
# delete the docker container running the kube-apiserver so it restarts with the new certs
systemctl restart kubelet
</code></pre>
<p>We have configured to use 2 metrics for HPA</p> <ol> <li>CPU Utilization</li> <li>App specific custom metrics</li> </ol> <p>When testing, we observed the scaling happening, but calculation of no.of replicas is not very clear. I am not able to locate any documentation on this.</p> <p><strong>Questions:</strong></p> <ol> <li>Can someone point to documentation or code on the calculation part?</li> <li>Is it a good practice to use multiple metrics for scaling?</li> </ol> <p>Thanks in Advance!</p>
<p>From <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work</a></p> <blockquote> <p>If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen. If any of those metrics cannot be converted into a desired replica count (e.g. due to an error fetching the metrics from the metrics APIs), scaling is skipped.</p> <p>Finally, just before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the <code>--horizontal-pod-autoscaler-downscale-stabilization-window</code> flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values</p> </blockquote>
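<p>To illustrate, a minimal sketch of an HPA that combines CPU with a custom metric - the metric name, target values and API version are just examples, and the custom metric requires a metrics adapter (e.g. a Prometheus adapter) to be installed:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 70
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: "100"
</code></pre> <p>Per the quoted documentation, the controller computes a desired replica count for each metric independently and then applies the largest of them, so using multiple metrics is a supported and common practice.</p>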
<p>In the <em><a href="https://rads.stackoverflow.com/amzn/click/com/B072TS9ZQZ" rel="nofollow noreferrer" rel="nofollow noreferrer">Kubernetes Book</a></em>, it says that it's poor form to run pods on the master node.</p> <p>Following this advice, I'd like to create a policy that runs a pod on all nodes, except the master if there are more than one nodes. However, to simplify testing and work in single-node environments, I'd also like to run my pod on the master node if there is just a single node in the entire system.</p> <p>I've been looking around, and can't figure out how to express this policy. I see that <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSets</a> have affinities and anti-affinities. I considered labeling the master node and adding an anti-affinity for that label. However, I didn't see how to require that at least a single pod would always come up (to ensure that things worked for single-node environment). Please let me know if I'm misunderstanding something. Thanks!</p>
<p>How about something like this:</p> <ol> <li>During node provisioning, assign a particular label to each node that should run the job. In a single node cluster, this would be the master. In a multi-node environment, it would be every node except the master(s).</li> <li>Create a DaemonSet that tolerates the master taint, so it can still be scheduled on a master node when that node carries your label:</li> </ol> <pre><code>tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
</code></pre> <ol start="3"> <li>As described in the doc you linked, use <code>.spec.template.spec.nodeSelector</code> to select only nodes with your special label (<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">node selector docs</a>).</li> </ol> <p>How you assign the special label to nodes is probably a fairly manual process, heavily dependent on how you actually deploy your clusters, but that is the general plan I would follow.</p> <p><strong>EDIT:</strong> Or I believe it may be simplest to just remove the master node taint from your single-node cluster. I believe most simple distributions like minikube come this way by default.</p>
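<p>Putting steps 1-3 together, a minimal sketch of such a DaemonSet - the label key/value and the image are placeholders for whatever you use during provisioning:</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-agent
spec:
  selector:
    matchLabels:
      app: my-agent
  template:
    metadata:
      labels:
        app: my-agent
    spec:
      nodeSelector:
        run-my-agent: "true"   # label assigned during node provisioning
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: my-agent
        image: myrepo/my-agent:latest
</code></pre> <p>In a single-node cluster you label the master, and the toleration lets the pod land there; in a multi-node cluster you simply don't label the master, so the pod runs everywhere else.</p>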
<p>We are setting up an AKS cluster on Azure, following <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls" rel="nofollow noreferrer">this guide</a></p> <p>We are running 5 .Net Core API's behind an ingress controller, everything works fine, requests are being routed nicely. However, in our SPA Frontend, we are sending a custom http header to our API's, this header never seems to make it to the API's, when we inspect the logging in AKS we see the desired http header is empty. In development, everything works fine, we also see the http header is filled in our test environment in AKS, so i'm guessing ingress is blocking these custom headers.</p> <p>Is there any configuration required to make ingress pass through custom http headers?</p> <p>EDIT:</p> <pre><code>{ "kind": "Ingress", "apiVersion": "extensions/v1beta1", "metadata": { "name": "myappp-ingress", "namespace": "myapp", "selfLink": "/apis/extensions/v1beta1/namespaces/myapp/ingresses/myapp-ingress", "uid": "...", "resourceVersion": "6395683", "generation": 4, "creationTimestamp": "2018-11-23T13:07:47Z", "annotations": { "kubernetes.io/ingress.class": "nginx", "nginx.ingress.kubernetes.io/allow-headers": "My_Custom_Header", //this doesn't work "nginx.ingress.kubernetes.io/proxy-body-size": "8m", "nginx.ingress.kubernetes.io/rewrite-target": "/" } }, "spec": { "tls": [ { "hosts": [ "myapp.com" ], "secretName": "..." } ], "rules": [ { "host": "myapp.com", "http": { "paths": [ { "path": "/api/tenantconfig", "backend": { "serviceName": "tenantconfig-api", "servicePort": 80 } }, { "path": "/api/identity", "backend": { "serviceName": "identity-api", "servicePort": 80 } }, { "path": "/api/media", "backend": { "serviceName": "media-api", "servicePort": 80 } }, { "path": "/api/myapp", "backend": { "serviceName": "myapp-api", "servicePort": 80 } }, { "path": "/app", "backend": { "serviceName": "client", "servicePort": 80 } } ] } } ] }, "status": { "loadBalancer": { "ingress": [ {} ] } } } </code></pre>
<p>I ended up using the following configuration snippet:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header My-Custom-Header $http_my_custom_header;
</code></pre> <p>nginx makes all custom http headers available as embedded variables via the <code>$http_</code> prefix, see <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#var_http_" rel="noreferrer">this</a>.</p>
<p>We are getting a lot of warnings in our GCP kubernetes cluster event logs from the event-exporter container. </p> <pre><code>event-exporter Jun 4, 2018, 10:45:15 AM W0604 14:45:15.416504 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. event-exporter Jun 4, 2018, 10:37:04 AM W0604 14:37:04.331239 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. event-exporter Jun 4, 2018, 10:28:37 AM W0604 14:28:37.249901 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. event-exporter Jun 4, 2018, 10:21:38 AM W0604 14:21:38.141687 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. event-exporter Jun 4, 2018, 10:15:38 AM W0604 14:15:38.087389 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. event-exporter Jun 4, 2018, 10:04:35 AM W0604 14:04:35.981083 1 reflector.go:323] github.com/GoogleCloudPlatform/k8s-stackdriver/event-exporter/watchers/watcher.go:55: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old. </code></pre> <p>Anyone know why these warnings are appearing and how can I fix them? Thanks.</p>
<p>This means that newer version(s) of the watched resource appeared after the point at which the client API last acquired its list within that watch window.</p> <p>The client needs to re-list to acquire the newest version. This is a somewhat common occurrence when using the client API, and the warning is logged from deep within the client's watch machinery.</p>
<p>I am doing a lab about <em>kubernetes</em> in <em>google cloud</em>; my task is to deploy two <em>nginx</em> servers in one pod, however I have an issue.</p> <p>One of the containers cannot start, as the port/IP is already in use by the other nginx container. I need to change it in the yaml file - please give me a solution, thank you in advance.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: first-container
    image: nginx
  - name: second-container
    image: nginx

E nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
E 2019/01/21 11:04:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
E nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
E 2019/01/21 11:04:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
E nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
E 2019/01/21 11:04:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
E nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
E 2019/01/21 11:04:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
E nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
E 2019/01/21 11:04:47 [emerg] 1#1: still could not bind()
E nginx: [emerg] still could not bind()
</code></pre>
<p>In Kubernetes, the containers in a pod share a single network namespace. To simplify: two containers cannot listen on the same port in the same pod.</p> <p>So in order to run two nginx containers within the same pod, you need to run them on different ports. One nginx can run on 80 and the other on 81.</p> <p>So we will run <code>first-container</code> with the default nginx config, and <code>second-container</code> with the config below. </p> <ul> <li>default.conf</li> </ul> <pre><code>server {
    listen       81;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
</code></pre> <ul> <li>Create a ConfigMap from this <code>default.conf</code>:</li> </ul> <pre><code>kubectl create configmap nginx-conf --from-file default.conf
</code></pre> <ul> <li>Create a pod as follows:</li> </ul> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: nginx-conf
  containers:
  - name: first-container
    image: nginx
    ports:
    - containerPort: 80
  - name: second-container
    image: nginx
    ports:
    - containerPort: 81
    volumeMounts:
    - name: config
      mountPath: /etc/nginx/conf.d
</code></pre> <ul> <li><p>Deploy the pod.</p></li> <li><p>Now exec into the pod and try to reach <code>localhost:80</code> and <code>localhost:81</code> - both will work. Let me know if you need any more help with it.</p></li> </ul>
<p>I am running my docker containers with the help of kubernetes cluster on AWS EKS. Two of my docker containers are using shared volume and both of these containers are running inside two different pods. So I want a common volume which can be used by both the pods on aws.</p> <p>I created an EFS volume and mounted. I am following link to create <code>PersistentVolumeClaim</code>. But I am getting timeout error when <code>efs-provider</code> pod trying to attach mounted EFS volume space. <code>VolumeId</code>, region are correct only. </p> <p>Detailed Error message for Pod describe: </p> <blockquote> <p>timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw] <br> MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32</p> </blockquote>
<p>The problem for me was that I was specifying a different path in my PV than <code>/</code>. And the directory on the NFS server that was referenced beyond that path did not yet exist. I had to manually create that directory first.</p>
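<p>For illustration, a minimal sketch of a statically bound PV/PVC pair that mounts the EFS root - the filesystem ID, region and sizes are placeholders. If you point <code>path</code> at a subdirectory instead of <code>/</code>, that directory must already exist on the EFS filesystem, or you will hit the same mount failure:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /          # subpaths must be created on the filesystem first
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>Both pods can then mount <code>efs-pvc</code>, since EFS/NFS supports ReadWriteMany access.</p>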
<p>I am new with Kubernetes and am trying to setup a Kubernetes cluster on local machines. Bare metal. No OpenStack, No Maas or something.</p> <p>After <code>kubeadm init ...</code> on the master node, <code>kubeadm join ...</code> on the slave nodes and <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">applying flannel</a> at the master I get the message from the slaves:</p> <blockquote> <p>runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized</p> </blockquote> <p>Can anyone tell me what I have done wrong or missed any steps? Should flannel be applied to all the slave nodes as well? If yes, they do not have a <code>admin.conf</code>...</p> <p>Thanks a lot!</p> <p>PS. All the nodes do not have internet access. That means all files have to be copied manually via ssh.</p>
<p>I think this problem is caused by kubeadm initializing CoreDNS before flannel is installed, so it throws "network plugin is not ready: cni config uninitialized".<br> Solution:<br> 1. Install flannel with <code>kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</code><br> 2. Reset the CoreDNS pod:<br> <code>kubectl -n kube-system delete pod coredns-xx-xx</code><br> 3. Then run <code>kubectl get pods</code> to see if it works.</p> <p>If you see the error "cni0" already has an IP address different from 10.244.1.1/24", follow this:</p> <pre><code>ifconfig cni0 down
brctl delbr cni0
ip link delete flannel.1
</code></pre> <p>If you see the error "Back-off restarting failed container", you can get the log with:</p> <pre><code>root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".
</code></pre> <p>Then check the file "/etc/resolv.conf" on the failed node; if the nameserver is localhost there will be a forwarding loop. Change it to:</p> <pre><code>#nameserver 127.0.1.1
nameserver 8.8.8.8
</code></pre>
<p>I have a problem: I cannot access the service with curl although I have an external IP; I get a request timeout. Here are my services:</p> <pre><code>NAME                TYPE       CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
crawler-manager-1   NodePort   10.103.18.210   192.168.0.10   3001:30029/TCP   2h
redis               NodePort   10.100.67.138   192.168.0.11   6379:30877/TCP   5h
</code></pre> <p>And here is my yaml service file:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  type: NodePort
  externalIPs:
  - 192.168.0.10
  ports:
  - name: "3001"
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: crawler-manager-1
    run: redis
status:
  loadBalancer: {}
</code></pre> <p>Here is my deployment yml file:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: crawler-manager-1
    spec:
      hostNetwork: true
      containers:
      - args:
        - npm
        - start
        env:
        - name: DB_HOST
          value: mysql
        - name: DB_NAME
        - name: DB_PASSWORD
        - name: DB_USER
        - name: REDIS_URL
          value: redis://cbpo-redis
        image: localhost:5000/manager
        name: crawler-manager-1
        ports:
        - containerPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
</code></pre> <p>Has anyone had a problem like this when working with Kubernetes? I need to check whether the 2 services in my namespace can connect to each other. Thanks so much. </p>
<p>Instead of communication through ip addresses for your services you can communicate with their DNS names.</p> <blockquote> <p>“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.</p> <p>“Headless” (without a cluster IP) Services are also assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set.</p> </blockquote> <p>For more info, please check <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">Kubernetes DNS for Services</a></p>
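<p>As a concrete sketch - the service names, namespace and ports are taken from your manifests, while the throwaway test pod is just an example - you can verify connectivity from inside the cluster using the service DNS names instead of the external IPs:</p> <pre><code># run a throwaway pod in the same namespace
kubectl -n cbpo-example run -it --rm debug --image=busybox --restart=Never -- sh

# inside the pod, use the service DNS names
wget -qO- http://crawler-manager-1.cbpo-example.svc.cluster.local:3001
nc -zv redis.cbpo-example.svc.cluster.local 6379
</code></pre> <p>Within the same namespace the short names <code>crawler-manager-1</code> and <code>redis</code> also resolve, so the app can simply use <code>redis://redis:6379</code> rather than an external IP.</p>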
<p>My command <code>helm list</code> is failing with the message:</p> <pre><code>Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system" </code></pre> <p>And I found some results that tell me how to set up RBAC roles and rolebindings, like for example:</p> <p><a href="https://stackoverflow.com/questions/46672523/helm-list-cannot-list-configmaps-in-the-namespace-kube-system">helm list : cannot list configmaps in the namespace &quot;kube-system&quot;</a></p> <p>and </p> <p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></p> <p>but these fail for me as well with this error:</p> <pre><code>Error from server (Forbidden): error when creating "tiller-clusterrolebinding.yaml": clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "$USER" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission. </code></pre> <p>Now after some searching I found this answer:</p> <p><a href="https://stackoverflow.com/questions/49770624/cannot-create-clusterrolebinding-on-fresh-gke-cluster">Cannot create clusterrolebinding on fresh GKE cluster</a></p> <p>Which gives this error:</p> <pre><code>ERROR: (gcloud.projects.add-iam-policy-binding) User [$USER] does not have permission to access project [$PROJECT:setIamPolicy] (or it may not exist): The caller does not have permission </code></pre> <p>This last error finally seems to give me a good tip, I seem to not be an administrator/owner of this project, so I'm asking the owner of the project if he can give me those permissions. He's non-technical, so he'll have to do it through the GUI.</p>
<p>You would need one of the following roles to create clusterrolebindings in k8s:</p> <ul> <li>Owner </li> <li>Kubernetes Engine Admin</li> <li>Composer Worker</li> </ul> <p>You can check permissions and roles in Roles tab in GCP</p> <p><a href="https://i.stack.imgur.com/HFwWy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/HFwWy.png" alt="roles"></a></p> <p>And you can assign one of these roles (or create a custom role) in IAM &amp; Admin Tab</p> <p><a href="https://i.stack.imgur.com/0m9QF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0m9QF.png" alt="enter image description here"></a></p> <p>You need one of the following roles with <strong>resourcemanager.projects.setIamPolicy</strong> permission to set IAM policy roles for somebody else (which I believe your admin does) </p> <ul> <li>Organization Administrator</li> <li>Owner</li> <li>Project IAM Admin</li> </ul> <p><a href="https://i.stack.imgur.com/FmFZw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FmFZw.png" alt="enter image description here"></a></p> <p>Your project owner, organization administrator, or somebody with project IAM admin role will need to give your <strong>$USER</strong> one of the following roles, or create a custom role: Owner (less likely), Kubernetes Engine Admin (most likely, not following the security principle of <strong>least privilege</strong> though) or Composer Worker. Then your <strong>$USER</strong> will be able to create <code>clusterrolebindings.rbac.authorization.k8s.io at the cluster scope</code></p>
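<p>If your project owner prefers the command line over the GUI, a sketch of the grant (with <code>$PROJECT</code> and <code>$USER</code> as placeholders) could look like this:</p> <pre><code># run by someone who already has resourcemanager.projects.setIamPolicy
gcloud projects add-iam-policy-binding $PROJECT \
    --member=user:$USER \
    --role=roles/container.admin   # "Kubernetes Engine Admin"

# afterwards the user can create the binding Tiller needs
kubectl create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
</code></pre>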
<p>I'm trying to follow <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">this tutorial</a> to setup an nginx-ingress controller.</p> <p>It seems it was written before RBAC was fully integrated into k8s. When I get to the final step of running the <a href="https://gist.github.com/gokulchandra/78b7f0bf0b3a3c9d4434138cd6c6d769#file-nginx-controller-yaml" rel="nofollow noreferrer">nginx-controller.yaml</a> I get back an authorization error:</p> <pre><code>no service with name default/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:default:default" cannot get services in the namespace "default" </code></pre> <p>What do I need to do to make this work with RBAC?</p>
<p>That hackernoon post (like most of them) is incorrect. Specifically, it creates no RBAC objects, and the deployment is not assigned a service account (i.e. no <code>serviceAccountName</code> is set).</p> <p>To ensure that you have the right (and enough) RBAC objects created, check out the RBAC-* objects at <a href="https://github.com/mateothegreat/k8-byexamples-ingress-controller/tree/master/manifests" rel="nofollow noreferrer">https://github.com/mateothegreat/k8-byexamples-ingress-controller/tree/master/manifests</a>.</p>
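<p>For orientation, a minimal RBAC sketch for an ingress controller looks roughly like the following (names are illustrative; the linked manifests are more complete and also include a namespace-scoped Role):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets", "nodes", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: default
</code></pre> <p>You then set <code>serviceAccountName: nginx-ingress-serviceaccount</code> in the controller's Deployment spec so it stops using the default service account.</p>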
<p>We are on Kubernetes 1.9.0 and wonder if there is way to access an "ordinal index" of a pod with in its statefulset configuration file. We like to dynamically assign a value (that's derived from the ordinal index) to the pod's label and later use it for setting pod affinity (or antiaffinity) under spec.</p> <p>Alternatively, is the pod's instance name available with in statefulset configfile? If so, we can hopefully extract ordinal index from it and dynamically assign to a label (for later use for affinity).</p>
<p>You can essentially get the unique name of your pod in a statefulset as an environment variable; you have to extract the ordinal index from it yourself, though.</p> <p>In the container's spec:</p> <pre><code>env:
- name: cluster.name
  value: k8s-logs
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre>
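<p>Since StatefulSet pod names always end in <code>-&lt;ordinal&gt;</code>, one possible (purely illustrative) way to derive the index at startup is to strip everything up to the last dash in the hostname; <code>my-app</code> below is a hypothetical binary:</p> <pre><code>containers:
- name: app
  image: busybox
  command:
  - sh
  - -c
  - |
    ORDINAL="${HOSTNAME##*-}"        # e.g. "2" for pod myset-2
    echo "running as replica ${ORDINAL}"
    exec my-app --replica-id="${ORDINAL}"
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
</code></pre> <p>Note that the StatefulSet spec itself cannot template a label from this value, so using the ordinal directly in affinity rules is not straightforward.</p>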
<p>I am wondering if it is possible to store a key-value pair in Secret.yml. I want to be able to store an encryption key as a value and an id as its key which I can use to retrieve the encryption key stored in Secret.yml. </p> <p>Is such functionality available with Kubernetes?</p> <p><strong>EDIT</strong> I should have said in my original message, I want to be able to store multiple pairs and add pairs during the lifespan of my application and use different encryption key on the fly without/minimal update to my application.</p>
<p>Thank you guys for your answers. I've found that using the below format in Secret.yaml works well:</p> <pre><code>encryptionKey: |
  KeyName: 123456abcdef
  SecondKeyName: abcdef123456
</code></pre> <p>I would then read the key-value pairs as a Map in my Java application and treat it as such, so I can, for example, search for the key <code>KeyName</code> and get the value I need.</p>
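<p>For completeness, a minimal sketch of how such a Secret could look as a full manifest (using <code>stringData</code> so the values do not have to be base64-encoded by hand; names are illustrative):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: encryption-keys
type: Opaque
stringData:
  encryptionKey: |
    KeyName: 123456abcdef
    SecondKeyName: abcdef123456
</code></pre> <p>The application can then mount the Secret (or read it through an environment variable) and parse the value as a map, so adding a new key pair later only requires updating the Secret, not the application.</p>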
<p>I don't understand why I'm receiving this error. A new node should definitely be able to accommodate the pod. As I'm only requesting <strong>768Mi</strong> of memory and <strong>450m</strong> of CPU, and the instance group that would be autoscaled is of type <code>n1-highcpu-2</code> - <strong>2 vCPU, 1.8GB</strong>.</p> <p>How could I diagnose this further?</p> <p><strong>kubectl describe pod:</strong></p> <pre><code>Name: initial-projectinitialabcrad-697b74b449-848bl Namespace: production Node: &lt;none&gt; Labels: app=initial-projectinitialabcrad appType=abcrad-api pod-template-hash=2536306005 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/initial-projectinitialabcrad-697b74b449 Containers: app: Image: gcr.io/example-project-abcsub/projectinitial-abcrad-app:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc Port: &lt;none&gt; Host Port: &lt;none&gt; Limits: cpu: 1 memory: 1Gi Requests: cpu: 250m memory: 512Mi Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro) nginx: Image: gcr.io/example-project-abcsub/projectinitial-abcrad-nginx:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc Port: 80/TCP Host Port: 0/TCP Limits: cpu: 1 memory: 1Gi Requests: cpu: 100m memory: 128Mi Readiness: http-get http://:80/api/v1/ping delay=5s timeout=10s period=10s #success=1 #failure=3 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro) cloudsql-proxy: Image: gcr.io/cloudsql-docker/gce-proxy:1.11 Port: 3306/TCP Host Port: 0/TCP Command: /cloud_sql_proxy -instances=example-project-abcsub:us-central1:abcfn-staging=tcp:0.0.0.0:3306 -credential_file=/secrets/cloudsql/credentials.json Limits: cpu: 1 memory: 1Gi Requests: cpu: 100m memory: 128Mi Mounts: /secrets/cloudsql from cloudsql-instance-credentials (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro) Conditions: Type Status PodScheduled False Volumes: cloudsql-instance-credentials: Type: Secret (a volume populated by a Secret) SecretName: cloudsql-instance-credentials Optional: false default-token-srv8k: Type: Secret (a volume populated by a Secret) SecretName: default-token-srv8k Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NotTriggerScaleUp 4m (x29706 over 3d) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added) Warning FailedScheduling 4m (x18965 over 3d) default-scheduler 0/4 nodes are available: 3 Insufficient memory, 4 Insufficient cpu. </code></pre>
<p>It's not the hardware requests (confusingly the error message made me assume this) but it's due to my pod affinity rule defined:</p> <pre><code>podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: appType
        operator: NotIn
        values:
        - example-api
    topologyKey: kubernetes.io/hostname
</code></pre>
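<p>In case someone hits the same thing: with <code>requiredDuringSchedulingIgnoredDuringExecution</code> pod affinity, a brand-new (empty) node has no pods matching the rule, so the autoscaler cannot help. One possible relaxation (a sketch, not the original manifest) is to make the rule preferred instead of required:</p> <pre><code>affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: appType
            operator: NotIn
            values:
            - example-api
        topologyKey: kubernetes.io/hostname
</code></pre> <p>If the original intent was "do not share a node with example-api pods", <code>podAntiAffinity</code> with <code>operator: In</code> is usually the more direct way to express it.</p>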
<p>According to <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#scaling-a-statefulset" rel="noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#scaling-a-statefulset</a>, I would like to ask how to achieve zero-downtime rolling update? I guess here are the minimum requirements:</p> <p>(1) .spec.updateStrategy set to RollingUpdate</p> <p>(2) .spec.podManagementPolicy set to OrderedReady</p> <p>(3) .spec.replicas set to 2</p> <p>Is that right? And I assume that when update is happening in reverse order, all traffic to the StatefulSet is served by the pods with lower ordinal number?</p>
<p>Yes, to have zero downtime during a <code>statefulset</code> upgrade, you should have all the points mentioned:</p> <ol> <li><code>.spec.updateStrategy</code> set to <code>RollingUpdate</code></li> <li><code>.spec.podManagementPolicy</code> set to <code>OrderedReady</code> (which is the default)</li> <li><code>.spec.replicas</code> set to a minimum of 2.</li> </ol> <p>Another thing you need to make sure your statefulset doesn't have downtime is a proper <code>readiness</code> probe. The <code>readiness</code> probe tells the kubernetes controller manager that this pod is ready to serve requests and that traffic can start being sent to it.</p> <p>The reason it is so important while doing a zero-downtime upgrade is this: let's say you have two replicas of a statefulset and you start a rolling upgrade without a readiness probe set. Kubernetes will delete the pods in reverse order, bring each one to the running state, mark it as ready and then terminate the next pod. Now if your container process didn't actually come up in that time, there is no pod left to serve requests, because one pod is not completely ready yet and kubernetes has already terminated the other pod for the upgrade, and hence you get downtime and lost requests.</p> <pre><code>readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
</code></pre> <p>EDIT: The following JSON snippet is what I use for rolling updates of statefulsets in my case:</p> <pre><code>"spec": {
  "containers": [
    {
      "name": "md",
      "image": "",
      "imagePullPolicy": "IfNotPresent",
      "command": [
        "/bin/sh",
        "-c"
      ],
      "args": [
        "chmod -R 777 /logs/; /on_start.sh"
      ],
      "readinessProbe": {
        "exec": {
          "command": [
            "cat",
            "/tmp/ready.txt"
          ]
        },
        "failureThreshold": 10,
        "initialDelaySeconds": 5,
        "periodSeconds": 5,
        "successThreshold": 1,
        "timeoutSeconds": 1
      },
      "securityContext": {
        "privileged": true
      }
    }
  ]
}
</code></pre> <p>This is how you can set up a readiness probe in your statefulset containers. I am setting the readiness probe as a <code>linux command</code>; if you have an http probe then it will be different.</p>
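<p>Putting the three settings together, a minimal (illustrative) StatefulSet skeleton for a zero-downtime rolling update could look like this:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2                      # at least 2 so one replica keeps serving
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
</code></pre>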
<p>Folks, I am using Google Cloud Kubernetes Engine. I want to browse through some of the logs that should be available namely kube-controller-manager logs. I am certain I have done this recently on the same setup but I can't figure it out now. So here's the thing:</p> <ol> <li>There's no component anyhow related to <code>kube-controller-manager</code> in the <code>kube-system</code> namespace. I have tried: <code>kubectl get pods -namespace=kube-system</code></li> <li>There's no logs if I am connecting to the VM running k8s node (any of them, I tried all) in <code>/var/log</code> related to <code>kube-controller-manager</code>. Connected to all nodes (VMs) via SSH and tried to browse <code>/var/logs/</code></li> <li>There seem to be only one manifest in <code>/etc/kubernetes/manifests</code> and it's <code>kube-proxy</code> one. I was expecting to have <code>kube-controller-manager</code> and a few others to be in that directory.</li> </ol> <p>Can someone point me to a place where I should be looking? Has this been changed recently on GKE?</p>
<p>The kube-controller-manager runs as a pod on the master and is managed by Google, therefore it is not accessible to the public. I do not believe that has been changed recently if ever.</p>
<p>I have a simple node api service, mongo service and load balancer.</p> <p>I am trying to deploy my application using kubernetes I get the following error when i run the command.</p> <pre><code>- kubectl describe ing -n my-service </code></pre> <blockquote> <p>Warning ERROR 5s (x2 over 8s) aws-alb-ingress-controller error instantiating load balancer: my-service-api service is not of type NodePort and target-type is instance</p> </blockquote> <pre><code>kind: Namespace apiVersion: v1 metadata: name: my-service labels: name: my-service --- #MongoDB apiVersion: v1 kind: Service metadata: name: mongo namespace: my-service labels: run: mongo spec: ports: - port: 27017 targetPort: 27017 protocol: TCP selector: run: mongo --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mongo namespace: my-service spec: template: metadata: labels: run: mongo spec: containers: - name: mongo image: mongo ports: - containerPort: 27017 --- apiVersion: v1 kind: Service metadata: name: my-service-api namespace: my-service labels: app: my-service-api spec: selector: app: my-service-api ports: - port: 3002 protocol: TCP nodePort: 30002 type: LoadBalancer --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-service-api-deployment namespace: my-service spec: replicas: 1 template: metadata: labels: app: my-service-api spec: containers: - name: my-service-api image: &lt;removed&gt; imagePullPolicy: Always env: &lt;removed&gt; ports: - containerPort: 3002 imagePullSecrets: - name: regcred --- #LoadBalancer apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-service-api namespace: my-service annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/tags: Name=my-service-api,Owner=devops,Project=my-service,Stage=development spec: rules: - host: &lt;removed&gt; http: paths: - path: / backend: serviceName: my-service-api servicePort: 3002 </code></pre> <p>Can someone tell me what i am doing wrong here, thank you.</p>
<p>It seems that your <code>my-service-api</code> service type is "LoadBalancer", but you need to use <strong>"NodePort"</strong> in order to use <code>instance</code> for <code>alb.ingress.kubernetes.io/target-type:</code>.</p> <p>Maybe this closed <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/458" rel="nofollow noreferrer">github issue</a> can be useful for you.</p>
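<p>A sketch of the corrected Service, keeping the same ports as in the question and changing only the type:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service-api
  namespace: my-service
  labels:
    app: my-service-api
spec:
  type: NodePort                  # required for target-type: instance
  selector:
    app: my-service-api
  ports:
  - port: 3002
    protocol: TCP
    nodePort: 30002
</code></pre>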
<p>What is the best way to handle the results returned (in this case, json) from a liveness/readiness probe to indicate success or failure?</p> <p>returned json: {"status":"ok","data":[],"count":0}</p> <p>thanks.</p>
<p>It is better if the http status code can be used to indicate health (it can be used in addition to the response body). Then you can use an http probe. I've not seen the response body used in an http probe, and I can't see anything in the API of the httpGet action for parsing the response body (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#httpgetaction-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#httpgetaction-v1-core</a>). So to check the body you probably have to use an exec/command probe instead and perform a curl.</p> <p>So <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command</a> rather than <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request</a></p>
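<p>A hedged sketch of such an exec probe, assuming the endpoint is served on localhost port 8080 at <code>/health</code> inside the container (adjust to your app) and that <code>"status":"ok"</code> in the body indicates health:</p> <pre><code>livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    # fails if the HTTP call fails or the body does not contain "status":"ok"
    # note: curl must be available inside the container image
    - >
      curl -sf http://localhost:8080/health | grep -q '"status":"ok"'
  initialDelaySeconds: 10
  periodSeconds: 15
</code></pre>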
<p>I have a running Gitlab CI pipeline in my Kubernetes cluster.</p> <p>When the tests are failing, I need to grab the app screenshots and logs from the pod where it ran so that they are available where the Gitlab Runner is expecting them.</p> <p>I tried the <code>kubectl cp &lt;namespace&gt;/&lt;podname&gt;:/in-pod-path /local/path</code> to copy the files from a stopped pod (having the <code>tar</code> command installed in my Docker image), but <a href="https://github.com/kubernetes/kubectl/issues/454" rel="nofollow noreferrer">it isn't yet supported</a>.</p> <p>Until this is available, I guess I need a volume mounted in the pod at the path where are saved my artefacts so that I can grab them from this volume after the tests execution is finished.</p> <p>I'm wondering <strong>what kind of volume should I use</strong> knowing that I have 3 kube workers, I don't need that volume to be persistent over time, more to be shared across the nodes?</p> <p>I'm expecting to deploy this volume before deploying the pod running my tests mounting this volume. When tests failure is detected, I would extract the artefacts to the right place and delete the pod and the volume.</p>
<p>You could try and define a PVC with access mode <code>ReadWriteMany</code>, in order to get a volume shared between multiple pods.<br> See "<a href="https://stackoverflow.com/a/52564314/6309">How to share storage between Kubernetes pods?</a>"</p> <p>It would still be a persistent volume (to support that), with all the pods scheduled to the node with that volume.</p> <blockquote> <p>There are several volume types that are suitable for that and not tied to any cloud provider:</p> <ul> <li>NFS</li> <li>RBD (Ceph Block Device)</li> <li>CephFS</li> <li>Glusterfs</li> <li>Portworx Volumes</li> </ul> </blockquote> <p>But:</p> <blockquote> <p>I don't really need to share the volume between many pods, I'm fine to create a volume per pods.<br> I'd like to avoid installing/configuring a node shared volume service from the list you gave.<br> I'm looking for an <strong>ephemeral volume</strong> if that is possible? </p> </blockquote> <p>Then an <strong><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral <em>storage</em></a></strong> is possible:</p> <blockquote> <p>Kubernetes version 1.8 introduces a new resource, ephemeral-storage for managing local ephemeral storage. In each Kubernetes node, kubelet’s root directory (<code>/var/lib/kubelet</code> by default) and log directory (<code>/var/log</code>) are stored on the root partition of the node.<br> This partition is also shared and consumed by Pods via <code>emptyDir</code> volumes, container logs, image layers and container writable layers.</p> </blockquote> <p>In your case, you need a <a href="https://docs.openshift.com/container-platform/3.10/architecture/additional_concepts/ephemeral-storage.html#section-type-runtime" rel="nofollow noreferrer">runtime ephemeral storage</a>.</p>
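<p>For the simple per-pod scratch-space case, an <code>emptyDir</code> volume (optionally with a size limit counted against local ephemeral storage) is often enough. A sketch, with the image name and paths chosen purely for illustration:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-runner
spec:
  containers:
  - name: tests
    image: my-test-image           # illustrative image name
    volumeMounts:
    - name: artifacts
      mountPath: /artifacts        # tests write screenshots/logs here
  volumes:
  - name: artifacts
    emptyDir:
      sizeLimit: 1Gi
</code></pre> <p>Keep in mind an <code>emptyDir</code> lives and dies with the pod, so the artefacts must be copied out (for example by a sidecar, or before the pod is deleted); it does not survive pod deletion the way a PVC does.</p>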
<p>I saw some statements that a ClusterIP cannot be accessed by an external machine outside the cluster.</p> <p>But I am not sure what "cluster" means here: does it mean the cluster of pods or the cluster of nodes?</p> <p>Ultimate question: can a ClusterIP (without a node port) be accessed by other nodes?</p> <p>Thanks</p>
<p>Yes, ClusterIP services can be reached by nodes in the cluster. Part of their purpose is to load-balance across replicas of Pods (which may be on different Nodes) so that traffic isn't all going to particular Pods. See <a href="https://stackoverflow.com/questions/54150887/clarify-ingress-load-balancer">Clarify Ingress load balancer</a>.</p> <p>A ClusterIP service doesn't really live on any single Node but rather on every Node. The kube-proxy on each Node ensures that the load-balancing across instances takes place by updating each Node's iptables.</p>
<p><strong>Problem statement</strong> <br> We want to take a backup of MongoDB running in a k8s cluster in Azure and import it into another MongoDB running in a different k8s cluster. Can anyone provide pointers related to this?</p>
<p>One option is to create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a> with an Azure file share as the persistent volume. In the cronjob you can run a mongodump command.</p> <p>You can also use <a href="https://github.com/stefanprodan/mgob" rel="nofollow noreferrer">MGOB</a>, which can help to configure scheduled backups as well.</p> <p>If you have multiple MongoDB instances on kubernetes, I would recommend trying MGOB; it greatly simplifies the setup.</p> <p>If you prefer a solution you implement yourself, you can choose a Kubernetes CronJob.</p>
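<p>A rough sketch of the CronJob approach (image tag, schedule, connection details and PVC name are all placeholders to adapt):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongodump-backup
spec:
  schedule: "0 2 * * *"            # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mongodump
            image: mongo:3.6
            command:
            - /bin/sh
            - -c
            # dump the whole instance into a dated folder on the backup volume
            - mongodump --host mongo.default.svc.cluster.local --out /backup/$(date +%F)
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: mongo-backup-pvc   # e.g. backed by an Azure File share
</code></pre> <p>The dump can then be restored into the other cluster with <code>mongorestore</code> pointed at the target MongoDB instance.</p>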
<p>I am deploying HA kubernetes master(stacked etcd) with kubeadm ,I followed the instructions on official website : <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/</a><br> four nodes are planned in my cluster for now: </p> <ol> <li>One HAProxy server node used for master loadbalance. </li> <li>three etcd stacked master nodes.</li> </ol> <p>I deployed haproxy with following configuration:</p> <pre><code>global daemon maxconn 256 defaults mode http timeout connect 5000ms timeout client 50000ms timeout server 50000ms frontend haproxy_kube bind *:6443 mode tcp option tcplog timeout client 10800s default_backend masters backend masters mode tcp option tcplog balance leastconn timeout server 10800s server master01 &lt;master01-ip&gt;:6443 check </code></pre> <p>my kubeadm-config.yaml is like this:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration nodeRegistration: name: "master01" --- apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration apiServer: certSANs: - "&lt;haproxyserver-dns&gt;" controlPlaneEndpoint: "&lt;haproxyserver-dns&gt;:6443" networking: serviceSubnet: "172.24.0.0/16" podSubnet: "172.16.0.0/16" </code></pre> <p>my initial command is:</p> <pre><code>kubeadm init --config=kubeadm-config.yaml -v 11 </code></pre> <p>but after I running the command above on the master01, it kept logging the following information:</p> <pre><code>I0122 11:43:44.039849 17489 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0122 11:43:44.041038 17489 local.go:57] [etcd] wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" I0122 11:43:44.041068 17489 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy I0122 11:43:44.042665 17489 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s I0122 11:43:44.044971 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' I0122 11:43:44.120973 17489 round_trippers.go:438] GET https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s in 75 milliseconds I0122 11:43:44.120988 17489 round_trippers.go:444] Response Headers: I0122 11:43:44.621201 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' I0122 11:43:44.703556 17489 round_trippers.go:438] GET https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s in 82 milliseconds I0122 11:43:44.703577 17489 round_trippers.go:444] Response Headers: I0122 11:43:45.121311 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' I0122 11:43:45.200493 17489 round_trippers.go:438] GET https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s in 79 milliseconds I0122 11:43:45.200514 17489 round_trippers.go:444] Response Headers: I0122 11:43:45.621338 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' I0122 11:43:45.698633 17489 round_trippers.go:438] GET https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s in 77 milliseconds I0122 11:43:45.698652 17489 round_trippers.go:444] Response Headers: I0122 11:43:46.121323 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' I0122 11:43:46.199641 17489 round_trippers.go:438] GET https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s in 78 milliseconds I0122 11:43:46.199660 17489 round_trippers.go:444] Response Headers: </code></pre> <p>after quitting the loop with Ctrl-C, I run the curl command mannually, but every thing seems ok:</p> <pre><code>curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://&lt;haproxyserver-dns&gt;:6443/healthz?timeout=32s' * About to connect() to &lt;haproxyserver-dns&gt; port 6443 (#0) * Trying &lt;haproxyserver-ip&gt;... 
* Connected to &lt;haproxyserver-dns&gt; (10.135.64.223) port 6443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * NSS: client certificate not found (nickname not specified) * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=kube-apiserver * start date: Jan 22 03:43:38 2019 GMT * expire date: Jan 22 03:43:38 2020 GMT * common name: kube-apiserver * issuer: CN=kubernetes &gt; GET /healthz?timeout=32s HTTP/1.1 &gt; Host: &lt;haproxyserver-dns&gt;:6443 &gt; Accept: application/json, */* &gt; User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab &gt; &lt; HTTP/1.1 200 OK &lt; Date: Tue, 22 Jan 2019 04:09:03 GMT &lt; Content-Length: 2 &lt; Content-Type: text/plain; charset=utf-8 &lt; * Connection #0 to host &lt;haproxyserver-dns&gt; left intact ok </code></pre> <p>I don't know how to find out the essential cause of this issue, hoping someone who know about this can give me some suggestion. Thanks!</p>
<p>After several days of searching and trying, I was able to solve this problem myself. In fact, the problem came from a fairly rare situation:</p> <blockquote> <p>I had set a proxy on the master node in both <code>/etc/profile</code> and <code>docker.service.d</code>, which prevented the requests to haproxy from working properly.</p> </blockquote> <p>I don't know exactly which setting caused the problem, but after adding a no-proxy rule, the problem was solved and kubeadm successfully initialized a master behind the haproxy load balancer. Here are my proxy settings:</p> <p>/etc/profile: </p> <pre><code>...
export http_proxy=http://&lt;my-proxy-server-dns:port&gt;/
export no_proxy=&lt;my-k8s-master-loadbalance-server-dns&gt;,&lt;my-proxy-server-dns&gt;,localhost
</code></pre> <p>/etc/systemd/system/docker.service.d/http-proxy.conf:</p> <pre><code>[Service]
Environment="HTTP_PROXY=http://&lt;my-proxy-server-dns:port&gt;/" "NO_PROXY=&lt;my-k8s-master-loadbalance-server-dns&gt;,&lt;my-proxy-server-dns&gt;,localhost, 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16"
</code></pre>
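<p>Worth adding (standard systemd/docker steps, not specific to this setup): after changing the drop-in file, the systemd daemon has to be reloaded and docker restarted for the new NO_PROXY value to take effect, and you can verify what docker actually sees:</p> <pre><code>systemctl daemon-reload
systemctl restart docker

# check the environment the docker unit is running with
systemctl show --property=Environment docker
</code></pre>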
<p>We have Kubernetes setup hosted on premises and are trying to allow clients outside of K8s to connect to services hosted in the K8s cluster.</p> <p>In order to make this work using HA Proxy (which runs outside K8s), we have the HAProxy backend configuration as follows -</p> <pre><code> backend vault-backend ... ... server k8s-worker-1 worker1:32200 check server k8s-worker-2 worker2:32200 check server k8s-worker-3 worker3:32200 check </code></pre> <p>Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).</p> <p>We came across the HAProxy Ingress Controller (<a href="https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/" rel="nofollow noreferrer">https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/</a>) which sounds promising, but (we feel) effectively adds another HAProxy layer to the mix..and thus, adds another failure point.</p> <p>Is there a better solution to implement this requirement?</p>
<blockquote> <p>Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).</p> </blockquote> <p>You can explicitly configure the NodePort for your Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> so it doesn't pick a random port and you always use the same port on your external HAProxy:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: &lt;my-nodeport-service&gt; labels: &lt;my-label-key&gt;: &lt;my-label-value&gt; spec: selector: &lt;my-selector-key&gt;: &lt;my-selector-value&gt; type: NodePort ports: - port: &lt;service-port&gt; nodePort: 32200 </code></pre> <blockquote> <p>We came across the HAProxy Ingress Controller (<a href="https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/" rel="nofollow noreferrer">https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/</a>) which sounds promising, but (we feel) effectively adds another HAProxy layer to the mix..and thus, adds another failure point.</p> </blockquote> <p>You could run the HAProxy ingress inside the cluster and remove the HAproxy outside the cluster, but this really depends on what type of service you are running. The Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> is Layer 7 resource, for example. The DR here would be handled by having multiple replicas of your HAProxy ingress controller.</p>
<p>I want to setup web app using three components that i already have:</p> <ol> <li>Domain name registered on domains.google.com</li> <li>Frontend web app hosted on Firebase Hosting and served from <code>example.com</code></li> <li>Backend on Kubernetes cluster behind Load Balancer with external static IP <code>1.2.3.4</code></li> </ol> <p>I want to serve the backend from <code>example.com/api</code> or <code>api.example.com</code></p> <p>My best guess is to use Cloud DNS to connect IP adress and subdomain (or URL)</p> <ul> <li><code>1.2.3.4</code> -> <code>api.exmple.com</code></li> <li><code>1.2.3.4</code> -> <code>example.com/api</code></li> </ul> <p>The problem is that Cloud DNS uses custom name servers, like this:</p> <pre><code>ns-cloud-d1.googledomains.com </code></pre> <p>So if I set Google default name servers I can reach Firebase hosting only, and if I use custom name servers I can reach only Kubernetes backend.</p> <p>What is a proper way to be able to reach both api.example.com and example.com?</p> <p>edit: As a temporary workaround i'm combining two default name servers and two custom name servers from cloud DNS, like this:</p> <ul> <li><code>ns-cloud-d1.googledomains.com</code> (custom)</li> <li><code>ns-cloud-d2.googledomains.com</code> (custom)</li> <li><code>ns-cloud-b1.googledomains.com</code> (default)</li> <li><code>ns-cloud-b2.googledomains.com</code> (default)</li> </ul> <p>But if someone knows the proper way to do it - please post the answer.</p>
<p><strong>Approach 1:</strong></p> <pre><code>example.com --&gt; Firebase Hosting (A record) api.example.com --&gt; Kubernetes backend </code></pre> <p>Pro: Super-simple</p> <p>Con: CORS request needed by browser before API calls can be made.</p> <p><strong>Approach 2:</strong></p> <pre><code>example.com --&gt; Firebase Hosting via k8s ExternalName service example.com/api --&gt; Kubernetes backend </code></pre> <p>Unfortunately from my own efforts to make this work with service <code>type: ExternalName</code> all I could manage is to get infinitely redirected, something which I am still unable to debug.</p> <p><strong>Approach 3:</strong></p> <pre><code>example.com --&gt; Google Cloud Storage via NGINX proxy to redirect paths to index.html example.com/api --&gt; Kubernetes backend </code></pre> <p>You will need to deploy the static files to Cloud Storage, with an NGINX proxy in front if you want SPA-like redirection to index.html for all routes. This approach does not use Firebase Hosting altogether.</p> <p>The complication lies in the /api redirect which depends on which Ingress you are using.</p> <p>Hope that helps.</p>
<p>Everything worked fine when I ran it on Docker, but after I migrated it to Kubernetes it stopped connecting to the DB. It says:</p> <pre><code>pymongo.errors.ServerSelectionTimeoutError pymongo.errors.ServerSelectionTimeoutError: connection closed </code></pre> <p>whenever I try to access a page that uses the DB.</p> <p>I connect like this:</p> <pre><code>app.config['MONGO_DBNAME'] = 'pymongo_db' app.config['MONGO_URI'] = 'mongodb://fakeuser:[email protected]:63984/pymongo_db' </code></pre> <p>Any way to get it to connect?</p> <p>Edit:</p> <p>I think it has more so to do with the Istio sidecars as when deployed on Kubernetes minus Istio, it runs normally. The issue only appears when running Istio.</p>
<p>Most likely Istio (the Envoy sidecar) is controlling egress traffic. You can check if you have any <code>ServiceEntry</code> and <code>VirtuaService</code> in your cluster for your specific application:</p> <pre><code>$ kubectl -n &lt;your-namespace&gt; get serviceentry $ kubectl -n &lt;your-namespace&gt; get virtualservice </code></pre> <p>If they exist, check if they are allowing traffic to <code>ds1336984.mlab.com</code>. If they don't exist you will have to <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">create</a> them.</p>
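<p>A sketch of such a <code>ServiceEntry</code> for the mlab host from the error (the port is taken from the connection string in the question; adjust if yours differs):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-mlab-mongo
spec:
  hosts:
  - ds1336984.mlab.com
  location: MESH_EXTERNAL
  ports:
  - number: 63984
    name: mongo
    protocol: TCP
  resolution: DNS
</code></pre> <p>Also, since the install used <code>--set global.proxy.includeIPRanges=10.0.0.0/8</code>, double-check that your pod was injected after that setting was applied; the range is baked into the sidecar at pod creation time, and when it is in effect, traffic to external IPs should bypass Envoy altogether.</p>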
<p>I have created a Kubernetes Service of type ExternalName. I understand this service acts as a proxy and redirects requests to an external service sitting outside the cluster. I am able to create the service but not able to curl it, i.e. I get a 500 error. I want to understand how this ExternalName Kubernetes service works.</p>
<p>Services with type <code>ExternalName</code> work like other regular services, but when you access the service name, instead of returning the cluster IP of the service, DNS returns a CNAME record with the value set in the <code>externalName:</code> field of the service.</p> <p>Here is the example from the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="noreferrer">Kubernetes Documentation</a>:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my.database.example.com
</code></pre> <p>When you do <code>curl -v http://my-service</code> or <code>curl -v http://my-service.default.svc.cluster.local</code>, according to your namespace (in this example it is default), you will be redirected at the DNS level to <code>http://my.database.example.com</code>.</p> <p>I hope this was useful.</p>
<p>I'm trying to write a controller and I'm having a few issues writing tests. </p> <p>I've used some code from the k8s HPA in my controller and I'm seeing something weird when using the <code>testrestmapper</code>.</p> <p>basically when running this <a href="https://github.com/kubernetes/kubernetes/blob/c6ebd126a77e75e6f80e1cd59da6b887e783c7c4/pkg/controller/podautoscaler/horizontal_test.go#L852" rel="nofollow noreferrer">test</a> with a breakpoint <a href="https://github.com/kubernetes/kubernetes/blob/7498c14218403c9a713f9e0747f2c6794a0da9c7/pkg/controller/podautoscaler/horizontal.go#L512" rel="nofollow noreferrer">here</a> I see the mappings are returned. </p> <p>When I do the same the mappings are not returned. </p> <p>What magic is happening here?</p> <p>The following test fails</p> <pre class="lang-golang prettyprint-override"><code>package main import ( "github.com/stretchr/testify/assert" "k8s.io/apimachinery/pkg/api/meta/testrestmapper" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/kubernetes/pkg/api/legacyscheme" "testing" ) func TestT(t *testing.T) { mapper := testrestmapper.TestOnlyStaticRESTMapper(legacyscheme.Scheme) gk := schema.FromAPIVersionAndKind("apps/v1", "Deployment").GroupKind() mapping, err := mapper.RESTMapping(gk) assert.NoError(t, err) assert.NotNil(t, mapping) } </code></pre>
<p>I think this is because you are missing an import of <code>_ "k8s.io/kubernetes/pkg/apis/apps/install"</code>.</p> <p>Without importing this path, there are no API groups or versions registered with the <code>schema</code> you are using to obtain the REST mapping.</p> <p>By importing the path, the API group will be registered, allowing the call to <code>schema.FromAPIVersionAndKind("apps/v1", "Deployment").GroupKind()</code> to return a valid GroupKind.</p>
<p>I have been trying Spark 2.4 deployment on k8s and want to establish a secured RPC communication channel between driver and executors. Was using the following configuration parameters as part of <code>spark-submit</code></p> <pre><code>spark.authenticate true spark.authenticate.secret good spark.network.crypto.enabled true spark.network.crypto.keyFactoryAlgorithm PBKDF2WithHmacSHA1 spark.network.crypto.saslFallback false </code></pre> <p>The driver and executors were not able to communicate on a secured channel and were throwing the following errors.</p> <pre><code>Exception in thread "main" java.lang.reflect.UndeclaredThrowableException at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713) at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:281) at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala) Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101) at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201) at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:65) at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) ... 4 more Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Unknown challenge message. at org.apache.spark.network.crypto.AuthRpcHandler.receive(AuthRpcHandler.java:109) at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:181) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:103) at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) </code></pre> <p>Can someone guide me on this?</p>
<p>Disclaimer: I do not have a very deep understanding of spark implementation, so, be careful when using the workaround described below.</p> <p>AFAIK, spark does not have support for auth/encryption for k8s in 2.4.0 version. </p> <p>There is a ticket, which is already fixed and likely will be released in a next spark version: <a href="https://issues.apache.org/jira/browse/SPARK-26239" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/SPARK-26239</a></p> <p>The problem is that spark executors try to open connection to a driver, and a configuration will be sent only using this connection. Although, an executor creates the connection with default config AND system properties started with "spark.". For reference, here is the place where executor opens the connection: <a href="https://github.com/apache/spark/blob/5fa4384/core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala#L201" rel="nofollow noreferrer">https://github.com/apache/spark/blob/5fa4384/core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala#L201</a></p> <p>Theoretically, if you would set <code>spark.executor.extraJavaOptions=-Dspark.authenticate=true -Dspark.network.crypto.enabled=true ...</code>, it should help, although driver checks that there are no spark parameters set in <code>extraJavaOptions</code>.</p> <p>Although, there is a workaround (a little bit hacky): you can set <code>spark.executorEnv.JAVA_TOOL_OPTIONS=-Dspark.authenticate=true -Dspark.network.crypto.enabled=true ...</code>. Spark does not check this parameter, but JVM uses this env variable to add this parameter to properties.</p> <p>Also, instead of using <code>JAVA_TOOL_OPTIONS</code> to pass secret, I would recommend to use <code>spark.executorEnv._SPARK_AUTH_SECRET=&lt;secret&gt;</code>.</p>
<p>I have this exception while changing the parent of an entity (@OneToMany relationship).</p> <blockquote> <p>Entity parent update - org.hibernate.HibernateException: identifier of an instance of {Entity} was altered from 1 to 2</p> </blockquote> <p>This exception occurs and can be reproduced only for the service running in Kubernetes, and only after some time. I mean that it isn't reproduced from the very beginning of the container's life, and some number of updates complete successfully.</p> <p>The method that does the update on the entities looks like this: </p> <pre><code>@Transactional
@Override
public Optional&lt;EntityT&gt; update(EntityT entity) {
    entity.setIsConfirmed(true);
    return getRepository().findById(entity.getId())
            .map(entityToUpdate -&gt; updateEntity(entity, entityToUpdate));
}

private EntityT updateEntity(EntityT entity, EntityT entityToUpdate) {
    modelMapper.map(entity, entityToUpdate);
    getParentRepository().ifPresent(parentRepository -&gt;
            entity.getParent().ifPresent(parentEntity -&gt;
                    parentRepository.findById(parentEntity.getId()).ifPresent(entityToUpdate::setParent))
    );
    entityToUpdate.setVersionTs(getCurrentTime());
    return getRepository().save(entityToUpdate);
}
</code></pre> <p>Spring Boot version 2.1.2, Hibernate 5.3.7; I also tried 5.4.1 with the same result.</p> <p>I also set the Spring JPA properties to:</p> <pre><code>spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
    generate-ddl: true
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        jdbc:
          batch_size: 100
        flushMode: "ALWAYS"
        order_inserts: true
        order_updates: true
</code></pre> <p>I also tried different images for the container: open-jdk8 / oracle-jdk8.</p> <p>Could anybody advise a solution?</p> <p>Thanks in advance.</p>
<p>The main issue was in the mapper. Instead of replacing the parent, it only changed the id of the already-fetched parent. We then replace the parent, but the fetched parent remains in the cache (with the new id), and Hibernate after some time tries to flush these changes to the DB.</p>
<p>I want to describe my services in kubernetes template files. Is it possible to parameterise values like the number of <code>replicas</code>, so that I can set this at deploy time?</p> <p>The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.</p> <p>I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.</p>
<h1>Helm</h1> <p>Helm is becoming the standard for templatizing kubernetes deployments. A helm chart is a directory consisting of yaml files with golang variable placeholders</p> <pre><code>--- kind: Deployment metadata: name: foo spec: replicas: {{ .Values.replicaCount }} </code></pre> <p>You define the default value of a 'value' in the 'values.yaml' file</p> <pre><code>replicaCount: 1 </code></pre> <p>You can optionally overwrite the value using the <code>--set</code> command line</p> <pre><code>helm install foo --set replicaCount=42 </code></pre> <p>Helm can also point to an external answer file</p> <pre><code>helm install foo -f ./dev.yaml helm install foo -f ./prod.yaml </code></pre> <p>dev.yaml</p> <pre><code>--- replicaCount: 1 </code></pre> <p>prod.yaml</p> <pre><code>--- replicaCount: 42 </code></pre> <p>Another advantage of <code>Helm</code> over simpler solutions like <code>envbsubst</code> is that <code>Helm</code> supports plugins. One powerful plugin is the <code>helm-secrets</code> plugin that lets you encrypt sensitive data using pgp keys. <a href="https://github.com/futuresimple/helm-secrets" rel="noreferrer">https://github.com/futuresimple/helm-secrets</a></p> <p>If using <code>helm</code> + <code>helm-secrets</code> your setup may look like the following where your code is in one repo and your data is in another.</p> <p>git repo with helm charts</p> <pre><code>stable |__mysql |__Values.yaml |__Charts |__apache |__Values.yaml |__Charts incubator |__mysql |__Values.yaml |__Charts |__apache |__Values.yaml |__Charts </code></pre> <p>Then in another git repo that contains the environment specific data</p> <pre><code>values |__ mysql |__dev |__values.yaml |__secrets.yaml |__prod |__values.yaml |__secrets.yaml </code></pre> <p>You then have a wrapper script that references the values and the secrets files</p> <pre><code>helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml </code></pre> <hr /> <h1>envsubst</h1> <p>As mentioned in other answers, <code>envsubst</code> is a very powerful yet simple way to make your own templates. An example from <a href="https://github.com/kubernetes/kubernetes/issues/52787#issuecomment-355694085" rel="noreferrer">kiminehart</a></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment # ... architecture: ${GOOS} </code></pre> <pre><code>GOOS=amd64 envsubst &lt; mytemplate.tmpl &gt; mydeployment.yaml </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment # ... architecture: amd64 </code></pre> <hr /> <h1>Kubectl</h1> <p>There is a <a href="https://github.com/kubernetes/kubernetes/issues/52787" rel="noreferrer">feature request</a> to allow <code>kubectl</code> to do some of the same features of helm and allow for variable substitution. There is a <a href="https://docs.google.com/document/d/1cLPGweVEYrVqQvBLJg6sxV-TrE5Rm2MNOBA_cxZP2WU" rel="noreferrer">background document</a> that strongly suggest that the feature will never be added, and instead is up to external tools like <code>Helm</code> and <code>envsubst</code> to manage templating.</p> <hr /> <p>(edit)</p> <h1>Kustomize</h1> <p><a href="https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/" rel="noreferrer">Kustomize</a> is a new project developed by google that is very similar to helm. Basically you have 2 folders <code>base</code> and <code>overlays</code>. 
You then run <code>kustomize build someapp/overlays/production</code> and it will generate the yaml for that environment.</p> <pre><code> someapp/ ├── base/ │ ├── kustomization.yaml │ ├── deployment.yaml │ ├── configMap.yaml │ └── service.yaml └── overlays/ ├── production/ │ └── kustomization.yaml │ ├── replica_count.yaml └── staging/ ├── kustomization.yaml └── cpu_count.yaml </code></pre> <p>It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine <code>kustomize</code> with <a href="https://github.com/mozilla/sops" rel="noreferrer">sops</a> or <code>envsubst</code> to manage secrets.</p> <p><a href="https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/" rel="noreferrer">https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/</a></p>
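<p>To make the kustomize layout above concrete, the overlay's <code>kustomization.yaml</code> is roughly the following (file names follow the tree above, and the Deployment name matches the earlier <code>foo</code> example):</p> <pre><code># someapp/overlays/production/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replica_count.yaml
</code></pre> <p>with the patch itself containing only the fields to override:</p> <pre><code># someapp/overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 42
</code></pre>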
<p>I am trying to stand up my <em>kubeadm</em> cluster with <strong>three</strong> masters. I get this problem from my <em>init</em> command...</p> <pre><code>[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
</code></pre> <p>But I am not using cgroupfs; I am using <strong>systemd</strong>. And my kubelet complains about not knowing its node name.</p> <pre><code>Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.251885    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.352932    5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.453895    5620 kubelet.go:2266] node "master01" not found
</code></pre> <p>Please let me know where the issue is.</p>
<p>The issue can be caused by the docker version, as only docker versions up to 18.06 are validated for the latest kubernetes version, i.e. v1.13.x.</p> <p>Actually I also hit the same issue, and it got resolved after downgrading the docker version from 18.09 to 18.06.</p> <p><a href="https://i.stack.imgur.com/UYIMC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYIMC.png" alt=""></a></p>
<p>I install Istio on GKE and run the application.</p> <p>Although there is no problem when accessing with curl, Ingressgateway returns a status code different from the status code of Pod's proxy by some image request when accessing from the browser. Specifically, 200 and 302 etc. are returned as 500 or 504. Resources to become 500 or 504 differ every time, but it is 1 or 2 out of about 100 image requests. And if you request another 500 or 504 request again, the correct response will come back without problems.</p> <p>Do you know what is causing this kind of reason?</p> <p>The environment is like this.<br> GKE 1.10.11-gke.1<br> Istio 1.0.4 </p> <pre><code>helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set tracing.enabled=true --set kiali.enabled=true --set global.proxy.includeIPRanges="10.0.0.0/8" </code></pre> <p>Below is the log obtained from Stackdriver Logging. </p> <p>Ingressgateway log.</p> <pre><code>"[2019-01-22T09:16:17.048Z] \"GET /my/app/image.pngHTTP/2\" 504 UT 0 24 60001 - \"xxx.xxx.xxx.xxx\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36\" \"c0abe3be-1153-45c5-bd8e-067ab597feb4\" \"my.app.com\" \"10.128.0.116:80\" outbound|80|ga|myapp.default.svc.cluster.local - 10.128.0.16:443 xxx.xxx.xxx.xxx:62257\n" </code></pre> <p>Application Pod's istio-proxy log.</p> <pre><code>"[2019-01-22T09:16:17.048Z] \"GET /my/spp/images.pngHTTP/1.1\" 200 - 0 3113 0 0 \"xxx.xxx.xxx.xxx\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36\" \"c0abe3be-1153-45c5-bd8e-067ab597feb4\" \"my.app.com\" \"127.0.0.1:80\" inbound|80||myapp.default.svc.cluster.local - 10.128.0.116:80 xxx.xxx.xxx.xxx:0\n" </code></pre> <p>nginx log.</p> <pre><code>{ "uri": "/my/app/image.png", "host": "my.app.com", "requestTime": "0.000", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36", "xForwardedProto": "https", "user": "", "protocol": "HTTP/1.1", "bodyByteSize": "3113", "method": "GET", "remoteAddress": "127.0.0.1", "upstreamResponseTime": "", "request": "GET /my/app/images.png HTTP/1.1", "referrer": "https://my.app.com/", "status": "200", "xForwardedFor": "xxx.xxx.xxx.xxx" } </code></pre> <p>Looking at this log I think that Ingressgateway is dropping the response from Pod.</p>
<p><code>UT</code> in the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log#config-access-log" rel="nofollow noreferrer">proxy's log</a> means that a timeout occurred:</p> <blockquote> <p>UT: Upstream request timeout in addition to 504 response code.</p> </blockquote> <p>Try to increase connection timeout by specifying <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#ConnectionPoolSettings" rel="nofollow noreferrer">Connection Pool Settings</a> in a <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#DestinationRule" rel="nofollow noreferrer">Destination Rule</a>:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: myapp namespace: default spec: host: myapp.default.svc.cluster.local trafficPolicy: connectionPool: tcp: connectTimeout: 10s </code></pre>
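<p>If the slow responses are legitimate and the limit is actually being hit at the route level, the per-route timeout can also be raised in a <code>VirtualService</code>; this is only a sketch, and the host and gateway names are placeholders to adapt:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
  namespace: default
spec:
  hosts:
  - my.app.com
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: myapp.default.svc.cluster.local
        port:
          number: 80
    timeout: 120s     # raise the route timeout for slow image responses
</code></pre>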
<p>I created a Kubernetes cluster a few days ago with 1 Master and 1 worker Node. Now I want to add another node to the cluster, but the token printed by the original "kubeadm init" on the master has expired (by default after 24 hours).</p> <p>The "kubeadm join" command have a "--discovery-file". It takes a config file and I have tried with the format I found here:</p> <p><a href="https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md" rel="noreferrer">https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md</a></p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: &lt;really long certificate data&gt; server: https://10.138.0.2:6443 name: "" contexts: [] current-context: "" kind: Config preferences: {} users: [] </code></pre> <p>I copied the corresponding data from my working kubectl config file and created a local file "a.config".</p> <p>But, when I try the command "sudo kubeadm join --discovery-file a.conf" it fails with the following error messages:</p> <pre><code>[discovery: Invalid value: "": token [""] was not of form ["^([a-z0-9]{6})\\.([a-z0-9]{16})$"], discovery: Invalid value: "": token must be of form '[a-z0-9]{6}.[a-z0-9]{16}'] </code></pre> <p>What am I missing here?</p> <p>What is a procedure know to work in my situation? I prefer not to tear down the cluster and re-join it again.</p>
<p>The easiest way I know to join new nodes to an existing cluster is</p> <pre><code>kubeadm token create --print-join-command
</code></pre> <p>This will give output like this:</p> <pre><code>kubeadm join 192.168.10.15:6443 --token l946pz.6fv0XXXXX8zry --discovery-token-ca-cert-hash sha256:e1e6XXXXXXXXXXXX9ff2aa46bf003419e8b508686af8597XXXXXXXXXXXXXXXXXXX
</code></pre>
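<p>Two related commands that are often useful here (both standard kubeadm/openssl tooling): creating a token that does not expire, and recomputing the CA certificate hash on the master if you only have the other half of the join command:</p> <pre><code># token that never expires (use with care)
kubeadm token create --ttl 0 --print-join-command

# recompute the discovery-token-ca-cert-hash on the master
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2&gt;/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
</code></pre>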
<p>I want to use Helm with Gitlab to deploy my services to OpenShift.</p> <p>I have a Gitlab Runner deployed in OpenShift. </p> <p>I already have Tiller installed in Openshift under the <code>tiller</code> namespace and am using the docker image <code>docker.greater.com.au/platform/images/dtzar/helm-kubectl:latest</code></p> <p>My system is also behind a proxy which I won't be able to get past.</p> <p>As part of one of my Gitlab CI build steps I have the following:</p> <pre><code>$ helm init --client-only Creating /root/.helm Creating /root/.helm/repository Creating /root/.helm/repository/cache Creating /root/.helm/repository/local Creating /root/.helm/plugins Creating /root/.helm/starters Creating /root/.helm/cache/archive Creating /root/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: Proxy Authorization Required </code></pre> <p>My main question is I am wondering if it's possible to disable Helm from trying to add <code>https://kubernetes-charts.storage.googleapis.com</code> as a repostiory as part of <code>helm init</code>?</p> <p>It might be worth noting that I do not know if helm init --client-only is a required step in using helm with this setup.</p> <p>I have also tried a simple <code>helm version</code> and the server is responding with a Proxy Authorization Required error.</p> <pre><code>Client: &amp;version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"} Error: Get https://---.---.---.---:---/api/v1/namespaces/tiller/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: Proxy Authorization Required </code></pre> <p>I've removed the IP address but it's trying to resolve the Tiller server from the wrong IP address when running this <code>helm version</code> command.</p>
<p>You can define which stable repository you would like to use with the <code>--stable-repo-url url</code> option.</p> <p>Example: <code>helm init --client-only --stable-repo-url https://path.to.my.repo</code></p> <p>You can find more info <a href="https://docs.helm.sh/helm/#helm-init" rel="noreferrer">here</a></p>
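<p>If you simply want <code>helm init</code> to avoid contacting any chart repository at all (useful behind a proxy), Helm 2 also has a <code>--skip-refresh</code> flag that skips downloading the repository index during initialization. A minimal sketch, assuming you do not need the stable repo at all:</p> <pre><code># initialize the local helm client without refreshing any repo index
helm init --client-only --skip-refresh

# optionally remove the stable repo entry afterwards so nothing points to the
# unreachable URL
helm repo remove stable
</code></pre>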
<p>I need to create a Multibranch Jenkins job to deploy a .war file in Tomcat that should run on Kubernetes. Basically, I need the following:</p> <ol> <li>A way to install Tomcat on Kubernetes platform. </li> <li>Deploy my war file on this newly installed Tomcat.</li> </ol> <p>I need to make use of <code>Dockerfile</code> to make this happen.</p> <p>PS: I am very new to Kubernetes and Docker stuff and need basic details as well. I tried finding tutorials but couldn't get any satisfactory article. </p> <p>Any help will be highly highly appreciated. </p>
<h2>Docker part</h2> <p>You can use the <a href="https://hub.docker.com/_/tomcat" rel="noreferrer">official Tomcat docker image</a>.</p> <p>In your <code>Dockerfile</code> just copy your war file into the <code>/usr/local/tomcat/webapps/</code> directory:</p> <pre><code>FROM tomcat
COPY app.war /usr/local/tomcat/webapps/
</code></pre> <p>Build it:</p> <p><code>docker build --no-cache -t &lt;REGISTRY&gt;/&lt;IMAGE&gt;:&lt;TAG&gt; .</code></p> <p>Once your image is built, push it into a Docker registry of your choice.</p> <p><code>docker push &lt;REGISTRY&gt;/&lt;IMAGE&gt;:&lt;TAG&gt;</code></p> <h2>Kubernetes part</h2> <p>1) Here is a simple Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> for your Tomcat image:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: &lt;REGISTRY&gt;/&lt;IMAGE&gt;:&lt;TAG&gt;
        ports:
        - containerPort: 8080
</code></pre> <p>This Deployment definition will create a pod based on your Tomcat image. </p> <p>Put it in a yml file and execute <code>kubectl create -f yourfile.yml</code> to create it.</p> <p>2) Create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Service</a>:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
spec:
  selector:
    app: tomcat
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
</code></pre> <p>You can now access your pod inside the cluster with <a href="http://tomcat-service.your-namespace/app" rel="noreferrer">http://tomcat-service.your-namespace/app</a> (because your war is called <code>app.war</code>).</p> <p>3) If you have an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="noreferrer">Ingress controller</a>, you can create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="noreferrer">Ingress resource</a> to expose the application outside the cluster:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: tomcat-service
          servicePort: 80
</code></pre> <p>Now access the application using <a href="http://ingress-controller-ip/app" rel="noreferrer">http://ingress-controller-ip/app</a></p>
<p>I was setting up a laravel and socketcluster install on kubernetes and when try to add redis to laravel got an error about an env variable that i doesn't have defined, so when i print all the env variables in any container i get all the variables from others services like this:</p> <pre><code>SCC_STATE_PORT_7777_TCP_PORT=7777 KUBERNETES_SERVICE_PORT=443 PHP_PORT_9000_TCP_ADDR=10.35.246.141 SOCKETCLUSTER_SERVICE_PORT=8000 RDB_SERVICE_PORT_DB=28015 REDIS_SERVICE_PORT=6379 SCC_BROKER_PORT_8888_TCP_PROTO=tcp MARIADB_PORT_3306_TCP=tcp://10.35.247.244:3306 KUBERNETES_PORT_443_TCP_PORT=443 RDB_SERVICE_PORT_WEB=8080 RDB_PORT=tcp://10.35.250.91:28015 RDB_PORT_28015_TCP=tcp://10.35.250.91:28015 KUBERNETES_SERVICE_HOST=10.35.240.1 NGINX_PORT_80_TCP_PORT=80 PHP_SERVICE_PORT=9000 RDB_SERVICE_PORT=28015 RDB_PORT_8080_TCP_ADDR=10.35.250.91 SCC_STATE_PORT_7777_TCP_ADDR=10.35.254.120 SOCKETCLUSTER_PORT=tcp://10.35.244.112:8000 RDB_PORT_28015_TCP_ADDR=10.35.250.91 PHP_PORT=tcp://10.35.246.141:9000 PHP_PORT_9000_TCP=tcp://10.35.246.141:9000 RDB_PORT_28015_TCP_PROTO=tcp REDIS_PORT_6379_TCP_ADDR=10.35.254.59 MARIADB_PORT_3306_TCP_PORT=3306 SCC_STATE_PORT_7777_TCP_PROTO=tcp MARIADB_SERVICE_PORT=3306 PHP_SERVICE_HOST=10.35.246.141 PHP_PORT_9000_TCP_PROTO=tcp RDB_PORT_8080_TCP=tcp://10.35.250.91:8080 RDB_PORT_8080_TCP_PROTO=tcp REDIS_PORT_6379_TCP_PROTO=tcp MARIADB_PORT_3306_TCP_ADDR=10.35.247.244 KUBERNETES_PORT_443_TCP_ADDR=10.35.240.1 NGINX_PORT_80_TCP_ADDR=10.35.247.125 REDIS_SERVICE_HOST=10.35.254.59 SCC_BROKER_SERVICE_HOST=10.35.243.129 SCC_STATE_PORT_7777_TCP=tcp://10.35.254.120:7777 NGINX_PORT=tcp://10.35.247.125:80 SOCKETCLUSTER_PORT_8000_TCP_PROTO=tcp SCC_STATE_SERVICE_PORT=7777 SCC_STATE_PORT=tcp://10.35.254.120:7777 NGINX_PORT_80_TCP_PROTO=tcp SOCKETCLUSTER_PORT_8000_TCP=tcp://10.35.244.112:8000 RDB_SERVICE_HOST=10.35.250.91 NGINX_SERVICE_PORT_DB=80 MARIADB_PORT_3306_TCP_PROTO=tcp PHP_PORT_9000_TCP_PORT=9000 SOCKETCLUSTER_PORT_8000_TCP_PORT=8000 SOCKETCLUSTER_PORT_8000_TCP_ADDR=10.35.244.112 REDIS_PORT_6379_TCP=tcp://10.35.254.59:6379 NGINX_PORT_80_TCP=tcp://10.35.247.125:80 SCC_BROKER_PORT_8888_TCP=tcp://10.35.243.129:8888 KUBERNETES_PORT=tcp://10.35.240.1:443 NGINX_SERVICE_PORT=80 RDB_PORT_28015_TCP_PORT=28015 RDB_PORT_8080_TCP_PORT=8080 SCC_BROKER_SERVICE_PORT=8888 SCC_STATE_SERVICE_HOST=10.35.254.120 MARIADB_SERVICE_HOST=10.35.247.244 KUBERNETES_SERVICE_PORT_HTTPS=443 REDIS_PORT=tcp://10.35.254.59:6379 REDIS_PORT_6379_TCP_PORT=6379 SCC_BROKER_PORT=tcp://10.35.243.129:8888 NGINX_SERVICE_HOST=10.35.247.125 SCC_BROKER_PORT_8888_TCP_PORT=8888 MARIADB_PORT=tcp://10.35.247.244:3306 KUBERNETES_PORT_443_TCP_PROTO=tcp SOCKETCLUSTER_SERVICE_HOST=10.35.244.112 SCC_BROKER_PORT_8888_TCP_ADDR=10.35.243.129 KUBERNETES_PORT_443_TCP=tcp://10.35.240.1:443 </code></pre> <p>when i have this deployments: <img src="https://i.ibb.co/yXbCc77/Captura.png" alt="gke"></p> <p>Any idea if this is a feature, a miss config or what? UPDATE: or if i could disable it?</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">It's a feature.</a> For every Service in the same namespace, you get <code>OTHERSVC_SERVICE_HOST</code> and <code>OTHERSVC_SERVICE_PORT</code> environment variables, plus some others that come from the legacy Docker links feature. I don't know of any way to turn these off.</p> <p>Actually using these is problematic in practice, because it depends on the consuming pod starting after the producing service is up, which is hard to guarantee; DNS lookups <code>othersvc.default.svc.cluster.local</code> may not resolve at runtime but won't have a missing environment variable. Conversely, if you might configure your pod with an environment variable named something like <code>MICRO_SERVICE_HOST</code> or <code>DATABASE_PORT</code>, those names are liable to be "stepped on" by the generated environment variables.</p>
<p>Imagine in a Master-Node-Node setup where you deploy a service with pod anti-affinity on the Nodes: An update of the Deployment will cause another pod being created but the scheduler not being able to schedule, because both Nodes have the anti-affinity.</p> <p><strong>Q:</strong> How could one more flexibly set the anti-affinity to allow the update?</p> <pre><code>affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - api topologyKey: kubernetes.io/hostname </code></pre> <p>With an error </p> <pre><code>No nodes are available that match all of the following predicates:: MatchInterPodAffinity (2), PodToleratesNodeTaints (1). </code></pre>
<p>Look at <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="noreferrer">Max Surge</a></p> <p>If you set Max Surge = 0, you are telling Kubernetes that you won't allow it to create more pods than the number of replicas you have setup for the deployment. This basically forces Kubernetes to remove a pod before starting a new one, and thereby making room for the new pod first, getting you around the podAntiAffinity issue. I've utilized this mechanism myself, with great success.</p> <p><strong>Config example</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment ... spec: replicas: &lt;any number larger than 1&gt; ... strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 type: RollingUpdate ... template: ... spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - api topologyKey: kubernetes.io/hostname </code></pre> <p><strong>Warning:</strong> Don't do this if you only have one replica, as it will cause downtime because the only pod will be removed before a new one is added. If you have a huge number of replicas, which will make deployments slow because Kubernetes can only upgrade 1 pod at at a time, you can crank up <em>maxUnavailable</em> to enable Kubernetes to remove a higher number of pods at a time.</p>
<p>I am trying to create a file with in a POD from kubernetes sceret, but i am facing one issue like, i am not able to change permission of my deployed files.</p> <p>I am getting below error, <strong>chmod: changing permissions of '/root/.ssh/id_rsa': Read-only file system</strong></p> <p>I have already apply defaultmode &amp; mode for the same but still it is not working.</p> <pre><code>volumes: - name: gitsecret secret: secretName: git-keys VolumeMounts: - mountPath: "/root/.ssh" name: gitsecret readOnly: false </code></pre> <p>thank you</p>
<p>As you stated, your version of Kubernetes is 1.10 and documentation for it is available <a href="https://v1-10.docs.kubernetes.io/docs/" rel="noreferrer">here</a></p> <p>You can have a look at the github link <a href="https://stackoverflow.com/users/9705485/ryan-dawson">@RyanDawson</a> provided, there you will be able to find that this <code>RO</code> flag for <code>configMap</code> and <code>secrets</code> was intentional. It can be disabled using feature gate <code>ReadOnlyAPIDataVolumes</code>. You can follow this guide on how to <a href="https://docs.okd.io/latest/admin_guide/disabling_features.html" rel="noreferrer">Disabling Features Using Feature Gates</a>.</p> <p>As a workaround, you can try this approach:</p> <pre><code>containers: - name: apache image: apache:2.4 lifecycle: postStart: exec: command: ["chown", "www-data:www-data", "/var/www/html/app/etc/env.php"] </code></pre> <p>You can find explanation inside Kubernetes docs <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="noreferrer">Attach Handlers to Container Lifecycle Events</a></p>
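<p>Applied to the SSH-key case from your question, a common variation of this workaround is to mount the secret at a staging path and copy it into place with the permissions you need when the container starts. This is only a sketch; the image, entrypoint, and the assumption that the secret key is named <code>id_rsa</code> are placeholders you would adjust:</p> <pre><code>    containers:
    - name: app
      # image and entrypoint below are placeholders
      image: my-app:latest
      command: ["/bin/sh", "-c"]
      args:
        - mkdir -p /root/.ssh &amp;&amp;
          cp /etc/git-secret/id_rsa /root/.ssh/id_rsa &amp;&amp;
          chmod 600 /root/.ssh/id_rsa &amp;&amp;
          exec /usr/local/bin/start.sh
      volumeMounts:
      # mount the secret at a staging path instead of /root/.ssh
      - name: gitsecret
        mountPath: "/etc/git-secret"
    volumes:
    - name: gitsecret
      secret:
        secretName: git-keys
</code></pre>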
<p>I am going to use GKE to build containers with rubyonrails, I want to do db: create and db: migrate automatically</p> <p>Dockerfile made this way</p> <p>Dockerfile</p> <pre><code>FROM ruby: 2.4.1-alpine RUN apk update &amp;&amp; apk upgrade &amp;&amp; apk add --update - no - cache bash alpine - sdk tzdata postgresql - dev nodejs RUN mkdir / app WORKDIR / app ADD Gemfile / app / Gemfile ADD Gemfile.lock /app/Gemfile.lock RUN bundle install - path vendor / bundle ADD. / App RUN bundle exec rake assets: precompile RUN chmod + x /app/post-start.sh EXPOSE 3000 </code></pre> <p>Setting db: create and db: migrate /app/post-start.sh</p> <pre><code>#!/bin/bash RAILS_ENV = $ RAILS_ENV bundle exec rake db: create RAILS_ENV = $ RAILS_ENV bundle exec rake db: migrate </code></pre> <p>It is deployment rails.yml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: rails labels: app: rails spec: replicas: 1 selector: matchLabels: app: rails template: metadata: labels: app: rails spec: containers: - image: asia.gcr.io/balmy-geography-216916/railsinitpg:v1 name: rails env: - name: RAILS_ENV value: "production" - name: DATABASE_HOST value: postgresql - name: DATABASE_USERNAME valueFrom: secretKeyRef: name: rails key: database_user - name: DATABASE_PASSWORD valueFrom: secretKeyRef: name: rails key: database_password - name: SECRET_KEY_BASE valueFrom: secretKeyRef: name: rails key: secret_key_base - name: DATABASE_PORT value: "5432" lifecycle: postStart: exec: command: - /bin/bash - -c - /app/post-start.sh ports: - containerPort: 3000 name: rails command: ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"] </code></pre> <p>kubectl create -f /kubernetes/rails.yml I did it</p> <p>Errors - Contents</p> <pre><code>error: valid validating "STDIN": error validating data: ValidationError (Deployment.spec.template.spec.containers [0]): unknown field "postStart" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate = false </code></pre> <p>The above error ceased to exist, but there was an error again</p> <pre><code>Warning FailedPostStartHook 5m1s (x4 over 5m49s) kubelet, gke-cluster-1-default-pool-04b4ba0b-680k Exec lifecycle hook ([/bin/bash -c /app/post-start.sh]) for Container "rails" in Pod "rails-5c5964445-6hncg_default(1d6a3479-1fcc-11e9-8139-42010a9201ec)" failed - error: command '/bin/bash -c /app/post-start.sh' exited with 126: , message: "rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"/bin/bash\\\": stat /bin/bash: no such file or directory\"\n\r\n" </code></pre> <p>Fix the Dockerfile Fixed Deployment.yml It went well. Thank you very much.</p>
<p>The issue is in indentation in your yaml file, the correct yaml is:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: rails labels: app: rails spec: replicas: 1 selector: matchLabels: app: rails template: metadata: labels: app: rails spec: containers: - image: asia.gcr.io/balmy-geography-216916/railsinitpg:v1 name: rails env: - name: RAILS_ENV value: "production" - name: DATABASE_HOST value: postgresql - name: DATABASE_USERNAME valueFrom: secretKeyRef: name: rails key: database_user - name: DATABASE_PASSWORD valueFrom: secretKeyRef: name: rails key: database_password - name: SECRET_KEY_BASE valueFrom: secretKeyRef: name: rails key: secret_key_base - name: DATABASE_PORT value: "5432" lifecycle: postStart: exec: command: - /bin/bash - -c - /app/kubernetes-post-start.sh ports: - containerPort: 3000 name: rails command: ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"] </code></pre> <p>This should resolve the above error.</p>
<p>I have a (working) test program that sends and receives messages across UDP multicast. I've successfully deployed it to kubernetes cluster and demonstrated two pods communicating with one another. The only catch with this is that I need to add <code>hostNetwork: true</code> to the pod specs. As I understand it, this disables all the network virtualization that would otherwise be available. I've also tried</p> <pre><code> - containerPort: 12345 hostPort: 12345 protocol: UDP </code></pre> <p>but when I use that without <code>hostNetwork</code> communication fails.</p> <p>Is there a way to get this working whilst still being able to use the normal network for everything else? (We're unlikely to want to switch network layer to something like Weave.)</p>
<p>Using <code>hostNetwork: true</code> is good when you need direct access from the pod to the Node's network interface; however, it brings some restrictions when your application is hosted on several Nodes, because every time Kubernetes restarts the Pod it can be scheduled on a different Node, so the IP address of your application might change. Moreover, <code>hostNetwork</code> causes port collisions when you plan to scale your application within the Kubernetes cluster, and it is therefore not recommended when you are bootstrapping a Kubernetes cluster in a Cloud environment. </p> <p>If you bypass the overlay network, which is a significant part of the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> model, you can also lose some essential benefits such as DNS resolution of Services (<a href="https://github.com/coredns/coredns" rel="nofollow noreferrer">CoreDNS</a>, <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/kube-dns" rel="nofollow noreferrer">Kube-DNS</a>). </p> <p>I suppose you can try to use <code>NodePort</code> as a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Service</a> object, as sketched below. Because a <code>NodePort</code> service proxies the target application port on the corresponding Node, it might be worth checking whether it fits your requirement; however, I don't know enough about your application's deployment composition and network specification to suggest a more advanced solution.</p>
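<p>For illustration, a NodePort Service for a UDP port could look like the sketch below (the port comes from your question; the name and selector are placeholders you would match to your pod labels):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: multicast-test          # hypothetical name
spec:
  type: NodePort
  selector:
    app: multicast-test         # must match your pod labels
  ports:
  - protocol: UDP
    port: 12345
    targetPort: 12345
    nodePort: 32345             # optional; must be in the 30000-32767 range
</code></pre> <p>Note that a Service like this only covers plain UDP traffic to a known port; multicast group traffic itself is generally not handled by Service proxying, which is why <code>hostNetwork</code> is often used for multicast workloads.</p>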
<p>In order to access the Kubernetes dashboard you have to run kubectl proxy on your local machine, then point your web browser to the proxy. Similarly, if you want to submit a Spark job you again run kubectl proxy on your local machine then run spark-submit against the localhost address.</p> <p>My question is, why does Kubernetes have this peculiar arrangement? The dashboard service is running on the Kubernetes cluster, so why am I not pointing my web browser at the cluster directly? Why have a proxy? In some cases the need for proxy is inconvenient. For example, from my Web server I want to submit a Spark job. I can't do that--I have to run a proxy first, but this ties me to a specific cluster. I may have many Kubernetes clusters.</p> <p>Why was Kubernetes designed such that you can only access it through a proxy?</p>
<p>You can access your application in the cluster in different ways:</p> <ol> <li>by using <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls" rel="noreferrer">apiserver as a proxy</a>, but you need to pass authentication and authorization stage.</li> <li>by using <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="noreferrer">hostNetwork</a>. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started.</li> <li>by using <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="noreferrer">hostPort</a>. The container port will be exposed to the external network at <code>hostIP:hostPort</code>, where the <code>hostIP</code> is the IP address of the Kubernetes node where the container is running and the <code>hostPort</code> is the port requested by the user.</li> <li>by using Services with type: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="noreferrer">ClusterIP</a>. ClusterIP Services accessible only for pods in the cluster and cluster nodes.</li> <li>by using Services with type: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a>. In addition to ClusterIP, this service gets random or specified by user port from range of <code>30000-32767</code>. All cluster nodes listen to that port and forward all traffic to corresponding Service.</li> <li>by using Services with type: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="noreferrer">LoadBalancer</a>. It works only with supported Cloud Providers and with <a href="https://github.com/google/metallb" rel="noreferrer">Metallb</a> for On Premise clusters. In addition to opening NodePort, Kubernetes creates cloud load balancer that forwards traffic to <code>NodeIP:Nodeport</code> for that service. </li> </ol> <p>So, basically: <code>[[[ Kubernetes Service type:ClusterIP] + NodePort ] + LoadBalancer ]</code></p> <ol start="7"> <li>by using Ingress (ingress-controller+Ingress object). Ingress-controller is exposed by Nodeport or LoadBalancer service and works as L7 reverse-proxy/LB for the cluster Services. It has access to ClusterIP Services so, you don't need to expose Services if you use Ingress. You can use it for SSL termination and for forwarding traffic based on URL path. The most popular ingress-controllers are: <ul> <li><a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">kubernetes/nginx-ingress</a>, </li> <li><a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">nginxinc/kubernetes-ingress</a>, </li> <li><a href="https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/" rel="noreferrer">HAProxy-ingress</a>, </li> <li><a href="https://docs.traefik.io/user-guide/kubernetes/" rel="noreferrer">Traefik</a>.</li> </ul></li> </ol> <p>Now, about <code>kubectl proxy</code>. It uses the first way to connect to the cluster. Basically, it reads the cluster configuration in .kube/config and uses credentials from there to pass cluster API Server authentication and authorization stage. 
Then it creates a communication channel from the local machine to the API server, so you can use a local port to send requests to the Kubernetes cluster API without having to specify credentials for each request.</p>
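<p>A quick example of the proxy approach in practice (the service and namespace names here are placeholders):</p> <pre><code># open an authenticated local tunnel to the API server
kubectl proxy --port=8001 &amp;

# reach any ClusterIP Service through the apiserver proxy path
curl http://localhost:8001/api/v1/namespaces/default/services/my-service:80/proxy/
</code></pre>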
<p>I am going to use GKE to build containers with rubyonrails, I want to do db: create and db: migrate automatically</p> <p>Dockerfile made this way</p> <p>Dockerfile</p> <pre><code>FROM ruby: 2.4.1-alpine RUN apk update &amp;&amp; apk upgrade &amp;&amp; apk add --update - no - cache bash alpine - sdk tzdata postgresql - dev nodejs RUN mkdir / app WORKDIR / app ADD Gemfile / app / Gemfile ADD Gemfile.lock /app/Gemfile.lock RUN bundle install - path vendor / bundle ADD. / App RUN bundle exec rake assets: precompile RUN chmod + x /app/post-start.sh EXPOSE 3000 </code></pre> <p>Setting db: create and db: migrate /app/post-start.sh</p> <pre><code>#!/bin/bash RAILS_ENV = $ RAILS_ENV bundle exec rake db: create RAILS_ENV = $ RAILS_ENV bundle exec rake db: migrate </code></pre> <p>It is deployment rails.yml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: rails labels: app: rails spec: replicas: 1 selector: matchLabels: app: rails template: metadata: labels: app: rails spec: containers: - image: asia.gcr.io/balmy-geography-216916/railsinitpg:v1 name: rails env: - name: RAILS_ENV value: "production" - name: DATABASE_HOST value: postgresql - name: DATABASE_USERNAME valueFrom: secretKeyRef: name: rails key: database_user - name: DATABASE_PASSWORD valueFrom: secretKeyRef: name: rails key: database_password - name: SECRET_KEY_BASE valueFrom: secretKeyRef: name: rails key: secret_key_base - name: DATABASE_PORT value: "5432" lifecycle: postStart: exec: command: - /bin/bash - -c - /app/post-start.sh ports: - containerPort: 3000 name: rails command: ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"] </code></pre> <p>kubectl create -f /kubernetes/rails.yml I did it</p> <p>Errors - Contents</p> <pre><code>error: valid validating "STDIN": error validating data: ValidationError (Deployment.spec.template.spec.containers [0]): unknown field "postStart" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate = false </code></pre> <p>The above error ceased to exist, but there was an error again</p> <pre><code>Warning FailedPostStartHook 5m1s (x4 over 5m49s) kubelet, gke-cluster-1-default-pool-04b4ba0b-680k Exec lifecycle hook ([/bin/bash -c /app/post-start.sh]) for Container "rails" in Pod "rails-5c5964445-6hncg_default(1d6a3479-1fcc-11e9-8139-42010a9201ec)" failed - error: command '/bin/bash -c /app/post-start.sh' exited with 126: , message: "rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"/bin/bash\\\": stat /bin/bash: no such file or directory\"\n\r\n" </code></pre> <p>Fix the Dockerfile Fixed Deployment.yml It went well. Thank you very much.</p>
<p>The handler for postStart is not formatted properly; see below for the correct indentation: </p> <pre><code>        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "/app/kubernetes-post-start.sh"]
</code></pre> <p>You can check the documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">here</a></p>
<p>I am trying to setup a kubernetes cluster (two nodes, 1 master, 1 worker) on VirtualBox. My host computer runs Windows 10 and on the VirtualBox I have installed Ubuntu 18.10, Codename cosmic.</p> <p>I have configured two adapters on each VirtualBox, one NAT and one Host-Only adapter. I did that because I need to access some internal resources using the host IP (NAT) and I also need a stable network between the host and the virtual machines (Host-only network).</p> <p>I have installed Kubernetes v1.12.4 and successfully joined the worker to the master node. </p> <pre><code>NAME STATUS ROLES AGE VERSION kubernetes-master Ready master 36m v1.12.4 kubernetes-slave Ready &lt;none&gt; 25m v1.12.4 </code></pre> <p>I am using Flannel for networking.</p> <p>All pods seems to be ok.</p> <pre><code> NAMESPACE NAME READY STATUS RESTARTS AGE default nginx-server-7bb6997d9c-kdcld 1/1 Running 0 27m kube-system coredns-576cbf47c7-btrvb 1/1 Running 1 38m kube-system coredns-576cbf47c7-zfscv 1/1 Running 1 38m kube-system etcd-kubernetes-master 1/1 Running 1 38m kube-system kube-apiserver-kubernetes-master 1/1 Running 1 38m kube-system kube-controller-manager-kubernetes-master 1/1 Running 1 38m kube-system kube-flannel-ds-amd64-29p96 1/1 Running 1 28m kube-system kube-flannel-ds-amd64-sb2fq 1/1 Running 1 37m kube-system kube-proxy-59v6b 1/1 Running 1 38m kube-system kube-proxy-bfd78 1/1 Running 0 28m kube-system kube-scheduler-kubernetes-master 1/1 Running 1 38m </code></pre> <p>I have deployed nginx to verify that everything is working</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 41m nginx-http ClusterIP 10.111.151.28 &lt;none&gt; 80/TCP 29m </code></pre> <p>However when I try to reach nginx I am getting a timeout. 
describe pod gives me the following events.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32m default-scheduler Successfully assigned default/nginx-server-7bb6997d9c-kdcld to kubernetes-slave Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dbb2595628fc2579c29779e31e27e27eaeff2dbcf2bdb68467c47f22a3590bd0" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "801e0f3f8ca4a9b7cc21d87d41141485e1b1da357f2d89e1644acf0ecf634016" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "77214c757449097bfbe05b24ebb5fd3c7f1d96f7e3e9a3cd48f3b37f30224feb" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ebffdd723083d916c0910489e12368dc4069dd99c24a3a4ab1b1d4ab823866ff" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d87b93815380246a05470e597a88d50eb31c132a50e30000ab41a456d1e65107" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3ef233ef0a6c447134c7b027747a701d6576a80e76c9cc8ffd8287e8ee5f02a4" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6b621aab3c57154941b37360240228fe939b528855a5fe8cd9536df63d41ed93" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa992bde90e0a1839180666bedaf74965fb26f3dccb33a66092836a25882ab44" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such 
file or directory Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "81f74f687e17d67bd2853849f84ece33a118744278d78ac7af3bdeadff8aa9c7" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Warning FailedCreatePodSandBox 32m (x2 over 32m) kubelet, kubernetes-slave (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "29188c3e73d08e81b08b2258254dc2691fcaa514ecc96e9df86f2e61ba455b76" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory Normal SandboxChanged 32m (x11 over 32m) kubelet, kubernetes-slave Pod sandbox changed, it will be killed and re-created. Normal Pulling 32m kubelet, kubernetes-slave pulling image "nginx" Normal Pulled 32m kubelet, kubernetes-slave Successfully pulled image "nginx" Normal Created 32m kubelet, kubernetes-slave Created container </code></pre> <p>I have tried to do the same exactly installation with a bridge adapter only configured to the virtual machines and then everything works as expected. </p> <p>I believe that its a configuration issue however I am unable to solve it. Can someone advise me.</p>
<p>As I mentioned in a deleted comment, I recreated this on my Ubuntu 18.04 host: I created two Ubuntu 18.10 VMs with two adapters each (NAT and one Host-Only adapter), i.e. the same configuration as you have specified here, and everything works fine. </p> <p>What I had to do was add the second adapter manually; I did it using <code>netplan</code> before running <code>kubeadm init</code> and <code>kubeadm join</code> on the node (a sketch of such an entry is at the end of this answer). </p> <p>Just in case you did not do that - add the host-only adapter network to the yaml file in <code>/etc/netplan/50-cloud-init.yaml</code> and run <code>sudo netplan generate</code> and <code>sudo netplan apply</code>. For nginx I used the <a href="https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment" rel="nofollow noreferrer">deployment</a> from the official Kubernetes documentation. Then I exposed the service:</p> <p><code>kubectl create service nodeport nginx --tcp=80:80</code> Curling my node IP address on the NodePort from the host machine works fine.</p> <p>This was just to demonstrate what I did so that it works in my environment. Judging from the described pod error, it seems like there is something wrong with Flannel itself:</p> <p><code>/run/flannel/subnet.env: no such file or directory</code></p> <p>I checked this file on the master and it looks like this:</p> <p><strong>/run/flannel/subnet.env</strong></p> <pre><code>FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
</code></pre> <p>Check if the file is there; if this does not help you, we can try to troubleshoot further if you provide more information. However, there are too many unknowns so I had to guess in some places. My advice would be to destroy it all and try again with the information I have provided, and run the nginx Service with the NodePort type and not ClusterIP. A ClusterIP will only be reachable from inside the cluster - for example from a Node. </p>
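<p>A netplan entry for the host-only adapter could look roughly like this (the interface name and address are examples and will differ in your VMs):</p> <pre><code># /etc/netplan/50-cloud-init.yaml (sketch; enp0s8 is assumed to be the host-only adapter)
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true           # NAT adapter
    enp0s8:
      dhcp4: false
      addresses:
        - 192.168.56.10/24  # host-only network address for this VM
</code></pre> <p>After editing, run <code>sudo netplan generate</code> and <code>sudo netplan apply</code> as described above.</p>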
<p>It is known that if a pod consumed more resource than <code>request</code>, it is likely to be evicted or terminated. What is the purpose of <code>resource limit</code> then? Is it like a grace period? </p> <pre><code> resources: requests: cpu: "100m" limits: cpu: "200m" </code></pre> <p>I didn't see a clear documentation for this in Kubernetes official doc. Can anyone clarify this?</p>
<p>Request guarantees a <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled" rel="nofollow noreferrer">minimum amount of resource</a>, which the scheduler enforces by ensuring the node that the Pod is scheduled to has space for it. Limit is a <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">maximum over which a Pod is likely to be killed</a>.</p> <p>I personally find the <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">google kubernetes documentation clearer on this</a> than the official kubernetes one.</p>
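<p>For example, a container spec that both reserves and caps CPU and memory might look like this (the values are illustrative):</p> <pre><code>    resources:
      requests:
        cpu: "100m"      # scheduler guarantees at least this much
        memory: "128Mi"
      limits:
        cpu: "200m"      # CPU usage above this is throttled
        memory: "256Mi"  # exceeding this gets the container OOM-killed
</code></pre> <p>One nuance worth noting: for CPU, exceeding the limit results in throttling, while exceeding the memory limit gets the container killed.</p>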
<p><strong>UPDATE:</strong></p> <p>With @Tanktalus 's answer, I realized that it was the left most <code>kubectl</code> command is buffered. </p> <pre class="lang-sh prettyprint-override"><code># will hang forever, because RHS pipe is broken, and LHS pipe need to send # the output to the pipe to realize the broken pipe, but as the buffer is # never filled, it's never broken kubectl logs -f pod -n NAMESPACE | grep -q "Indicator" # put LHS to the background, because I don't care if it hang, I just need the log. (kubectl logs -f pod -n NAMESPACE &amp;) | grep -q "Indicator" </code></pre> <p>But I have a new problem, the following now hang forever:<br> <code>(kubectl logs -f pod -n NAMESPACE &amp;)| tee log &gt;(grep -q "Indicator")</code></p> <hr> <p><strong>ORIGINAL QUESTION:</strong><br> First of all this is not repeated with other similar questions, I have read them all. The subtle difference is that my streamed log is inactive right after the string indicator I am trying to grep.</p> <p>I have a continuous streamed log output from kubernetes pod. The indicator string "Indicator" will appear in the end of the log generator application, and the log generator goes <code>sleep infinity</code>. So the log will still be streamed, but gives no new output.</p> <p>I am trying use a pipe <code>|</code> to redirect my kubernetes' streamed log, then grep each line of the log, until I find the "Indicator", then I want to (immediately) exit. The commands I have tried are like:</p> <pre><code># none of them worked, they all show the Indicator line, and then hangs forever. kubectl logs -f pod -n NAMESPACE | tee test.log &gt;(grep -q "Indicator") stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | tee test.log &gt;(grep -m1 "Indicator") stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | tee test.log &gt;(grep -q --line-buffered "Indicator") stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | grep -q --line-buffered "Indicator" </code></pre> <p>But because after the "Indicator", there will be only one more line of log "+ Sleep infinity". I guess the output buffer from the leftmost end of the pipe is not full, and thus it's not passed to grep?</p> <p>Is there any way to solve this issue ?</p>
<p>I suspect it's because <code>kubectl</code> hasn't exited that the shell doesn't continue on. If you look at the <code>ps</code> output, you'll notice that <code>grep -m1 ...</code> does actually exit, and doesn't exist anymore, but the rest of the pipe still exists.</p> <p>So I suspect you'll need to invert this. In perl, for example, I would use <code>open</code> to open a pipe to kubectl, read the output until I found what I wanted, kill the child, and exit. In C, the same thing with <code>popen</code>. I'm not sure if bash gives quite that level of control.</p> <p>For example:</p> <pre><code> perl -E 'my $pid = open my $fh, "-|", qw(perl -E), q($|++; say for 1..10; say "BOOM"; say "Sleep Infinity"; sleep 50) or die "Cannot run: $!"; while(&lt;$fh&gt;) { if (/BOOM/) { say; kill "INT", $pid; exit 0 } }' </code></pre> <p>You'll have to replace the stuff in the <code>open</code> after <code>"-|"</code> with your own command, and the <code>if (/BOOM/)</code> with your own regex, but otherwise it should work.</p>
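<p>If you prefer to stay in bash, one way to get a similar level of control is a coprocess, which gives you the producer's PID so you can kill <code>kubectl</code> as soon as the indicator shows up. This is only a sketch along the lines of your example, not tested against your exact setup:</p> <pre><code>#!/usr/bin/env bash
# Start kubectl as a coprocess; its PID lands in $LOGS_PID.
coproc LOGS { kubectl logs -f pod -n NAMESPACE; }

while IFS= read -r line &lt;&amp;"${LOGS[0]}"; do
  printf '%s\n' "$line" &gt;&gt; test.log        # keep a copy of the log
  if [[ $line == *"Indicator"* ]]; then
    kill "$LOGS_PID"                       # stop the stream immediately
    break
  fi
done
</code></pre>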
<p>I'm trying to clean up some leftover data from a failed deployment of rabbitmq. As such, I have 3 secrets that were being used by rabbit services that never fully started. Whenever I try to delete these using kubectl delete secret they get recreated with a similar name instantly (even when using --force).</p> <p>I do not see any services or pods that are using these secrets, so there shouldn't be any reason they are persisting. </p> <p>Example of what happens when I delete: <a href="https://i.stack.imgur.com/EL8vJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EL8vJ.png" alt="enter image description here"></a></p>
<p>The reason they wouldn't delete is because they were associated with a service account.</p> <p>I found this by looking at their yaml files, which mentioned they were for a service account.</p> <p>I then ran</p> <pre><code>kubectl get serviceaccounts </code></pre> <p>which returned a list of accounts that had identical names. After running</p> <pre><code>kubectl delete serviceaccounts &lt;accountName&gt; </code></pre> <p>The secrets removed themselves.</p> <p>However, if they do not, you can still get and delete them with</p> <pre><code>kubectl get secrets kubectl delete secret &lt;secret name&gt; </code></pre> <p>If you do not see the item in question, you may want to append --all-namespaces to see "all" of them, as by default it looks at the top level of your kubernetes environment.</p>
<p>I have a list of namespaces created under the same k8s cluster and I'd like to find out the resource (CPU, memory) usage per namespace. Is there any command I can use?</p>
<p>Yes. You can use</p> <pre><code>$ kubectl -n &lt;nampespace&gt; top pod </code></pre> <p>For example:</p> <pre><code>$ kubectl top pod -n kube-system NAME CPU(cores) MEMORY(bytes) calico-node-xxxxx 17m 166Mi coredns-xxxxxxxxxx-xxxxx 2m 11Mi coredns-xxxxxxxxxx-xxxxx 3m 11Mi etcd-ip-x-x-x-x.us-west-2.compute.internal 19m 149Mi kube-apiserver-ip-x-x-x-x.us-west-2.compute.internal 39m 754Mi kube-controller-manager-ip-x-x-x-x.us-west-2.compute.internal 20m 138Mi kube-proxy-xxxxx 5m 12Mi kube-scheduler-ip-x-x-x-x.us-west-2.compute.internal 6m 17Mi metrics-server-xxxxxxxxxx-xxxxx 0m 15Mi </code></pre> <p>You need to add up all the entries on the CPU and MEMORY columns if you want the total.</p> <p>Note that for <code>kubectl top</code> to work you need to have the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">metrics-server</a> set up and configured appropriately. (Older clusters use the <a href="https://github.com/kubernetes-retired/heapster" rel="noreferrer">heapster</a>)</p>
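<p>If you want the per-namespace totals in one go, you can sum the columns yourself, for example as below (this assumes CPU is reported in millicores and memory in Mi, which is the usual output format):</p> <pre><code>kubectl top pod -n &lt;namespace&gt; --no-headers \
  | awk '{ gsub(/m/,"",$2); gsub(/Mi/,"",$3); cpu+=$2; mem+=$3 }
         END { printf "CPU: %dm  MEMORY: %dMi\n", cpu, mem }'
</code></pre>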
<p>I have uninstalled minikube through <a href="https://stackoverflow.com/a/49253264/6039697">this</a> post and now every time I open the terminal this message appears:</p> <pre><code>kubectl: command not found Command 'minikube' not found, did you mean: command 'minitube' from deb minitube Try: sudo apt install &lt;deb name&gt; </code></pre> <hr> <p>I have no idea what is going on. Can someone help me to stop this message from appearing?</p>
<p>You probably have entries in your <code>~/.bashrc</code> file that call <code>minikube</code> and/or <code>kubectl</code>. Just edit or remove those entries. For example:</p> <pre><code>vi ~/.bashrc </code></pre> <p>In the <code>vi</code> editor:</p> <pre><code>/kubectl
dd
/minikube
dd
:wq
</code></pre>
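<p>If you are not sure where the entries are, something like this should point you at the offending lines before you edit:</p> <pre><code>grep -n 'minikube\|kubectl' ~/.bashrc ~/.profile 2&gt;/dev/null
</code></pre>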
<p>I can taint a node but not an instance group.</p> <pre><code>kubectl taint nodes nodeA kops.k8s.io/instancegroup=loadbalancer:NoSchedule </code></pre> <p>can we do below</p> <pre><code>kubectl taint instanceGroup loadbalancer:NoSchedule error: at least one taint update is required </code></pre>
<p>Instance Group is a <a href="https://cloud.google.com/compute/docs/instance-groups/" rel="nofollow noreferrer">GCP thing</a> and not a Kubernetes thing. Taints can only be applied to nodes, so you will have to taint the nodes manually. </p> <p>If the nodes in your instance group are named so that they match a pattern such as <code>mygroup</code>, you could do something like this:</p> <pre><code>$ kubectl taint nodes `kubectl get nodes -o name | grep mygroup` key=value:NoSchedule </code></pre>
<p>One of our Google Kubernetes Engine clusters has lost access to Google Cloud Platform via it's main service account. It was not using the service account 'default', but a custom one, but it's now gone. Is there a way to restore or change the service account for a GKE cluster after it has been created? Or are we just out of luck and do we have to re-create the cluster?</p>
<p>Good news! We found a way to solve the issue without having to re-create the entire cluster.</p> <ol> <li>Create a new node-pool and make sure it has the default permissions to Google Cloud Platform (this is the case if you create the pool via the Console UI).</li> <li>'Force' all workloads on the new node pool (e.g. by using node labels).</li> <li>Re-deploy the workloads.</li> <li>Remove the old (broken) node pool.</li> </ol> <p>Hope this helps anyone with the same issue in the future.</p>
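<p>For anyone who wants a command-line version of these steps, here is a rough sketch of one way to do it; the cluster, pool and zone names are placeholders, and cordon/drain is used instead of node labels to push workloads onto the new pool:</p> <pre><code># 1. create a replacement node pool with the default GCP service account/scopes
gcloud container node-pools create new-pool \
  --cluster my-cluster --zone us-central1-a --num-nodes 3

# 2./3. push workloads off the broken pool so they reschedule on the new one
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
done

# 4. remove the old pool
gcloud container node-pools delete old-pool --cluster my-cluster --zone us-central1-a
</code></pre>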
<p>I am currently working on a template for OpenShift and my ImageChange trigger gets deleted when I initally instantiate the application. My Template contains the following objects</p> <ul> <li>ImageStream</li> <li>BuildConfig</li> <li>Service</li> <li>Route</li> <li>Deploymentconfig</li> </ul> <p>I guess the route is irrelevant but this is what it looks like so far (for better overview I will post the objects seperated, but they are all items in my Template)</p> <p><strong>ImageStream</strong></p> <pre><code>- kind: ImageStream apiVersion: v1 metadata: labels: app: my-app name: my-app namespace: ${IMAGE_NAMESPACE} </code></pre> <p><strong>BuildConfig</strong></p> <pre><code>- kind: BuildConfig apiVersion: v1 metadata: labels: app: my-app deploymentconfig: my-app name: my-app namespace: ${IMAGE_NAMESPACE} selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/buildconfigs/my-app spec: runPolicy: Serial source: git: ref: pre-prod uri: 'ssh://[email protected]:port/project/my-app.git' sourceSecret: name: git-secret type: Git strategy: type: Source sourceStrategy: env: - name: HTTP_PROXY value: 'http://user:[email protected]:8080' - name: HTTPS_PROXY value: 'http://user:[email protected]:8080' - name: NO_PROXY value: .something.net from: kind: ImageStreamTag name: 'nodejs:8' namespace: openshift output: to: kind: ImageStreamTag name: 'my-app:latest' namespace: ${IMAGE_NAMESPACE} </code></pre> <p><strong>Service</strong></p> <pre><code>- kind: Service apiVersion: v1 metadata: name: my-app labels: app: my-app spec: selector: deploymentconfig: my-app ports: - name: 8080-tcp port: 8080 protocol: TCP targetPort: 8080 sessionAffinity: None type: ClusterIP </code></pre> <p><strong>DeploymentConfig</strong></p> <p>Now what is already weird in the DeploymentConfig is that under spec.template.spec.containers[0].image I have to specify the full path to the repository to make it work, otherwise I get an error pulling the image. (even though documentation says my-app:latest would be correct)</p> <pre><code>- kind: DeploymentConfig apiVersion: v1 metadata: labels: app: my-app deploymentconfig: my-app name: my-app namespace: ${IMAGE_NAMESPACE} selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/deploymentconfigs/my-app spec: selector: app: my-app deploymentconfig: my-app strategy: type: Rolling rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailability: 25% timeoutSeconds: 600 updatePeriodSeconds: 1 replicas: 1 template: metadata: labels: app: my-app deploymentconfig: my-app spec: containers: - name: my-app-container image: "${REPOSITORY_IP}:${REPOSITORY_PORT}/${IMAGE_NAMESPACE}/my-app:latest" imagePullPolicy: Always ports: - containerPort: 8080 protocol: TCP - containerPort: 8081 protocol: TCP env: - name: MONGODB_USERNAME valueFrom: secretKeyRef: name: my-app-database key: database-user - name: MONGODB_PASSWORD valueFrom: secretKeyRef: name: my-app-database key: database-password - name: MONGODB_DATABASE value: "myapp" - name: ROUTE_PATH value: /my-app - name: MONGODB_AUTHDB value: "myapp" - name: MONGODB_PORT value: "27017" - name: HTTP_PORT value: "8080" - name: HTTPS_PORT value: "8082" restartPolicy: Always dnsPolicy: ClusterFirst triggers: - type: ImageChange imageChangeParams: automatic: true from: kind: ImageStreamTag name: 'my-app:latest' namespace: ${IMAGE_NAMESPACE} containerNames: - my-app-container - type: ConfigChange </code></pre> <p>I deploy the application using</p> <pre><code>oc process -f ./openshift/template.yaml ..Parameters... 
| oc apply -f - </code></pre> <p>But the outcome is the same when I use oc new-app.</p> <p>The weird thing is. The application gets deployed and is running fine, but image changes will have no effect. So I exported DeploymentConfig and found that it was missing the ImageChangeTrigger leaving the trigger part being</p> <pre><code>triggers: - type: ConfigChange </code></pre> <p>At first I thought this was due to the fact that maybe the build was not ready when I tried to apply the DeploymentConfig so I created a build first and waited for it to finish. Afterwards I deployed the rest of the application (Service, Route, DeploymentConfig). The outcome was the same however. If I use the Webgui and change the DeploymentConfig there from</p> <p><a href="https://i.stack.imgur.com/RVozP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RVozP.png" alt="enter image description here"></a></p> <p>to this, fill out namespace, app and tag (latest) and hit apply everything works as it should. I just can't figure out why the trigger is beeing ignored initially. Would be great if someone has an idea where I'm wrong</p> <p><a href="https://i.stack.imgur.com/niNlY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/niNlY.png" alt="enter image description here"></a></p> <p>Versions I am using are</p> <p>oc: v3.9.0</p> <p>kubernetes: v1.6.1</p> <p>openshift v3.6.173.0.140</p>
<p>OK the answer was pretty simple. Turned out it was just an indentation error in the yaml file for the DeploymentConfig. Instead of</p> <pre><code> dnsPolicy: ClusterFirst restartPolicy: Always terminationGracePeriodSeconds: 30 triggers: - type: ImageChange imageChangeParams: automatic: true containerNames: - alpac-studio-container from: kind: ImageStreamTag name: alpac-studio:latest - type: ConfigChange </code></pre> <p>It has to be</p> <pre><code> dnsPolicy: ClusterFirst restartPolicy: Always terminationGracePeriodSeconds: 30 triggers: - type: ImageChange imageChangeParams: automatic: true containerNames: - alpac-studio-container from: kind: ImageStreamTag name: alpac-studio:latest - type: ConfigChange </code></pre> <p>So the triggers have to be on the same level as e.g. template and strategy</p>
<p>We are trying to run an instance of the RabbitMQ chart with Helm from the <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq" rel="nofollow noreferrer">helm/charts/stable/rabbit</a> project. I had it running perfect but then I had to restart k8s for some maintenance. Now we are completely unable to launch the RabbitMQ chart in any way shape or form. I am not even trying to run the chart with any variables, i.e. just the default values. </p> <p>Here is all I am doing:</p> <pre><code>helm install stable/rabbitmq </code></pre> <p>I have confirmed I can simply run the default right on my local k8s which I'm running with Docker for Desktop. When we run the rabbit chart on our shared k8s the exact same way as on desktop and what we did before the restart, the following error is thrown:</p> <pre><code>Failed to get nodes from k8s - 503 </code></pre> <p>I have also posted an issue on the Helm charts repo as well. <a href="https://github.com/helm/charts/issues/10811" rel="nofollow noreferrer">Click here to see the issue on Github.</a> </p> <p>We are suspecting the DNS but are unable to confirm anything yet. What is very frustrating is after the restart every single other chart we installed restarted perfectly except Rabbit which now will not start at all. </p> <p>Anyone know what I could do to get Rabbits peer discovery to work? Anyone seen issue like this after restarting k8s? </p>
<p>So I actually got rabbit to run. Turns out my issue was the k8s peer discovery could not connect over the default port 443 and I had to use the external port 6443 because <code>kubernetes.default.svc.cluster.local</code> resolved to the public port and could not find the internal, so yeah our config is messed up too. </p> <p>It took me a while to realize the variable below was not overriding when I overrode it with <code>helm install . -f server-values.yaml</code>. </p> <pre><code>rabbitmq: configuration: |- ## Clustering cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s cluster_formation.k8s.host = kubernetes.default.svc.cluster.local cluster_formation.k8s.port = 6443 cluster_formation.node_cleanup.interval = 10 cluster_formation.node_cleanup.only_log_warning = true cluster_partition_handling = autoheal # queue master locator queue_master_locator=min-masters # enable guest user loopback_users.guest = false </code></pre> <p>I had to add <code>cluster_formation.k8s.port = 6443</code> to the main <code>values.yaml</code> file instead of my own. Once the port was changed specifically in the <code>values.yaml</code>, rabbit started right up. </p>
<p>Is there a way through which I can run an existing Job using CronJob resource. In CronJob Spec template can we apply a selector using labels. Something like this:</p> <p>Job Spec: (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="nofollow noreferrer">Link to job docs</a>)</p> <pre><code>apiVersion: batch/v1 kind: Job label: name: pi spec: template: spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never backoffLimit: 4 </code></pre> <p>Cron Spec:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: pi-cron spec: schedule: "*/1 * * * *" jobTemplate: spec: labelSelector: name: pi # refer to the job created above </code></pre> <p>I came across this. I want to try inverse of this. <a href="https://stackoverflow.com/questions/40401795/how-can-i-trigger-a-kubernetes-scheduled-job-manually">Create-Job-From-Cronjob</a></p>
<p>No, you cannot do this in the way you want. <code>kubectl</code> only allows you to create Jobs based on a CronJob, not vice versa.</p> <pre><code> kubectl create job NAME [--image=image --from=cronjob/name] -- [COMMAND] [args...] [flags] [options] </code></pre> <p>Available commands right now for kubectl create:</p> <pre><code> clusterrole         Create a ClusterRole.
 clusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole
 configmap           Create a configmap from a local file, directory or literal value
 deployment          Create a deployment with the specified name.
 job                 Create a job with the specified name.
 namespace           Create a namespace with the specified name
 poddisruptionbudget Create a pod disruption budget with the specified name.
 priorityclass       Create a priorityclass with the specified name.
 quota               Create a quota with the specified name.
 role                Create a role with single rule.
 rolebinding         Create a RoleBinding for a particular Role or ClusterRole
 secret              Create a secret using specified subcommand
 service             Create a service using specified subcommand.
 serviceaccount      Create a service account with the specified name
</code></pre>
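<p>So the supported direction, using the names from your example, would be:</p> <pre><code>kubectl create job pi-manual-run --from=cronjob/pi-cron
</code></pre>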
<p>I want to convert below line to run as a bash script so that i can call it using jenkins job.</p> <pre><code>kubectl create -f tiller-sa-crb.yaml </code></pre> <p>The tiller-sa-crb.yaml is below. How can i convert my above command in bash script. such that my jenkins job calls ./tiller-sa-crb.sh and it does all below. Basically, my end goal is to have a pure shell script and invoke that on jenkins job.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system </code></pre>
<p>You can also make use of stdin to kubectl create command, like this:</p> <pre><code>#!/usr/bin/env bash cat &lt;&lt;EOF | kubectl create -f - apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system EOF </code></pre>
<p>I have installed Grafana in Kubernetes. I am trying to do everything automaticaly by scripts. I am able to intall grafana, import datasouce and dashobards. But i would like to also add a Notification channel to slack BUT not in web UI but somewhere in the config. It there any possibiluty to do that?</p> <p>Jakub</p>
<p>The easiest way for now is to use the Grafana API:</p> <pre><code>POST /api/alert-notifications

{
  "name": "new alert notification",  //Required
  "type": "email",                   //Required
  "isDefault": false,
  "sendReminder": false,
  "settings": {
    "addresses": "[email protected];[email protected]"
  }
}
</code></pre> <p>Docs: <a href="http://docs.grafana.org/http_api/alerting/#get-alert-notifications" rel="nofollow noreferrer">http://docs.grafana.org/http_api/alerting/#get-alert-notifications</a></p>
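<p>Put together for a Slack channel, the call could look something like this; the credentials, Grafana address and webhook URL are placeholders you would replace with your own:</p> <pre><code>curl -s -X POST http://admin:admin@grafana.example.com:3000/api/alert-notifications \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "slack-alerts",
        "type": "slack",
        "isDefault": true,
        "settings": {
          "url": "https://hooks.slack.com/services/T000/B000/XXXX"
        }
      }'
</code></pre> <p>You can run a call like this from a Kubernetes Job or a post-install script so the channel is created automatically after Grafana starts.</p>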
<p>So I am trying to run node conformance test for kubernetes. <a href="https://kubernetes.io/docs/setup/node-conformance/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/node-conformance/</a> Heres the thing , its complaining about unsupported docker version. After a look into the go code it requires a 1.7.x version of docker however I cant pull that version down as its unsupported , anyone solved this problem? here is my output:</p> <pre><code>~# docker run -it --privileged --net=host -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result k8s.gcr.io/node-test:0.2 Running Suite: E2eNode Suite ============================ Random Seed: 1547126906 - Will randomize all specs Will run 88 of 162 specs Running in parallel across 8 nodes OS: Linux KERNEL_VERSION: 4.15.0-43-generic CONFIG_NAMESPACES: enabled CONFIG_NET_NS: enabled CONFIG_PID_NS: enabled CONFIG_IPC_NS: enabled CONFIG_UTS_NS: enabled CONFIG_CGROUPS: enabled CONFIG_CGROUP_CPUACCT: enabled CONFIG_CGROUP_DEVICE: enabled CONFIG_CGROUP_FREEZER: enabled CONFIG_CGROUP_SCHED: enabled CONFIG_CPUSETS: enabled CONFIG_MEMCG: enabled CONFIG_INET: enabled CONFIG_EXT4_FS: enabled CONFIG_PROC_FS: enabled CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module) CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module) CONFIG_OVERLAY_FS: enabled (as module) CONFIG_AUFS_FS: enabled (as module) CONFIG_BLK_DEV_DM: enabled CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled DOCKER_VERSION: 18.06.0-ce F0110 13:28:26.746125 129 e2e_node_suite_test.go:96] system validation failed: unsupported docker version: 18.06.0-ce </code></pre>
<p>A couple of things I can suggest for running the conformance tests:</p> <p>1) Forget about <a href="https://kubernetes.io/docs/setup/node-conformance/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/node-conformance/</a></p> <p>2) Install <a href="https://github.com/kubernetes/test-infra/tree/master/kubetest" rel="nofollow noreferrer">kubetest</a> and follow the <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md" rel="nofollow noreferrer">Conformance Testing in Kubernetes</a> instructions</p> <p>3) Use the <a href="https://github.com/heptio/sonobuoy" rel="nofollow noreferrer">sonobuoy</a> solution from Heptio. You can find more information here: <a href="https://github.com/heptio/sonobuoy/blob/master/docs/conformance-testing.md" rel="nofollow noreferrer">Conformance Testing - 1.11+</a></p> <p>Good luck!</p>
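<p>If you go the sonobuoy route, the basic flow is roughly the following (flag behaviour can differ slightly between sonobuoy releases, so check <code>sonobuoy --help</code> for your version):</p> <pre><code># run the conformance plugin against the cluster in your current kubeconfig context
sonobuoy run

# watch progress until it reports the run is complete
sonobuoy status

# download the results tarball and unpack it for inspection
sonobuoy retrieve ./results
mkdir -p ./results/extracted
tar xzf ./results/*.tar.gz -C ./results/extracted

# clean up the sonobuoy namespace and resources afterwards
sonobuoy delete
</code></pre>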
<p>If I expose a (single) web service (say <code>http://a.b.c.d</code> or <code>https://a.b.c.d</code>) on a (small) Kubernetes 1.13 cluster, what is the benefit of using <code>Ingress</code> over a <code>Service</code> of type <code>ClusterIP</code> with <code>externalIPs [ a.b.c.d ]</code> alone?</p> <p>The address <code>a.b.c.d</code> is routed to one of my cluster nodes. <code>Ingress</code> requires installing and maintaining an <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress controller</a>, so I am wondering when this is justified.</p>
<ul> <li>Each service you expose directly (for example via <code>externalIPs</code> or a LoadBalancer) needs its own externally reachable IP address, whereas an Ingress only requires a single IP even if you want to provide access to dozens of services.</li> <li>You can also forward client requests to the corresponding service based on the host- and path-based routing provided by Ingress.</li> <li>As Ingresses operate at layer 7 (the application layer), they can also provide features like cookie-based session affinity, which is not possible with plain Services.</li> </ul>
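<p>For illustration, a single Ingress resource of roughly this shape (host and service names are placeholders) fans out two hosts behind one IP and one ingress controller, which with plain Services would need two separately exposed IPs:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-fanout
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api-service   # placeholder service names
          servicePort: 80
  - host: web.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
</code></pre>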
<p>I am running Traefik on Kubernetes and I have create an Ingress with the following configuration: </p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: whitelist-ingress annotations: kubernetes.io/ingress.class: traefik traefik.frontend.rule.type: PathPrefix traefik.ingress.kubernetes.io/whitelist-source-range: "10.10.10.10/32, 10.10.2.10/23" ingress.kubernetes.io/whitelist-x-forwarded-for: "true" traefik.ingress.kubernetes.io/preserve-host: "true" spec: rules: - host: http: paths: - path: /endpoint backend: serviceName: endpoint-service servicePort: endpoint-port --- </code></pre> <p>When I do a POST on the above endpoint, Traefik logs that the incoming IP is 172.16.0.1 and so my whitelist is not triggered. Doing an ifconfig I see that IP belongs to Docker </p> <pre><code>docker0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 172.26.0.1 netmask 255.255.0.0 broadcast 172.26.255.255 </code></pre> <p>How can I keep the original IP instead of the docker one? </p> <p>EDIT</p> <p>Traefik is exposed as LoadBalancer and the port is 443 over SSL</p> <p>This is its yml configuration</p> <pre><code>--- kind: Service apiVersion: v1 metadata: name: traefik annotations: {} # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 spec: selector: k8s-app: traefik-ingress ports: - protocol: TCP port: 80 targetPort: 80 name: http - protocol: TCP port: 443 targetPort: 443 name: https type: LoadBalancer externalTrafficPolicy: Local externalIPs: - &lt;machine-ip&gt; --- kind: Deployment apiVersion: apps/v1 metadata: name: traefik-ingress-controller namespace: default labels: k8s-app: traefik-ingress spec: replicas: 2 selector: matchLabels: k8s-app: traefik-ingress template: metadata: labels: k8s-app: traefik-ingress name: traefik-ingress spec: hostNetwork: true serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 35 volumes: - name: proxy-certs secret: secretName: proxy-certs - name: traefik-configmap configMap: name: traefik-configmap containers: - image: traefik:1.7.6 name: traefik-ingress imagePullPolicy: IfNotPresent resources: limits: cpu: 200m memory: 900Mi requests: cpu: 25m memory: 512Mi livenessProbe: failureThreshold: 2 httpGet: path: /ping port: 80 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 5 readinessProbe: failureThreshold: 2 httpGet: path: /ping port: 80 scheme: HTTP periodSeconds: 5 volumeMounts: - mountPath: "/ssl" name: "proxy-certs" - mountPath: "/config" name: "traefik-configmap" ports: - name: http containerPort: 80 - name: https containerPort: 443 - name: dashboard containerPort: 8080 args: - --logLevel=DEBUG - --configfile=/config/traefik.toml --- </code></pre> <p>As you can see here is the output of kubectl get svc </p> <pre><code>traefik LoadBalancer 10.100.116.42 &lt;machine-ip&gt; 80:30222/TCP,443:31578/TCP &lt;days-up&gt; </code></pre> <p>Note that Traefik is running in a single node kubernetes cluster (master/worker on the same node). </p>
<p>In the LoadBalancer Service type documentation, <a href="https://kubernetes.io/docs/concepts/services-networking/#ssl-support-on-aws" rel="nofollow noreferrer">SSL support on AWS</a>, you can read the following statement:</p> <blockquote> <p>HTTP and HTTPS will select layer 7 proxying: the ELB will terminate the connection with the user, parse headers and inject the X-Forwarded-For header with the user’s IP address (pods will only see the IP address of the ELB at the other end of its connection) when forwarding requests.</p> </blockquote> <p>So add the following annotation to your Traefik service:</p> <pre><code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https </code></pre> <p>It should then work together with the <code>ingress.kubernetes.io/whitelist-x-forwarded-for: "true"</code> annotation already present in your ingress config, since the forwarded header is added by the AWS load balancer.</p> <p>Disclaimer: I have not tested that solution.</p> <p>Regards.</p>
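<p>Concretely, and still untested, that would mean patching the Service manifest from the question along these lines (only the annotation is new; the rest of the spec stays as you posted it). Keep in mind the annotation only has an effect if the LoadBalancer is actually provisioned by the AWS cloud provider:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: traefik
  annotations:
    # ask the AWS cloud provider for layer 7 proxying so the ELB injects X-Forwarded-For
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
spec:
  # ... unchanged from the question ...
</code></pre>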
<p>I've just successfully followed <a href="https://docs.aws.amazon.com/en_us/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">AWS EKS Getting Started Guide</a>, and now I have operational Kubernetes cluster of 3 worker nodes.</p> <p>Worker node EC2 instances have auto-assigned public IPs:</p> <pre><code>IPv4 Public IP: 18.197.201.199 Private IPs: 192.168.180.57, 192.168.148.90 Secondary private IPs: 192.168.170.137, 192.168.180.185, 192.168.161.170, 192.168.133.109, 192.168.182.189, 192.168.189.234, 192.168.166.204, 192.168.156.144, 192.168.133.148, 192.168.179.151 </code></pre> <p>In order to connect to private off-AWS resources, firewall rules require node public IPs be from specific pool of Elastic IPs. (More specifically, worker nodes must access private Docker registry behind the corporate firewall, which white-lists several AWS Elastic IPs.) The simplest seems to override auto-assigned public node IPs with pre-defined Elastic IPs; however AWS allows to associate Elastic IP only with a specific private IP.</p> <p>How do I proceed to replace auto-assigned public IPs with Elastic IPs?</p>
<p>Remember that nodes can come and go. </p> <p>You wouldn't want a specific node in your cluster configured to an Elastic IP that was cleared for your off-AWS resource(s).</p> <p>Instead you would have a NAT Gateway assigned an Elastic IP and cluster node(s) in a private subnet that use that NAT Gateway for outbound communication.</p> <p>This configuration is described beginning on page 85 of this pdf. <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf</a></p>
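<p>If you are scripting the network setup rather than using the CloudFormation template, the rough shape with the AWS CLI is sketched below. The subnet and route-table IDs are placeholders, and the worker nodes must sit in the private subnet whose route table points at the NAT gateway:</p> <pre><code># allocate one of the Elastic IPs to be whitelisted (or reuse an existing allocation)
ALLOC_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)

# create a NAT gateway in a PUBLIC subnet using that EIP
NAT_ID=$(aws ec2 create-nat-gateway \
  --subnet-id subnet-PUBLIC123 \
  --allocation-id "$ALLOC_ID" \
  --query 'NatGateway.NatGatewayId' --output text)

# send the PRIVATE subnet's outbound traffic through the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-PRIVATE123 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_ID"
</code></pre> <p>With that in place, everything the worker nodes send to the corporate registry leaves AWS from the NAT gateway's Elastic IP, regardless of which nodes come and go.</p>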
<p>I would like to set up auto-scaling in Azure Pipelines for the deployment of containers with Kubernetes. How can I do that if I want to auto-scale by the rules below:</p> <blockquote> <ol> <li>it depends on the number of messages in a ServiceBus message queue</li> <li>it will scale inside a node first (better to set a maximum number of pods per node, as it depends on the thread count?)</li> <li>if a node reaches its maximum pods, then it will scale out to use another node (can it scale to use the maximum pods per node as well? i.e. 3 pods for 1 node, 6 pods for 2 nodes, not a total of 4/5 pods)</li> </ol> </blockquote> <p>Finally, how do I set this up using Azure Pipelines? (the above is only set in a YAML file, right?)</p>
<p>I don't think there is a way to scale based on the number of messages in the ServiceBus queue (at least not natively in Kubernetes). That being said, you should use the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> together with the <a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling" rel="nofollow noreferrer">Cluster Autoscaler</a>. These are the native Kubernetes mechanisms to scale pods and the cluster based on the load on the pods.</p> <p>The cluster autoscaler is in preview with AKS: <a href="https://learn.microsoft.com/en-us/azure/aks/autoscaler" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/autoscaler</a></p> <p>Another approach I've seen: using CronJobs. Just start your message processors every minute with CronJobs and let them scale that way. This is an easy approach that doesn't require a lot of configuration. You can drop the pod autoscaler with this approach and only use the cluster autoscaler. I haven't used this approach, but it looks promising.</p>
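<p>As a concrete starting point for the HPA, a CPU-based autoscaler can be created with a single command (the deployment name and thresholds here are placeholders; scaling on queue length itself would require custom metrics, which is outside this sketch):</p> <pre><code># scale the hypothetical message-processor deployment between 1 and 10 replicas
# based on average CPU utilisation across its pods
kubectl autoscale deployment message-processor --cpu-percent=70 --min=1 --max=10

# inspect the autoscaler's current state
kubectl get hpa message-processor
</code></pre>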
<p>I have a kubernetes service running on Azure. After the deployment and service are created, the service publishes an External-IP address and I am able to access the service on that IP:Port.</p> <p>However, I want to access the service through a regular domain name. I know that the kubernetes cluster running on Azure has its own DNS, but how can I figure out what the service DNS name is???</p> <p>I am running multiple services, and they refer to one another using the &lt;_ServiceName>.&lt;_Namespace>.svc.cluster.local naming convention, but if I attempt to access the Service using &lt;_ServiceName>.&lt;_Namespace>.svc.&lt;_kubernetesDNS>.&lt;_location>.azureapp.com, it doesnt work.</p> <p>Any help would be greatly appreciated.</p>
<p>Firstly, in order to use DNS you should have a Service of type LoadBalancer. It will create an external IP for your service. So if you have a Service whose type is LoadBalancer, you can get its external IP address with the command below:</p> <pre><code>kubectl get services --all-namespaces </code></pre> <p>Then copy the external IP and run the commands below via PowerShell.</p> <p>P.S. Replace the IP address with your own external IP address and the service name with your own service name:</p> <pre><code>$IP="23.101.60.87"
$DNSNAME="yourservicename-aks"
$PUBLICIPID=az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME
</code></pre> <p>After these commands, run the command below via PowerShell again and it will show you your DNS name:</p> <pre><code>az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[dnsSettings.fqdn]" -o table </code></pre> <p>Reference: <a href="http://alakbarv.azurewebsites.net/2019/01/25/azure-kubernetes-service-aks-get-dns-of-your-service/" rel="nofollow noreferrer">http://alakbarv.azurewebsites.net/2019/01/25/azure-kubernetes-service-aks-get-dns-of-your-service/</a></p>
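<p>The FQDN returned by that last command has the form <code>&lt;dns-label&gt;.&lt;region&gt;.cloudapp.azure.com</code>, so assuming the label above and a cluster in, say, West Europe (the region is a placeholder), you could then reach your service like this:</p> <pre><code># the port is whatever your LoadBalancer Service exposes
curl http://yourservicename-aks.westeurope.cloudapp.azure.com:80/
</code></pre>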
<p>I have a k8s service/deployment in a minikube cluster (name <code>amq</code> in <code>default</code> namespace:</p> <pre><code>D20181472:argo-k8s gms$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argo argo-ui ClusterIP 10.97.242.57 &lt;none&gt; 80/TCP 5h19m default amq LoadBalancer 10.102.205.126 &lt;pending&gt; 61616:32514/TCP 4m4s default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5h23m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 5h23m </code></pre> <p>I spun up infoblox/dnstools, and tried <code>nslookup</code>, <code>dig</code> and <code>ping</code> of <code>amq.default</code> with the following results:</p> <pre><code>dnstools# nslookup amq.default Server: 10.96.0.10 Address: 10.96.0.10#53 Name: amq.default.svc.cluster.local Address: 10.102.205.126 dnstools# ping amq.default PING amq.default (10.102.205.126): 56 data bytes ^C --- amq.default ping statistics --- 28 packets transmitted, 0 packets received, 100% packet loss dnstools# dig amq.default ; &lt;&lt;&gt;&gt; DiG 9.11.3 &lt;&lt;&gt;&gt; amq.default ;; global options: +cmd ;; Got answer: ;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NXDOMAIN, id: 15104 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;amq.default. IN A ;; Query time: 32 msec ;; SERVER: 10.96.0.10#53(10.96.0.10) ;; WHEN: Sat Jan 26 01:58:13 UTC 2019 ;; MSG SIZE rcvd: 29 dnstools# ping amq.default PING amq.default (10.102.205.126): 56 data bytes ^C --- amq.default ping statistics --- 897 packets transmitted, 0 packets received, 100% packet loss </code></pre> <p>(NB: pinging the ip address directly gives the same result)</p> <p>I admittedly am not very knowledgable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it. </p>
<blockquote> <p>I admittedly am not very knowledgable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it.</p> </blockquote> <p>Because <code>Service</code> IP addresses are figments of your cluster's imagination, caused by either iptables or ipvs, and don't actually exist. You can see them with <code>iptables -t nat -L -n</code> on any Node that is running <code>kube-proxy</code> (or <code>ipvsadm -ln</code>), as is described by the helpful <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#iptables" rel="noreferrer">Debug[-ing] Services</a> page</p> <p>Since they are not real IPs bound to actual NICs, they don't respond to any traffic other than the port numbers registered in the <code>Service</code> resource. The correct way of testing connectivity against a service is with something like <code>curl</code> or <code>netcat</code> and using the port number upon which you are expecting application traffic to travel.</p>
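<p>For example, from the same dnstools pod you could check the actual service port (61616 for the <code>amq</code> service) instead of ICMP, assuming <code>nc</code> is available in that image:</p> <pre><code># TCP connect test against the port the Service actually exposes
nc -vz amq.default 61616

# for an HTTP service you would use something like:
# wget -qO- http://some-http-service.default/
</code></pre>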
<p>I'm trying to deploy GitLab on Kubernetes using minikube through <a href="https://docs.gitlab.com/ce/install/kubernetes/gitlab_chart.html#deployment-of-gitlab-to-kubernetes" rel="nofollow noreferrer">this</a> tutorial, but I don't know what values to put in the fields <code>global.hosts.domain</code>, <code>global.hosts.externalIP</code> and <code>certmanager-issuer.email</code>.</p> <p>The tutorial is very poor in explanations. I'm stuck in this step. Can someone tell me what are this fields and what should I put on them?</p>
<blockquote> <p>I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields <code>global.hosts.domain</code>, <code>global.hosts.externalIP</code> and <code>certmanager-issuer.email</code>.</p> </blockquote> <p>For the domain, you can likely use whatever you'd like, just be aware that when gitlab generates links that are designed to point to itself they won't resolve. You can work-around that with something like <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" rel="nofollow noreferrer"><code>dnsmasq</code></a> or editing <code>/etc/hosts</code>, if it's important to you</p> <p>For the externalIP, that will be what <code>minikube ip</code> emits, and is the IP through which you will communicate with gitlab (since you will not be able to use the Pod's IP addresses outside of minikube). If gitlab does not use a <code>Service</code> of type <code>NodePort</code>, you're in for some more hoop-jumping to expose those ports via minikube's IP</p> <p>The <code>certmanager-issuer.email</code> you can just forget about, because it 100% will not issue you a Let's Encrypt cert running on minikube unless they have fixed cermanager to use the dns01 protocol. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your gitlab instance, then issue the instance a self-signed cert and call it a draw.</p> <blockquote> <p>The tutorial is very poor in explanations.</p> </blockquote> <p>That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.</p>
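<p>For what it's worth, a minimal sketch of wiring those values up on minikube might look like this; the chart flag names are the ones the GitLab chart documented at the time, and nip.io is just one convenient way to get a wildcard domain that resolves back to the minikube IP (the email is a throwaway placeholder, per the caveat above):</p> <pre><code>MINIKUBE_IP=$(minikube ip)

helm repo add gitlab https://charts.gitlab.io/
helm repo update

# *.$MINIKUBE_IP.nip.io resolves to $MINIKUBE_IP, avoiding /etc/hosts edits
helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600 \
  --set global.hosts.domain="${MINIKUBE_IP}.nip.io" \
  --set global.hosts.externalIP="${MINIKUBE_IP}" \
  --set certmanager-issuer.email=admin@example.com
</code></pre>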
<p>I have an Nginx ingress controller set up on my kubernetes cluster, which by default does an https redirect for any requests that it receives, so <code>http://example.com</code> is automatically forwarded on to <code>https://example.com</code>.</p> <p>I now have a host that I need to serve over http and not https, essentially excluding it from the ssl redirect. What I have found is that I can disable the ssl redirect across the whole ingress, but not for a specific host. </p> <p>My Ingress yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - mysslsite.co.uk secretName: tls-secret rules: - host: my-ssl-site.co.uk http: paths: - path: / backend: serviceName: my-service servicePort: 80 - host: my-non-ssl-site.co.uk http: paths: - path: / backend: serviceName: my-other-service servicePort: 80 </code></pre> <p>My Config Map:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: labels: app: nginx-ingress chart: nginx-ingress-0.28.3 component: controller heritage: Tiller release: nginx-ingress name: undercooked-moth-nginx-ingress-controller namespace: default data: proxy-buffer-size: "512k" client-header-buffer-size: "512k" proxy-body-size: "100m" large-client-header-buffers: "4 512k" http2-max-field-size: "512k" http2-max-header-size: "512k" fastcgi_buffers: "16 16k" fastcgi_buffer_size: "32k" </code></pre> <p>What I have tried:</p> <ol> <li><p>Attempt to turn off ssl redirect across the board and set a rule to redirect to the site requiring ssl to https by setting the annotation <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> and adding the following config snippet:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'my-ssl-site.co.uk' ) { rewrite ^ https://my-ssl-site.co.uk$request_uri permanent; } </code></pre> <p>This does remove the https redirect but results in a <code>too many redirects</code> error for the site requiring ssl.</p></li> <li><p>Attempted to add rules in the ConfigMap as per <a href="https://stackoverflow.com/a/49857465/1005609">this answer</a> to turn off ssl redirect and handle the conditional redirect in a server config snippet but this still resulted in an ssl redirect.</p></li> <li><p>Tried to add a second ingress controller so that one could have ssl redirect enabled and the other one could have it turned off. I created the controller but I think I also need to create a second nginx ingress and configure and label the apps that will be returned to each? This seems like overkill when all I want to do is exclude one service on the cluster from the ssl redirect.</p></li> </ol> <p>Is there anything obvious I am missing? It feels as though it shouldn't be this hard to add a simple rule to exclude one host from the ssl-redirect.</p>
<p>You can create two Ingress objects, one for each site in the same namespace.</p> <p>Use annotation <code>nginx.ingress.kubernetes.io/ssl-redirect: "true"</code> for SSL site</p> <p>Use annotation <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> for Non-SSL site</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cmac-ingress namespace: ns1 annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: tls: - hosts: - my-ssl-site.co.uk secretName: testsecret-tls rules: - host: my-ssl-site.co.uk http: paths: - path: / backend: serviceName: my-service servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cmac-ingress1 namespace: ns1 annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" spec: tls: - hosts: - my-site.co.uk secretName: testsecret-tls rules: - host: my-site.co.uk http: paths: - path: / backend: serviceName: my-service servicePort: 80 </code></pre> <p>Here is the result from ingress-controller <code>nginx.conf</code> file:</p> <pre><code> ## start server my-site.co.uk server { server_name my-site.co.uk ; listen 80; set $proxy_upstream_name "-"; listen 443 ssl http2; # PEM sha: ffa288482443e529d72a0984724f79d5267a2a22 ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem; ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem; location / { &lt;some lines skipped&gt; if ($scheme = https) { more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains"; } &lt;some lines skipped&gt; } } ## end server my-site.co.uk ## start server my-ssl-site.co.uk server { server_name my-ssl-site.co.uk ; listen 80; set $proxy_upstream_name "-"; listen 443 ssl http2; # PEM sha: ffa288482443e529d72a0984724f79d5267a2a22 ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem; ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem; location / { &lt;some lines skipped&gt; if ($scheme = https) { more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains"; } # enforce ssl on server side if ($redirect_to_https) { return 308 https://$best_http_host$request_uri; } &lt;some lines skipped&gt; } } ## end server my-ssl-site.co.uk </code></pre> <p>You can find additional redirection section in the SSL-enforced site definition:</p> <pre><code># enforce ssl on server side if ($redirect_to_https) { return 308 https://$best_http_host$request_uri; } </code></pre>
<p>I have the following pods:</p> <pre><code>NAME READY STATUS RESTARTS AGE xxx-myactivities-79f49cdfb4-nwg22 1/1 Terminating 0 10h xxx-mysearch-55864b5c59-6bnwl 1/1 Terminating 0 1d xxx-mysearch-55864b5c59-rpn48 1/1 Terminating 0 13h xxx-mysearch-6ff9bbb7cb-9qgbb 1/1 Terminating 0 3d </code></pre> <p>I am running the following code to forcefully delete those pods:</p> <pre><code># # Clean up dying pods # pods=$( kubectl get pods | grep -v Running | tail -n +2 | awk -F " " '{print $1}' ) for pod in $pods; do kubectl delete pod $pod --force done </code></pre> <p>Here is the output:</p> <pre><code>pod "xxx-myactivities-79f49cdfb4-nwg22" deleted pod "xxx-mysearch-55864b5c59-6bnwl" deleted pod "xxx-mysearch-55864b5c59-rpn48" deleted pod "xxx-mysearch-6ff9bbb7cb-9qgbb" deleted </code></pre> <p>After cleaning up, those pods still hang around.</p> <pre><code>NAME READY STATUS RESTARTS AGE xxx-myactivities-79f49cdfb4-nwg22 1/1 Terminating 0 10h xxx-mysearch-55864b5c59-6bnwl 1/1 Terminating 0 1d xxx-mysearch-55864b5c59-rpn48 1/1 Terminating 0 13h xxx-mysearch-6ff9bbb7cb-9qgbb 1/1 Terminating 0 3d </code></pre> <p>How do I clean up those pods?</p>
<blockquote> <p>To clean up the pods you need to delete the <strong>Deployments</strong> that own them.</p> </blockquote> <p>First, find the Deployments that exist:</p> <pre><code>$ kubectl get deployments --all-namespaces
NAME                               READY   STATUS    RESTARTS   AGE
chetabahana-web-584b95d576-62ccj   1/1     Running   0          20m
tutorial-web-56fbccc56b-wbwjq      1/1     Running   0          1m
</code></pre> <p>Delete the deployment &lt;NAME&gt;-xxxx like this:</p> <pre><code>$ kubectl delete deployment &lt;NAME&gt; </code></pre> <p>For example, to delete <code>tutorial-web-56fbccc56b-wbwjq</code> run:</p> <pre><code>$ kubectl delete deployment tutorial </code></pre> <p>Then all corresponding pods of <code>tutorial-xxxx</code> will terminate by themselves.</p> <pre><code>NAME                               READY   STATUS        RESTARTS   AGE
chetabahana-web-584b95d576-62ccj   1/1     Running       0          20m
tutorial-web-56fbccc56b-wbwjq      0/1     Terminating   0          1m
</code></pre>
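<p>If you first want to confirm what (if anything) still owns a stuck pod before deleting things, or to force-remove pods that stay in Terminating even after their owner is gone, here is a sketch using one of the pod names from the question:</p> <pre><code># show the kind and name of whatever owns the pod (usually a ReplicaSet)
kubectl get pod xxx-mysearch-55864b5c59-6bnwl \
  -o jsonpath='{.metadata.ownerReferences[*].kind}{" "}{.metadata.ownerReferences[*].name}{"\n"}'

# last resort: remove the pod object immediately, skipping the graceful-shutdown wait
kubectl delete pod xxx-mysearch-55864b5c59-6bnwl --grace-period=0 --force
</code></pre>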
<p>I would like to know if it is possible to abort a <code>helm install</code> after I have fired it. I could not find anything under <code>helm install --help</code> or <code>helm update --help</code>.</p> <p><a href="https://helm.sh/" rel="nofollow noreferrer">Info about helm.</a> </p>
<p>I think all you can do is abort the command from the command line with control-C or command-C (assuming you ran it manually, if it's from CI then you can presumably abort the job). Then you can do <code>helm list</code> to find the new release and <code>helm delete &lt;release_name&gt; --purge --no-hooks</code>, since it's an <code>install</code> (it <a href="https://github.com/helm/helm/issues/2956" rel="nofollow noreferrer">would be trickier if it were an upgrade</a>).</p>
<p>How do I get the current namespace of a deployment/service using the kubernetes client-go API? It doesn't seem to be in the client object or in the config.</p>
<p>Using</p> <pre><code>ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace") </code></pre> <p>works but is ugly, when the desired implementation is present in the <code>Namespace()</code> method of <code>inClusterClientConfig</code>. But how would you get that object starting from <code>rest.InClusterConfig()</code>? It is only instantiable from outside the package via <code>NewNonInteractiveDeferredLoadingClientConfig</code>.</p> <p>I see <a href="https://github.com/kubernetes/kubernetes/pull/63707" rel="noreferrer">kubernetes #63707</a> which looks related but was abandoned.</p>
<p>Do you know (if it is possible) how to reserve threads/memory for a specific endpoint in a spring boot microservice?</p> <p>I've one <strong>microservice</strong> that accepts <strong>HTTP Requests via Spring MVC</strong>, and those requests trigger http calls to <strong>3rd system</strong>, which sometimes is partially degraded, and it responds very slow. I can't reduce the timeout time because there are some calls that are very slow by nature.</p> <p>I've the spring-boot-actuator <strong><code>/health</code></strong> endpoint enabled and I use it like a container <strong><code>livenessProbe</code></strong> in a <strong>kubernetes cluster</strong>. Sometimes, when the 3rd system is degraded, the microservice doesn't respond to <code>/health</code> endpoint and kubernetes restarts my service.</p> <p>This is because I'm using a <strong>RestTemplate to make HTTP calls</strong>, so I'm continuously creating new threads, and JVM starts to have problems with the memory.</p> <p>I have thought about some solutions:</p> <ol> <li><p>Implement a high availability “/health” endpoint, reserve threads, or something like that.</p> </li> <li><p>Use an async http client.</p> </li> <li><p>Implement a Circuit Breaker.</p> </li> <li><p>Configure custom timeouts per 3rd endpoint that I'm using.</p> </li> <li><p>Create other small service (golang) and deploy it in the same pod. This service is going to process the liveness probe.</p> </li> <li><p>Migrate/Refactor services to small services, and maybe with other framework/languages like Vert.x, go, etc.</p> </li> </ol> <p>What do you think?</p>
<p>The actuator health endpoint is very convenient with Spring boot - almost too convenient in this context as it does deeper health checks than you necessarily want in a liveness probe. For readiness you want to do deeper checks but not liveness. The idea is that if the Pod is overwhelmed for a bit and fails readiness then it will be withdrawn from the load balancing and get a breather. But if it fails liveness it will be restarted. So you want only minimal checks in liveness (<a href="https://stackoverflow.com/questions/53873153/should-health-checks-call-other-app-health-checks">Should Health Checks call other App Health Checks</a>). By using actuator health for both there is no way for your busy Pods to get a breather as they get killed first. And kubernetes is periodically calling the http endpoint in performing both probes, which contributes further to your thread usage problem (do consider the periodSeconds on the probes). </p> <p>For your case you could define a liveness command and not an http probe - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command</a>. The command could just check that the Java process is running (so kinda similar to your go-based probe suggestion). </p> <p>For many cases using the actuator for liveness would be fine (think apps that hit a different constraint before threads, which would be your case if you went async/non-blocking with the reactive stack). Yours is one where it can cause problems - the actuator's probing of availability for dependencies like message brokers can be another where you get excessive restarts (in that case on first deploy). </p>
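<p>In pod-spec terms the split could look roughly like this; the exec command is just a placeholder for "is the Java process alive" (use whatever cheap check your image supports), and the actuator path/port depend on your Boot version and management configuration:</p> <pre><code>livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - pidof java            # placeholder: any cheap "process is alive" check
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:                  # keep the deeper actuator check for readiness only
    path: /actuator/health
    port: 8080
  periodSeconds: 10
</code></pre>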
<p>I am not able to see any log output when deploying a very simple Pod:</p> <p>myconfig.yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: counter spec: containers: - name: count image: busybox args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] </code></pre> <p>then</p> <pre><code>kubectl apply -f myconfig.yaml </code></pre> <p>This was taken from this official tutorial: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes</a> </p> <p>The pod appears to be running fine:</p> <pre><code>kubectl describe pod counter Name: counter Namespace: default Node: ip-10-0-0-43.ec2.internal/10.0.0.43 Start Time: Tue, 20 Nov 2018 12:05:07 -0500 Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0... Status: Running IP: 10.0.0.81 Containers: count: Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303 Image: busybox Image ID: docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: /bin/sh -c i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done State: Running Started: Tue, 20 Nov 2018 12:05:08 -0500 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-r6tr6: Type: Secret (a volume populated by a Secret) SecretName: default-token-r6tr6 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal Normal SuccessfulMountVolume 16m kubelet, ip-10-0-0-43.ec2.internal MountVolume.SetUp succeeded for volume "default-token-r6tr6" Normal Pulling 16m kubelet, ip-10-0-0-43.ec2.internal pulling image "busybox" Normal Pulled 16m kubelet, ip-10-0-0-43.ec2.internal Successfully pulled image "busybox" Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container Normal Started 16m kubelet, ip-10-0-0-43.ec2.internal Started container </code></pre> <p>Nothing appears when running:</p> <pre><code>kubectl logs counter --follow=true </code></pre>
<p>I followed Seenickode's comment and I got it working.</p> <p>I found the new CloudFormation template for 1.10.11 or 1.11.5 (the current versions in AWS) useful to compare with my stack.</p> <p>Here is what I learned:</p> <ol> <li>Allow ports 1025 - 65535 from the cluster security group to the worker nodes.</li> <li>Allow port 443 egress from the control plane to the worker nodes.</li> </ol> <p>Then <code>kubectl logs</code> started to work.</p> <p>Sample CloudFormation template updates here:</p> <pre><code>  NodeSecurityGroupFromControlPlaneIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535
</code></pre> <p>Also</p> <pre><code>  ControlPlaneEgressToNodeSecurityGroupOn443:
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
      GroupId:
        Ref: ControlPlaneSecurityGroup
      DestinationSecurityGroupId:
        Ref: NodeSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
</code></pre>
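<p>If you would rather patch a running cluster without re-running the CloudFormation stack, a hedged equivalent with the AWS CLI looks roughly like this (the security-group IDs are placeholders for your worker-node and control-plane groups):</p> <pre><code># allow the control plane SG to reach kubelets on the worker-node SG (ports 1025-65535)
aws ec2 authorize-security-group-ingress \
  --group-id sg-WORKERNODES \
  --protocol tcp --port 1025-65535 \
  --source-group sg-CONTROLPLANE

# allow the control plane SG outbound 443 to the worker-node SG
aws ec2 authorize-security-group-egress \
  --group-id sg-CONTROLPLANE \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,UserIdGroupPairs=[{GroupId=sg-WORKERNODES}]'
</code></pre>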