prompt (string, 65–38.7k chars) | response (string, 41–29.1k chars)
---|---
<p>I'm running Docker Desktop with the Kubernetes option turned on. I have one node called docker-for-desktop.
Now I have created a new Ubuntu Docker container. I want to add this container to my Kubernetes cluster. Can this be done? If so, how?</p>
| <p>As far as I'm aware, you cannot add a node to Docker for Desktop with Kubernetes enabled.</p>
<p>Docker for Desktop is a single-node Kubernetes or Docker Swarm cluster. You might try using <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="noreferrer">kubernetes-the-hard-way</a>, as it explains how to set up a cluster and add nodes manually without the use of <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">kubeadm</a>.</p>
<p>I don't think this will work well, though, as there will likely be a lot of issues with getting the networking to work correctly.</p>
<p>You can also use the instructions on how to <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="noreferrer">install kubeadm</a> with kubelet and kubectl on a Linux machine and add a node using <code>kubeadm join</code>.</p>
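<p>For reference, joining a worker to a kubeadm-managed cluster usually boils down to a single command run on the new node. This is only a sketch; the API server address, token, and CA hash below are placeholders, not values from this setup:</p>
<pre><code>sudo kubeadm join 192.168.0.10:6443 \
    --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-printed-by-kubeadm-init>
</code></pre>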
|
<p>I am trying to shell into the container with <code>kubectl exec -it xxxxxx</code>,</p>
<p>but it returns:</p>
<pre><code>rpc error: code = 5 desc = open /var/run/docker/libcontainerd/containerd/faf3fd49262cc738e16368001eba5e1113abcb8a87e7b818cb84af3799906149/30fe901c16e0465aa15b596bf3e4f244fb12a7e4133b6e4da5aa35167a8dfb30/shim-log.json: no such file or directory
</code></pre>
<p>I tried rebooting the node, but it did not help.</p>
| <p>The problem is with containerd. Once containerd restarts in the background, the Docker daemon still tries to process event streams against the old socket handles. After that, the error handling when the client can't connect to containerd leads to the CPU spike on the machine.</p>
<p>This is an open issue with Docker, and currently the workaround is to restart Docker:</p>
<pre><code>sudo systemctl restart docker
</code></pre>
|
<p>I am trying to get Zalenium working with Kubernetes, and I wanted to understand whether Zalenium with Kubernetes works in a hub-node architecture, i.e. can I have my Zalenium container running on one node (or the master) and my Selenium containers on another node? Any help in this direction would be greatly appreciated.</p>
<p>Thanks. </p>
<p>I have kubectl running and I've created my cluster, but I am not able to create separate pods for the Zalenium and Selenium containers, and I don't know whether they can even be connected.</p>
| <p>Right now Zalenium does not have the possibility to decide where the pods get created, so it does that wherever it is deployed, and the Zalenium pod interacts with the Kubernetes API to create the docker-selenium pods.
There is a <a href="https://github.com/zalando/zalenium/issues/662" rel="nofollow noreferrer">feature request</a> waiting for help to come, so contributions are welcome.
Side note: I think this does not belong on SO, so if you have more questions come to the #zalenium channel in <a href="https://seleniumhq.slack.com/join/shared_invite/enQtNTE4MTc4MzYwMjkxLWEyNTZkMDgzNWIwZmY1ZTlmMzg4ZjM1YzZkNGUwZGFlMWE2OTYxMDYxODA1ZWJlMzZjYjc3MmE3ODA1OGZmZTk" rel="nofollow noreferrer">Slack</a>.</p>
|
<p>I am currently setting the <code>dnsPolicy</code> configuration in the pod spec to <code>Default</code> so that the pod can inherit the node's DNS configuration.</p>
<p>While this is good, it requires a re-build/re-deploy of the Docker container for the policy to take effect, and it is limited to the pod level.</p>
<p>Is there a similar policy that can be applied cluster-wide, such that new pods deployed onto the cluster automatically inherit the node's DNS configuration because of the cluster-wide policy?</p>
| <p>There isn't really a supported way to do this cluster-wide. One reason is that your <code>coredns</code> or <code>kube-dns</code> pods use <code>dnsPolicy: Default</code> rather than the default <code>dnsPolicy: ClusterFirst</code>, so changing it cluster-wide might affect your <code>coredns/kube-dns</code> pods.</p>
<p>There is, however, a more complicated approach you can take with <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">Dynamic Admission Controllers</a>: in particular, a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">MutatingAdmissionWebhook</a> that you can use to patch pods carrying certain annotations to have <code>dnsPolicy: Default</code>. For example, <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> uses this to inject the Envoy sidecar. This is a good <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="nofollow noreferrer">document</a> that describes how to build your own MutatingAdmissionWebhook.</p>
<p>Similar question: <a href="https://serverfault.com/questions/928257/is-there-a-way-to-change-the-default-dnspolicy-without-modifying-every-podspec">https://serverfault.com/questions/928257/is-there-a-way-to-change-the-default-dnspolicy-without-modifying-every-podspec</a></p>
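<p>As a rough illustration, registering such a webhook looks roughly like the following (a minimal sketch, assuming you have already built and deployed a webhook service; the names, namespace, and path are hypothetical, and the CA bundle is a placeholder):</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: dns-policy-defaulter
webhooks:
  - name: dns-policy.example.com
    clientConfig:
      service:
        name: dns-policy-webhook
        namespace: kube-system
        path: "/mutate"
      caBundle: <base64-encoded-CA-certificate>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    failurePolicy: Ignore
</code></pre>
<p>The webhook server itself would then return a JSONPatch that sets <code>spec.dnsPolicy</code> to <code>Default</code> on the incoming pods.</p>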
|
<p>I have deployed some simple services as a proof of concept: an nginx web server patched with <a href="https://stackoverflow.com/a/8217856/735231">https://stackoverflow.com/a/8217856/735231</a> for high performance.</p>
<p>I also edited <code>/etc/nginx/conf.d/default.conf</code> so that the line <code>listen 80;</code> becomes <code>listen 80 http2;</code>.</p>
<p>I am using the Locust distributed load-testing tool, with a class that swaps the <code>requests</code> module for <code>hyper</code> in order to test HTTP/2 workloads. This may not be optimal in terms of performance, but I can spawn many locust workers, so it's not a huge concern.</p>
<p>For testing, I spawned a cluster on GKE of 5 machines, 2 vCPU, 4GB RAM each, installed Helm and the charts of these services (I can post them on a gist later if useful).</p>
<p>I tested Locust with min_time=0 and max_time=0 so that it spawned as many requests as possible; with 10 workers against a single nginx instance.</p>
<p>With 10 workers, 140 "clients" total, I get ~2.1k requests per second (RPS).</p>
<pre><code>10 workers, 260 clients: I get ~2.0k RPS
10 workers, 400 clients: ~2.0k RPS
</code></pre>
<p>Now, I try to scale horizontally: I spawn 5 nginx instances and get:</p>
<pre><code>10 workers, 140 clients: ~2.1k RPS
10 workers, 280 clients: ~2.1k RPS
20 workers, 140 clients: ~1.7k RPS
20 workers, 280 clients: ~1.9k RPS
20 workers, 400 clients: ~1.9k RPS
</code></pre>
<p>The resource usage is quite low, as shown by <code>kubectl top pod</code> (this is for 10 workers, 280 clients; nginx is not resource-limited, Locust workers are limited to 1 CPU per pod):</p>
<pre><code>user@cloudshell:~ (project)$ kubectl top pod
NAME CPU(cores) MEMORY(bytes)
h2test-nginx-cc4d4c69f-4j267 34m 68Mi
h2test-nginx-cc4d4c69f-4t6k7 27m 68Mi
h2test-nginx-cc4d4c69f-l942r 30m 69Mi
h2test-nginx-cc4d4c69f-mfxf8 32m 68Mi
h2test-nginx-cc4d4c69f-p2jgs 45m 68Mi
lt-master-5f495d866c-k9tw2 3m 26Mi
lt-worker-6d8d87d6f6-cjldn 524m 32Mi
lt-worker-6d8d87d6f6-hcchj 518m 33Mi
lt-worker-6d8d87d6f6-hnq7l 500m 33Mi
lt-worker-6d8d87d6f6-kf9lj 403m 33Mi
lt-worker-6d8d87d6f6-kh7wt 438m 33Mi
lt-worker-6d8d87d6f6-lvt6j 559m 33Mi
lt-worker-6d8d87d6f6-sxxxm 503m 34Mi
lt-worker-6d8d87d6f6-xhmbj 500m 33Mi
lt-worker-6d8d87d6f6-zbq9v 431m 32Mi
lt-worker-6d8d87d6f6-zr85c 480m 33Mi
</code></pre>
<p>I ran this test on GKE for easier replication, but I got the same results in a private-cloud cluster.</p>
<p>Why does it seem not to matter how many instances of a service I spawn?</p>
<p><strong>UPDATE</strong>: As per the first answer, I'm adding information on the nodes and on what happens with a single Locust worker.</p>
<pre><code>1 worker, 1 clients: 22 RPS
1 worker, 2 clients: 45 RPS
1 worker, 4 clients: 90 RPS
1 worker, 8 clients: 174 RPS
1 worker, 16 clients: 360 RPS
32 clients: 490 RPS
40 clients: 480 RPS (this seems over max. sustainable clients per worker)
</code></pre>
<p>But above all, it seems that the root problem is that I'm at the limit of capacity:</p>
<pre><code>user@cloudshell:~ (project)$ kubectl top pod
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-sc1-default-pool-cbbb35bb-0mk4 1903m 98% 695Mi 24%
gke-sc1-default-pool-cbbb35bb-9zgl 2017m 104% 727Mi 25%
gke-sc1-default-pool-cbbb35bb-b02k 1991m 103% 854Mi 30%
gke-sc1-default-pool-cbbb35bb-mmcs 2014m 104% 776Mi 27%
gke-sc1-default-pool-cbbb35bb-t6ch 1109m 57% 743Mi 26%
</code></pre>
| <p>If I understood correctly, you ran the load test on the same cluster/nodes as your pods. This will definitely have an impact on the overall result; I would recommend you split the clients from the servers onto separate nodes so that they do not affect each other.</p>
<p>From the values you reported, it is clearly visible that the workers are consuming more CPU than the nginx servers.</p>
<p>You should check either:</p>
<ul>
<li>The host CPU utilization; it might be under high pressure from context switches because the number of threads is much higher than the number of CPUs available.</li>
<li>A network bottleneck; maybe you could try adding more nodes or increasing the worker capacity (SKU), and split clients from servers.</li>
<li>The clients do not have enough capacity to generate the load; you increase the threads, but the raw limits stay the same.</li>
</ul>
<p>You should also test individual server capacity to validate the limit of each server, so you have a baseline to compare against and can tell whether the results are in line with the expected values.</p>
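<p>One way to split clients and servers (a minimal sketch, assuming a dedicated node pool for the load generators; the label, node name, and image are assumptions, not values from this cluster) is to label the load-generator nodes and pin the Locust workers to them with a <code>nodeSelector</code>:</p>
<pre><code># kubectl label nodes <load-generator-node> role=loadgen
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lt-worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: lt-worker
  template:
    metadata:
      labels:
        app: lt-worker
    spec:
      nodeSelector:
        role: loadgen          # keeps Locust workers off the nginx nodes
      containers:
      - name: lt-worker
        image: locustio/locust # placeholder image
        resources:
          limits:
            cpu: "1"
</code></pre>
<p>The nginx Deployment could use the opposite selector (or a taint/toleration pair) so that servers and load generators never share a node.</p>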
|
<p>I'm surveying the Google Cloud tools that could be used to deploy and update a micro-service-shaped application. So far I have focused my attention on two solutions:
(a) container clusters; (b) managed instance groups plus autoscaler. Could you please help me decide which way I should go? You'll find some details and requirements below:</p>
<ul>
<li>The application PULLs tasks from a pubsub topics and write results to a another pubsub topic;</li>
<li>Tasks are independent from each other; </li>
<li>The number of worker should autoscale wrt. the CPU usage level;</li>
<li>Each worker uses up to 10GiB of RAM. </li>
<li>At startup time a worker needs several minutes (<= 5 min) to be ready to process tasks;</li>
<li>Out of the box rolling update is a plus; </li>
<li>Workers share a memcached server; apart from that, there is strictly no communication of any kind between workers;</li>
<li>I suspect there is no need for load balancing, since a worker will process a new task as soon as it can;</li>
<li>Logs are pushed to a collection API (google cloud logging or third party).</li>
</ul>
<p>I did an MWE for solution (a) and solution (b). So far I have the feeling that I won't use the Kubernetes-specific features, hence I'm more inclined towards solution (b).</p>
<p>What do you think ? </p>
<p>Bests,
François.</p>
| <p>I would say that the main difference between hosted Kubernetes and Managed Instance Groups (MIGs) is that Kubernetes operates at the abstraction level of containers, while MIGs operate on VM instances. So if it is easier for you to package your software into containers, go for Kubernetes; if it is easier to package your software into a VM image, use MIGs.</p>
|
<p>Basically, what I am trying to do is play around with the pod lifecycle and check whether we can do some cleanup/backup, such as copying logs, before the pod terminates.</p>
<p>What I need:
copy logs/heap dumps from the container to a hostPath/S3 before termination.</p>
<p>What I tried:</p>
<p>I used a preStop hook with a bash command to echo a message (just to see if it works!). I used terminationGracePeriodSeconds together with a delay in the preStop hook and toggled them to see how the process behaves. E.g., keep terminationGracePeriodSeconds at 30 s (the default) and set the preStop command to sleep for 50 s; the message should then not be generated, since the container will be terminated before the hook finishes. This works as expected.</p>
<p>My questions:</p>
<ul>
<li>what kind of processes are allowed(recommended) for a preStop hook? As copying logs/heapdumps of 15 gigs or more will take a lot of time. This time will then be used to define terminationGracePeriodSeconds</li>
<li>what happens when preStop takes more time than the set gracePeriod ?
(in case logs are huge say 10 gigs)</li>
<li>what happens if I do not have any hooks but still set terminationGracePeriodSeconds ? will the container remain up until that grace time ? </li>
</ul>
<p>I found this article which closely relates to this but could not follow through <a href="https://github.com/kubernetes/kubernetes/issues/24695" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/24695</a></p>
<p>All inputs appreciated !!</p>
| <blockquote>
<p>what kind of processes are allowed(recommended) for a preStop hook? As copying logs/heapdumps of 15 gigs or more will take a lot of time. This time will then be used to define terminationGracePeriodSeconds</p>
</blockquote>
<p>Anything goes here, it's more of an opinion and how you would like your pods to linger around. Another option is to let your pods terminate and store your data in some place (i.e, AWS S3, EBS) where data will persist past the pod lifecycle then use something like <a href="https://kubernetes.io/docs/tasks/job/" rel="nofollow noreferrer">Job</a> to clean up the data, etc.</p>
<blockquote>
<p>what happens when preStop takes more time than the set gracePeriod? (in case logs are huge say 10 gigs)</p>
</blockquote>
<p>Your preStop hook will not complete, which may mean incomplete data or data corruption.</p>
<blockquote>
<p>what happens if I do not have any hooks but still set terminationGracePeriodSeconds ? will the container remain up until that grace time ?</p>
</blockquote>
<p>This would be the sequence:</p>
<ul>
<li>A SIGTERM signal is sent to the main process in each container, and a “grace period” countdown starts.</li>
<li>If a container doesn’t terminate within the grace period, a SIGKILL signal is sent and the container is killed.</li>
</ul>
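<p>For completeness, a minimal sketch of the combination being discussed (names, image, and paths are hypothetical; the grace period must be long enough to cover the copy):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: log-backup-demo
spec:
  terminationGracePeriodSeconds: 120   # must exceed the expected copy time
  containers:
  - name: app
    image: my-app:latest               # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "cp /var/log/app/*.log /backup/ || true"]
    volumeMounts:
    - name: backup
      mountPath: /backup
  volumes:
  - name: backup
    hostPath:
      path: /tmp/pod-backups
</code></pre>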
|
<p>I've got a problem doing automatic heap dump to a mounted persistent volume in Microsoft Azure AKS (Kubernetes).</p>
<p>So the situation looks like this:</p>
<ul>
<li>Running the program with the parameter -Xmx200m causes an out-of-memory exception</li>
<li>After building, pushing and deploying the Docker image in AKS, the pod is killed and restarted after a few seconds</li>
<li>I get the message in hello.txt in the mounted volume, but no dump file is created</li>
</ul>
<p>What could be the reason for such behaviour?</p>
<p>My test program looks like this:</p>
<pre><code>import java.io._
object Main {
  def main(args: Array[String]): Unit = {
    println("Before printing test info to file")
    val pw = new PrintWriter(new File("/borsuk_data/hello.txt"))
    pw.write("Hello, world")
    pw.close
    println("Before allocating to big Array for current memory settings")
    val vectorOfDouble = Range(0, 50 * 1000 * 1000).map(x => 666.0).toArray
    println("After creating to big Array")
  }
}
</code></pre>
<p>My entrypoint.sh:</p>
<pre><code>#!/bin/sh
java -jar /root/scala-heap-dump.jar -Xmx200m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/scala-heap-dump.bin
</code></pre>
<p>My Dockerfile:</p>
<pre><code>FROM openjdk:jdk-alpine
WORKDIR /root
ADD target/scala-2.12/scala-heap-dump.jar /root/scala-heap-dump.jar
ADD etc/entrypoint.sh /root/entrypoint.sh
ENTRYPOINT ["/bin/sh","/root/entrypoint.sh"]
</code></pre>
<p>My deployment yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: scala-heap-dump
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: scala-heap-dump
    spec:
      containers:
      - name: scala-heap-dump-container
        image: PRIVATE_REPO_ADDRESS/scala-heap-dump:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 500m
            memory: "1Gi"
          limits:
            cpu: 500m
            memory: "1Gi"
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: dynamic-persistence-volume-claim
      dnsPolicy: ClusterFirst
      hostNetwork: false
      imagePullSecrets:
      - name: regsecret
</code></pre>
<p>UPDATE:
As lawrencegripper pointed out, the first issue was that the pod was OOM-killed due to the memory limits in the YAML. After changing memory to 2560Mi or higher (I've tried even ridiculous values such as CPU: 1000m and memory: 5Gi) I no longer get the OOMKilled reason. However, no dump file is created and a different kind of message appears under lastState terminated; the reason is simply: Error. Unfortunately this isn't very helpful. If anybody knows how to narrow it down, please help.</p>
<p>UPDATE 2:
I've added some println calls to the code to get a better understanding of what's going on. The logs for the killed pod are:</p>
<pre><code>Before printing test info to file
Before allocating to big Array for current memory settings
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at scala.reflect.ManifestFactory$DoubleManifest.newArray(Manifest.scala:153)
at scala.reflect.ManifestFactory$DoubleManifest.newArray(Manifest.scala:151)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:285)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
at scala.collection.AbstractTraversable.toArray(Traversable.scala:104)
at Main$.main(Main.scala:12)
at Main.main(Main.scala)
</code></pre>
<p>So as you can see, the program never reaches <code>println("After creating to big Array")</code>.</p>
| <p>I think the problem is the entrypoint.sh command. </p>
<pre><code>> java --help
Usage: java [options] <mainclass> [args...]
(to execute a class)
or java [options] -jar <jarfile> [args...]
(to execute a jar file)
</code></pre>
<p>Note that anything after the jar file is passed as arguments to your application, not to the JVM.</p>
<p>Try:</p>
<pre><code>java -Xmx200m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/scala-heap-dump.bin -jar /root/scala-heap-dump.jar
</code></pre>
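<p>As a side note (a sketch, not part of the fix above), using <code>exec</code> in entrypoint.sh makes the JVM PID 1 inside the container, so it receives SIGTERM directly from Kubernetes during pod termination:</p>
<pre><code>#!/bin/sh
exec java -Xmx200m -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/data/scala-heap-dump.bin \
     -jar /root/scala-heap-dump.jar
</code></pre>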
|
<p>If I want to develop a SaaS system and want to use K8s namespaces for isolation, i.e. I will create a namespace for every user (it's a multi-tenancy system), then how many namespaces can I have? Will K8s slow down as the number of namespaces increases?</p>
| <p>To answer your question: a namespace is a logical entity that is used to isolate one application environment from another. It doesn't consume cluster resources like CPU and memory. Ideally you can create any number of namespaces; I am not sure whether there is a limit on the number of namespaces allowed in a cluster.</p>
<p>On the other hand, it is not a good approach to have one namespace per user. Application multi-tenancy is better handled in the application code itself. Namespaces are recommended for isolating environments, e.g. one for development, one for test, one for QA and another one for production.</p>
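<p>For example, environment-level isolation as described above could be set up with (the names are illustrative):</p>
<pre><code>kubectl create namespace dev
kubectl create namespace test
kubectl create namespace qa
kubectl create namespace prod
</code></pre>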
|
<p>I'd like to access and edit files in my Kubernetes <strong>PersistentVolume</strong> on my local computer (macOS), but I cannot understand where to find those files!</p>
<p>I'm pointing my <code>hostPath</code> to <code>/tmp/wordpress-volume</code> but I cannot find it anywhere. What is the hidden secret I'm missing?</p>
<p>I'm using the following configuration on a <strong>docker-for-desktop</strong> cluster <code>Version 2.0.0.2 (30215)</code>.</p>
<h3>PersistentVolume</h3>
<pre><code>kind: PersistentVolume
metadata:
  name: wordpress-volume
spec:
  # ...
  hostPath:
    path: /tmp/wordpress-volume
</code></pre>
<h3>PersistentVolumeClaim</h3>
<pre><code>kind: PersistentVolumeClaim
metadata:
  name: wordpress-volume-claim
# ...
</code></pre>
<h3>Deployment</h3>
<pre><code>kind: Deployment
metadata:
  name: wordpress
  # ...
spec:
  containers:
  - image: wordpress:4.8-apache
    # ...
    volumeMounts:
    - name: wordpress-volume
      mountPath: /var/www/html
  volumes:
  - name: wordpress-volume
    persistentVolumeClaim:
      claimName: wordpress-volume-claim
</code></pre>
| <p>Thanks to @aman-tuladhar and some hours lost on the internet, I've found out that you just need to make sure <code>storageClassName</code> is set for your <strong>PersistentVolume</strong> and <strong>PersistentVolumeClaim</strong>.</p>
<p>As per the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="noreferrer">documentation</a>, if you want to avoid <strong>Kubernetes</strong> dynamically generating <strong>PersistentVolumes</strong> without considering the one you statically declared, you can just set an empty string <code>" "</code>.</p>
<p>In my case I've set <code>storageClassName: manual</code>.</p>
<h3>PersistentVolume</h3>
<pre><code>kind: PersistentVolume
metadata:
  name: wordpress-volume
spec:
  # ...
  storageClassName: manual
  hostPath:
    path: /tmp/wordpress-volume
</code></pre>
<h3>PersistentVolumeClaim</h3>
<pre><code>kind: PersistentVolumeClaim
metadata:
  name: wordpress-volume-claim
spec:
  storageClassName: manual
  # ...
</code></pre>
<p>This works out of the box with <code>docker-for-desktop</code> cluster (as long as <code>mountPath</code> is set to a absolute path).</p>
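<p>To double-check that the claim bound to the statically declared volume (rather than a dynamically provisioned one), something like the following should show a <code>Bound</code> status on both objects:</p>
<pre><code>kubectl get pv wordpress-volume
kubectl get pvc wordpress-volume-claim
</code></pre>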
<p>References:</p>
<ul>
<li><a href="https://medium.com/@snowmiser/kubernetes-binding-persistentvolumes-and-persistentvolumeclaims-33323b907722" rel="noreferrer">Kubernetes: Binding PersistentVolumes and PersistentVolumeClaims</a></li>
<li><a href="https://medium.com/@xcoulon/storing-data-into-persistent-volumes-on-kubernetes-fb155da16666" rel="noreferrer">Storing data into Persistent Volumes on Kubernetes</a></li>
</ul>
|
<p>Is there something like the old J2EE Platform Roles but for Kubernetes? It doesn't have to map 1:1 to these old roles of course, but I would like to have a reference to answer questions like, "The person who runs <code>helm install</code> and is responsible for knowing what all the options do is called _________?" Or, "The person who designs the autoscaling policy is called _______?" Or even, "The person who is responsible that all of the docker images used in the enterprise have been patched with the latest CVE vulnerability fixes is called ________?" Is there a standard nomenclature for this sort of thing? Where is it?</p>
<p>Thanks,</p>
<p>Ed</p>
<p><a href="https://i.stack.imgur.com/t33dS.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t33dS.gif" alt="J2EE Platform Roles"></a></p>
| <p>The closest to this is Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a>. You can create roles and bind them to groups/users/service accounts. You will have to do some heavy lifting in terms of defining the specific roles that suit your organization and the types of permissions each one needs.</p>
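<p>As a rough sketch of what such a role definition looks like (the role name, namespace, and group below are made-up examples, not standard Kubernetes roles):</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: release-operator
rules:
- apiGroups: ["", "apps"]
  resources: ["deployments", "pods", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: release-operators
  namespace: production
subjects:
- kind: Group
  name: release-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: release-operator
  apiGroup: rbac.authorization.k8s.io
</code></pre>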
<p>If you are looking for an audit trail, you can look at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">Auditing</a>.</p>
<p>Hope it helps!</p>
|
<p>I have a filebeat outside of the kubernetes cluster, installed as an application on the host. I want to ignore two namespaces in filebeat, since they are very large and I don't need them within elastichsearch. </p>
<p>Here is my input definition in filebeat.yml: </p>
<pre><code>- type: log
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*.log
  json.message_key: log
  json.keys_under_root: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: false
        host: main-backend
        kube_config: /etc/kubernetes/admin.conf
    - drop_event.when.regexp:
        or:
          - kubernetes.namespace: "kube-system"
          - kubernetes.namespace: "monitoring"
</code></pre>
<p>However, I still see a lot of logs from those namespaces in my Elasticsearch. Is there any way to debug why this is happening?</p>
| <p>Can you try it as given below?</p>
<pre><code>- drop_event:
    when:
      or:
        - not:
            equals:
              kubernetes.namespace: "kube-system"
        - not:
            equals:
              kubernetes.namespace: "monitoring"
        - regexp:
            kubernetes.pod.name: "filebeat-*"
        - regexp:
            kubernetes.pod.name: "elasticsearch-*"
</code></pre>
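<p>To see why events are (or are not) being dropped, it may also help to run Filebeat in the foreground with debug output enabled, e.g. <code>filebeat -e -d "publish"</code>, and check whether the <code>kubernetes.namespace</code> field is actually present on the published events; if <code>add_kubernetes_metadata</code> fails to enrich an event, the drop condition will never match.</p>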
|
<p>I've set-up an AKS cluster and am now trying to connect to it. My deployment YAML is here:</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
metadata:
  name: my-test
</code></pre>
<p>If I run the dashboard, I get this:</p>
<p><a href="https://i.stack.imgur.com/HqYAB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HqYAB.png" alt="KubernetesDashboard"></a></p>
<p>Which looks like it should be telling me the external endpoint, but isn't. I have a theory that this is because the Yaml file is only deploying a Pod, which is in some way not able to expose an endpoint - is that the case and if so, why? Otherwise, how can I find this endpoint?</p>
| <p>That's not how this works; you need to read up on basic Kubernetes concepts. Pods are only containers. To expose pods you need to create Services (and you need labels); to expose pods externally you need to set the Service type to LoadBalancer. You probably want to use Deployments instead of bare Pods; they are a lot easier and more reliable.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a><br>
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p>
<p>So in short, you need to add labels to your pod and create a Service of type LoadBalancer with selectors that match your pod's labels:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 443
  type: LoadBalancer
</code></pre>
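<p>Once the Service exists and the pod carries the matching label (e.g. <code>app: MyApp</code>), the provisioned address shows up under EXTERNAL-IP:</p>
<pre><code>kubectl get service my-service --watch
</code></pre>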
|
<p>I have set up Strapi on Kubernetes. Everything is running fine, but when I try to hit the APIs exposed by Strapi from my frontend application, which is served over HTTPS, I get an error because the Kubernetes Ingress has exposed Strapi over HTTP. I am clueless about how to configure Strapi for HTTPS requests. I would be glad if someone could guide me.</p>
| <p>Basically, Ingress provides different mechanisms of TLS termination.</p>
<p>If your frontend application can handle HTTPS, you should just route the TLS traffic to the respective service. If your frontend application has no TLS capabilities, you should use Ingress HTTPS termination:
<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>http:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
</code></pre>
<p>Example https config from kubernetes, how it would look if your service does not do https:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-application-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: mycertificate
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-application
          servicePort: http
</code></pre>
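<p>The <code>mycertificate</code> secret referenced above would be a standard TLS secret; assuming you already have a certificate and key on disk (the paths are placeholders), it can be created with:</p>
<pre><code>kubectl create secret tls mycertificate --cert=path/to/tls.crt --key=path/to/tls.key
</code></pre>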
|
<p>As of now we can create an NLB from K8s with the annotation <strong><em>"service.beta.kubernetes.io/aws-load-balancer-type: "nlb"</em></strong>. It will take an available public IP in the subnet.</p>
<p>So is it possible to change the subnet mapping to an Elastic IP from the AWS CLI once the NLB is created? If yes, can anyone give an example?</p>
<p>I tried to update the subnet mapping on the NLB that was created by K8s using the service annotation:</p>
<pre><code>aws elbv2 set-subnets --load-balancer-arn arnValue --subnet-mappings SubnetId=abcd,AllocationId=eipalloc-1 --region us-east-1
</code></pre>
<p>The output was:
<code>SetSubnets is not supported for load balancers of type 'network'</code></p>
| <p>An Elastic IP can be attached or changed only while creating the Network Load Balancer. Secondly, <code>set-subnets</code> works only for Application Load Balancers.</p>
<p>You will need to recreate this Network Load Balancer to achieve your goal.
Hope this helps. </p>
<p>Please refer to:
<a href="https://forums.aws.amazon.com/thread.jspa?threadID=263577" rel="nofollow noreferrer">https://forums.aws.amazon.com/thread.jspa?threadID=263577</a>
<a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_SetSubnets.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_SetSubnets.html</a></p>
|
<p>I've got the following pipeline but I don't know why it fails on the first line of code:</p>
<pre><code>pipeline {
    agent any
    environment {
        def mypods = []
    }
    stages {
        stage('Getting pods') {
            steps {
                script {
                    withKubeConfig(caCertificate: '.....', credentialsId: '.....', serverUrl: '.....') {
                        env.mypods = sh "kubectl get pod | grep Running | awk '{print \$1}'"
                    }
                }
            }
        }
        stage('Print pods') {
            steps {
                script {
                    mypods.each {
                        println "Item: $it"
                    }
                }
            }
        }
    }
}
</code></pre>
<p>I need to use a list because the <code>kubectl get pods</code> command returns a list of pods, so I have to save them and use them during the stages.
How can I create a list in a declarative pipeline?
Thanks in advance.</p>
<p>This is the error:</p>
<pre><code>Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 4: Environment variable values must either be single quoted, double quoted, or function calls. @ line 4, column 22.
def mypods = []
^
WorkflowScript: 3: No variables specified for environment @ line 3, column 5.
environment {
^
2 errors
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:131)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:125)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:560)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:521)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:320)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
</code></pre>
| <p>The declarative pipeline comes with some limitations when it comes to its syntax. You see this error because in the <code>environment</code> block you can assign only two types of expressions:</p>
<ul>
<li>strings (single or double quoted)</li>
<li>values returns from function calls</li>
</ul>
<p>However, you need to be aware that environment variables store <strong>only</strong> string values, so if you return an array (or any other type) from the function call, it will be automatically converted to its <code>toString()</code> representation.</p>
<pre><code>pipeline {
    agent any
    environment {
        MYPODS = getPods()
    }
    stages {
        stage("Test") {
            steps {
                script {
                    println "My pods = ${env.MYPODS}"
                }
            }
        }
    }
}

def getPods() {
    return ['pod1', 'pod2']
}
</code></pre>
<p>Console output:</p>
<pre class="lang-sh prettyprint-override"><code>[Pipeline] node
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script (hide)
[Pipeline] {
[Pipeline] echo
<java.lang.String@ed6c7b35 value=[pod1, pod2] hash=-311657675>
[Pipeline] echo
MYPODS = [pod1, pod2]
[Pipeline] echo
Item: [
[Pipeline] echo
Item: p
[Pipeline] echo
Item: o
[Pipeline] echo
Item: d
[Pipeline] echo
Item: 1
[Pipeline] echo
Item: ,
[Pipeline] echo
Item:
[Pipeline] echo
Item: p
[Pipeline] echo
Item: o
[Pipeline] echo
Item: d
[Pipeline] echo
Item: 2
[Pipeline] echo
Item: ]
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
</code></pre>
<h3>Solution</h3>
<p>If you want to store a list of string values, you can define it as a single string of values delimited with the <code>,</code> character. In this case you can simply tokenize it into a list of values. Consider the following example:</p>
<pre><code>pipeline {
    agent any
    environment {
        MYPODS = 'pod1,pod2,pod3'
    }
    stages {
        stage("Test") {
            steps {
                script {
                    MYPODS.tokenize(',').each {
                        println "Item: ${it}"
                    }
                }
            }
        }
    }
}
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>[Pipeline] node
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Item: pod1
[Pipeline] echo
Item: pod2
[Pipeline] echo
Item: pod3
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
</code></pre>
|
<p>I am getting two errors after deploying my object detection model for prediction using GPUs:</p>
<p>1.PodUnschedulable Cannot schedule pods: Insufficient nvidia</p>
<p>2.PodUnschedulable Cannot schedule pods: com/gpu.</p>
<p>I have two node pools. One of them is configured with a Tesla K80 GPU and has autoscaling enabled. I deploy the serving component using a ksonnet app, as described here: <a href="https://github.com/kubeflow/examples/blob/master/object_detection/tf_serving_gpu.md#deploy-serving-component" rel="nofollow noreferrer">https://github.com/kubeflow/examples/blob/master/object_detection/tf_serving_gpu.md#deploy-serving-component</a>.</p>
<p>This is the output of the <code>kubectl describe pods</code> command:</p>
<pre><code> Name: xyz-v1-5c5b57cf9c-kvjxn
Namespace: default
Node: <none>
Labels: app=xyz
pod-template-hash=1716137957
version=v1
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/xyz-v1-5c5b57cf9c
Containers:
aadhar:
Image: tensorflow/serving:1.11.1-gpu
Port: 9000/TCP
Host Port: 0/TCP
Command:
/usr/bin/tensorflow_model_server
Args:
--port=9000
--model_name=xyz
--model_base_path=gs://xyz_kuber_app-xyz-identification/export/
Limits:
cpu: 4
memory: 4Gi
nvidia.com/gpu: 1
Requests:
cpu: 1
memory: 1Gi
nvidia.com/gpu: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
aadhar-http-proxy:
Image: gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2
Port: 8000/TCP
Host Port: 0/TCP
Command:
python
/usr/src/app/server.py
--port=8000
--rpc_port=9000
--rpc_timeout=10.0
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 500m
memory: 500Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-b6dpn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b6dpn
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
nvidia.com/gpu:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 20m (x5 over 21m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were unschedulable.
Warning FailedScheduling 20m (x2 over 20m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable.
Warning FailedScheduling 16m (x9 over 19m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu.
Normal NotTriggerScaleUp 15m (x26 over 20m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
Warning FailedScheduling 2m42s (x54 over 23m) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
Normal TriggeredScaleUp 13s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 1->2 (max: 10)}]
Name: mnist-deploy-gcp-b4dd579bf-sjwj7
Namespace: default
Node: gke-kuberflow-xyz-default-pool-ab1fa086-w6q3/10.128.0.8
Start Time: Thu, 14 Feb 2019 14:44:08 +0530
Labels: app=xyz-object
pod-template-hash=608813569
version=v1
Annotations: sidecar.istio.io/inject:
Status: Running
IP: 10.36.4.18
Controlled By: ReplicaSet/mnist-deploy-gcp-b4dd579bf
Containers:
xyz-object:
Container ID: docker://921717d82b547a023034e7c8be78216493beeb55dca57f4eddb5968122e36c16
Image: tensorflow/serving:1.11.1
Image ID: docker-pullable://tensorflow/serving@sha256:a01c6475c69055c583aeda185a274942ced458d178aaeb84b4b842ae6917a0bc
Ports: 9000/TCP, 8500/TCP
Host Ports: 0/TCP, 0/TCP
Command:
/usr/bin/tensorflow_model_server
Args:
--port=9000
--rest_api_port=8500
--model_name=xyz-object
--model_base_path=gs://xyz_kuber_app-xyz-identification/export
--monitoring_config_file=/var/config/monitoring_config.txt
State: Running
Started: Thu, 14 Feb 2019 14:48:21 +0530
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 14 Feb 2019 14:45:58 +0530
Finished: Thu, 14 Feb 2019 14:48:21 +0530
Ready: True
Restart Count: 1
Limits:
cpu: 4
memory: 4Gi
Requests:
cpu: 1
memory: 1Gi
Liveness: tcp-socket :9000 delay=30s timeout=1s period=30s #success=1 #failure=3
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /secret/gcp-credentials/user-gcp-sa.json
Mounts:
/secret/gcp-credentials from gcp-credentials (rw)
/var/config/ from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mnist-deploy-gcp-config
Optional: false
gcp-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: user-gcp-sa
Optional: false
default-token-b6dpn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b6dpn
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>The output of <code>kubectl describe pods | grep gpu</code>is :</p>
<pre><code> Image: tensorflow/serving:1.11.1-gpu
nvidia.com/gpu: 1
nvidia.com/gpu: 1
nvidia.com/gpu:NoSchedule
Warning FailedScheduling 28m (x5 over 29m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were unschedulable.
Warning FailedScheduling 28m (x2 over 28m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable.
Warning FailedScheduling 24m (x9 over 27m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu.
Warning FailedScheduling 11m (x54 over 31m) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
Warning FailedScheduling 48s (x23 over 6m57s) default-scheduler 0/3 nodes are available: 3 Insufficient nvidia.com/gpu.
</code></pre>
<p>I am new to kubernetes and am not able to understand what is going wrong here.</p>
<p><strong>Update</strong>: I did have an extra pod running that I was experimenting with earlier. I shut it down after @Paul Annett pointed it out, but I still have the same error.</p>
<pre><code>Name: aadhar-v1-5c5b57cf9c-q8cd8
Namespace: default
Node: <none>
Labels: app=aadhar
pod-template-hash=1716137957
version=v1
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/aadhar-v1-5c5b57cf9c
Containers:
aadhar:
Image: tensorflow/serving:1.11.1-gpu
Port: 9000/TCP
Host Port: 0/TCP
Command:
/usr/bin/tensorflow_model_server
Args:
--port=9000
--model_name=aadhar
--model_base_path=gs://xyz_kuber_app-xyz-identification/export/
Limits:
cpu: 4
memory: 4Gi
nvidia.com/gpu: 1
Requests:
cpu: 1
memory: 1Gi
nvidia.com/gpu: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
aadhar-http-proxy:
Image: gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2
Port: 8000/TCP
Host Port: 0/TCP
Command:
python
/usr/src/app/server.py
--port=8000
--rpc_port=9000
--rpc_timeout=10.0
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 500m
memory: 500Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-b6dpn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b6dpn
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
nvidia.com/gpu:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal TriggeredScaleUp 3m3s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 0->1 (max: 10)}]
Warning FailedScheduling 2m42s (x2 over 2m42s) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space.
Warning FailedScheduling 42s (x10 over 3m45s) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
</code></pre>
<p><strong>Update 2</strong>: I haven't used nvidia-docker. However, the <code>kubectl get pods -n=kube-system</code> command gives me:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
event-exporter-v0.2.3-54f94754f4-vd9l5 2/2 Running 0 16h
fluentd-gcp-scaler-6d7bbc67c5-m8gt6 1/1 Running 0 16h
fluentd-gcp-v3.1.0-4wnv9 2/2 Running 0 16h
fluentd-gcp-v3.1.0-r6bd5 2/2 Running 0 51m
heapster-v1.5.3-75bdcc556f-8z4x8 3/3 Running 0 41m
kube-dns-788979dc8f-59ftr 4/4 Running 0 16h
kube-dns-788979dc8f-zrswj 4/4 Running 0 51m
kube-dns-autoscaler-79b4b844b9-9xg69 1/1 Running 0 16h
kube-proxy-gke-kuberflow-aadhaar-pool-1-57d75875-8f88 1/1 Running 0 16h
kube-proxy-gke-kuberflow-aadhaar-pool-2-10d7e787-66n3 1/1 Running 0 51m
l7-default-backend-75f847b979-2plm4 1/1 Running 0 16h
metrics-server-v0.2.1-7486f5bd67-mj99g 2/2 Running 0 16h
nvidia-device-plugin-daemonset-wkcqt 1/1 Running 0 16h
nvidia-device-plugin-daemonset-zvzlb 1/1 Running 0 51m
nvidia-driver-installer-p8qqj 0/1 Init:CrashLoopBackOff 13 51m
nvidia-gpu-device-plugin-nnpx7 1/1 Running 0 51m
</code></pre>
<p>Looks like an issue with nvidia driver installer.</p>
<p><strong>Update 3:</strong> Added nvidia driver installer log. Describing the pod: <code>kubectl describe pods nvidia-driver-installer-p8qqj -n=kube-system</code></p>
<pre><code>Name: nvidia-driver-installer-p8qqj
Namespace: kube-system
Node: gke-kuberflow-aadhaar-pool-2-10d7e787-66n3/10.128.0.30
Start Time: Fri, 15 Feb 2019 11:22:42 +0530
Labels: controller-revision-hash=1137413470
k8s-app=nvidia-driver-installer
name=nvidia-driver-installer
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 10.36.5.4
Controlled By: DaemonSet/nvidia-driver-installer
Init Containers:
nvidia-driver-installer:
Container ID: docker://a0b18bc13dad0d470b601ad2cafdf558a192b3a5d9ace264fd22d5b3e6130241
Image: gke-nvidia-installer:fixed
Image ID: docker-pullable://gcr.io/cos-cloud/cos-gpu-installer@sha256:e7bf3b4c77ef0d43fedaf4a244bd6009e8f524d0af4828a0996559b7f5dca091
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 32
Started: Fri, 15 Feb 2019 13:06:04 +0530
Finished: Fri, 15 Feb 2019 13:06:33 +0530
Ready: False
Restart Count: 23
Requests:
cpu: 150m
Environment: <none>
Mounts:
/boot from boot (rw)
/dev from dev (rw)
/root from root-mount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro)
Containers:
pause:
Container ID:
Image: gcr.io/google-containers/pause:2.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro)
Conditions:
Type Status
Initialized False
Ready False
PodScheduled True
Volumes:
dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
boot:
Type: HostPath (bare host directory volume)
Path: /boot
HostPathType:
root-mount:
Type: HostPath (bare host directory volume)
Path: /
HostPathType:
default-token-n5t8z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-n5t8z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations:
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m36s (x437 over 107m) kubelet, gke-kuberflow-aadhaar-pool-2-10d7e787-66n3 Back-off restarting failed container
</code></pre>
<p>Error log from the pod <code>kubectl logs nvidia-driver-installer-p8qqj -n=kube-system</code> :</p>
<pre><code>Error from server (BadRequest): container "pause" in pod "nvidia-driver-installer-p8qqj" is waiting to start: PodInitializing
</code></pre>
| <p>The issue seems to be that the resources needed to run the pod are not available. The pod contains two containers that need a minimum of 1.5Gi memory and 1.5 CPU and a maximum of 5Gi memory and 5 CPU.</p>
<p>The controller is not able to identify a node that meets the resource requirements for running the pod, and hence it is not getting scheduled.</p>
<p>See if you can reduce the resource limits so that they can be matched to one of the nodes. I also see from the logs that one of the nodes is out of disk space. Check the issues reported by <code>kubectl describe po</code> and take action on them:</p>
<pre><code>Limits:
  cpu:             4
  memory:          4Gi
  nvidia.com/gpu:  1
Requests:
  cpu:             1
  memory:          1Gi
  nvidia.com/gpu:  1
</code></pre>
<pre><code>Limits:
  cpu:     1
  memory:  1Gi
Requests:
  cpu:     500m
  memory:  500Mi
</code></pre>
<p>I see the pod is using node affinity:</p>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-accelerator
          operator: Exists
</code></pre>
<p>Can you check whether the node where the pod is scheduled has the label below?</p>
<pre><code>cloud.google.com/gke-accelerator
</code></pre>
<p>Alternatively, remove the nodeAffinity section and see if the pod gets deployed and shows as Running.</p>
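<p>A quick way to check which nodes carry the accelerator label and how much GPU capacity they advertise (a sketch; adjust the label key to your setup):</p>
<pre><code>kubectl get nodes -L cloud.google.com/gke-accelerator
kubectl describe nodes | grep -A 5 "nvidia.com/gpu"
</code></pre>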
|
<p>Is it possible to have a service name under <code>hostAliases</code> in Kubernetes? I want to point a non-existent domain name to a service instead of an IP.</p>
<p>Something like the following:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    spec:
      containers:
      hostAliases:
      - ip: "my-service-name"
        hostnames:
        - "my.domain.com"
</code></pre>
<p>If not, how would you guys set up a local hosts file entry for a pod to a service? I need to resolve the <strong>non-existent</strong> domain to the service.</p>
| <p>As @VKR explained in the other comment, HostAliases basically just injects entries into /etc/hosts, which only allows for A-record-type entries.</p>
<p>For CNAME entries, a workaround is to inject the alias into CoreDNS.</p>
<p>This can be done by editing the ConfigMap for CoreDNS:</p>
<pre><code>$ kubectl edit configmap coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name orderer0.example.com orderer0-example-com.orbix-mvp.svc.cluster.local
        rewrite name peer0.example.com peer0-example-com.orbix-mvp.svc.cluster.local
        kubernetes cluster.local {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
</code></pre>
<p>I've added the two lines that start with <code>rewrite name</code> and once CoreDNS is restarted, the new entries are available throughout the cluster.</p>
<p>CoreDNS can be restarted using the following command:</p>
<p><code>kubectl exec -n kube-system coredns-980047985-g2748 -- kill -SIGUSR1 1</code></p>
<p>The above needs to be run for both of the CoreDNS pods.</p>
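<p>A common alternative to signalling the processes directly is to delete the CoreDNS pods, since the Deployment recreates them with the updated ConfigMap:</p>
<pre><code>kubectl delete pod -n kube-system -l k8s-app=kube-dns
</code></pre>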
|
<p>From local machine, we can apply Kubernetes YAML files to <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">AWS EKS</a> using <a href="https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html" rel="nofollow noreferrer">AWS CLI</a> + <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="nofollow noreferrer">aws-iam-authenticator</a> + <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="nofollow noreferrer">kubectl</a>. How to do it in Ansible Tower / AWX?</p>
<p>I understand that there are a few Ansible modules available, but none seems to be able to apply Kubernetes YAML to EKS.</p>
<ul>
<li><a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html" rel="nofollow noreferrer">k8s</a> doesn't seem to support EKS at the moment.</li>
<li><a href="https://docs.ansible.com/ansible/latest/modules/aws_eks_cluster_module.html" rel="nofollow noreferrer">aws_eks_cluster</a> only allows user to manage EKS cluster (e.g. create, remove).</li>
</ul>
| <p>I think that you can possibly reach the goal via the <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html" rel="nofollow noreferrer">k8s</a> module, as it natively supports a <code>kubeconfig</code> parameter which you can use for EKS cluster authentication. You can follow the steps described in the official <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="nofollow noreferrer">documentation</a> in order to compose the <code>kubeconfig</code> file. There was a separate thread on GitHub, <a href="https://github.com/ansible/ansible/issues/45858" rel="nofollow noreferrer">#45858</a>, about applying Kubernetes manifest files through the k8s module; however, contributors were facing some authorization issues, so take a chance and look through the conversation; maybe you will find some helpful suggestions.</p>
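<p>A minimal sketch of what such a task could look like (the kubeconfig path and manifest path are assumptions, not values from the question):</p>
<pre><code>- name: Apply a manifest to the EKS cluster
  k8s:
    state: present
    kubeconfig: ~/.kube/eks-kubeconfig
    src: /path/to/deployment.yaml
</code></pre>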
|
<p>We have a situation where we have an abundance of Spring boot applications running in containers (on OpenShift) that access centralized infrastructure (external to the pod) such as databases, queues, etc.</p>
<p>If a piece of central infrastructure is down, the health check returns "unhealthy" (rightfully so). The problem is that the liveness check sees this and restarts the pod (the readiness check then sees it's down too, so it won't start the app). This is fine when only a few apps are involved, but if many (potentially hundreds) of applications are using this infrastructure, it forces restarts on all of them (crash loop).</p>
<p>I understand that central infrastructure being down is a bad thing. It "should" never happen. But... if it does (Murphy's law), it throws containers into a frenzy. Just seems like we're either doing something wrong, or we should reconfigure something.</p>
<p>A couple questions:</p>
<ul>
<li>If you are forced to use centralized infrastructure from a Spring Boot app running in a container on OpenShift/Kubernetes, should all actuator checks still be enabled for the backend? (Bouncing the container really won't fix the backend being down anyway.)</li>
<li>Should the /actuator/health endpoint be set for both the liveness probe and the readiness probe?</li>
<li>What common settings do folks use for the readiness/liveness probes in a Spring Boot app (timeouts/intervals/etc.)?</li>
</ul>
| <ol>
<li><p>Using actuator checks for liveness/readiness is the de facto way to check for a healthy app in a Spring Boot pod. Your application, once up, should ideally not go down or become unhealthy just because a central piece, such as the DB or queueing service, goes down; ideally you should add some sort of resiliency that will either connect to an alternate DR site or wait for a certain time period for the central service to come back up and the app to reconnect. This is more of a technical failure on the backend side causing a functional failure of your application after it started up cleanly.</p></li>
<li><p>Yes, both liveness and readiness are required, as they serve different purposes. Read <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">this</a>.</p></li>
<li><p>In one of my previous projects, the setting used for readiness was around 30 seconds and liveness around 90, but to be honest this is completely dependent on your application: if your app takes 1 minute to start, that is what your readiness time should be configured at, and your liveness should factor in the same along with any time required for a failover switch of your backend services (see the sketch after this list).</p></li>
</ol>
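<p>A minimal sketch of what those numbers could look like in a pod spec (the port and timings are illustrative, not a recommendation for your app):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 90
  periodSeconds: 30
  failureThreshold: 3
</code></pre>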
<p>Hope this helps.</p>
|
<p>We have a GKE cluster (1.11) and have implemented HPA based on memory utilization for pods. During our testing activity, we have observed that HPA behavior is not consistent: HPA is not scaling pods even though the target value is met. We have also noticed that HPA events are not giving us any response data (either scaling- or downscaling-related info).</p>
<h1>Example</h1>
<p><strong>kubectl get hpa</strong></p>
<p><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE</code></p>
<p><code>com-manh-cp-organization Deployment/com-manh-cp-organization 95%/90% 1 25 1 1d</code></p>
<p><strong>kubectl describe hpa com-manh-cp-organization</strong></p>
<pre><code>Name: com-manh-cp-organization
Namespace: default
Labels: app=com-manh-cp-organization
stereotype=REST
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"app":"com-manh-cp-organizatio...
CreationTimestamp: Tue, 12 Feb 2019 18:02:12 +0530
Reference: Deployment/com-manh-cp-organization
Metrics: ( current / target )
resource memory on pods (as a percentage of request): 95% (4122087424) / 90%
Min replicas: 1
Max replicas: 25
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
</code></pre>
<ul>
<li>Cluster version : 1.11.6</li>
<li>Cloud service : GKE</li>
<li>Metric : memory</li>
<li>Target : targetAverageUtilization</li>
</ul>
<p>Any inputs will be much appreciated and let us know if we can debug HPA implementation.</p>
<p>Thanks.</p>
| <p>There is a tolerance on the values for the threshold in HPA when calculating the replica numbers as specified in this <a href="https://github.com/kubernetes/kubernetes/blob/a2f4f585afba0c57b432772c6358cb1b2727fb5f/pkg/controller/podautoscaler/replica_calculator.go#L39" rel="nofollow noreferrer">link.</a></p>
<p>This tolerance is by default 0.1. Due to this, in your configuration you might not be hitting the threshold when you set 90%. I would recommend changing the target to 80% and seeing whether it works.</p>
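<p>For illustration, the change would look something like this in the HPA spec (a sketch based on the names in the question; only the target value differs):</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: com-manh-cp-organization
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: com-manh-cp-organization
  minReplicas: 1
  maxReplicas: 25
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 80
</code></pre>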
|
<p>I'm trying to create K8s cluster on Amazon EKS with Terraform. All the code is on github: <a href="https://github.com/amorfis/aws-eks-terraform" rel="nofollow noreferrer">https://github.com/amorfis/aws-eks-terraform</a></p>
<p>access_key and secret are configured for the user which has the necessary policy, as seen in README.md. </p>
<p>I run <code>terraform init</code>, then <code>terraform apply</code>, and it fails with the following error:
<code>module.eks.null_resource.update_config_map_aws_auth (local-exec): error: unable to recognize "aws_auth_configmap.yaml": Unauthorized</code></p>
<p>I also checked in the modules, and it looks like it should create 2 files: <code>aws_auth_configmap.yaml</code> and <code>kube_config.yaml</code>, but instead I can see 2 different files created: <code>kubeconfig_eks-cluster-created-with-tf</code> and <code>config-map-aws-auth_eks-cluster-created-with-tf.yaml</code>. </p>
| <p>The problem here seems to be that you try to use an assumed role, but then the module attempts to do a local-exec, which is why it fails.</p>
<p>What you would require is something like the following, where you add
<code>kubeconfig_aws_authenticator_env_variables</code> to the module, taken from the official example:</p>
<pre><code>module "my-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "my-cluster"
kubeconfig_aws_authenticator_env_variables = {
AWS_PROFILE = "NameOfProfile"
}
subnets = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
vpc_id = "vpc-1234556abcdef"
worker_groups = [
{
instance_type = "m4.large"
asg_max_size = 5
}
]
tags = {
environment = "test"
}
}
</code></pre>
<p>Note: The following is added - </p>
<pre><code> kubeconfig_aws_authenticator_env_variables = {
AWS_PROFILE = "NameOfProfile"
}
</code></pre>
<p>Replace the value of the profile with whatever profile name you have defined in <code>~/.aws/config</code>. </p>
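<p>For illustration, a minimal <code>~/.aws/config</code> entry that such a profile name could refer to (the profile name, role ARN and region below are placeholders):</p>
<pre><code>[profile NameOfProfile]
role_arn = arn:aws:iam::123456789012:role/eks-admin-role
source_profile = default
region = us-east-1
</code></pre>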
|
<p>Currently <a href="https://github.com/istio/istio/issues/9030" rel="noreferrer">Istio does not support a fully automated certificate procedure</a>. The standard ingress does support this by means of cert-manager. Would it be possible to combine standard ingress configuration for certification management with istio for other stuff? What are the down-sides to this combination?</p>
| <p>This was discussed in <a href="https://medium.com/ww-engineering/istio-part-ii-e219a2e771bb" rel="nofollow noreferrer">a blog post on Medium last fall</a>, actually. I held onto the link because I too am interested in using nginx-ingress as the front-end, but then taking advantage of istio "for other stuff". If it pans out for you, would love to hear.</p>
|
<p>Is it possible to see what traffic is going through <code>kubectl proxy</code>? For example HTTP request, response.</p>
<p>Is it possible to follow that log (kind of <code>-f</code>)?</p>
| <p><code>kubectl --v=10 proxy</code> runs the proxy at maximum log verbosity and follows the log output continuously.</p>
|
<p>How to update namespace without changing External-IP of the Service? </p>
<p>Initially, it was without namespace and it was deployed:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-app
</code></pre>
<p>It creates an External-IP address, which I then pointed my DNS at. Now I would like to add a Namespace to keep things more organised, so I have created the namespace.</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: my-namespace
</code></pre>
<p>and I have updated the service with a namespace. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-app
</code></pre>
<p>The above file creates another Service in <code>my-namespace</code> and the External-IP is not the same. Is there a way to update the namespace without recreating? </p>
<p>Please let me know if you need any information. Thanks!</p>
| <p>Some cloud providers allow you to specify the external IP of a service, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a>. If you can make use of this, you should be able to achieve what you need. This will not be a zero-downtime operation though, as you'll first need to delete the current service and recreate it under the different namespace with the external IP specified.</p>
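<p>As a rough sketch of the recreated Service: note that for a <code>LoadBalancer</code> Service on GKE the usual way to keep an address is to reserve it as a static IP and set <code>spec.loadBalancerIP</code> (the <code>externalIPs</code> field from the linked docs is a separate mechanism); the address below is a placeholder:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 35.200.10.10   # placeholder: the address the old Service was using, reserved as static
  selector:
    app: my-app
  ports:
  - port: 80
    protocol: TCP
</code></pre>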
|
<p>I created a Kubernetes cluster on Google Cloud using:</p>
<pre><code>gcloud container clusters create my-app-cluster --num-nodes=1
</code></pre>
<p>Then I deployed my 3 apps (backend, frontend and a scraper) and created a load balancer. I used the following configuration file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app-server
image: gcr.io/my-app/server
ports:
- containerPort: 8009
envFrom:
- secretRef:
name: my-app-production-secrets
- name: my-app-scraper
image: gcr.io/my-app/scraper
ports:
- containerPort: 8109
envFrom:
- secretRef:
name: my-app-production-secrets
- name: my-app-frontend
image: gcr.io/my-app/frontend
ports:
- containerPort: 80
envFrom:
- secretRef:
name: my-app-production-secrets
---
apiVersion: v1
kind: Service
metadata:
name: my-app-lb-service
spec:
type: LoadBalancer
selector:
app: my-app
ports:
- name: my-app-server-port
protocol: TCP
port: 8009
targetPort: 8009
- name: my-app-scraper-port
protocol: TCP
port: 8109
targetPort: 8109
- name: my-app-frontend-port
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>When typing <code>kubectl get pods</code> I get:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-app-deployment-6b49c9b5c4-5zxw2 0/3 Pending 0 12h
</code></pre>
<p>When investigating in Google Cloud I see an "Unschedulable" state with an "insufficient cpu" error on the pod:</p>
<p><a href="https://i.stack.imgur.com/7boXc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7boXc.png" alt="Unschedulable state due to Insufficient cpu"></a></p>
<p>When going to Nodes section under my cluster in the Clusters page, I see 681 mCPU requested and 940 mCPU allocated:
<a href="https://i.stack.imgur.com/tLpKL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tLpKL.png" alt="enter image description here"></a></p>
<p>What is wrong? Why my pod doesn't start?</p>
| <p>Every container has a default CPU request (in GKE I’ve noticed it’s 0.1 CPU or 100m). Assuming these defaults you have three containers in that pod so you’re requesting another 0.3 CPU.</p>
<p>The node has 0.68 CPU (680m) requested by other workloads and a total limit (allocatable) on that node of 0.94 CPU (940m).</p>
<p>If you want to see what workloads are reserving that 0.68 CPU, you need to inspect the pods on the node. In the page on GKE where you see the resource allocations and limits per node, if you click the node it will take you to a page that provides this information.<br>
In my case I can see 2 pods of <code>kube-dns</code> taking 0.26 CPU each, amongst others. These are system pods that are needed to operate the cluster correctly. What you see will also depend on what add-on services you have selected, for example: HTTP Load Balancing (Ingress), Kubernetes Dashboard and so on.</p>
<p>Your pod would take CPU to 0.98 CPU for the node which is more than the 0.94 limit, which is why your pod cannot start.</p>
<p>Note that the scheduling is based on the amount of CPU <em>requested</em> for each workload, not how much it actually uses, or the limit.</p>
<p>Your options:</p>
<ol>
<li>Turn off any add-on service which is taking CPU resource that you don't need.</li>
<li>Add more CPU resource to your cluster. To do that you will either need to change your node pool to use VMs with more CPU, or increase the number of nodes in your existing pool. You can do this in GKE console or via the <code>gcloud</code> command line.</li>
<li>Make explicit requests in your containers for less CPU that will override the defaults.</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
containers:
- name: my-app-server
image: gcr.io/my-app/server
...
resources:
requests:
cpu: "50m"
- name: my-app-scraper
image: gcr.io/my-app/scraper
...
resources:
requests:
cpu: "50m"
- name: my-app-frontend
image: gcr.io/my-app/frontend
...
resources:
requests:
cpu: "50m"
</code></pre>
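<p>For option 2, a minimal sketch of growing the existing cluster from the command line (the pool name <code>default-pool</code> is the GKE default and an assumption here):</p>
<pre><code>gcloud container clusters resize my-app-cluster --node-pool default-pool --num-nodes 2
</code></pre>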
|
<p>I have 3 services in my ingress; the first 2 use the <code>default</code> namespace. The third service is the <strong>prometheus-server</strong> service, which is in the <code>ingress-nginx</code> namespace.
Now, I want to map my prometheus DNS to that service, but I'm getting an error because the ingress can't find the prometheus service in the <code>default</code> namespace.</p>
<p>How to deal with non-default namespace in ingress definition?</p>
| <p>You will need to refer to your service in the other namespace with its full path, that is <code>prometheus-server.ingress-nginx.svc.cluster.local</code>.</p>
<p>You shouldn’t need a second Ingress to do this.</p>
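<p>One way to make that full path usable as a backend of an Ingress living in the <code>default</code> namespace is an <code>ExternalName</code> Service pointing at it (a sketch; the local service name is arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: prometheus-server
  namespace: default
spec:
  type: ExternalName
  externalName: prometheus-server.ingress-nginx.svc.cluster.local
  ports:
  - port: 80
</code></pre>
<p>You can then reference <code>prometheus-server</code> as a normal backend in your existing Ingress.</p>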
|
<p>I have 3 services in my ingress; the first 2 use the <code>default</code> namespace. The third service is the <strong>prometheus-server</strong> service, which is in the <code>ingress-nginx</code> namespace.
Now, I want to map my prometheus DNS to that service, but I'm getting an error because the ingress can't find the prometheus service in the <code>default</code> namespace.</p>
<p>How to deal with non-default namespace in ingress definition?</p>
| <p>You would want to create a new <code>Ingress</code> in namespace <code>ingress-nginx</code> that would route your DNS to that service. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: ingress-nginx
spec:
rules:
- host: your.domain.com
http:
paths:
- path: /
backend:
serviceName: prometheus-server
servicePort: 80
</code></pre>
|
<p>I'm setting up a ConfigMap for my Helm chart.</p>
<p>As per good practice, I want to include non-yaml resources through separate files rather than inline. Currently I am trying to include both an xml file and a tpl helper in my ConfigMap under "data". Both are read without issue in the below code. But I cannot seem to make the indentation for the keys work properly.</p>
<p>My ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "name" . }}
labels:
app: {{ template "name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ quote .Release.Name }}
heritage: {{ quote .Release.Service }}
name: {{ template "name" . }}
data:
logback.xml: |-
{{- .Files.Get "application-resources/logback.xml" | nindent 8 -}}
application.yml: |-
{{- include "application.yml" . | nindent 8 -}}
</code></pre>
<p>This produces the following indentation (actual values are removed for readability):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: erklaering-anden-lov-detektor-app
labels:
app: name-of-app
chart: name-of-chart
release: "release-name"
heritage: "Tiller"
name: name-of-app
data:
logback.xml: |-
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true">
<xml-stuff>
</configuration>
application.yml: |-
application.yml.contents
</code></pre>
<p>Which should be:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: erklaering-anden-lov-detektor-app
labels:
app: name-of-app
chart: name-of-chart
release: "release-name"
heritage: "Tiller"
name: name-of-app
data:
logback.xml: |-
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true">
<xml-stuff>
</configuration>
application.yml: |-
application.yml.contents
</code></pre>
<p>I'm at my wit's end. What am I doing wrong? How do I make yaml snap back to recognizing configMap's own indentation and/or explicitly control it?</p>
| <p>Try this:</p>
<pre><code>data:
logback.xml: |-
{{- .Files.Get "application-resources/logback.xml" | nindent 8 }}
application.yml: |-
{{- include "application.yml" . | nindent 8 -}}
</code></pre>
<p>I've removed the "-" from the 3rd line as it removes following whitespace.</p>
<p>You can also have a look at this GitHub Issue <a href="https://github.com/helm/helm/issues/3470" rel="noreferrer">#3470</a>.</p>
<p>If you need more help, you can check the documentation for <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/" rel="noreferrer">Chart Development Tips and Tricks</a></p>
|
<p>I am new to k8s and exploring production grade deployment.
We have a Python Django app which is exposed on a NodePort (say 9000). When I try to expose it using a k8s-service ELB,
it works by running 80 and 443 separately, whereas 80 to 443 redirection is not supported by the AWS classic ELB.</p>
<p>Then I switched to the aws alb ingress controller; the problem I faced was that
the ALB does not work with the NodePort, only with the http and https ports.</p>
<p>Any thoughts would be much appreciated!!</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ABC
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/subnets: 'subnet-1, subnet-2'
alb.ingress.kubernetes.io/security-group: sg-ABC
alb.ingress.kubernetes.io/healthcheck-path: "/"
alb.ingress.kubernetes.io/success-codes: "200"
labels:
name: ABC
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: ABC
          servicePort: 80
</code></pre>
| <p>Thank you @sulabh and @Fahri, it works perfectly now. I went through the doc again and corrected my mistake.</p>
<p>The issue was with the route path in the ALB.</p>
<p>The setup is:</p>
<p>a python-django-uwsgi app in pods, exposed as a NodePort service, with the aws ingress controller providing the ALB.</p>
<p>Cheers!</p>
|
<p>I need a breakdown of my usage inside a single project categorized by Pods, Services or Deployments, but the billing section in the console doesn't seem to provide such granular information. Is it possible to get this data somehow? I want to know what the network + compute cost was per deployment or pod.
Or is it possible to have it at least at the cluster level? Is this breakdown available in BigQuery?</p>
| <p>A new feature (usage metering) was recently released in GKE that collects metrics inside a cluster and can be combined with the exported billing data to separate costs per project/environment, making it possible to break down costs per namespace, deployment, labels, among other criteria.</p>
<p><a href="https://cloud.google.com/blog/products/containers-kubernetes/gke-usage-metering-whose-line-item-is-it-anyway" rel="nofollow noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/gke-usage-metering-whose-line-item-is-it-anyway</a></p>
|
<p>I want to write the docker image name in a kubernetes config and then use it in my deployment file instead of directly hardcoding it.
So instead of:</p>
<p><strong><code>image: "nginx:latest</code>"</strong></p>
<p>I want to do following:</p>
<pre><code>image:
valueFrom:
configMapKeyRef:
name: docker-config
key: docker-image
</code></pre>
<p>How can it be done or any other alternatives?
Thanks.</p>
| <p>If you want to <strong>update</strong> the <strong>value</strong> of the image key you can use the following <strong>data-driven commands</strong> with the <strong>set</strong> verb, for example: </p>
<pre><code> # Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1
# Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'
kubectl set image deployments,rc nginx=nginx:1.9.1 --all
# Update image of all containers of daemonset abc to 'nginx:1.9.1'
kubectl set image daemonset abc *=nginx:1.9.1
# Print result (in yaml format) of updating nginx container image from local file, without hitting the server
kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
</code></pre>
<p>You can get more detail by running: </p>
<pre><code>kubectl set image --help
</code></pre>
<p>Here are more examples for <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="noreferrer">updating-resources</a></p>
|
<p>I deployed the app in the kubernetes+istio cluster. I used the http probe for the readiness check. In the Graph section of Kiali, the kube-probe traffic is shown as a line from unkonwn to httpbin. I tried to add "x-b3-sampled" http header to avoid the record for this traffic. But it doesn't work. Is there any method to hide the traffic from kube-probe?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
template:
metadata:
labels:
app: httpbin
version: v1
spec:
containers:
- image: docker.io/citizenstig/httpbin
imagePullPolicy: IfNotPresent
name: httpbin
ports:
- containerPort: 8000
readinessProbe:
httpGet:
path: /get
port: 8000
httpHeaders:
- name: 'x-b3-sampled'
value: '0'
initialDelaySeconds: 5
timeoutSeconds: 1
livenessProbe:
tcpSocket:
port: 8000
initialDelaySeconds: 5
timeoutSeconds: 1
</code></pre>
| <p>UPDATE: This is actually going to be fixed in Istio 1.1, and the nice part is that you can easily apply the patch yourself without waiting for 1.1, as it's in the yaml configs:</p>
<p>Patch link: <a href="https://github.com/istio/istio/pull/10480" rel="nofollow noreferrer">https://github.com/istio/istio/pull/10480</a></p>
<p>So for Istio 1.0.x, you basically have to edit the Custom Resource of type <code>Rule</code>, named <code>promhttp</code>, in namespace <code>istio-system</code> to set the following <code>match</code> expression : </p>
<pre><code> match: (context.protocol == "http" || context.protocol == "grpc") && (match((request.useragent | "-"), "kube-probe*") == false)
</code></pre>
<hr>
<p>Initial response:</p>
<p>I'm not sure if there's a "clean" solution for that, but there's a workaround described at the bottom of this doc page : <a href="https://istio.io/docs/tasks/traffic-management/app-health-check/#liveness-and-readiness-probes-with-http-request-option" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/app-health-check/#liveness-and-readiness-probes-with-http-request-option</a></p>
<blockquote>
<p>Because the Istio proxy only intercepts ports that are explicitly declared in the containerPort field, traffic to 8002 port bypasses the Istio proxy regardless of whether Istio mutual TLS is enabled.</p>
</blockquote>
<p>So you can have your health endpoints using a different port that you <strong>would not declare</strong> as container ports, and that way the traffic is not intercepted by the envoy proxy, hence won't generate telemetry in Kiali.</p>
<p>This is not an ideal solution as it forces you to shape your app in a certain way for Istio... but still, it works.</p>
<p>[Edit, just found that: <a href="https://istio.io/help/faq/telemetry/#controlling-what-the-sidecar-reports" rel="nofollow noreferrer">https://istio.io/help/faq/telemetry/#controlling-what-the-sidecar-reports</a> . Looks like you can also filter out requests from telemetry based on source. Though I'm not sure if it's going to work in that case where source is "unknown"]</p>
|
<p>I'm deploying a dotnet core app into a kubernetes cluster and I'm getting the error "Unable to start Kestrel". </p>
<p>The Dockerfile works fine on my local machine.</p>
<pre><code>at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
Unhandled Exception: System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
Unable to start Kestrel.
</code></pre>
<p>My dockerfile:</p>
<pre><code>[...build step]
FROM microsoft/dotnet:2.1-aspnetcore-runtime
COPY --from=build-env /app/out ./app
ENV PORT=5000
ENV ASPNETCORE_URLS=http://+:${PORT}
WORKDIR /app
EXPOSE $PORT
ENTRYPOINT [ "dotnet", "Gateway.dll" ]
</code></pre>
<p>I expected the application to start successfully, but I'm getting this error "unable to start kestrel".</p>
<p>[<strong>UPDATE</strong>]</p>
<p>I've removed the https port from the app and tried again without https, but now the application just starts and stops without any error or warning. Container log below:</p>
<p><a href="https://i.stack.imgur.com/IFuSR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IFuSR.png" alt="enter image description here"></a></p>
<p>Running locally using dotnet run, or building the image and running it as a container, everything works. The application just shuts down in kubernetes.</p>
<p>I am using dotnet core 2.2</p>
<p>[<strong>UPDATE</strong>]</p>
<p>I've generated a cert, added it to the project, set it up in Kestrel and got the same result. On localhost using the docker image it works, but in kubernetes (google cloud) it just shuts down immediately after it starts.</p>
<p><strong>localhost:</strong></p>
<pre><code>$ docker run --rm -it -p 5000:5000/tcp -p 5001:5001/tcp juooo:latest
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {f7808ac5-0a0d-47d0-86cb-c605c2db84a3} may be persisted to storage in unencrypted form.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Overriding address(es) 'https://+:5001, http://+:5000'. Binding to endpoints defined in UseKestrel() instead.
Hosting environment: Production
Content root path: /app
Now listening on: https://0.0.0.0:5001
Application started. Press Ctrl+C to shut down.
</code></pre>
| <p>I found an event log with a kubernetes error saying that kubernetes was unable to hit (:5000/). So I created a controller serving the root path (because it's an API, it didn't have a root route like a web app) and it worked.</p>
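<p>If the failing check in your case is a pod liveness/readiness probe rather than an ingress health check, an alternative sketch is to point the probe at an endpoint the app already serves instead of adding a root controller (the path and port below are assumptions based on the question):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /health    # any endpoint the app already exposes
    port: 5000
  initialDelaySeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 5000
  initialDelaySeconds: 5
</code></pre>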
|
<p>In my kubernetes cluster, the http liveness probe always fails with this message: </p>
<pre><code>Liveness probe failed: Get http://10.233.90.72:8080/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>So the <code>coredns</code> and <code>kubernetes-dashboard</code> pods (and any other using an http liveness probe) keep restarting endlessly.</p>
<p>While a pod is running (between the start and restart events), I check its endpoint by executing <code>curl http://10.233.90.72:8080/health</code> from a <code>busyboxplus</code> pod.
This command works normally and I can see <code>OK</code> returned,
but the liveness probe still fails and the pod keeps restarting...</p>
<p>In this situation I want to debug the liveness probe, but I have no idea where the liveness probe actually runs in kubernetes:
on the pod, or on the node?</p>
<p>How can I debug the liveness probe?
Does anyone have the same issue?</p>
<p>Please advise.</p>
<pre><code>kubectl version:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
version info:
OS: Ubuntu 18.04
Kubernetes: 1.13.3
Docker: 18.09.2
</code></pre>
<p>thanks in advance</p>
| <p>Did you check DNS already? Using busybox:1.28, try to execute a DNS lookup against the pod and see what you get:</p>
<pre><code>nslookup pod-ip-in-dash.pod.cluster.local
</code></pre>
<p>Another thing you can do, which you may already have done, is check in kube-system that the coredns pods are running.</p>
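<p>For example (the <code>k8s-app=kube-dns</code> label is the standard label on the coredns/kube-dns pods):</p>
<pre><code>kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system logs -l k8s-app=kube-dns
</code></pre>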
<p>Let me know how it went.</p>
|
<p>I'm trying to follow either of the following instructions:</p>
<ul>
<li><a href="https://istio.io/docs/setup/kubernetes/helm-install/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/helm-install/</a></li>
<li><a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio" rel="nofollow noreferrer">https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio</a></li>
</ul>
<pre><code> ~ helm repo add istio.io https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
~ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
istio.io https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
~ helm repo update
~ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
Error: failed to download "install/kubernetes/helm/istio" (hint: running `helm repo update` may help)
~ helm dependency update install/kubernetes/helm/istio
Error: could not find <current directory>/install/kubernetes/helm/istio: stat
<current directory>/install/kubernetes/helm/istio: no such file or directory
</code></pre>
| <p>To answer my own question for anyone else with the same issue:</p>
<p>Don't use the public helm repo.</p>
<p>Download a release archive from:</p>
<p><a href="https://github.com/istio/istio/releases" rel="nofollow noreferrer">https://github.com/istio/istio/releases</a></p>
<p>Unpack it, navigate into the istio root directory, then you can successfully do:</p>
<p>Helm 2.x syntax:</p>
<pre><code>helm install install/kubernetes/helm/istio --name istio --namespace istio-system
</code></pre>
<p>Helm 3.x syntax:</p>
<pre><code>helm install istio install/kubernetes/helm/istio --namespace istio-system
</code></pre>
|
<p>I am learning Kubernetes and have deployed a headless service on Kubernetes(on AWS) which is exposed to the external world via nginx ingress.</p>
<p>I want <code>nslookup <ingress_url></code> to directly return IP address of PODs.
How to achieve that?</p>
| <h3>Inside the cluster:</h3>
<p>It's not a good idea to let an <code><ingress_host></code> resolve to a Pod IP. It's a common design to serve different kinds of pods on one single <code>hostname</code> under different paths, but you can only set one IP record (or one group of records, with DNS load balancing) for it.</p>
<p>However, you can do this by adding <code><ingress_host> <Pod_IP></code> to <code>/etc/hosts</code> in an init script, since you can get <code><Pod_IP></code> by doing <code>nslookup <headless_service></code>.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">HostAlias</a> is another option if you know the Pod IP before applying the deployment.</p>
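<p>A minimal sketch of that approach (the IP and hostname below are placeholders you would fill in):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostAliases:
  - ip: "10.0.0.10"                  # placeholder: the Pod IP you looked up beforehand
    hostnames:
    - "my-ingress-host.example.com"  # placeholder for <ingress_host>
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
</code></pre>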
<h3>From outside:</h3>
<p>I don't think it's possible outside the cluster. Because you need to do the DNS lookup to get to the ingress controller first, which means it has to be resolved to the IP of ingress controller.</p>
<p>Lastly, it's a bad idea to rely on a headless service resolving to Pods, because many apps do a DNS lookup once and cache the result, which causes problems since a Pod's IP can change frequently.</p>
|
<p>I am hosting my application on GKE. The kubectl version installed in the server is <code>v1.10.11-gke.1</code> and nginx-ingress is <code>nginx-ingress-0.28.2</code></p>
<p>I would like to see the client IP address in my logs. For now, I can only see the pod IP address for example:</p>
<p><code>2019-02-14 15:17:21.000 EAT
10.60.1.1 - [10.60.1.1] - - [14/Feb/2019:12:17:21 +0000] "GET /user HTTP/2.0" 404 9 "-" "Mozilla/5.0 (Macintosh;</code></p>
<p>My service has tls managed by letsencrypt. How can I get the client IP address on the logs? </p>
<p><a href="https://i.stack.imgur.com/05Ky1.png" rel="nofollow noreferrer">screenshot of log files</a></p>
| <p>I reproduced the behavior you observed in a test. In my own container logs, on a job running with an nginx-ingress controller, we can only see the internal IP address assuming that nginx-ingress-controller service YAML file is set to:</p>
<pre><code>externalTrafficPolicy: Cluster
</code></pre>
<p>Setting the traffic policy to 'Cluster' means that all the nodes can receive the requests. 'Cluster' obscures the client source IP, and the requests could also be <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip" rel="nofollow noreferrer">SNAT'd</a> to a node that has the running pod.</p>
<p>However, If you change:</p>
<pre><code>externalTrafficPolicy: Local
</code></pre>
<p>The client source IP is exposed. “Local” preserves the client source IP but may cause imbalanced traffic spreading. This is due to the fact that only the nodes that are running the pods will be considered healthy by the network load balancer, so the requests will be sent only to healthy nodes.</p>
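<p>A sketch of applying that change to an existing controller Service without redeploying it (the Service name and namespace here are assumptions; use the ones of your nginx-ingress controller Service):</p>
<pre><code>kubectl -n ingress-nginx patch svc ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>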
<p>Some background explanation on how to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">preserve source IP</a> in your containers and some further reading on the hops for source IP for services with <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">Type=Nodeport</a> can be useful to understand what is happening. </p>
|
<p>I have an ubuntu box out of the kubernetes cluster</p>
<p>My /etc/resolv.conf content</p>
<pre><code>nameserver 10.3.0.1 (kubedns)
</code></pre>
<p>If i make a nslookup, everything works fine</p>
<pre><code>nslookup spark-master-0.spark-master.ns.svc.cluster.local
Server: 10.3.0.1
Address: 10.3.0.1#53
Non-authoritative answer:
Name: spark-master-0.spark-master.ns.svc.cluster.local
Address: 10.2.0.252
</code></pre>
<p>If I try to use any other tool (chrome, curl, ping, wget) I get an error:</p>
<pre><code>curl spark-master-0.spark-master.ns.svc.cluster.local
curl: (6) Could not resolve host: spark-master-0.spark-master.ns.svc.cluster.local
</code></pre>
<p>The only way is to add search .cluster.local in /etc/resolv.conf, but then I cannot use the FQDN of the nodes.</p>
<p>Any tip on how to use the FQDN?</p>
<p><strong>Update</strong> The same setup on my mac works perfectly! The problem is only with my ubuntu 14.04.3.</p>
| <p>It seems like the FQDN is resolving fine via DNS, but there is an issue with the host system's resolver configuration.</p>
<p>Can you try again after changing the below entry in /etc/nsswitch.conf?</p>
<pre><code>hosts: files mdns4_minimal [NOTFOUND=return] dns
</code></pre>
<p>to</p>
<pre><code>hosts: files mdns4_minimal dns [NOTFOUND=return]
</code></pre>
<p>If the above doesn't work either, try using only DNS:</p>
<pre><code>hosts: dns [NOTFOUND=return]
</code></pre>
|
<p>I have a private registry (gitlab) where my docker images are stored.
For deployment a secret is created that allows GKE to access the registry. The secret is called <code>deploy-secret</code>.
The secret's login information expires after a short time in the registry. </p>
<p>I additionally created a second, permanent secret that allows access to the docker registry, named <code>permanent-secret</code>.</p>
<p>Is it possible to specify the Pod with two secrets? For example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: <your-private-image>
imagePullSecrets:
- name: deploy-secret
- name: permanent-secret
</code></pre>
<p>Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fall back successfully to the second secret?</p>
| <p>Surprisingly this works! I just tried this on my cluster. I added a fake registry credentials secret, with the wrong values. I put both secrets in my yaml like you did (below) and the pods got created and container is created and running successfully:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
nodeSelector:
containers:
- image: gitlab.myapp.com/my-image:tag
name: test
ports:
- containerPort: 80
imagePullSecrets:
- name: regcred-test
- name: regcred
</code></pre>
<p>The <code>regcred</code> secret has the correct values and the <code>regcred-test</code> is just a bunch of gibberish. So we can see that it ignores the incorrect secret.</p>
|
<p>Hello kubernetes developers,</p>
<p>I get the error 'ImagePullBackOff' when I deploy a pod in kubernetes.
Pulling the image from the registry with plain docker is no problem. But what is wrong with my configuration?</p>
<p>I tried this workaround to create a secret-key with the following command.</p>
<pre><code>kubectl create secret docker-registry secretkey \
--docker-server=registry.hub.docker.com \
--docker-username=reponame \
--docker-password=repopassword \
--docker-email=repoemail
</code></pre>
<p>And this is the yaml file to create the kubernetes pod.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
io.kompose.service: gps-restful-server
name: gps-restful-server
spec:
containers:
- image: tux/gps:latest
name: gps-restful-server
ports:
- containerPort: 8080
resources: {}
volumeMounts:
- mountPath: /var/www/html/modules
name: gps-modules
- mountPath: /var/www/html/profiles
name: gps-profile
- mountPath: /var/www/html/themes
name: gps-theme
- mountPath: /var/www/html/sites
name: gps-sites
imagePullPolicy: Always
restartPolicy: OnFailure
imagePullSecrets:
- name: secretkey
volumes:
- name: gps-modules
persistentVolumeClaim:
claimName: gps-modules
- name: gps-profile
persistentVolumeClaim:
claimName: gps-profile
- name: gps-theme
persistentVolumeClaim:
claimName: gps-theme
- name: gps-sites
persistentVolumeClaim:
claimName: gps-sites
status: {}
</code></pre>
<p>To deploy the pod in kubernetes, i execute the command: </p>
<pre><code>kubectl create -f gps-restful-server-pod.yaml.
</code></pre>
<p>Get the status from the pod:</p>
<pre><code>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/telemetry-restful-server 0/1 ImagePullBackOff 0 12m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
</code></pre>
<p>Description of the pod:</p>
<pre><code>kubectl describe pod gps-restful-server
Name: gps-restful-server
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.178.92
Start Time: Thu, 14 Feb 2019 16:56:25 +0100
Labels: io.kompose.service=gps-restful-server
Annotations: <none>
Status: Pending
IP: 172.17.0.3
Containers:
gps-restful-server:
Container ID:
Image: tux/gps:latest
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4t28k (ro)
/var/www/html/modules from gps-modules (rw)
/var/www/html/profiles from gps-profile (rw)
/var/www/html/sites from gps-sites (rw)
/var/www/html/themes from gps-theme (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
gps-modules:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gps-modules
ReadOnly: false
gps-profile:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gps-profile
ReadOnly: false
gps-theme:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gps-theme
ReadOnly: false
gps-sites:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gps-sites
ReadOnly: false
default-token-4t28k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4t28k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned default/gps-restful-server to minikube
Normal Pulling 2m (x4 over 4m) kubelet, minikube pulling image "tux/gps:latest"
Warning Failed 2m (x4 over 4m) kubelet, minikube Failed to pull image "tux/gps:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for tux/gps, repository does not exist or may require 'docker login'
Warning Failed 2m (x4 over 4m) kubelet, minikube Error: ErrImagePull
Warning Failed 2m (x6 over 4m) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 2m (x7 over 4m) kubelet, minikube Back-off pulling image "tux/gps:latest"
</code></pre>
<p>How is it possible to pull the image from docker-hub in kubernetes?</p>
| <p>The image tux/gps:latest does not exist because it's a dummy value.
Solution to the ImagePullBackOff: </p>
<ol>
<li>Make sure that your image points to an existing tag, e.g. repouser/reponame:latest</li>
<li>Create a docker-registry secret (see above), as shown in the sketch after this list</li>
<li>Use the correct server address for the <a href="https://index.docker.io/v1/" rel="nofollow noreferrer">docker-hub registry</a></li>
<li>Add the following property to the pod yaml file (see above):</li>
</ol>
<pre><code>imagePullSecrets:
- name: secretkey
</code></pre>
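<p>For steps 2 and 3, a sketch of recreating the secret against the Docker Hub endpoint (the credentials below are placeholders):</p>
<pre><code>kubectl create secret docker-registry secretkey \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-docker-hub-username> \
  --docker-password=<your-docker-hub-password> \
  --docker-email=<your-email>
</code></pre>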
|
<p>I try, I try, but Rancher 2.1 fails to deploy the "<strong>mongo-replicaset</strong>" Catalog App, with <strong>Local Persistent Volumes</strong> configured.</p>
<p>How to correctly deploy a mongo-replicaset with Local Storage Volume? Any debugging techniques appreciated since I am new to rancher 2.</p>
<p>I follow the 4 ABCD steps below, but the first pod deployment never ends. What's wrong with it? Logs and result screens are at the end. Detailed configuration can be found <a href="https://github.com/rancher/rancher/issues/16298" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Note</strong>: Deployment without Local Persistent Volumes <strong>succeed</strong>.</p>
<p><strong>Note</strong>: Deployment with Local Persistent Volume and with the "mongo" image <strong>succeed</strong> (without replicaset version).</p>
<p><strong>Note</strong>: Deployment with both mongo-replicaset and with Local Persistent Volume <strong>fails</strong>.</p>
<hr />
<p><strong>Step A - Cluster</strong></p>
<p>Create a rancher instance, and:</p>
<ol>
<li>Add three nodes: a worker, a worker etcd, a worker control plane</li>
<li>Add a label on each node: name one, name two and name three for node Affinity</li>
</ol>
<hr />
<p><strong>Step B - Storage class</strong></p>
<p>Create a storage class with these parameters:</p>
<ol>
<li>volumeBindingMode : WaitForFirstConsumer <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">saw here</a></li>
<li>name : local-storage</li>
</ol>
<hr />
<p><strong>Step C - Persistent Volumes</strong></p>
<p>Add 3 persistent volumes like this:</p>
<ol>
<li>type : local node path</li>
<li>Access Mode: Single Node RW, 12Gi</li>
<li>storage class: local-storage</li>
<li>Node Affinity: name one (two for second volume, three for third volume)</li>
</ol>
<hr />
<p><strong>Step D - Mongo-replicaset Deployment</strong></p>
<p>From catalog, select Mongo-replicaset and configure it like that:</p>
<ol>
<li>replicaSetName: rs0</li>
<li>persistentVolume.enabled: true</li>
<li>persistentVolume.size: 12Gi</li>
<li>persistentVolume.storageClass: local-storage</li>
</ol>
<hr />
<p><strong>Result</strong></p>
<p>After doing the ABCD steps, the newly created mongo-replicaset app stays indefinitely in the "Initializing" state.</p>
<p><img src="https://i.stack.imgur.com/JYW6H.png" alt="mongo status stopped at initialized" /></p>
<p>The associated mongo workload contains only one pod, instead of three. And this pod has two 'crashed' containers, bootstrap and mongo-replicaset.</p>
<p><img src="https://i.stack.imgur.com/lzu2j.png" alt="crashed workload with only one pod" /></p>
<hr />
<p><strong>Logs</strong></p>
<p>This is the output from the 4 containers of the only running pod. There is no error, no problem.</p>
<p><img src="https://i.stack.imgur.com/saijl.png" alt="no logs in mongo container" />
<img src="https://i.stack.imgur.com/awx7D.png" alt="allmost no logs in copy-config container" />
<img src="https://i.stack.imgur.com/2ojI6.png" alt="allmost no logs in install container" />
<img src="https://i.stack.imgur.com/Ea4le.png" alt="some logs from bootstrap container" /></p>
<p>I can't figure out what's wrong with this configuration, and I don't have any tools or techniques to analyze the problem. Detailed configuration can be found <a href="https://github.com/rancher/rancher/issues/16298" rel="nofollow noreferrer">here</a>. Please ask me for more commands results.</p>
<p>Thanks you</p>
| <p>All this configuration is correct.</p>
<p>It's missing one detail, because Rancher is a containerized deployment of kubernetes:
the kubelets are deployed on each node in docker containers, so they don't have access to the OS's local folders.</p>
<p>You need to add a volume binding for the kubelets; with that in place, K8s will be able to create the mongo pod using this same binding.</p>
<p>In rancher:
Edit the cluster yaml (Cluster > Edit > Edit as Yaml)</p>
<p>Add the following entry under "services" node:</p>
<pre><code> kubelet:
extra_binds:
- "/mongo:/mongo:rshared"
</code></pre>
|
<p>I am learning Kubernetes and have deployed a headless service on Kubernetes(on AWS) which is exposed to the external world via nginx ingress.</p>
<p>I want <code>nslookup <ingress_url></code> to directly return IP address of PODs.
How to achieve that?</p>
| <p>If you declare a “headless” service with selectors, then the internal DNS for the service will be configured to return the IP addresses of its pods directly. This is a somewhat unusual configuration and you should also expect an effect on other, cluster internal, users of that service.</p>
<p>This is documented <a href="https://kubernetes.io/docs/concepts/services-networking/service/#with-selectors" rel="nofollow noreferrer">here</a>. Example:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
clusterIP: None
selector:
app: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
</code></pre>
|
<p>I recently created a cluster on EKS with eksctl. <code>kubectl logs -f mypod-0</code> bumps into Authorization error:</p>
<p><code>Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
</code>
Any advice and insight is appreciated</p>
| <p>You would need to create a ClusterRoleBinding with a Role pointing towards the user : kube-apiserver-kubelet-client</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubelet-api-admin
subjects:
- kind: User
name: kube-apiserver-kubelet-client
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:kubelet-api-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>kubelet-api-admin is usually a role that has the necessary permissions, but you can replace it with another appropriate role. </p>
|
<p>I'm currently running a Spring Boot Pod in Kubernetes. There's a side car in the pod for the cloud SQL proxy. </p>
<p>Below is my spring Boot application.properties configuration:</p>
<pre><code>server.port=8081
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.continue-on-error=true
spring.datasource.url=jdbc:mysql://localhost:3306/<database_name>
spring.datasource.username=<user_name>
spring.datasource.password=<password>
</code></pre>
<p>Below is my pom.xml extract with plugins and dependencies:</p>
<pre><code><properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.jayway.jsonpath</groupId>
<artifactId>json-path</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-hateoas</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>ca.performance.common</groupId>
<artifactId>common-http</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>com.google.cloud.sql</groupId>
<artifactId>mysql-socket-factory</artifactId>
<version>1.0.10</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
<version>1.1.0.RELEASE</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</code></pre>
<p></p>
<p>And this is my deployment.yaml file:</p>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-dummy-name
spec:
selector:
app: app-dummy-name
ports:
- port: 81
name: http-app-dummy-name
targetPort: http-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-dummy-name
spec:
replicas: 1
selector:
matchLabels:
app: app-dummy-name
template:
metadata:
labels:
app: app-dummy-name
spec:
containers:
- name: app-dummy-name
image: <image url>
ports:
- containerPort: 8081
name: http-api
env:
- name: DB_HOST
value: 127.0.0.1:3306
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=<INSTANCE_CONNECTION_NAME>=:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
</code></pre>
<p>I followed the instructions from <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">this link</a>, so I created the secrets and the service account. However, I'm constantly getting connection refusal errors when I deploy the previous yaml file in Kubernetes after creating the secrets:</p>
<pre><code>org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data;
nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection;
nested exception is com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
</code></pre>
<p>I even tested the Spring boot application locally using the proxy and the same application.properties configuration and it was working fine.</p>
| <p>I'm adding my deployment yaml which worked for me; check whether adding the following helps:</p>
<h3>under volumes:</h3>
<pre><code> volumes:
- name: cloudsql
emptyDir:
</code></pre>
<h3>in the connection: <code>--dir=/cloudsql</code></h3>
<pre><code> - name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=<INSTANCE_CONNECTION_NAME=tcp:5432>",
"-credential_file=/secrets/cloudsql/credentials.json"]
</code></pre>
<p>Also make sure you enabled the <a href="https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview" rel="noreferrer">Cloud SQL Administration API</a>.</p>
<h3>here is my full deployment yaml</h3>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-dummy-name
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: app-dummy-name
tier: backend
spec:
securityContext:
runAsUser: 0
runAsNonRoot: false
containers:
- name: app-dummy-name
image: <image url>
ports:
- containerPort: 80
env:
- name: DB_HOST
value: localhost
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
# proxy_container
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=my-project-id:us-central1:postgres-instance-name=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: cloudsql
mountPath: /cloudsql
# volumes
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: cloudsql
emptyDir:
</code></pre>
<p>here are my pre-delpoy script:</p>
<pre><code>#!/bin/bash
# https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
# 1. Go to the Cloud SQL Service accounts page of the Google Cloud Platform Console.
# GO TO THE SERVICE ACCOUNTS PAGE
# 2. If needed, select the project that contains your Cloud SQL instance.
# 3. Click Create service account.
# 4. In the Create service account dialog, provide a descriptive name for the service account.
# 5. For Role, select Cloud SQL > Cloud SQL Client.
# Alternatively, you can use the primitive Editor role by selecting Project > Editor, but the Editor role includes permissions across Google Cloud Platform.
#
# 6. If you do not see these roles, your Google Cloud Platform user might not have the resourcemanager.projects.setIamPolicy permission. You can check your permissions by going to the IAM page in the Google Cloud Platform Console and searching for your user id.
# Change the Service account ID to a unique value that you will recognize so you can easily find this service account later if needed.
# 7. Click Furnish a new private key.
# 8. The default key type is JSON, which is the correct value to use.
# 9. Click Create.
# 10. enable Cloud SQL Administration API [here](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
# make sure to choose your project
echo "create cloudsql secret"
kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=postgres-sql-credential.json
echo "create cloudsql user and password"
kubectl create secret generic cloudsql-db-credentials \
--from-literal=username=postgres --from-literal=password=123456789
</code></pre>
<h3>postgres-sql-credential.json file:</h3>
<pre><code>{
"type": "service_account",
"project_id": "my-project",
"private_key_id": "1234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\n123445556\n123445\n-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "1234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/postgres-sq%my-project.iam.gserviceaccount.com"
}
</code></pre>
|
<p>I could not understand what the package does; the official docs show almost nothing about <code>unstructured</code>. What is the package used for? Is it used for converting map[string]interface{} to a K8S object?</p>
<p><a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" rel="nofollow noreferrer">https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured</a></p>
| <p>It looks like <code>unstructured</code> provides an interface to kubernetes objects when you don't know the object type upfront; for example, the <a href="https://github.com/kubernetes/client-go/blob/master/dynamic/interface.go" rel="nofollow noreferrer">dynamic</a> package in client-go uses it extensively.</p>
|
<p>We have deployed the etcd of our k8s cluster using static pods, three of them. We want to update the pods to define some labels and a readiness probe for them. I have searched but found no questions/articles mentioning this, so I'd like to know the best practice for upgrading a static pod.</p>
<p>For example, I found that modifying the yaml file directly may leave the pod unscheduled for a long time. Maybe I should remove the old file and create a new one?</p>
| <p>You need to recreate the pod if you want to define a readiness probe for it; for labels, an edit should suffice.</p>
<p>Following error is thrown by Kubernetes if editing readinessProbe:</p>
<pre><code># * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
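<p>To recreate a static pod, a sketch (assuming the kubelet's static pod path is <code>/etc/kubernetes/manifests</code>, the kubeadm default, and <code>etcd.yaml</code> as the manifest file name; check your kubelet configuration):</p>
<pre><code># do this one node at a time so the 3-member etcd keeps quorum
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
# edit /tmp/etcd.yaml to add the labels and the readiness probe
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
# the kubelet removes the old mirror pod and creates a new one from the updated manifest
</code></pre>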
<p>See also <a href="https://stackoverflow.com/a/40363057/499839">https://stackoverflow.com/a/40363057/499839</a></p>
<p>Have you considered using DaemonSets? <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p>
|
<p>I am deploying the helm chart for Elastic-stack on a bare-metal k8s cluster here <a href="https://github.com/helm/charts/tree/master/stable/elastic-stack" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/elastic-stack</a></p>
<p>This includes the helm chart for Elasticsearch here <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/elasticsearch</a></p>
<p>The Elastic-stack chart calls the Elasticsearch with the default values in values.yaml, and I am not setting anything else.</p>
<p>After helm installing, I see the pods <code>elastic-stack-elasticsearch-data-0</code> and <code>elastic-stack-elasticsearch-master-0</code> are stuck in <code>Init:CrashLoopBackOff</code> (after repeating <code>Init:Error</code> for some time).</p>
<p><code>kubectl describe pod</code> shows me that the problem is with the initContainer called <code>chown</code>. The code for this container is here <a href="https://github.com/helm/charts/blob/master/stable/elasticsearch/templates/data-statefulset.yaml#L79" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/elasticsearch/templates/data-statefulset.yaml#L79</a></p>
<p>The relevant output from <code>describe pod</code> is not very helpful:</p>
<pre><code>State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 24 Jan 2019 05:35:14 +0000
Finished: Thu, 24 Jan 2019 05:35:14 +0000
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/usr/share/elasticsearch/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from elastic-stack-elasticsearch-data-token-kgtsm (ro)
</code></pre>
<p>I know that it is able to pull the image, since it pulled it for the previous initContainer.</p>
<p>I have a feeling it has to do with the Mounts. What exactly is <code>/usr/share/elasticsearch/data from data (rw)</code> doing? I created persistentvolumes called <code>es-data-volume</code> and <code>es-master-volume</code> and they have been claimed by <code>data-elastic-stack-elasticsearch-data-0</code> and <code>data-elastic-stack-elasticsearch-master-0</code>. Is that line looking for a volume named <code>data</code>?</p>
<p>I don't know where to look to troubleshoot this problem. What could be some possible causes of this issue? </p>
| <p>I had the same issue and this fixed it for me: I changed the settings on my NFS server (<code>sudo vim /etc/exports</code>)</p>
<p>from:</p>
<pre><code>/data/nfs/kubernetes 192.168.1.0/24(rw,sync,no_subtree_check)
</code></pre>
<p>to:</p>
<pre><code>/data/nfs/kubernetes 192.168.1.0/24(rw,insecure,sync,no_subtree_check,no_root_squash)
</code></pre>
<p>From what I understood, <code>no_root_squash</code> is the key.</p>
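<p>After editing <code>/etc/exports</code>, re-export the shares so the change takes effect (a small sketch; the service name varies by distribution):</p>
<pre><code>sudo exportfs -ra
# if needed, also restart the NFS server, e.g. on Ubuntu/Debian:
sudo systemctl restart nfs-kernel-server
</code></pre>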
<p>Hopefully that will solve it for you as well</p>
|
<p>I tried to configure <code>mongo</code> with authentication on a kubernetes cluster. I deployed the following <code>yaml</code>:</p>
<pre><code>kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:4.0.0
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
# Get password from secret
value: "abc123changeme"
command:
- mongod
- --auth
- --replSet
- rs0
- --bind_ip
- 0.0.0.0
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: mongo-ps
mountPath: /data/db
volumes:
- name: mongo-ps
persistentVolumeClaim:
claimName: mongodb-pvc
</code></pre>
<p>When I tried to authenticate with username "admin" and password "abc123changeme" I received <code>"Authentication failed."</code>. </p>
<p>How can I configure mongo admin username and password (I want to get password from secret)?</p>
<p>Thanks</p>
| <p>The reason the environment variables don't work is that the MONGO_INITDB environment variables are used by the docker-entrypoint.sh script within the image ( <a href="https://github.com/docker-library/mongo/tree/master/4.0" rel="noreferrer">https://github.com/docker-library/mongo/tree/master/4.0</a> ) however when you define a 'command:' in your kubernetes file you override that entrypoint (see notes <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a> )</p>
<p>See the YML below, which is adapted from a few of the examples I found online. Note the learning points for me:</p>
<ol>
<li><p>cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the POD labels REGARDLESS of namespace so it'll try to hook up with any old instance in the cluster. This caused me a few hours of headscratching as I'd removed the environment= labels from the example as we use namespaces to segregate our environments..silly and obvious in retrospect...extremely confusing in the beginning (mongo logs were throwing all sorts of authentication errors and service down type errors because of the cross talk)</p></li>
<li><p>I was new to ClusterRoleBindings and it took me a while to realise they are Cluster level, which I know seems obvious (despite needing to supply a namespace to get kubectl to accept it), but it was causing mine to get overwritten between each namespace. Make sure you create unique names per environment to avoid a deployment in one namespace messing up another, as the ClusterRoleBinding gets overwritten if they're not unique within the cluster</p></li>
<li><p>MONGODB_DATABASE needs to be set to 'admin' for authentication to work.</p></li>
<li><p>I was following <a href="https://thilina.piyasundara.org/2017/11/run-mongo-cluster-with-authentication.html" rel="noreferrer">this example</a> to configure authentication which depended on a sleep5 in the hope the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough so upped it initially as failure to create the adminUser obviously led to connection refused issues. I later changed the sleep to test the daemon with a while loop and a ping of mongo which is more foolproof.</p></li>
<li><p>If you run mongod in a <a href="https://docs.mongodb.com/manual/reference/program/mongod/" rel="noreferrer">container</a> (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.</p></li>
<li><p><a href="https://docs.mongodb.com/manual/core/replica-set-architecture-three-members/" rel="noreferrer">You need at least 3 nodes in a Mongo cluster!</a></p></li>
</ol>
<p>The YML below should spin up and configure a mongo replicaset in kubernetes with persistent storage and authentication enabled.
If you connect into the pod...</p>
<pre><code>kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
</code></pre>
<p>mongo shell is installed in the image so you should be able to connect to the replicaset with...</p>
<pre><code>mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
</code></pre>
<p>And see that you get either rs0:PRIMARY> or rs0:SECONDARY, indicating the pods are in a mongo replica set. Use rs.conf() to verify that from the PRIMARY.</p>
<pre><code>#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
#echo -n "mongoadmin" | base64
init.userid: bW9uZ29hZG1pbg==
#echo -n "adminpassword" | base64
init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
name: mongo-init-credentials
namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here
apiVersion: v1
data:
#echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
name: mongo-key
namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterROleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongo-serviceaccount
namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mongo-somenamespace-serviceaccount-view
namespace: somenamespace
subjects:
- kind: ServiceAccount
name: mongo-serviceaccount
namespace: somenamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer
namespace: somenamespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: somenamespace
name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
namespace: somenamespace
name: mongo-db
labels:
name: mongo-db
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: somenamespace
name: mongo-db
spec:
serviceName: mongo-db
replicas: 3
template:
metadata:
labels:
# Labels MUST match MONGO_SIDECAR_POD_LABELS
# and MUST differentiate between other mongo
# instances in the CLUSTER not just the namespace
# as the sidecar will search the entire cluster
# for something to configure
app: mongo
environment: somenamespace
spec:
#Run the Pod using the service account
serviceAccountName: mongo-serviceaccount
terminationGracePeriodSeconds: 10
#Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongo
topologyKey: "kubernetes.io/hostname"
containers:
- name: mongo
image: mongo:4.0.12
command:
#Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
#in order to pass the new admin user id and password in
- /bin/sh
- -c
- >
if [ -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
#ensure wiredTigerCacheSize is set within the size of the containers memory limit
mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
else
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
mongod --auth;
fi;
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ ! -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- no Admin-user.lock file found yet"
#replaced simple sleep, with ping and test.
while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
touch /data/db/admin-user.lock
if [ "$HOSTNAME" = "mongo-db-0" ]; then
echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
fi;
echo "KUBERNETES LOG $HOSTNAME-shutting mongod down for final restart"
mongod --shutdown;
fi;
env:
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
ports:
- containerPort: 27017
livenessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
readinessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
resources:
requests:
memory: "350Mi"
cpu: 0.05
limits:
memory: "1Gi"
cpu: 0.1
volumeMounts:
- name: mongo-key
mountPath: "/etc/secrets-volume"
readOnly: true
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
# Sidecar searches for any POD in the CLUSTER with these labels
# not just the namespace..so we need to ensure the POD is labelled
# to differentiate it from other PODS in different namespaces
- name: MONGO_SIDECAR_POD_LABELS
value: "app=mongo,environment=somenamespace"
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
#don't be fooled by this..it's not your DB that
#needs specifying, it's the admin DB as that
#is what you authenticate against with mongo.
- name: MONGODB_DATABASE
value: admin
volumes:
- name: mongo-key
secret:
defaultMode: 0400
secretName: mongo-key
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
</code></pre>
|
<p>I have initialized kubernetes v1.13.1 cluster on <code>Ubuntu 16.04</code> using below command:</p>
<pre><code>sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.88.142
</code></pre>
<p>and installed <code>weave</code> using:</p>
<pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
</code></pre>
<p>I have 10 <code>raspberry pi</code> acting as worker nodes and connected to the cluster. All of them are running the deployment fine. These nodes are running pods which try to connect to the iot hub <code>visdwk.azure-devices.net</code> and publish some data. Out of 10 nodes, only a few nodes are able to connect and the others throw the error <code>unable to connect to iot hub</code>. I did a ping test and found out that they were not able to ping google by name, while they could ping the public IP address of google. </p>
<p>This made me think that something is wrong with the <code>coredns</code> pod. I followed this <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">documentation</a> and did below test. </p>
<p>Pod has below contents in <code>/etc/resolv.conf</code></p>
<pre><code>nameserver 10.96.0.10
search visdwk.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>which looks normal to me. All the coredns pods are running fine.</p>
<pre><code>coredns-86c58d9df4-42xqc 1/1 Running 8 1d11h
coredns-86c58d9df4-p6d98 1/1 Running 7 1d6h
</code></pre>
<p>I have also done <code>nslookup kubernetes.default</code> from the busybox container and got the proper response. Below are the logs of <code>coredns-86c58d9df4-42xqc</code></p>
<pre><code>.:53
2019-02-08T08:40:10.038Z [INFO] CoreDNS-1.2.6
2019-02-08T08:40:10.039Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 =
f65c4821c8a9b7b5eb30fa4fbc167769
t
</code></pre>
<p>Above logs also looks normal.</p>
<p>I also cannot say that the pod is unable to resolve the iot hub because of an error from weave, because if weave were throwing errors then I believe the pod would never start and would always be in a failed state, whereas in fact the pod remains in the running state. Please correct me here if I am wrong.</p>
<p>DNS service also seems to be in running state:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1d6h
</code></pre>
<p>But still I am not able to figure out as to why few nodes in the cluster are not able to resolve the iot hub. Can anyone please give me some suggestions here. Please help. Thanks.</p>
<p>Logs from failed pod:</p>
<pre><code> 1550138544: New connection from 127.0.0.1 on port 1883.
1550138544: New client connected from 127.0.0.1 as 6f1e2c4f-c44d-4c27-b9a9-0fb91f816504 (c1, k60).
1550138544: Sending CONNACK to 6f1e2c4f-c44d-4c27-b9a9-0fb91f816504 (0, 0)
1550138544: Received PUBLISH from 6f1e2c4f-c44d-4c27-b9a9-0fb91f816504 (d0, q0, r0, m0, 'devices/machine6/messages/events/', ... (1211 bytes))
1550138544: Received DISCONNECT from 6f1e2c4f-c44d-4c27-b9a9-0fb91f816504
1550138544: Client 6f1e2c4f-c44d-4c27-b9a9-0fb91f816504 disconnected.
1550138547: Saving in-memory database to /mqtt/data/mosquitto.db.
1550138547: Bridge local.machine6 doing local SUBSCRIBE on topic devices/machine6/messages/events/#
1550138547: Connecting bridge iothub-bridge (visdwk.azure-devices.net:8883)
1550138552: Error creating bridge: Try again.
1550138566: New connection from 127.0.0.1 on port 1883.
1550138566: New client connected from 127.0.0.1 as afb6cc2a-ee78-482e-aff0-fc595e06f86a (c1, k60).
1550138566: Sending CONNACK to afb6cc2a-ee78-482e-aff0-fc595e06f86a (0, 0)
1550138566: Received PUBLISH from afb6cc2a-ee78-482e-aff0-fc595e06f86a (d0, q0, r0, m0, 'devices/machine6/messages/events/', ... (1211 bytes))
1550138566: Received DISCONNECT from afb6cc2a-ee78-482e-aff0-fc595e06f86a
1550138566: Client afb6cc2a-ee78-482e-aff0-fc595e06f86a disconnected.
1550138567: New connection from 127.0.0.1 on port 1883.
1550138567: New client connected from 127.0.0.1 as 01b9e135-fbc8-4d67-9962-356e8cf9f080 (c1, k60).
1550138567: Sending CONNACK to 01b9e135-fbc8-4d67-9962-356e8cf9f080 (0, 0)
1550138567: Received PUBLISH from 01b9e135-fbc8-4d67-9962-356e8cf9f080 (d0, q0, r0, m0, 'devices/machine6/messages/events/', ... (755 bytes))
1550138567: Received DISCONNECT from 01b9e135-fbc8-4d67-9962-356e8cf9f080
1550138567: Client 01b9e135-fbc8-4d67-9962-356e8cf9f080 disconnected.
1550138578: Saving in-memory database to /mqtt/data/mosquitto.db.
1550138583: Bridge local.machine6 doing local SUBSCRIBE on topic devices/machine6/messages/events/#
1550138583: Connecting bridge iothub-bridge (visdwk.azure-devices.net:8883)
1550138588: Error creating bridge: Try again.
</code></pre>
<p>Pod is running a mosquitto container which try to connect to <code>visdwk.azure-devices.net</code> and throws error. </p>
<pre><code>Connecting bridge iothub-bridge (visdwk.azure-devices.net:8883)
Error creating bridge: Try again.
</code></pre>
| <p>It would appear that one of your DNS Pods is not providing DNS services. </p>
<p>The evidence is in the statement that "only few nodes are able to connect and other throws error unable to connect to iot hub"</p>
<p>This is a classic symptom of load-balancing with a failed node in the loop.</p>
<p>Try:</p>
<ol>
<li>Remove the DNS server pod that gave the message: <code>visdwk.azure-devices.net.visdwknamespace.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd,ra 175 0.000651078s where visdwk.azure-devices.net</code></li>
<li>Wait for the changes to propagate through the cluster.</li>
<li>Test the connections.</li>
</ol>
<p>If this is correct they should all connect.</p>
<p>To confirm, add the pod back and remove the other one. Retest, they should all fail to connect.</p>
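<p>For step 1, a rough way to do it with kubectl (pod name taken from the question's output; since CoreDNS is managed by a Deployment, the deleted pod is replaced by a fresh one, which is usually enough to rule the faulty instance in or out):</p>
<pre><code># See which node each CoreDNS pod runs on
kubectl -n kube-system get pods -o wide | grep coredns

# Remove the suspect instance; the Deployment schedules a replacement
kubectl -n kube-system delete pod coredns-86c58d9df4-42xqc
</code></pre>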
|
<p>I'm migrating a number of applications from AWS ECS to Azure AKS and being the first production deployment for me in Kubernetes I'd like to ensure that it's set up correctly from the off.</p>
<p>The applications being moved all use resources at varying degrees with some being more memory intensive and others being more CPU intensive, and all running at different scales.</p>
<p>After some research, I'm not sure which would be the best approach out of running a single large cluster and running them all in their own Namespace, or running a single cluster per application with Federation.</p>
<p>I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.</p>
<p>I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should be avoiding one altogether, or whether I should be considering other options?</p>
| <p>Because you are at the beginning of your kubernetes journey I would go with separate clusters for each stage you have (or at least separate dev and prod). It is very easy to take a cluster down (I did it several times through resource starvation). Also, if network policies are not set up correctly, you might find that services from different stages/namespaces (like test and sandbox) communicate with each other, or that a pipeline which should deploy to dev changes something in another namespace.
Why risk production being affected by dev work?</p>
<p>Even if you don't have to upgrade the control plane yourself, aks still has its versions and flags and it is better to test them before moving to production on a separate cluster. </p>
<p>So my initial decision would be to set some hard boundaries: different clusters. Later once you get more knowledge with aks and kubernetes you can review your decision.</p>
|
<p>I am configuring a Kubernetes cluster with 2 nodes in CoreOS as described in <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="noreferrer">https://coreos.com/kubernetes/docs/latest/getting-started.html</a> without <strong>flannel</strong>.
Both servers are in the same network.</p>
<p><strong>But I am getting:
x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")</strong> while running kubelet in worker.</p>
<p>I configured the TLS certificates properly on both the servers as discussed in the doc.</p>
<p>The master node is working fine.
And the kubectl is able to fire containers and pods in master.</p>
<p>Question 1: How to fix this problem?</p>
<p>Question 2: Is there any way to configure a cluster without TLS certificates?</p>
<pre><code>Coreos version:
VERSION=899.15.0
VERSION_ID=899.15.0
BUILD_ID=2016-04-05-1035
PRETTY_NAME="CoreOS 899.15.0"
</code></pre>
<p>Etcd conf:</p>
<pre><code> $ etcdctl member list
ce2a822cea30bfca: name=78c2c701d4364a8197d3f6ecd04a1d8f peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://172.24.0.67:2379
</code></pre>
<p>Master: kubelet.service:</p>
<pre><code>[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=false \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override=172.24.0.67 \
--cluster-dns=10.3.0.10 \
--cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Master: kube-controller.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
command:
- /hyperkube
- controller-manager
- --master=http://127.0.0.1:8080
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
initialDelaySeconds: 15
timeoutSeconds: 1
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>Master: kube-proxy.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>Master: kube-apiserver.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
command:
- /hyperkube
- apiserver
- --bind-address=0.0.0.0
- --etcd-servers=http://172.24.0.67:2379
- --allow-privileged=true
- --service-cluster-ip-range=10.3.0.0/24
- --secure-port=443
- --advertise-address=172.24.0.67
- --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
ports:
- containerPort: 443
hostPort: 443
name: https
- containerPort: 8080
hostPort: 8080
name: local
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>Master: kube-scheduler.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
command:
- /hyperkube
- scheduler
- --master=http://127.0.0.1:8080
- --leader-elect=true
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 1
</code></pre>
<p>Slave: kubelet.service</p>
<pre><code>[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
--api-servers=https://172.24.0.67:443 \
--register-node=true \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override=172.24.0.63 \
--cluster-dns=10.3.0.10 \
--cluster-domain=cluster.local \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Slave: kube-proxy.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
command:
- /hyperkube
- proxy
- --master=https://172.24.0.67:443
- --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
- --proxy-mode=iptables
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/worker-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
volumes:
- name: "ssl-certs"
hostPath:
path: "/usr/share/ca-certificates"
- name: "kubeconfig"
hostPath:
path: "/etc/kubernetes/worker-kubeconfig.yaml"
- name: "etc-kube-ssl"
hostPath:
path: "/etc/kubernetes/ssl"
</code></pre>
| <pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
|
<p>I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:</p>
<p>External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod</p>
<p>For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.</p>
<p>I've got a few questions:</p>
<p>1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?</p>
<p>Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2019-02-13T17:49:29Z"
generation: 1
name: example-service-ingress
namespace: x
resourceVersion: "59178"
selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
rules:
- host: example-service.x.x.x.example.com
http:
paths:
- backend:
serviceName: example-service-service
servicePort: 8080
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>I should specify - we're using an on-prem Kubernetes cluster, rather than on the cloud.</p>
<p>Cheers!</p>
| <p>The "internal load balancing" between Pods of a Service has already been covered in <a href="https://stackoverflow.com/questions/48789227/does-clusterip-service-distributes-requests-between-replica-pods">this question from a few days ago</a>.</p>
<p>Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.</p>
<p>If you want or need fine-grained control of how pods are routed to within a service, it is possible to extend Kubernetes' features - I recommend you look into the traffic management features of <a href="https://istio.io" rel="nofollow noreferrer">Istio</a>, as one of its features is to be able to dynamically control how much traffic different pods in a service receive.</p>
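<p>For illustration only, a sketch of Istio's weighted routing (it assumes a DestinationRule defining the <code>v1</code>/<code>v2</code> subsets; the names mirror the Service from the question and are not part of your current config):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-service-service
spec:
  hosts:
  - example-service-service
  http:
  - route:
    - destination:
        host: example-service-service
        subset: v1          # subsets are defined in a DestinationRule
      weight: 80
    - destination:
        host: example-service-service
        subset: v2
      weight: 20
</code></pre>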
|
<p>I would like to know if there is a command in kubernetes that returns true if all resources in a namespace have the ready status and false otherwise.</p>
<p>Something similar to this (ficticious) command: </p>
<pre><code>kubectl get namespace <namespace-name> readiness
</code></pre>
<p>If there is not one such command, any help guiding me in the direction of how to retrieve this information (if all resources are ready in a given namespace) is appreciated. </p>
| <p>There is no such command. try the below command to check all running pods</p>
<pre><code>kubectl get po -n <namespace> | grep 'Running\|Completed'
</code></pre>
<p>And the below command to check pods that are failed, terminated, in error, etc.:</p>
<pre><code>kubectl get po -n <namespace> | grep -v Running | grep -v Completed
</code></pre>
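<p>If you need a single true/false answer (covering pods only, as above), a small sketch along these lines could work:</p>
<pre><code>#!/bin/sh
# Prints "true" when every pod in the namespace is Running or Completed, "false" otherwise.
NS="$1"
NOT_READY=$(kubectl get po -n "$NS" --no-headers 2>/dev/null | grep -cv 'Running\|Completed')
if [ "$NOT_READY" -eq 0 ]; then
  echo true
else
  echo false
fi
</code></pre>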
|
| <p>I have a server running on ubuntu where I need to expose my app using kubernetes tools. I created a cluster using minikube with a virtualbox machine, and with the command kubectl expose deployment I was able to expose my app... but only in my local network. This means that when I run minikube ip I receive a local ip. My question is: how can I access my minikube machine from outside?
I think the answer will be "port-forwarding". But how can I do that?</p>
| <p>You can use SSH port forwarding to access your services from host machine in the following way:</p>
<pre><code>ssh -R 30000:127.0.0.1:8001 user@192.168.0.20
</code></pre>
<p>In which <code>8001</code> is port on which your service is exposed, <code>192.168.0.20</code> is minikube IP.</p>
<p>Now you'll be able to access your application from your laptop pointing the browser to <code>http://192.168.0.20:30000</code></p>
|
| <p>I'm having a hard time configuring mountPath as a relative path.
Let's say I'm running the deployment from the <code>/user/app</code> folder and I want to create a secret file under <code>/user/app/secret/secret-volume</code> as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-test-pod
spec:
containers:
- name: test-container
image: nginx
volumeMounts:
# name must match the volume name below
- name: secret-volume
mountPath: secret/secret-volume
# The secret data is exposed to Containers in the Pod through a Volume.
volumes:
- name: secret-volume
secret:
secretName: test-secret
</code></pre>
<p>For some reason the file <code>secret-volume</code> is created in the root directory <code>/secret/secret-volume</code>.</p>
| <p>It's because you have <code>mountPath: secret/secret-volume</code> change it to <code>mountPath: /user/app/secret/secret-volume</code></p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Check documentation here</a>.</p>
|
<p>My Docker for Windows <code>~/.kube/config</code> file was replaced when setting up access to cloud based K8s cluster.</p>
<p>Is there a way to re-create it without having to restart Docker for Windows Kubernetes?</p>
<p><em>Update</em>
My current <code>~/.kube/config</code> file is now set to a GKE cluster. I don't want to reset Docker for Kubernetes and clobber it. Instead I want to create a separate kubeconfig file for Docker for Windows i.e. place it in some other location rather than <code>~/.kube/config</code>.</p>
| <p>You probably want to back up your <code>~/.kube/config</code> for GKE and then disable/reenable Kubernetes on Docker for Windows. Pull up a Windows command prompt:</p>
<pre><code>copy <where-your-.kube-is>\config <where-your-.kube-is>\config.bak
</code></pre>
<p>Then follow <a href="https://blog.docker.com/2018/01/docker-windows-desktop-now-kubernetes/" rel="noreferrer">this</a>. In essence, uncheck the box, wait for a few minutes and check it again.</p>
<p><a href="https://i.stack.imgur.com/87shb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/87shb.png" alt="docker for windows"></a></p>
<p>You can re-recreate without disabling/reenabling Kubernetes on Docker but you will have to know exactly where your API server and credentials (certificates, etc):</p>
<pre><code> kubectl config set-context ...
kubectl config use-context ...
</code></pre>
<p>What's odd is that you are specifying <code>~/.kube/config</code>, where the <code>~</code> (tilde) is a unix/linux thing; maybe what you mean is <code>$HOME</code></p>
|
<p>I have installed Rancher 2 and created a kubernetes cluster of internal vm's ( no AWS / gcloud).</p>
<p>The cluster is up and running.</p>
<p>I logged into one of the nodes. </p>
<p>1) Installed Kubectl and executed kubectl cluster-info . It listed my cluster information correctly.</p>
<p>2) Installed helm </p>
<pre><code>curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
root@lnmymachine # helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
</code></pre>
<p>3) Configured helm referencing <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/" rel="noreferrer">Rancher Helm Init</a></p>
<pre><code>kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
</code></pre>
<p>Tried installing Jenkins via helm</p>
<pre><code>root@lnmymachine # helm ls
Error: Unauthorized
root@lnmymachine # helm install --name initial stable/jenkins
Error: the server has asked for the client to provide credentials
</code></pre>
<p>Browsed similar issues and few of them were due to multiple clusters. I have only one cluster. kubectl gives all information correctly.</p>
<p>Any idea what's happening?</p>
| <p>It seems there is a mistake while creating the <code>ClusterRoleBinding</code>:</p>
<p>Instead of <code>--clusterrole cluster-admin</code>, you should have <code>--clusterrole=cluster-admin</code></p>
<p>You can check if this is the case by verifying if ServiceAccount, ClustrerRoleBinding were created correctly.</p>
<p><code>kubectl describe -n kube-system sa tiller</code></p>
<p><code>kubectl describe clusterrolebinding tiller</code></p>
<p>Seems like they have already fixed this on <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/" rel="nofollow noreferrer">Rancher Helm Init</a> page.</p>
|
| <p>I'm new to kubernetes and I have a question about kubernetes networking.
In my setup each node has two interfaces. The first interface (eth0) is in a private range (for example 172.20.12.10) and the second has a public address.</p>
<pre><code>auto eth0
iface eth0 inet static
address 172.20.12.10
netmask 255.255.0.0
network 172.20.0.0
broadcast 172.16.255.255
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet static
address x.x.x.x
gateway y.y.y.y
</code></pre>
<p>Apparently kubernetes network configuration depends on the node's default gateway, so the above node network configuration doesn't work correctly.</p>
<p>my question is : how can I access internet in my containers?</p>
| <p>The <code>--apiserver-advertise-address</code> argument for <code>kubeadm init</code> can be used to make k8s use an interface different than the default network interface of the nodes:</p>
<pre><code>--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on.
Specify '0.0.0.0' to use the address of the default network interface.
</code></pre>
<p>Also, add a flag to <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> that specifies the private IP of the worker node:</p>
<pre><code>--node-ip=<private-node-ip>
</code></pre>
<p>Finally, when you run <code>kubeadm join</code> on the worker nodes, make sure you provide the private IP of the API server.</p>
<p>More info is in:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#using-internal-ips-in-your-cluster" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#using-internal-ips-in-your-cluster</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/33618#issuecomment-250082421" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/33618#issuecomment-250082421</a></li>
</ul>
|
<p>When deploying a Ruby based Passenger standalone application via Kubernetes we
ran into the issue of losing the capability of monitoring them via
<code>passenger-status</code>. There is a <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/passenger" rel="nofollow noreferrer">telegraf
plugin</a>
or a <a href="https://github.com/stuartnelson3/passenger_exporter" rel="nofollow noreferrer">passenger exporter</a> to forward metrics, which need access to the output of the
<code>passenger-status</code> binary.</p>
<p>Following the philosophy of having one (main) process per Container using a
Sidecar container to gather the metrics would be reasonable when deploying to
Kubernetes. Accessing the output of <code>passenger-status</code> from the other container is the challenge here. Link the files into another container is <a href="https://github.com/kubernetes/kubernetes/issues/6120#issuecomment-87139953" rel="nofollow noreferrer">not supported</a>. Setting up an directory for both containers and copy executables seems to be over complex.</p>
<p>Communication between Containers within one Pod works via the loopback network. Therefore exposing metrics via HTTP is a common pattern to export those. So we are looking into different ways of exposing the <code>passenger-status</code> metrics via HTTP:</p>
<h2>via Application</h2>
<p>Running the command via <em><a href="http://ruby-doc.org/core-2.5.0/Kernel.html#method-i-60" rel="nofollow noreferrer">Kernel#`</a></em> kind of defeats the purpose of monitoring it. This will only return when there are enough passenger processes free to answer this request. Once the passenger queue gets full the monitoring will also not work anymore, which is exactly what we want to see here.</p>
<h2>CGI script</h2>
<p>As nginx only supports FastCGI, it is necessary to have something like <a href="https://github.com/gnosek/fcgiwrap" rel="nofollow noreferrer">fcgiwrap</a> to execute scripts. fcgiwrap itself needs to have another process running though, which itself needs monitoring. Furthermore it violates the idea of having one process per container.</p>
<h2>Lua script</h2>
<p>A lua snippet like this would probably work:</p>
<pre><code>location /passenger-status {
content_by_lua_block {
os.execute("/opt/ruby/bin/passenger-status")
}
}
</code></pre>
<p>However, adding Lua scripting to every production container just for this purpose seems to be cracking a walnut with a sledgehammer.</p>
<h2>Second Passenger instance</h2>
<p>Having a second tiny ruby script as passenger endpoint for the monitoring would also probably work:</p>
<pre><code>http {
...
server {
listen 80;
server_name _;
root /app;
passenger_enabled on;
...
}
server {
listen 8080;
server_name _;
root /monitoring;
passenger_enabled on;
...
}
...
}
</code></pre>
<hr>
<p>All in all I don't find any of those approaches satisfactory. What are your thoughts or solutions on this topic?</p>
| <p>We went with the "Second Passenger instance" approach and have a <a href="https://github.com/xing/passenger-prometheus-exporter-app" rel="nofollow noreferrer">second ruby process group within passenger</a>. As already described in the question, integration works by adding a snippet like this to your <code>nginx.conf</code>:</p>
<pre class="lang-sh prettyprint-override"><code> server {
server_name _;
listen 0.0.0.0:10254;
root '/monitor/public';
passenger_app_root '/monitor';
passenger_app_group_name 'Prometheus exporter';
passenger_spawn_method direct;
passenger_enabled on;
passenger_min_instances 1;
passenger_load_shell_envvars off;
}
</code></pre>
<p>This will start another ruby process serving a prometheus endpoint on <code>http://<ip-of-this-server>:10254/metrics</code> that exposes the passenger metrics to be collected by your usual Kubernetes monitoring infrastructure. A response of this could look like:</p>
<pre><code># HELP passenger_capacity Capacity used
# TYPE passenger_capacity gauge
passenger_capacity{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 1
# HELP passenger_wait_list_size Requests in the queue
# TYPE passenger_wait_list_size gauge
passenger_wait_list_size{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 0
# HELP passenger_processes_active Active processes
# TYPE passenger_processes_active gauge
passenger_processes_active{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 0
</code></pre>
<p>Find the project at <a href="https://github.com/xing/passenger-prometheus-exporter-app" rel="nofollow noreferrer">passenger-prometheus-exporter-app</a>.</p>
|
<p>There are a vast majority of tutorials and documentation on the web where Flask is running in development state. The log looks like this in development mode:</p>
<pre><code>* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
</code></pre>
<p>I want to know more about how to make it production ready. I've seen documentation on this as well using production ready WSGI servers and <code>nginx</code> as reverse proxy in front. But can somebody tell me why WSGI and reverse proxy is needed?</p>
<p>If my Flask application is dockerized and running in Google Kubernetes Engine is it even necessary then? Will GKE not take care of the purpose of WSGI and reverse proxy?</p>
| <p>As <a href="http://flask.pocoo.org/docs/1.0/deploying/" rel="nofollow noreferrer">Flask's documentation</a> states:</p>
<blockquote>
<p>Flask’s built-in server is not suitable for production</p>
</blockquote>
<p>Why WSGI? It's <a href="https://www.python.org/dev/peps/pep-0333/" rel="nofollow noreferrer">a standard way</a> to deploy Python web apps, it gives you options when choosing a server (i.e. you can choose the best fit for your application/workflow without changing your application), and it allows offloading scaling concerns to the server.</p>
<p>Why a reverse proxy? It depends on the server. Here is <a href="http://docs.gunicorn.org/en/stable/deploy.html" rel="nofollow noreferrer">Gunicorn's rationale</a>:</p>
<blockquote>
<p>... we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks.</p>
</blockquote>
<p>Here is <a href="https://docs.pylonsproject.org/projects/waitress/en/latest/reverse-proxy.html" rel="nofollow noreferrer">Waitress's rationale</a> for the same:</p>
<blockquote>
<p>Often people will set up "pure Python" web servers behind reverse proxies, especially if they need TLS support (Waitress does not natively support TLS). Even if you don't need TLS support, it's not uncommon to see Waitress and other pure-Python web servers set up to only handle requests behind a reverse proxy; these proxies often have lots of useful deployment knobs.</p>
</blockquote>
<p>Other practical reasons for a reverse proxy may include <em>needing</em> a reverse proxy for multiple backends (some of which may not be Python web apps), caching responses, and serving static content (something which Nginx, for example, happens to be good at). Not all WSGI servers need a reverse proxy: <a href="https://uwsgi-docs.readthedocs.io/en/latest/" rel="nofollow noreferrer">uWSGI</a> and <a href="https://cherrypy.org/" rel="nofollow noreferrer">CherryPy</a> treat it as optional.</p>
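<p>As a minimal sketch (assuming the Flask app object is called <code>app</code> in <code>app.py</code> and Gunicorn is the chosen WSGI server), running it behind a proxy could look like:</p>
<pre><code>pip install gunicorn
# 4 worker processes, bound to the same port the development server used
gunicorn --workers 4 --bind 0.0.0.0:5555 app:app
</code></pre>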
<p><sub>P.S. <a href="https://cloud.google.com/appengine/docs/standard/python/how-requests-are-handled" rel="nofollow noreferrer">Google App Engine</a> seems to be WSGI-compliant and doesn't require any additional configuration.</sub></p>
|
<p>While doing health check for kubernetes pods, why liveness probe is needed even though we already maintain readiness probe ?</p>
<p>The readiness probe already keeps checking whether the application within the pod is ready to serve requests or not, which implies that the pod is live. So why is a liveness probe still needed?</p>
| <p>The probes have different meaning with different results:</p>
<ul>
<li>failing liveness probes -> restart container</li>
<li>failing readiness probes -> do not send traffic to that pod</li>
</ul>
<p>You can not determine liveness from readiness and vice-versa. Just because pod cannot accept traffic right know, doesn't mean restart is needed, it can mean that it just needs time to finish some work.</p>
<p>If you are deploying e.g. php app, those two will probably be the same, but k8s is a generic system, that supports many types of workloads.</p>
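<p>For illustration, a container spec often defines both probes, each with its own consequence (the path and port here are placeholders):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:            # failing -> pod is removed from Service endpoints
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:             # failing -> container is restarted
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
</code></pre>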
<hr />
<p>From: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.</p>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
<hr />
<p><em>Sidenote: Actually readiness should be a subset of liveness, that means readiness implies liveness (and failing liveness implies failing readiness). But that doesn't change the explanation above, because if you have only readiness, you can only imply when restart is NOT needed, which is the same as not having any probe for restarting at all. Also because of probes are defined separately there is no guarantee for k8s, that one is subset of the other</em></p>
|
<p>I’m setting up Gitlab Auto DevOps using Kubernetes. When deploying, I am getting this error for the auto-deploy-app container:
Liveness probe failed: Get <a href="http://xx.xx.xx.xx:5000/" rel="nofollow noreferrer">http://xx.xx.xx.xx:5000/</a>: dial tcp xx.xx.xx.xx:5000: getsockopt: connection refused</p>
<p>Has anyone run into this?</p>
| <p>I had the same problem. It can have many reasons. </p>
<ul>
<li><p>You should make sure that your application returns a 200 OK on the basepath "/" and not e.g. a redirect, as this makes your health check fail.</p></li>
<li><p>Make sure you allow HTTP GET requests without authentication on the basepath "/".</p></li>
<li><p>Another more tricky reason is that your application startup time might exceed the initialDelay of liveness/readiness probe and thus the check fails too often before the application is even ready. In that case either add more CPU power or increase the delay in the liveness probe.</p></li>
</ul>
<p>See this issue for more information on the second reason: <a href="https://github.com/kubernetes/kubernetes/issues/62594#issuecomment-420685737" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/62594#issuecomment-420685737</a></p>
<p>The readiness/liveness probe initialDelay time can be modified by setting the respective value for the helm charts. E.g. in the deploy function add </p>
<pre><code>helm upgrade --install \
--wait \
--set livenessProbe.initialDelaySeconds="60" \
--set readinessProbe.initialDelaySeconds="60" \
...
</code></pre>
<p>To the helm chart upgrades.</p>
|
<p>I want to calculate the actual Container CPU usage by dividing the number of used cores with the CPU limit (number of assignable cores). Accordingly I get two different metrics for 4 pods:</p>
<ul>
<li>Number of used cores for each pod</li>
<li>Number of available cores for each pod</li>
</ul>
<p><strong>My problem:</strong></p>
<p>I'd like to get the CPU usage for each container (number of used cores / number of available cores).</p>
<p><strong>What I tried:</strong></p>
<p>Each of these two queries return exactly what I want:</p>
<ol>
<li>Number of currently used cores for each pod:</li>
</ol>
<p>(I am using label_replace because one metric uses <code>pod_name</code> as metric name and the other uses <code>pod</code>)</p>
<p><code>label_replace(sum(rate(container_cpu_usage_seconds_total{pod_name=~"rewe-bd-palantir-vernemq.*", container_name="vernemq"}[1m])) by (pod_name), "pod", "$1", "pod_name", "(.*)")
</code></p>
<p>Response: <a href="https://monosnap.com/direct/6EPuLF59HBJaYsAmKG6CM0fRPyUXDk" rel="nofollow noreferrer">https://monosnap.com/direct/6EPuLF59HBJaYsAmKG6CM0fRPyUXDk</a></p>
<ol start="2">
<li>Number of available cores for each pod:</li>
</ol>
<p><code>sum(kube_pod_container_resource_limits_cpu_cores{pod=~"rewe-bd-palantir-vernemq.*", container="vernemq", job="kubernetes-pods"}) by (pod)
</code></p>
<p>Response: <a href="https://monosnap.com/direct/dRBfitwcxHIrTRYDmYHwV5YkomYJjH" rel="nofollow noreferrer">https://monosnap.com/direct/dRBfitwcxHIrTRYDmYHwV5YkomYJjH</a></p>
<p>This query didn't work (returned no data points):</p>
<pre><code>label_replace(sum(rate(container_cpu_usage_seconds_total{pod_name=~"rewe-bd-palantir-vernemq.*", container_name="vernemq"}[1m])) by (pod_name), "pod", "$1", "pod_name", "(.*)") / sum(kube_pod_container_resource_limits_cpu_cores{pod=~"rewe-bd-palantir-vernemq.*", container="vernemq", job="kubernetes-pods"}) by (pod)
</code></pre>
<p><strong>My question:</strong></p>
<p>How could I achieve a query which returns the CPU usage (number of used cores / number of available cores) for each pod? </p>
| <p>You need to use the <code>on()</code> function as well. So it would be like this.</p>
<pre><code>label_replace(sum(rate(container_cpu_usage_seconds_total{pod_name=~"rewe-bd-palantir-vernemq.*", container_name="vernemq"}[1m])) by (pod_name), "pod", "$1", "pod_name", "(.*)") / on(pod) sum(kube_pod_container_resource_limits_cpu_cores{pod=~"rewe-bd-palantir-vernemq.*", container="vernemq", job="kubernetes-pods"}) by (pod)
</code></pre>
|
<p>I have a weird problem with PGAdmin4.</p>
<p><strong>My setup</strong></p>
<ul>
<li><code>pgadmin</code> 4.1 deployed on <code>kubernetes</code> using the <code>chorss/docker-pgadmin4</code> image. One POD only to simplify troubleshooting;</li>
<li><code>Nginx ingress controller</code> as reverse proxy on the cluster;</li>
<li><code>Classic ELB</code> in front to load balance incoming traffic on the cluster.</li>
</ul>
<p><code>ELB <=> NGINX <=> PGADMIN</code></p>
<p>From a DNS point of view, the hostname of pgadmin is a CNAME towards the ELB.</p>
<p><strong>The problem</strong></p>
<p>Application is correctly reachable, users can login and everything works just fine. Problem is that after a couple of (roughly 2-3) minutes the session is invalidated and users are requested to login again. This happens regardless of the fact that pgadmin is actively used or not.</p>
<p>After countless hours of troubleshooting, I found out that the problem happens when the DNS resolution of ELB's CNAME switches to another IP address.</p>
<p>In fact, I tried:</p>
<ul>
<li>connecting to the pod directly by connecting to the <code>k8s service</code>'s node port directly => session doesn't expire;</li>
<li>connecting to <code>nginx</code> (bypassing the ELB) directly => session doesn't expire;</li>
<li>mapping one of the ELB's IP addresses in my hosts file => session doesn't expire.</li>
</ul>
<p>Given the above test, I'd conclude that the Flask app (PGAdmin4 is a Python Flask application apparently) is considering my cookie invalid after the remote address changes for my hostname.</p>
<p>Any Flask developer that can help me fix this problem? Any other idea about something I might be missing?</p>
| <p>PGadmin 4 seems to use Flask-Security for authentication:</p>
<blockquote>
<p>pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.</p>
</blockquote>
<p><a href="https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html" rel="nofollow noreferrer">https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html</a></p>
<p>Flask-Security seems to use Flask-Login:</p>
<blockquote>
<p>Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...</p>
</blockquote>
<p><a href="https://pythonhosted.org/Flask-Security/" rel="nofollow noreferrer">https://pythonhosted.org/Flask-Security/</a></p>
<p>Flask-Login seems to have a feature called "session protection":</p>
<blockquote>
<p>When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.</p>
</blockquote>
<p><a href="https://flask-login.readthedocs.io/en/latest/#session-protection" rel="nofollow noreferrer">https://flask-login.readthedocs.io/en/latest/#session-protection</a></p>
<p>I would assume setting <code>login_manager.session_protection = None</code> would solve the issue, but unfortunately I don't know how to set it in PGadmin. Hope it might help you somehow.</p>
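<p>In a plain Flask-Login application that setting would look roughly like the snippet below; how pgAdmin wires Flask-Security/Flask-Login together internally may differ, so treat it only as a sketch:</p>
<pre><code>from flask import Flask
from flask_login import LoginManager

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

login_manager = LoginManager(app)
# Disable the IP/user-agent based session identifier check so the session
# survives requests arriving via different load balancer addresses.
login_manager.session_protection = None
</code></pre>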
|
<p>I have a Kubernetes Cluster running on Azure (AKS / ACS).
I created the cluster using the Portal. There, an aadClient was created automatically with a client secret that has now expired.</p>
<p>Can somebody please tell me how to set the new client secret which I already created?</p>
<p>Right now AKS is not able to update Loadbalancer values or mount persistant storage.</p>
<p>Thank you!</p>
| <p>AKS client credentials can be updated via command:</p>
<pre><code>az aks update-credentials \
--resource-group myResourceGroup \
--name myAKSCluster \
--reset-service-principal \
--service-principal $SP_ID \
--client-secret $SP_SECRET
</code></pre>
<p>Official documentation: <a href="https://learn.microsoft.com/en-us/azure/aks/update-credentials#update-aks-cluster-with-new-credentials" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/update-credentials#update-aks-cluster-with-new-credentials</a></p>
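<p>The <code>$SP_ID</code> and <code>$SP_SECRET</code> values can be obtained roughly as shown in the linked documentation, e.g.:</p>
<pre><code># Get the service principal (client) id of the existing cluster
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query servicePrincipalProfile.clientId -o tsv)
# Generate a new secret for that service principal
SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)
</code></pre>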
|
<p>I have a couple of different contexts with namespaces defined within my k8s cluster.
Using different pipelines in Jenkins I'd like to switch between them.
The idea is: based on the git branch, do a deployment to a specific environment. In order to do that I'd have to switch to the existing production/dev/feature contexts and namespaces.</p>
<p>I want to use <a href="https://wiki.jenkins.io/display/JENKINS/Kubernetes+Cli+Plugin" rel="nofollow noreferrer">https://wiki.jenkins.io/display/JENKINS/Kubernetes+Cli+Plugin</a>
This is an example syntax for Jenkins scripted piepline:</p>
<p>Example:</p>
<pre><code>node {
stage('List pods') {
withKubeConfig([credentialsId: '<credential-id>',
caCertificate: '<ca-certificate>',
serverUrl: '<api-server-address>',
contextName: '<context-name>',
clusterName: '<cluster-name>'
]) {
sh 'kubectl get pods'
}
}
}
</code></pre>
<p>And as you can see it does not accept anything for <code>namespace</code></p>
<p>This is an example production context with namespace, which I'm using:</p>
<pre><code>$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* aws-production cluster.example.com cluster.example.com aws-production
</code></pre>
<p>And this is a result of running that step:
<a href="https://i.stack.imgur.com/XTj3W.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XTj3W.jpg" alt="enter image description here"></a></p>
<p>How to resolve that issue? Is it possible to use namespaces using mentioned plugin at all? If not is there an alternative way to achieve context+namespace switch during Jenkins pipeline step?</p>
<p>EDIT:
Seems that adding entry to .kube/config on Jenkins machine doesn't help for that issue. This plugin <code>kubernetes-cli</code> for Jenkins creates isolated context and does not care much about .kube/config :(</p>
<p>Manually forcing a config, doesn't help in this case too;</p>
<p><code>kubectl config use-context aws-production --namespace=aws-production --kubeconfig=/home/jenkins/.kube/config</code></p>
| <p>Thanks to help from the official Jenkins IRC channel, the solution is below.</p>
<p>Solution:
You have to pass the raw .kube/config file as a <code>credentialsId</code>.</p>
<ol>
<li>Create new credentials in Jenkins. I used the Secret file option.</li>
<li>Upload your desired .kube/config and give it a name/id in the credentials creation form.</li>
<li>Pass the id/name you have given to this credential resource to the <code>credentialsId</code> field.</li>
</ol>
<p>withKubeConfig([<code>credentialsId: 'file-aws-config'</code>, [..]]) ...</p>
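<p>A minimal sketch of how this could look in a scripted pipeline, assuming the credential id <code>file-aws-config</code> from above; since the plugin has no namespace field, the namespace is passed straight to kubectl:</p>
<pre><code>node {
    stage('Deploy to production') {
        // 'file-aws-config' is the Secret file credential holding the raw kubeconfig
        withKubeConfig([credentialsId: 'file-aws-config']) {
            // pass the namespace explicitly on each kubectl call
            sh 'kubectl --namespace=aws-production get pods'
        }
    }
}
</code></pre>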
|
<p>From the master node in a Kubernetes cluster, I can run <code>kubectl get nodes</code> and see the status of any individual node on the cluster, as <code>kubectl</code> can find the cluster cert for authentication. On my local workstation, assuming I have auth configured correctly, I can do the same.</p>
<p>From the nodes that are <em>joined to the Kubernetes master</em>, is there any way, short of configuring auth so that <code>kubectl</code> works, that I can identify if the node is in <code>Ready</code> or <code>Not Ready</code> state?</p>
<p>I'm trying to build some monitoring tools which reside on the nodes themselves, and I'd like to avoid having to set up service accounts and the like just to check on the node status, in case there's some way I can identify it via kubelet, logs, a file somewhere on the node, a command, etc...</p>
| <p>There's no canonical way of doing this, one option is to use kubelet API.</p>
<p>The kubelet exposes an API which the controlplane talks to in order to make it run pods. By default, this runs on port 10250 but this is a write API and needs to be authenticated.</p>
<p>However, the kubelet also has a flag <code>--read-only-port</code> which by default is on port 10255. You can use this to check if the kubelet is ready by hitting the healthz endpoint.</p>
<pre><code>curl http://<ip>:10255/healthz
ok
</code></pre>
<p>This healthz endpoint is also available on localhost:</p>
<pre><code>curl http://localhost:10248/healthz
</code></pre>
<p>If this isn't sufficient, you could possibly check for a running pod by hitting the pods API:</p>
<pre><code>curl http://<ip>:10255/pods
</code></pre>
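<p>For a node-local monitoring tool, a small wrapper script might look like this sketch (it assumes the read-only port is enabled, which is the default on many but not all distributions):</p>
<pre><code>#!/usr/bin/env bash
# Exit non-zero if the local kubelet does not answer its healthz endpoint in time.
if curl -sf --max-time 5 http://localhost:10255/healthz > /dev/null; then
  echo "kubelet healthy"
else
  echo "kubelet not healthy (or read-only port disabled)"
  exit 1
fi
</code></pre>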
|
<p>I have a requirement that a test server should use the port range 20000 - 22767</p>
<p>I edited the <code>kubeadm-config</code> with the command</p>
<p><code>kubectl edit cm kubeadm-config -n kube-system</code></p>
<p>When I look at the result I see that the change seems to have been stored:</p>
<p>The command <code>$ kubeadm config view</code> gives me</p>
<pre><code>apiServer:
extraArgs:
authorization-mode: Node,RBAC
service-node-port-range: 20000-22767
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
</code></pre>
<p>But when I later try to install something within the new port range I get the error</p>
<pre><code>helm upgrade --install --kubeconfig /external-storage/workspace/potapi-orchestration/clusters/at/admin.conf potapi-services charts/potapi-services -f charts/potapi-services/values.at.yaml
Error: UPGRADE FAILED: Service "potapi-services" is invalid: spec.ports[0].nodePort: Invalid value: 21011: provided port is not in the valid range. The range of valid ports is 30000-32767
</code></pre>
<p>I have fiddled with the suggestions here but with no luck: <a href="https://github.com/kubernetes/kubeadm/issues/122" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/122</a></p>
| <p>It is possible to update the <code>service-node-port-range</code> from its default values.</p>
<p>I updated the file <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> with <code>--service-node-port-range=20000-22767</code>. </p>
<p>The apiserver was restarted and the port range was updated.</p>
<p>I wrote a <a href="http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range" rel="noreferrer">blog post</a> about it.</p>
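<p>For reference, a sketch of where the flag ends up in the static pod manifest (only the relevant lines are shown; the other flags are whatever your cluster already has):</p>
<pre><code># excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags stay untouched ...
    - --service-node-port-range=20000-22767   # add or adjust this line
</code></pre>
<p>The kubelet watches that manifest directory, so saving the file restarts the apiserver automatically.</p>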
|
<p>My Kubernetes cluster is running under Google Cloud. I have a deployment running on port 443, and a LoadBalancer exposes it to the internet. </p>
<p>I created it in this way:</p>
<pre><code>kubectl expose deployment my-app --target-port=443 --port=443 --type=LoadBalancer
</code></pre>
<p>After running this command, the loadbalancer points to the <code>my-app</code> deployment. Now I created <code>my-app2</code> and I want to change the loadbalancer to point to the new deployment (<code>my-app2</code>).</p>
<p><strong>Note:</strong> Deleting and re-creating the deployment releases the external IP address, and I want to avoid that. </p>
<p>How can I patch the existing service to point to another deployment <em>without</em> losing the external IP?</p>
| <p>Finally, found the solution:</p>
<pre><code>kubectl patch service old-app -p '{"spec":{"selector":{"app": "new-app"}}}'
</code></pre>
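<p>To double-check that the selector change took effect, you can inspect the service afterwards (service and app names here are the illustrative ones from above):</p>
<pre><code># the selector should now show app=new-app
kubectl describe service old-app | grep -i selector
# the endpoints should list the IPs of the new deployment's pods
kubectl get endpoints old-app
</code></pre>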
|
<p>According to some of the tech blogs (e.g. <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82" rel="nofollow noreferrer">Understanding kubernetes networking: services</a>), k8s service dispatch all the requests through iptable rules.<br>
What if one of the upstream pods crashes when a request happens to be routed to that pod?<br>
Is there a failover mechanism in kubernetes service?<br>
Will the request will be forwarded to next pod automatically?<br>
How does kubernetes solve this through iptable?</p>
| <blockquote>
<p>Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods</p>
</blockquote>
<p>Here is the detail <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">k8s service & endpoints</a></p>
<p>So the answer is the <strong>Endpoints object</strong>. </p>
<p><code>kubectl get endpoints,services,pods</code></p>
<p>There are liveness and readiness checks which decide whether the pod is able to process requests. The kubelet (together with Docker) has a mechanism to control the life cycle of pods. If the pod is healthy then it is part of the Endpoints object. </p>
|
<p>I want a command to check if the ping utility is present in a pod. I tried this:</p>
<pre><code>kubectl exec -it auxiliary-etcd-ubuntu -n kube-system -c etcd-auxiliary-0 ping -c 1 127.0.0.1 ; echo $?
</code></pre>
<p>The response is:</p>
<pre><code>Error from server (BadRequest): container 1 is not valid for pod auxiliary-etcd-ubuntu
1
</code></pre>
<p>Is there a better way to check if the ping utility is already present or installed in a Kubernetes pod?</p>
<p>Thanks in advance.</p>
| <p>If you just want to check whether a command is present/installed inside the <code>POD</code>:</p>
<p><code>kubectl exec -it auxiliary-etcd-ubuntu -- which ping ; echo $?</code></p>
<p>This will give you exit status <code>1</code> if it doesn't exist.</p>
<p>Also</p>
<p><code>kubectl exec -it auxiliary-etcd-ubuntu -- whereis ping</code></p>
<p>Which will provide a path to install location.</p>
|
<p>I am attempting to run Wildfly in HA Full mode on Kubernetes using the KUBE_PING JGroups protocol. Everything starts up fine, and I can scale the cluster up, and the nodes recognize one another without any issues.</p>
<p>The problem occurs when I attempt to scale-down the cluster. ActiveMQ Artemis continually complains that it can't connect to the disconnected node, even though JGroups has acknowledged that the old node has left the cluster.</p>
<p>I'm wondering what I might be doing wrong in my JGroups configuration. I have attached some log messages, as well as my JGroups configuration for <code>KUBE_PING</code>.</p>
<p>Just to make sure I've given as much info as possible, I'm running on the most recent Wildfly official docker image, <strong>15.0.1.Final</strong>, which runs on JDK 11.</p>
<p>Thanks in advance for any help!</p>
<p><strong>Edit:</strong> Fixed typos</p>
<p><strong>JGroups confirmation of node disconnect</strong></p>
<pre><code>wildfly-kube 12:48:36,514 INFO [org.apache.activemq.artemis.core.server] (Thread-22 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@10f88645)) AMQ221027: Bridge ClusterConnectionBridge@379d51e3 [name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb], temp=false]@195607a8 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@379d51e3 [name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.7ee91868-337b-11e9-9849-ce422226aad5, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb], temp=false]@195607a8 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1699294977[nodeUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6, address=jms, server=ActiveMQServerImpl::serverUUID=314721ae-337b-11e9-9cfa-0e8a9828b1cb])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4], discoveryGroupConfiguration=null]] is connected
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:38,905 WARN [org.apache.activemq.artemis.core.server] (Thread-5 (ActiveMQ-client-global-threads)) AMQ222095: Connection failed with failedOver=false
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:43,758 ERROR [org.jgroups.protocols.TCP] (TQ-Bundler-7,ejb,wildfly-kube-b6f69fb9-b2hd5) JGRP000034: wildfly-kube-b6f69fb9-b2hd5: failure sending message to wildfly-kube-b6f69fb9-nshvn: java.net.SocketTimeoutException: connect timed out
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,759 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,772 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,777 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,779 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,787 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,788 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,791 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN000094: Received new cluster view for channel ejb: [wildfly-kube-b6f69fb9-b2hd5|2] (1) [wildfly-kube-b6f69fb9-b2hd5]
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 12:48:44,792 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-13,ejb,wildfly-kube-b6f69fb9-b2hd5) ISPN100001: Node wildfly-kube-b6f69fb9-nshvn left the cluster
</code></pre>
<p><strong>Repeated ActiveMQ Artemis Warnings (Every 3 seconds)</strong></p>
<pre><code>wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 13:02:11,825 WARN [org.apache.activemq.artemis.core.server] (Thread-55 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@866e807)) AMQ224091: Bridge ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1432944139[nodeUUID=7ee91868-337b-11e9-9849-ce422226aad5, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4, address=jms, server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying
wildfly-kube-b6f69fb9-b2hd5 wildfly-kube 13:02:14,897 WARN [org.apache.activemq.artemis.core.server] (Thread-68 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@866e807)) AMQ224091: Bridge ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@39836857 [name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.314721ae-337b-11e9-9cfa-0e8a9828b1cb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5], temp=false]@39425add targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1432944139[nodeUUID=7ee91868-337b-11e9-9849-ce422226aad5, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-116-0-4, address=jms, server=ActiveMQServerImpl::serverUUID=7ee91868-337b-11e9-9849-ce422226aad5])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8080&host=100-122-0-6], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying
</code></pre>
<p><strong>JGroups configuration</strong></p>
<pre><code><subsystem xmlns="urn:jboss:domain:jgroups:6.0">
<channels default="ee">
<channel name="ee" stack="tcp" cluster="ejb"/>
</channels>
<stacks>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp">
<property name="logical_addr_cache_expiration">360000</property>
</transport>
<protocol type="kubernetes.KUBE_PING">
<property name="namespace">${KUBERNETES_CLUSTER_NAMESPACE:default}</property>
<property name="labels">${KUBERNETES_CLUSTER_LABEL:cluster=nyc}</property>
<property name="port_range">0</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2">
<property name="use_mcast_xmit">false</property>
</protocol>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS">
<property name="join_timeout">30000</property>
<property name="print_local_addr">true</property>
<property name="print_physical_addrs">true</property>
</protocol>
<protocol type="MFC"/>
<protocol type="FRAG3"/>
</stack>
</stacks>
</code></pre>
<p></p>
<p><strong>ActiveMQ Artemis configuration</strong></p>
<pre><code><subsystem xmlns="urn:jboss:domain:messaging-activemq:5.0">
<server name="default">
<cluster user="my_admin" password="my_password"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</code></pre>
<p></p>
<p><strong>UPDATE:</strong>
One thing I would add, if the container shuts down gracefully, Artemis seems to handle the disconnect properly. Adding a <em>preStop</em> command to the container definition in my Kubernetes deployment to shut Wildfly down before the container is terminated helped to gracefully take the container out of the cluster.</p>
| <p>ActiveMQ Artemis only uses JGroups (or any other discovery mechanism) to <em>discover</em> other brokers for the purpose of clustering them together. Once another broker is discovered then they establish TCP connections between themselves after which JGroups doesn't serve any role which means that JGroups seeing the broker leave the cluster is irrelevant.</p>
<p>The fact that the cluster bridge fails is enough to tell ActiveMQ Artemis that the broker has left the cluster. The question at that point is what should the broker do in response to the dead node. By default it will attempt to reconnect indefinitely as it expects the node to come back at some point. This is a reasonable expectation in a traditional use-case but not so much in the cloud. This behavior is controlled by the <code>reconnect-attempts</code> property on the <code>cluster-connection</code>. Set <code>reconnect-attempts</code> to something you think is reasonable (e.g. 10) and you'll see the bridge reconnect give up and stop logging.</p>
|
<p>Is it possible to run Apache Hive on Kubernetes (without YARN running on Kubernetes)? </p>
<p>I could not find any reasonable information on the web -- is running Hive on Kubernetes such an uncommon thing to do? </p>
| <p>Hive on MR3 runs on Kubernetes, as MR3 (a new execution engine for Hadoop and Kubernetes) provides a native support for Kubernetes.</p>
<p><a href="https://mr3docs.datamonad.com/docs/k8s/" rel="nofollow noreferrer">https://mr3docs.datamonad.com/docs/k8s/</a></p>
|
<p>I am using Rancher to manage Kubernetes which orchestrates my Docker containers.</p>
<p>Each of our microservices (running in a container) that requires persistence has a corresponding MySQL container. E.g. MyApp is running in a container called MyApp and persists to a MySQL container called MySQL-MyApp.</p>
<p>We have many of these. We don't want to define which nodes the MySQL containers runs on, and therefore can't publish/expose the port on the host in case it clashes with any other ports on that host.</p>
<p>However, if something goes wrong with some data for one of our microservices, we need to be able to access the MySQL instance in the relevant container using MySQL Workbench to view/edit the data in the database from an external machine on our physical network.</p>
<p>Any ideas how we would go about doing this? Are we able to somehow temporarily expose/publish a port on the fly for a MySQL container that is running so that we can connect to it via MySQL Workbench, or are there other ways to get this done?</p>
| <p>If the users have access to the <code>kubectl</code> command-line for the cluster, they can set up a temporary <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">port-forward</a> between a local development machine and the pod that contains your MySQL container.</p>
<p>For example, where <code>mypod-765d459796-258hz</code> is a pod and you want to connect to port 3306 of that pod:</p>
<p><code>kubectl port-forward mypod-765d459796-258hz 12345:3306</code></p>
<p>Then you could connect MySQL Workbench to <code>localhost:12345</code> and it would forward to your MySQL container in Kubernetes.</p>
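<p>On reasonably recent kubectl versions you can also forward to the Service instead of a specific pod, which avoids looking up pod names (the service name below is an assumption matching the naming scheme in the question):</p>
<pre><code>kubectl port-forward svc/mysql-myapp 12345:3306
</code></pre>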
|
<p>I'm using <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">ingress-nginx</a> as an Ingress controller for one of my services running over K8S (I'm using the nginx-0.20.0 release image with no specific metrics configurations in the K8S configmap the ingress-controller is using). </p>
<p>The nginx-ingress-controller pods are successfully scraped into my Prometheus server but all ingress metrics (e.g. <code>nginx_ingress_controller_request_duration_seconds_bucket</code>) show up with <code>path="/"</code> regardless of the real path of the handled request.</p>
<p>Worth noting that when I look at the ingress logs - the path is logged correctly.</p>
<p>How can I get the real path noted in the exported metrics?</p>
<p>Thanks!</p>
| <p>The <code>Path</code> attribute in the NGINX metrics collected by prometheus derives from the Ingress definition yaml.</p>
<p>For example, if your ingress is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: <some-k8s-ingress-name>
namespace: <some-k8s-namespace-name>
spec:
rules:
- host: <hostname>
http:
paths:
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /
</code></pre>
<p>Then although NGINX will match any URL to your service, it'll all be logged under the path "<code>/</code>" (as seen <a href="https://github.com/kubernetes/ingress-nginx/blob/d74dea7585b7b26cf5a16ca9d7ac402b1e0cf8df/rootfs/etc/nginx/template/nginx.tmpl#L1010" rel="nofollow noreferrer">here</a>).</p>
<p>If you want metrics for a specific URL, you'll need to explicitly specify it like this (notice the ordering of rules):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: <some-k8s-ingress-name>
namespace: <some-k8s-namespace-name>
spec:
rules:
- host: <hostname>
http:
paths:
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /more/specific/path
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /
</code></pre>
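<p>Once the more specific paths are declared in the Ingress, you can break the metrics down by that label; a sketch of a PromQL query (the metric name is the one from the question, the label values are whatever your ingress defines):</p>
<pre><code>sum(rate(nginx_ingress_controller_request_duration_seconds_count[5m])) by (path)
</code></pre>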
|
<p>I am using the Jenkins <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">kubernetes-plugin</a>. Is it possible to build a docker image from a Dockerfile and then run steps inside the created image? The plugin requires specifying an image in the pod template, so my first try was to use docker-in-docker, but the step <code>docker.image('jenkins/jnlp-slave').inside() {..}</code> fails:</p>
<pre><code>pipeline {
agent {
kubernetes {
//cloud 'kubernetes'
label 'mypod'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:1.11
command: ['cat']
tty: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Build Docker image') {
steps {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
container('docker') {
sh "docker build -t jenkins/jnlp-slave ."
docker.image('jenkins/jnlp-slave').inside() {
sh "whoami"
}
}
}
}
}
}
</code></pre>
<p>Fails with:</p>
<pre><code>WorkflowScript: 31: Expected a symbol @ line 31, column 11.
docker.image('jenkins/jnlp-slave').inside() {
</code></pre>
| <p>As pointed out by Matt in the comments this works:</p>
<pre><code>pipeline {
agent {
kubernetes {
//cloud 'kubernetes'
label 'mypod'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:1.11
command: ['cat']
tty: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Build Docker image') {
steps {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
container('docker') {
script {
def image = docker.build('jenkins/jnlp-slave')
image.inside() {
sh "whoami"
}
}
}
}
}
}
}
</code></pre>
|
<p>How can I enable the <strong><em>record</em></strong> parameter by default each time I want to create a new pod?
My goal is to change the default behaviour of the record parameter in order to avoid using --record=true each time I want to instantiate a new pod.</p>
<p>This is an example:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/deployments/d09.yaml --record=true
</code></pre>
<p>Otherwise, if is not possible change the default behaviour of <strong><em>kubectl create</em></strong>, is there a possibility to add record option to my yaml configuration file?</p>
<p>Thank you.</p>
| <p>AFAIK, you can't define default values for commands parameters</p>
<p>Your alternatives are:</p>
<ul>
<li><p>create a bash function with the default parameters and call it with the parameters you want</p>
<p><strong><code>diego@PC:/$</code></strong><code>k8s() { kubectl $1 $2 $3 --record=true;}</code></p>
<p><strong><code>diego@PC:/$</code></strong><code>k8s create -f https://test</code></p></li>
<li><p>Create <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/" rel="nofollow noreferrer">kubectl plugins</a> and write a custom command that replaces the <code>create</code> subcommand with your own parameter set and internally calls <code>kubectl create</code> (a minimal sketch follows after this list).</p>
<p>The idea is similar to the one above, but you would still invoke it through kubectl, </p>
<p>i.e:
<code>kubectl createrec -f https://raw.githubusercontent.com/../d09.yaml</code></p></li>
<li><p>The other alternative is download the source and change the default value and compile a new version</p></li>
</ul>
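<p>A minimal sketch of such a plugin (the name <code>createrec</code> is just an example; kubectl picks up any executable named <code>kubectl-<name></code> found on your <code>$PATH</code>):</p>
<pre><code>#!/usr/bin/env bash
# Save as an executable file named "kubectl-createrec" somewhere on $PATH.
# It forwards all arguments to "kubectl create" and always appends --record=true.
exec kubectl create "$@" --record=true
</code></pre>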
|
<p>Hi, I am new to Kubernetes. </p>
<p>1) I am not able to scale containers/pods onto the worker node, and its memory usage always remains zero. Any reason?</p>
<p>2) Whenever I scale pods/containers they are always created on the master node. </p>
<p>3) Is there any way to limit pods to specific nodes?</p>
<p>4) How are pods distributed when I scale?</p>
<p>Any help appreciated.</p>
<p><strong>kubectl version</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>kubectl describe nodes</strong></p>
<pre><code>Name: worker-node
Roles: worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=worker-node
node-role.kubernetes.io/worker=worker
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 19 Feb 2019 15:03:33 +0530
Taints: node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 19 Feb 2019 18:57:22 +0530 Tue, 19 Feb 2019 15:26:13 +0530 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Tue, 19 Feb 2019 18:57:22 +0530 Tue, 19 Feb 2019 15:26:23 +0530 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Tue, 19 Feb 2019 18:57:22 +0530 Tue, 19 Feb 2019 15:26:13 +0530 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 19 Feb 2019 18:57:22 +0530 Tue, 19 Feb 2019 15:26:13 +0530 KubeletReady kubelet is posting ready status. AppArmor enabled
OutOfDisk Unknown Tue, 19 Feb 2019 15:03:33 +0530 Tue, 19 Feb 2019 15:25:47 +0530 NodeStatusNeverUpdated Kubelet never posted node status.
Addresses:
InternalIP: 192.168.1.10
Hostname: worker-node
Capacity:
cpu: 4
ephemeral-storage: 229335396Ki
hugepages-2Mi: 0
memory: 16101704Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 211355500604
hugepages-2Mi: 0
memory: 15999304Ki
pods: 110
System Info:
Machine ID: 1082300ebda9485cae458a9761313649
System UUID: E4DAAC81-5262-11CB-96ED-94898013122F
Boot ID: ffd5ce4b-437f-4497-9337-e72c06f88429
Kernel Version: 4.15.0-45-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.13.3
Kube-Proxy Version: v1.13.3
PodCIDR: 192.168.1.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 55m kube-proxy, worker-node Starting kube-proxy.
Normal Starting 55m kube-proxy, worker-node Starting kube-proxy.
Normal Starting 33m kube-proxy, worker-node Starting kube-proxy.
Normal Starting 11m kube-proxy, worker-node Starting kube-proxy.
Warning EvictionThresholdMet 65s (x1139 over 3h31m) kubelet, worker-node Attempting to reclaim ephemeral-storage
</code></pre>
<p><a href="https://i.stack.imgur.com/yBXCQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yBXCQ.png" alt="enter image description here"></a></p>
| <p>This is very strange; by default Kubernetes taints the master to exclude it from regular pod scheduling.</p>
<pre><code>kubectl get nodes --show-labels
</code></pre>
<p>Now check whether the master carries the taint (taints are listed by <code>kubectl describe node</code>, not in the labels output):</p>
<pre><code>node-role.kubernetes.io/master=true:NoSchedule
</code></pre>
<p>If your master doesn't have this taint, you can taint the master with:</p>
<pre><code>kubectl taint nodes $HOSTNAME node-role.kubernetes.io/master=true:NoSchedule
</code></pre>
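<p>A quick way to list the taints actually applied to every node (also worth noting: the <code>kubectl describe nodes</code> output in the question shows a <code>node.kubernetes.io/disk-pressure:NoSchedule</code> taint on the worker node, which by itself keeps new pods off it until the disk pressure is resolved):</p>
<pre><code># show node names together with their taints
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
# or simply
kubectl describe nodes | grep -A3 -i taints
</code></pre>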
|
<p>I'm trying to create a new ClusterRole in a Private Cluster which I'm administering from a Jumpbox but keep hitting the "forbidden: attempt to grant extra privileges" error.</p>
<p>I am authenticated with gcloud as the default compute service account and this has the Kubernetes Engine Admin role.</p>
<p>I have created a cluster role binding for the gcloud service account using</p>
<pre><code>kubectl create ClusterRoleBinding sa-admin-binding --ClusterRole=cluster-admin --User=xxxxxxxx-service-account@xxxx.developer.gserviceaccount.com
</code></pre>
<p>When i try to create the cluster role however I get the following error.</p>
<blockquote>
<p>Error from server (Forbidden): error when creating "role.yml":
clusterroles.rbac.authorization.k8s.io "pod-viewer" is forbidden:
attempt to grant extra privileges: [{[list] [] [pods] [] []}]
user=&{<strong>115268482330004182284</strong> [system:authenticated]
map[user-assertion.cloud.google.com:[AKUJVpkbsn........</p>
</blockquote>
<p>What I don't understand is why the error comes back with a 'numbered' user account as opposed to the service account I'm authenticated with.</p>
<p>I can add the ClusterRoleBinding to cluster-admin using my own gmail account, authenticate with my own account and then create the new role without problem, but adding the clusterrolebinding for a service account, and authenticating as that service account doesn't seem to grant the permission to create the role.</p>
<p>Interestingly I can add the clusterrolebinding using the numbered account in the error above and that also works but doesn't help me to script the setup as I don't know what that number is in advance nor where it's coming from.</p>
| <p>Well... I later found that the numbered account was actually the 'uniqueId' of the Service Account in gcloud's IAM console. I'm not sure why it uses that for service accounts while using the email address for user accounts, but here's what I'm now using:</p>
<pre><code>CLUSTER_ADMIN_ID=`gcloud iam service-accounts describe <my-service-account>@<my-project>.iam.gserviceaccount.com --format="value(uniqueId)"`
</code></pre>
<p>followed by </p>
<pre><code>kubectl create ClusterRoleBinding <mybinding>-cluster-admin --clusterrole=cluster-admin --user=$CLUSTER_ADMIN_ID
</code></pre>
<p>and this allows the service account to now administer the cluster.</p>
|
<p>I've currently set up a PVC with the name <code>minio-pvc</code> and created a deployment based on the <a href="https://github.com/helm/charts/tree/master/stable/minio" rel="nofollow noreferrer">stable/minio chart</a> with the values</p>
<pre><code>mode: standalone
replicas: 1
persistence:
enabled: true
existingClaim: minio-pvc
</code></pre>
<p>What happens if I increase the number of replicas? Do i run the risk of corrupting data if more than one pod tries to write to the PVC at the same time?</p>
| <p>Don't use a Deployment for stateful containers; instead use a StatefulSet.
StatefulSets are specifically designed for running stateful containers like databases and are used to persist the state of the container.</p>
<p>Note that each pod will bind a separate persistent volume via its own PVC, so there is no possibility of multiple pod instances writing to the same PV. Hope this answers your question. </p>
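<p>A minimal sketch of the PVC-per-replica mechanism (this only illustrates <code>volumeClaimTemplates</code>, not a complete replacement for the chart's distributed-mode configuration; the image, names and sizes are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 3
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:        # one PVC (data-minio-0, data-minio-1, ...) per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>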
|
<p>Terraform Kubernetes doesn't allow use of anything other than <code>apiVersion: v1</code>. However I wanted to work around that by using the <code>null_resource</code> provisioner to run <code>kubectl apply -f ...</code> and <code>kubectl delete -f ...</code> (with <code>when = "destroy"</code>).</p>
<p>I've attempted to get this to work using the kubernetes apply EOF pattern, but haven't managed it just yet.</p>
<p>Here is where I've gotten to but it seems the <code>|</code> is an illegal char.</p>
<p>Also hoping I could pull the yaml string out into a multiline variable so I don't have to repeat it in the null resource.</p>
<pre><code>resource "null_resource" "res_name" {
provisioner {
command = <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: some-ingress-name
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: subdomain.example.com
http:
paths:
- backend:
serviceName: some-cluster-ip-service
servicePort: 80
}
provisioner {
when = "destroy"
command = <<EOF | kubectl delete -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: some-ingress-name
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: subdomain.example.com
http:
paths:
- backend:
serviceName: some-cluster-ip-service
servicePort: 80
}
}
</code></pre>
| <p>There were some bugs in your code.</p>
<ul>
<li>You missed the ending <code>EOF</code> for the <code><<EOF</code> (<a href="https://www.terraform.io/docs/configuration/syntax.html#terraform-syntax" rel="noreferrer">"here-doc" syntax</a>)</li>
<li>A shell can't start with the pipe (<code>|</code>) command. The <a href="https://www.terraform.io/docs/provisioners/local-exec.html" rel="noreferrer"><code>local-exec</code> provisioner</a> runs the command in the local shell. Therefore use the <a href="https://github.com/hashicorp/terraform/issues/13420#issuecomment-294858906" rel="noreferrer"><code><<EOF</code> stdin trick from the Github issue comment</a>. Thanks for the hint. Didn't know this, too.</li>
</ul>
<p>That code works on my side:</p>
<pre><code>resource "null_resource" "res_name" {
provisioner "local-exec" {
command = "kubectl apply -f - <<EOF\n${var.ingress_yaml}\nEOF"
}
provisioner "local-exec" {
when = "destroy"
command = "kubectl delete -f - <<EOF\n${var.ingress_yaml}\nEOF"
}
}
variable "ingress_yaml" {
default = <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: some-ingress-name
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: subdomain.example.com
http:
paths:
- backend:
serviceName: some-cluster-ip-service
servicePort: 80
EOF
}
</code></pre>
<p>I suggest reading the YAML configuration in from a file instead. Then you get YAML syntax highlighting and errors shown in your IDE (a sketch follows after this list).
Use either</p>
<ul>
<li><a href="https://www.terraform.io/docs/providers/local/d/file.html" rel="noreferrer"><code>data "local_file"</code></a> or </li>
<li><a href="https://www.terraform.io/docs/providers/template/d/file.html" rel="noreferrer"><code>data "template_file"</code></a>, when you want to change something in the
file.</li>
</ul>
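<p>A minimal sketch using <code>local_file</code>, assuming an <code>ingress.yaml</code> sitting next to the .tf files (the file name and resource labels are illustrative):</p>
<pre><code>data "local_file" "ingress" {
  filename = "${path.module}/ingress.yaml"
}

resource "null_resource" "res_name" {
  provisioner "local-exec" {
    command = "kubectl apply -f - <<EOF\n${data.local_file.ingress.content}\nEOF"
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "kubectl delete -f - <<EOF\n${data.local_file.ingress.content}\nEOF"
  }
}
</code></pre>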
|
<p>I have the Prometheus operator, which is <strong>working as expected</strong>:
<a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a></p>
<p>Now I want to apply the <a href="https://prometheus.io/docs/alerting/alertmanager/" rel="nofollow noreferrer">alert manager</a> from scratch </p>
<p>After reading the docs I came up with these yamls,
but the problem is that when I enter the UI
nothing is shown. Any idea what I'm missing here? </p>
<p><a href="http://localhost:9090/alerts" rel="nofollow noreferrer">http://localhost:9090/alerts</a>
I use port forwarding ...</p>
<p>These are <code>all</code> the config files I've applied to my k8s cluster.
I just want to do a simple test to see that it's working and then extend it to our needs...</p>
<p><code>alertmanger_main.yml</code></p>
<pre><code>---
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
name: main
labels:
alertmanager: main
spec:
replicas: 3
version: v0.14.0
</code></pre>
<p><code>alertmanger_service.yml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: alertmanager-main
spec:
type: LoadBalancer
ports:
- name: web
port: 9093
protocol: TCP
targetPort: web
selector:
alertmanager: main
</code></pre>
<p><code>testalert.yml</code> </p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: prometheus-example-rules
labels:
role: prometheus-rulefiles
prometheus: prometheus
data:
example.rules.yaml: |+
groups:
- name: ./example.rules
rules:
- alert: ExampleAlert
expr: vector(1)
</code></pre>
<p><code>alertmanager.yml</code></p>
<pre><code>global:
resolve_timeout: 5m
route:
group_by: ['job']
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receiver: 'webhook'
receivers:
- name: 'webhook'
webhook_configs:
- url: 'http://alertmanagerwh:30500/'
</code></pre>
<p>and to create secret I use </p>
<p><code>kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml</code></p>
<p>What I need is some basic alerts in K8S; I followed the documentation but didn't find any good step-by-step tutorial.</p>
<p><a href="https://i.stack.imgur.com/aTDBb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aTDBb.png" alt="enter image description here"></a></p>
<pre><code>to check my sys for monitoring namespace
~ kubectl get pods -n monitoring 13.4m Sun Feb 17 18:48:16 2019
NAME READY STATUS RESTARTS AGE
kube-state-metrics-593czc6b4-mrtkb 2/2 Running 0 12h
monitoring-grafana-771155cbbb-scqvx 1/1 Running 0 12h
prometheus-operator-79f345dc67-nw5zc 1/1 Running 0 12h
prometheus-prometheus-0 3/3 Running 1 12h
~ kubectl get svc -n monitoring 536ms Sun Feb 17 21:04:51 2019
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main NodePort 100.22.170.666 <none> 9093:30904/TCP 4m53s
kube-state-metrics ClusterIP 100.34.212.596 <none> 8080/TCP 4d7h
monitoring-grafana ClusterIP 100.67.230.884 <none> 80/TCP 4d7h
prometheus-operated ClusterIP None <none> 9090/TCP 4d7h
</code></pre>
<p>I've also now changed the service to LoadBalancer and I try to enter like </p>
<pre><code> ~ kubectl get svc -n monitoring 507ms Sun Feb 17 21:23:56 2019
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main LoadBalancer 100.22.170.666 38.482.152.331 9093:30904/TCP 23m
</code></pre>
<p>when I hit the browser with</p>
<pre><code>38.482.152.331:9093
38.482.152.331:30904
</code></pre>
<p>nothing happens...</p>
| <p>When you use <a href="https://prometheus.io/docs/alerting/alertmanager/" rel="nofollow noreferrer">AlertManager</a>, besides the general configuration and the alert rules, <code>AlertManager</code> needs to be integrated with a <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> server. The Prometheus instance evaluates the rules against the incoming series, and once a rule fires it sends the alert to the configured Alertmanager.</p>
<p>In order to enable alerting it might be necessary to append the following config to the Prometheus instance:</p>
<pre><code>alerting:
alertmanagers:
- static_configs:
- targets:
- 'alertmanagerIP:9093'
</code></pre>
<p>Specifically, for <code>AlertManager</code> implementation in <a href="https://coreos.com/" rel="nofollow noreferrer">CoreOS</a>, you can follow the steps described in the official <a href="https://coreos.com/operators/prometheus/docs/latest/user-guides/alerting.html" rel="nofollow noreferrer">Alerting</a> documentation; however, below you can find example for Prometheus pod alerting configuration kept from the mentioned guideline:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: example
spec:
replicas: 2
alerting:
alertmanagers:
- namespace: default
name: alertmanager-example
port: web
serviceMonitorSelector:
matchLabels:
team: frontend
resources:
requests:
memory: 400Mi
ruleSelector:
matchLabels:
role: prometheus-rulefiles
prometheus: example
</code></pre>
|
<p>I want to know if it's possible that multiple PersistentVolumeClaims bind to the same <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local persistent volume</a>.</p>
<p>My use-case if the following: I want to build a Daemon Set that will write some data (the same data actually) on each node on my cluster (on the node's local disk). Then, any other pod that is scheduled on any node should be able to read that data. Basically a kind of write-once-read-many policy at the node level.</p>
<p>I know that I can do that using the <strong>hostPath</strong> type of volume, but it's a bit hard to manage so I found out that local-storage would be a better approach.</p>
<p>My wish would be the following:</p>
<ul>
<li>Create the local Persistent Volume (named pv) with <strong>ReadWriteOnce</strong> and <strong>ReadOnlyMany</strong> access modes</li>
<li>Create the first persistent volume claim (pvc1) with <strong>ReadWriteOnce</strong> access mode and use it in the DaemonSet that writes the data in the volume. So <strong>pvc1</strong> should bind to <strong>pv</strong></li>
<li>Create the second persistent volume claim (pvc2) with <strong>ReadOnlyMany</strong> access mode that is used in any other pod that reads that data (so <strong>pvc2</strong> should also bind to <strong>pv</strong>)</li>
</ul>
<p>Is this possible?</p>
<p>I read that if a PVC is bound to a PV, then that PV is "locked", meaning that no other PVC can bind to it. Is this really how it works? It seems a bit limiting for that kind of scenario, where we have write-once-read-many operations.</p>
<p>Thanks!</p>
| <p>DaemonSets and PVCs for RWO volume types do not mix well because all the DaemonSets will share the same PVC. And for local volumes, that would result in only one replica to be scheduled since it restricts all Pods using that PVC to only get scheduled to one node. </p>
<p>You could potentially solve this by using a StatefulSet, which supports <code>volumeClaimTemplates</code> that creates a PVC per replica, and have it scale to the number of nodes in the cluster. However, your user pods would then need to know and pick a specific PVC to use, rather than use whatever is on that node.</p>
<p>I think your use case would be better addressed by writing a <a href="https://kubernetes-csi.github.io/docs/" rel="nofollow noreferrer">CSI driver</a>. It has a DaemonSet component, which on driver startup can initialize the data. Then when it implements <code>NodePublishVolume</code> (aka mount into the pod), it can bind-mount the data directory into the pod's container. You can make this volume type RWX, and you probably don't need to implement any of the controller routines for provisioning or attaching.</p>
|
<p><strong>TL;DR</strong></p>
<p>The Azure file shares mounted by my pods are (inconsistently) being deleted by either Kubernetes or Helm when deleting a deployment.</p>
<p><strong>Explanation</strong></p>
<p>I've recently transitioned to using Helm for deploying Kubernetes objects on my Azure Kubernetes Cluster via the DevOps release pipeline.</p>
<p>I've started to see some unexpected behaviour in relation to the Azure File Shares that I mount to my Pods (as Persistent Volumes with associated Persistent Volume Claims and a Storage Class) as part of the deployment.</p>
<p>Whilst I've been finalising my deployment, I've been pushing out the deployment via the Azure Devops release pipeline using the built in Helm tasks, which have been working fine. When I've wanted to fix / improve the process I've then either manually deleted the objects on the Kubernetes Dashboard (UI), or used Powershell (command line) to delete the deployment.</p>
<p>For example:</p>
<pre><code>helm delete myapp-prod-73
helm del --purge myapp-prod-73
</code></pre>
<p>Not every time, but more frequently, I'm seeing the underlying Azure File Shares also being deleted as I'm working through this process. There's very little around the web on this, but I've also seen an article outlining similar issues over at: <a href="https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting" rel="nofollow noreferrer">https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting</a>.</p>
<p>Has anyone in the community come across this issue?</p>
| <p><strong>Credit goes to</strong> <a href="https://twitter.com/tomasrestrepo" rel="nofollow noreferrer">https://twitter.com/tomasrestrepo</a> here on pointing me in the right direction (the author of the article I mentioned above).</p>
<p>The behaviour here was a consequence of having the Reclaim Policy on the Storage Class & Persistent Volume set to "Delete". When switching over to Helm, I began following their commands to Delete / Purge the releases as I was testing. What I didn't realise, was that deleting the release would also mean that Helm / K8s would also reach out and delete the underlying Volume (in this case an Azure Fileshare). This is documented over at: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete</a></p>
<p>I'll leave this Q & A here for anyone else that misses this subtlety in the way the Storage Classes, Persistent Volumes (PVs) & underlying storage operate under K8s / Helm.</p>
<p><strong>Note</strong>: I think this issue was made slightly more obscure by the fact I was manually creating the Azure Fileshare (through the Azure Portal) and trying to mount that as a static volume (as per <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/azure-files-volume</a>) within my Helm Chart, but that the underlying volume wasn't immediately being deleted when the release was deleted (sometimes an hour later?).</p>
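<p>If you want dynamically provisioned shares to survive the release being deleted, a sketch of a StorageClass with a <code>Retain</code> reclaim policy (names and parameters are illustrative); for statically created PVs the equivalent is setting <code>persistentVolumeReclaimPolicy: Retain</code> on the PV itself:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-retain
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain        # keep the underlying share when the PV is released
parameters:
  skuName: Standard_LRS
</code></pre>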
|
<p>I was looking for a step-by-step tutorial on how to run my Spring Boot, MySQL-backed app using AWS EKS (Elastic Container Service for Kubernetes) with an existing SSL wildcard certificate, and wasn't able to find a complete solution. </p>
<p>The app is a standard Spring boot self-contained application backed by MySQL database, running on port 8080. I need to run it with high availability, high redundancy including MySQL db that needs to handle large number of writes as well as reads. </p>
<p>I decided to go with the EKS-hosted cluster, saving a custom Docker image to AWS-own ECR private Docker repo going against EKS-hosted MySQL cluster. And using AWS issued SSL certificate to communicate over HTTPS. Below is my solution but I'll be very curious to see how it can be done differently</p>
| <p>This is a step-by-step tutorial. Please don't proceed forward until the previous step is complete. </p>
<p><strong>CREATE EKS CLUSTER</strong></p>
<p>Follow <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">the standard tutorial</a> to create the EKS cluster. Don't do step 4. When you are done you should have a working EKS cluster and you must be able to use the <code>kubectl</code> utility to communicate with the cluster. When executed from the command line you should see the working nodes and other cluster elements using the
<code>kubectl get all --all-namespaces</code> command</p>
<p><strong>INSTALL MYSQL CLUSTER</strong></p>
<p>I used <code>helm</code> to install MySQL cluster following steps from <a href="https://github.com/helm/helm#install" rel="nofollow noreferrer">this tutorial</a>. Here are the steps</p>
<p><strong>Install helm</strong></p>
<p>Since I'm using Macbook Pro with <code>homebrew</code> I used <code>brew install kubernetes-helm</code> command</p>
<p><strong>Deploy MySQL cluster</strong></p>
<p>Note that in <em>MySQL cluster</em> and <em>Kubernetes (EKS) cluster</em>, the word "cluster" refers to 2 different things. Basically you are installing a cluster into a cluster, just like a Russian Matryoshka doll, so your MySQL cluster ends up running on EKS cluster nodes.</p>
<p>I used a 2nd part of <a href="https://www.presslabs.com/code/kubernetes-mysql-operator-aws-kops/" rel="nofollow noreferrer">this tutorial</a> (ignore kops part) to prepare the <code>helm</code> chart and install MySQL cluster. Quoting helm configuration:</p>
<pre><code>$ kubectl create serviceaccount -n kube-system tiller
serviceaccount "tiller" created
$ kubectl create clusterrolebinding tiller-crule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-crule" created
$ helm init --service-account tiller --wait
$HELM_HOME has been configured at /home/presslabs/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$ helm repo add presslabs https://presslabs.github.io/charts
"presslabs" has been added to your repositories
$ helm install presslabs/mysql-operator --name mysql-operator
NAME: mysql-operator
LAST DEPLOYED: Tue Aug 14 15:50:42 2018
NAMESPACE: default
STATUS: DEPLOYED
</code></pre>
<p>I run all commands exactly as quoted above.</p>
<p>Before creating a cluster, you need a secret that contains the ROOT_PASSWORD key.</p>
<p>Create a file named <code>example-cluster-secret.yaml</code> and copy into it the following YAML code</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
# root password is required to be specified
ROOT_PASSWORD: Zm9vYmFy
</code></pre>
<p>But what is that <code>ROOT_PASSWORD</code>? Turns out this is base64 encoded password that you planning to use with your MySQL root user. Say you want <code>root/foobar</code> (please don't actually use <code>foobar</code>). The easiest way to encode the password is to use one of the websites such as <a href="https://www.base64encode.org/" rel="nofollow noreferrer">https://www.base64encode.org/</a> which encodes <code>foobar</code> into <code>Zm9vYmFy</code></p>
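<p>If you would rather not paste a real password into a website, the same encoding can be done locally on the command line (the <code>-n</code> avoids encoding a trailing newline):</p>
<pre><code>$ echo -n 'foobar' | base64
Zm9vYmFy
</code></pre>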
<p>When ready execute <code>kubectl apply -f example-cluster-secret.yaml</code> which will create a new secret</p>
<p>Then you need to create a file named <code>example-cluster.yaml</code> and copy into it the following YAML code:</p>
<pre><code>apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
name: my-cluster
spec:
replicas: 2
secretName: my-secret
</code></pre>
<p>Note how the <code>secretName</code> matches the secret name you just created. You can change it to something more meaningful as long as it matches in both files. Now run <code>kubectl apply -f example-cluster.yaml</code> to finally create a MySQL cluster. Test it with</p>
<pre><code>$ kubectl get mysql
NAME AGE
my-cluster 1m
</code></pre>
<p>Note that I did not configure a backup as described in the rest of the article. You don't need to do it for the database to operate. But how to access your db? At this point the mysql service is there but it doesn't have external IP. In my case I don't even want that as long as my app that will run on the same EKS cluster can access it. </p>
<p>However you can use <code>kubectl</code> port forwarding to access the db from your dev box that runs <code>kubectl</code>. Type in this command: <code>kubectl port-forward services/my-cluster-mysql 8806:3306</code>. Now you can access your db from <code>127.0.0.1:8806</code> using user <code>root</code> and the non-encoded password (<code>foobar</code>). Type this into a separate command prompt: <code>mysql -u root -h 127.0.0.1 -P 8806 -p</code>. With this you can also use MySQL Workbench to manage your database; just don't forget to run <code>port-forward</code>. And of course you can change 8806 to another port of your choosing</p>
<p><strong>PACKAGE YOUR APP AS A DOCKER IMAGE AND DEPLOY</strong></p>
<p>To deploy your Spring boot app into EKS cluster you need to package it into a Docker image and deploy it into the Docker repo. Let's start with a Docker image. There are plenty tutorials on this <a href="https://spring.io/guides/gs/spring-boot-docker/" rel="nofollow noreferrer">like this one</a> but the steps are simple:</p>
<p>Put your generated, self-contained Spring Boot jar file into a directory, create a text file named exactly <code>Dockerfile</code> in the same directory, and add the following content to it:</p>
<pre><code>FROM openjdk:8-jdk-alpine
MAINTAINER [email protected]
LABEL name="My Awesome Docker Image"
# Add spring boot jar
VOLUME /tmp
ADD myapp-0.1.8.jar app.jar
EXPOSE 8080
# Database settings (maybe different in your app)
ENV RDS_USERNAME="my_user"
ENV RDS_PASSWORD="foobar"
# Other options
ENV JAVA_OPTS="-Dverknow.pypath=/"
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
</code></pre>
<p>Now simply run a Docker command from the same folder to create an image. Of course this requires the Docker client to be installed on your dev box.</p>
<p><code>$ docker build -t myapp:0.1.8 --force-rm=true --no-cache=true .</code></p>
<p>If all goes well you should see your image listed in the output of the <code>docker images</code> command (<code>docker ps</code> lists running containers, not images)</p>
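<p>Optionally, you can smoke-test the image locally before pushing it anywhere. A minimal sketch (the port mapping assumes the app listens on 8080, as declared in the Dockerfile above):</p>
<pre><code>docker images | grep myapp
docker run -d -p 8080:8080 --name myapp-test myapp:0.1.8
curl -I http://localhost:8080
docker rm -f myapp-test
</code></pre>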
<p><strong>Deploy to the private ECR repo</strong></p>
<p>Deploying your new image to an ECR repo is easy, and ECR works with EKS right out of the box. Log into the AWS console and navigate to <a href="https://us-west-2.console.aws.amazon.com/ecr/get-started?region=us-west-2" rel="nofollow noreferrer">the ECR section</a>. I found it confusing that you apparently need one repository per image; when you click the "Create repository" button, put your image name (e.g. <code>myapp</code>) into the text field. Then copy the (ugly) repository URL and go back to the command prompt</p>
<p>Tag and push your image. I'm using a fake URL as example: <code>901237695701.dkr.ecr.us-west-2.amazonaws.com</code> you need to copy your own from the previous step</p>
<pre><code>$ docker tag myapp:0.1.8 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
$ docker push 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
</code></pre>
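<p>If the <code>docker push</code> is rejected with an authentication error, you first need to log Docker into ECR. A hedged example, assuming a recent AWS CLI (the account ID and region below are the same placeholders as above; older CLI versions use <code>aws ecr get-login</code> instead):</p>
<pre><code>aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin 901237695701.dkr.ecr.us-west-2.amazonaws.com
</code></pre>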
<p>At this point the image should show up at ECR repository you created</p>
<p><strong>Deploy your app to EKS cluster</strong></p>
<p>Now you need to create a Kubernetes deployment for your app's Docker image. Create a <code>myapp-deployment.yaml</code> file with the following content</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-deployment
spec:
selector:
matchLabels:
app: myapp
replicas: 2
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
name: myapp
ports:
- containerPort: 8080
name: server
env:
# optional
- name: RDS_HOSTNAME
value: "10.100.98.196"
- name: RDS_PORT
value: "3306"
- name: RDS_DB_NAME
value: "mydb"
restartPolicy: Always
status: {}
</code></pre>
<p>Note how I'm using the full URL for the <code>image</code> parameter. I'm also using the private CLUSTER-IP of the mysql cluster, which you can get with the <code>kubectl get svc my-cluster-mysql</code> command (keep in mind that the ClusterIP changes if the service is ever recreated; the service DNS name <code>my-cluster-mysql.default.svc.cluster.local</code> is a more stable alternative). This will differ for your app, including any env names, but you do have to provide this info to your app somehow. Then in your app you can set something like this in the <code>application.properties</code> file:</p>
<pre><code>spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}?autoReconnect=true&amp;zeroDateTimeBehavior=convertToNull
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}
</code></pre>
<p>Once you save the <code>myapp-deployment.yaml</code> you need to run this command</p>
<p><code>kubectl apply -f myapp-deployment.yaml</code></p>
<p>Which will deploy your app into the EKS cluster. This will create 2 pods in the cluster that you can see with the <code>kubectl get pods</code> command</p>
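<p>A few optional commands to confirm the rollout went through (the label selector matches the <code>app: myapp</code> label from the deployment above):</p>
<pre><code>kubectl rollout status deployment/myapp-deployment
kubectl get pods -l app=myapp
kubectl logs deployment/myapp-deployment
</code></pre>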
<p>And rather than try to access one of the pods directly we can create a service to front the app pods. Create a <code>myapp-service.yaml</code> with this content:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
ports:
- port: 443
targetPort: 8080
protocol: TCP
name: http
selector:
app: myapp
type: LoadBalancer
</code></pre>
<p>That's where the magic happens! Just by setting the port to 443 and the type to <code>LoadBalancer</code>, the system will create a Classic Load Balancer to front your app.</p>
<p>BTW if you don't need to run your app over HTTPS you can set the port to 80 and you will be pretty much done!</p>
<p>After you run <code>kubectl apply -f myapp-service.yaml</code> the service will be created in the cluster, and if you go to the Load Balancers section in the EC2 area of the AWS console you will see that a new balancer has been created for you. You can also run the <code>kubectl get svc myapp-service</code> command, which will give you the EXTERNAL-IP value, something like <code>bl3a3e072346011e98cac0a1468f945b-8158249.us-west-2.elb.amazonaws.com</code>. Copy that because we need to use it next.</p>
<p>It is worth mentioning that if you are using port 80 then simply pasting that URL into the browser should display your app</p>
<p><strong>Access your app over HTTPS</strong></p>
<p>The following section assumes that you have an AWS-issued SSL certificate. If you don't, then go to the AWS console "Certificate Manager" and create a wildcard certificate for your domain</p>
<p>Before your load balancer can work you need to access <code>AWS console -> EC2 -> Load Balancers -> My new balancer -> Listeners</code> and click on "Change" link in <code>SSL Certificate</code> column. Then in the pop up select the AWS-issued SSL certificate and save.</p>
<p>Go to Route-53 section in AWS console and select a hosted zone for your domain, say <code>myapp.com.</code>. Then click "Create Record Set" and create a <code>CNAME - Canonical name</code> record with <code>Name</code> set to whatever alias you want, say <code>cluster.myapp.com</code> and <code>Value</code> set to the EXTERNAL-IP from above. After you "Save Record Set" go to your browser and type in <a href="https://cluster.myapp.com" rel="nofollow noreferrer">https://cluster.myapp.com</a>. You should see your app running</p>
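<p>A couple of optional sanity checks once the record has propagated (the hostname is the placeholder used above):</p>
<pre><code>dig +short cluster.myapp.com
curl -I https://cluster.myapp.com
</code></pre>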
|
<p>I'm new to Kubernetes. I'm confused about how changes to a custom resource get applied :-)
Example: I have a CustomResourceDefinition "Prometheus"; creating a custom resource from it creates a StatefulSet, which in turn creates one pod. After the custom resource changes, I need the pod to be recreated from the latest spec. What is the correct way? Should I completely remove the StatefulSet and pod and then recreate them, or simply do "kubectl delete pod" so the change is applied automatically when the new pod gets created? Thanks much!</p>
| <p>The operator, or more specifically the custom controller at the heart of the operator, takes care of this. It watches for changes in the Kubernetes API and updates things as needed to respond.</p>
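<p>In other words, you normally just edit the custom resource itself and let the controller reconcile the StatefulSet and pods for you. A hedged sketch, assuming the Prometheus Operator is installed and the custom resource is named <code>my-prometheus</code> (a placeholder):</p>
<pre><code># update the custom resource (e.g. change replicas or the version)
kubectl edit prometheus my-prometheus
# watch the operator roll the change out to the StatefulSet and its pods
kubectl get statefulset,pods -w
</code></pre>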
|
<p><code>gcloud container clusters create --cluster-version 1.10 --zone us-east1-d ...</code> returns with the error message <code>ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=No valid versions with the prefix "1.10" found.</code>.</p>
<p>The GKE release notes <a href="https://cloud.google.com/kubernetes-engine/release-notes#february-11-2019" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/release-notes#february-11-2019</a>, indicates the specific kubernetes version is still supported.</p>
<p>Does anyone know what's going on?</p>
| <p>The syntax you are using looks correct, but support for k8s 1.10 is being phased out on GKE, as per the GKE release notes entry of February 11, 2019:</p>
<blockquote>
<h2>Coming soon</h2>
<p>We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.</p>
<p>25% of the upgrades from 1.10 to 1.11.6-gke.2 will be complete.<br>
Version 1.11.6-gke.8 will be made available.<br>
<strong>Version 1.10 will be made unavailable.</strong></p>
</blockquote>
<p>Have you tried with the full version, say <code>1.10.12-gke.7</code>?</p>
<p><code>gcloud container clusters create --cluster-version 1.10.12-gke.7 --zone us-east1-d ...</code></p>
<p>Alternatively, use 1.11, because it looks like GKE is moving that way anyhow.</p>
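<p>To see exactly which master and node versions are currently offered in your zone, you can query the server config (the zone matches the one from the question):</p>
<pre><code>gcloud container get-server-config --zone us-east1-d
</code></pre>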
|
<p>I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's pod IP so that my frontend can connect to it? Can this be done, and how?</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 1
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: container_imaer_backend
env:
- name: IP_BACKEND
value: here_i_need_my_container_ip_pod
ports:
- containerPort: 80
protocol: TCP
</code></pre>
| <p>You could use Pod field values as environment variables (ref: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a>). That way you can expose the pod IP in an environment variable.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mysql
name: mysql
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.6
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
ports:
- containerPort: 3306
name: mysql
protocol: TCP
volumeMounts:
- mountPath: /var/lib/mysql
name: data
volumes:
- name: data
emptyDir: {}
</code></pre>
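<p>Once the pod is running, you can confirm that the downward API populated the variable (the pod name below is a placeholder):</p>
<pre><code>kubectl exec <mysql-pod-name> -- printenv POD_IP
</code></pre>
<p>That said, for a frontend talking to a backend it is usually more robust to address the backend through a Kubernetes Service name rather than a pod IP, since pod IPs change whenever pods are recreated.</p>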
|
<p>Kubernetes version 1.12.3. Does kubectl drain remove pod first or create pod first.</p>
| <p>You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.)</p>
<p>When kubectl drain returns successfully, it means it has removed all the pods from that node and it is safe to bring that node down (physically shut it off, or start maintenance).</p>
<p>Now if you turn on the machine and want to schedule pods again on that node you need to run:</p>
<pre><code>kubectl uncordon <node name>
</code></pre>
<p>So, to answer the question directly: <code>kubectl drain</code> removes (evicts) the pods first; it does not create anything itself. Replacement pods are then created on other nodes by their controllers (Deployments, StatefulSets, etc.). The drained node also stays unschedulable, so no new pods land on it until you uncordon it.</p>
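<p>A minimal sketch of the full maintenance cycle (the flags shown are the common ones for clusters with DaemonSets and emptyDir volumes; check <code>kubectl drain --help</code> for your version):</p>
<pre><code>kubectl drain <node-name> --ignore-daemonsets --delete-local-data
# ... perform maintenance, reboot, etc. ...
kubectl uncordon <node-name>
</code></pre>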
|
<p>I can list all the <code>custom.metrics</code> available, but I don't know how to query an individual value. For example I have tried:</p>
<pre><code>curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/ | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "deployments.apps/aws_sqs_approximate_number_of_messages_visible_average",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>But if I try this:</p>
<pre><code>curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/deployments.apps/aws_sqs_approximate_number_of_messages_visible_average | jq .
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
},
"code": 404
}
</code></pre>
<p>I get a 404. I've seen <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/issues/164#issuecomment-465084278" rel="nofollow noreferrer">this issue</a> which shows how to get a namespaced metric, but mine does not have a namespace? Is there a definition for how to use this API?</p>
| <p>Just like Resource Metrics, Custom Metrics are bound to Kubernetes objects too.
What is missing from your URL is the object instance you want the metric to relate to:
for example the Pod the custom metric is attached to; the same is true for Deployments.</p>
<p>Try to adjust this url to your needs:</p>
<pre><code>kubectl get --raw \
'/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pod/podinfo-67c9fd95d-fqk4g/http_requests_per_second' \
| jq .
</code></pre>
<p>Here are the slides for the talk we gave at FOSDEM 2019 on the Prometheus Adapter: <a href="https://speakerdeck.com/metalmatze/kubernetes-metrics-api?slide=26" rel="nofollow noreferrer">https://speakerdeck.com/metalmatze/kubernetes-metrics-api?slide=26</a></p>
<p>I'll update this answer, once the video is available too.</p>
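<p>Applied to the metric from the question, a hedged example (it assumes the deployment lives in the <code>default</code> namespace; <code>*</code> asks for the metric across all deployments there):</p>
<pre><code>kubectl get --raw \
  '/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/deployments.apps/*/aws_sqs_approximate_number_of_messages_visible_average' \
  | jq .
</code></pre>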
|
<p>I have a kubernetes setup with the configuration like below:</p>
<pre class="lang-yaml prettyprint-override"><code>#---
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
selector:
app: my-service
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8080
# Port to forward to inside the pod
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-service
spec:
replicas: 1
template:
metadata:
labels:
app: my-service
spec:
containers:
- name: my-service
image: my-custom-docker-regisry/my-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
</code></pre>
<p>and my ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /myservice
backend:
serviceName: myservice
servicePort: 80
</code></pre>
<p>What I tried to do is pulling the image from my docker registry and run it in the kubernetes. I have configured here one deployment and one service and expose the service to the outside with the ingress.</p>
<p>My minikube is running under ip 192.168.99.100 and when I tried to access my application with address: curl 192.168.99.100:80/myservice, I got 502 Bad Gateway.</p>
<p>Does anyone have an idea why it happended or did I do something wrong with the configuration? Thank you in advanced!</p>
| <p>Your ingress targets this service:</p>
<pre><code> serviceName: myservice
servicePort: 80
</code></pre>
<p>but the service named <code>myservice</code> exposes port <code>8080</code> rather than <code>80</code>:</p>
<pre><code> ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8080
# Port to forward to inside the pod
targetPort: 80
</code></pre>
<p>Your ingress should point to one of the ports exposed by the service.</p>
<p>Also, the service itself targets port 80, but the pods in your deployment seem to expose port 8080, rather than 80:</p>
<pre><code> containers:
- name: my-service
image: my-custom-docker-regisry/my-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<p>So long story short, looks like you could swap <code>port</code> with <code>targetPort</code> in your <code>service</code> so that:</p>
<ul>
<li>the pods expose port <code>8080</code></li>
<li>the service exposes port <code>8080</code> of all the pods under service name <code>myservice</code> port <code>80</code>,</li>
<li>the ingress configures nginx to proxy your traffic to service <code>myservice</code> port <code>80</code>.</li>
</ul>
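<p>After adjusting the ports, a quick way to confirm that everything lines up is to check that the service actually has endpoints and that the ingress points at the right backend port:</p>
<pre><code>kubectl get endpoints myservice
kubectl describe ingress test-ingress
</code></pre>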
|
<p>I have a simple Kubernetes cluster on kops and aws, which is serving a web app, there is a single html page and a few apis. They are all running as services. I want to expose all endpoints(html and apis) publicly for the web page to work.</p>
<p>I have exposed the html service as an LoadBalancer and I am also using nginx-ingress controller. I want to use the same LoadBalancer to expose the other apis as well(using a different LoadBalancer for each service seems like a bad and expensive way), it is something that I was able to do using Nginx reverse proxy in the on-premise version of the same application, by giving different paths for each api in the nginx conf file. </p>
<p>Although I am not able to do the same in the cluster, I tried <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Service ingress</a> but somehow I am not able to get the desired result, if I add a path, e.g. "path: "/mobiles-service"" and then add the specific service for it, the http requests do not somehow get redirected to the service. Only the html service works on the root path. Any help would be appreciated.</p>
| <p>First you need to create the nginx ingress controller for your kops cluster running on AWS</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
</code></pre>
<p>Then check if ingress-nginx service is created by running:</p>
<pre><code>kubectl get svc ingress-nginx -n kube-ingress
</code></pre>
<p>Then create your pods and a ClusterIP-type service for <strong>each app</strong>, like the sample below:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: app1-service
spec:
selector:
app: app1
ports:
- port: <app-port>
</code></pre>
<p>Then create an ingress rule file like the sample below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /app1
backend:
serviceName: app1-service
servicePort: <app1-port>
- path: /app2
backend:
serviceName: app2-service
servicePort: <app2-port>
</code></pre>
<p>Once you deploy this ingress rule yaml, Kubernetes creates an Ingress resource in your cluster. The Ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer that routes all external HTTP traffic (on port 80) to the backend app Services you exposed on the specified paths.</p>
<p>You can see the newly created ingress rule by running:</p>
<pre><code>kubectl get ingress
</code></pre>
<p>And you will see output like below:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
example-ingress * a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com 80 1m
</code></pre>
<p>On the relevant paths, like <code>http://external-dns-name/app1</code> and <code>http://external-dns-name/app2</code>, you can access your apps, and on the root <code>/</code> path you will get <code><default backend - 404></code></p>
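<p>A quick sanity check from your workstation once the ELB DNS name is up (the hostname below is the one returned by <code>kubectl get ingress</code> in the example above):</p>
<pre><code>curl http://a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com/app1
curl http://a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com/app2
</code></pre>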
|
<p>My deployment pod was evicted due to memory consumption:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 1h kubelet, gke-XXX-default-pool-XXX The node was low on resource: memory. Container my-container was using 1700040Ki, which exceeds its request of 0.
Normal Killing 1h kubelet, gke-XXX-default-pool-XXX Killing container with id docker://my-container:Need to kill Pod
</code></pre>
<p>I tried to grant it more memory by adding the following to my deployment <code>yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
...
containers:
- name: my-container
image: my-container:latest
...
resources:
requests:
memory: "3Gi"
</code></pre>
<p>However, it failed to deploy:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x5 over 13s) default-scheduler 0/3 nodes are available: 3 Insufficient memory.
Normal NotTriggerScaleUp 0s cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>The deployment requests only one container.</p>
<p>I'm using <code>GKE</code> with autoscaling, the nodes in the default (and only) pool have 3.75 GB memory.</p>
<p>From trial and error, I found that the maximum memory I can request is "2Gi". Why can't I utilize the full 3.75 of a node with a single pod? Do I need nodes with bigger memory capacity?</p>
| <p>Even though the node has 3.75 GB of total memory, it is very likely that the allocatable capacity is not the full 3.75 GB.</p>
<p>Kubernetes reserves some capacity for system services, to avoid containers consuming so many resources on the node that they affect the operation of those system services.</p>
<p>From the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>Kubernetes nodes can be scheduled to Capacity. Pods can consume all the available capacity on a node <strong>by default</strong>. This is an issue because nodes typically run quite a few system daemons that power the OS and Kubernetes itself. Unless resources are set aside for these system daemons, pods and system daemons compete for resources and lead to resource starvation issues on the node.</p>
</blockquote>
<p>Because you are using GKE, which does not use the defaults, running the following command will show how much <strong>allocatable</strong> resource you have on the node:</p>
<p><code>kubectl describe node [NODE_NAME] | grep Allocatable -B 4 -A 3</code></p>
<p>From the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable" rel="noreferrer">GKE docs</a>:</p>
<blockquote>
<p>Allocatable resources are calculated in the following way:</p>
<p>Allocatable = Capacity - Reserved - Eviction Threshold</p>
<p>For memory resources, GKE reserves the following:</p>
<ul>
<li>25% of the first 4GB of memory</li>
<li>20% of the next 4GB of memory (up to 8GB)</li>
<li>10% of the next 8GB of memory (up to 16GB)</li>
<li>6% of the next 112GB of memory (up to 128GB)</li>
<li>2% of any memory above 128GB</li>
</ul>
<p>GKE reserves an additional 100 MiB memory on each node for kubelet eviction.</p>
</blockquote>
<p>As the error message suggests, scaling the cluster will not solve the problem, because every node's capacity is limited to the same amount of memory and the pod needs more than that.</p>
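<p>As a rough worked example (the percentages above are GKE's published formula, so treat the numbers as approximate): on a 3.75 GB node, GKE reserves about 25% of the first 4 GB, i.e. roughly 0.94 GB, plus the 100 MiB eviction threshold, leaving somewhere around 2.7 GB allocatable. That is why a request of <code>2Gi</code> fits but <code>3Gi</code> does not, and why you would need nodes with more memory (or a smaller request) for this pod.</p>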
|
<p>I have a Java <code>Spring Boot</code> application that runs on Kubernetes, and I've configured this <code>.yaml</code> file</p>
<pre><code>- name: ACTUATOR_USERNAME
valueFrom:
secretKeyRef:
name: actuator
key: username
- name: ACTUATOR_PASSWORD
valueFrom:
secretKeyRef:
name: actuator
key: password
</code></pre>
<p>added these attributes to my <code>application.properties</code></p>
<pre><code>security.user.name=${ACTUATOR_USERNAME}
security.user.password=${ACTUATOR_PASSWORD}
</code></pre>
<p>The secret is created on the server side; how do I retrieve these values inside my class?</p>
<pre><code>package com.greenqloud.usage.healthcheck;
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
@Configuration
@EnableWebSecurity
public class ActuatorSecurity extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.requestMatcher(EndpointRequest.toAnyEndpoint()).authorizeRequests()
.anyRequest().hasRole("USER")
.and()
.httpBasic();
}
@Override
public void configure(AuthenticationManagerBuilder auth) throws Exception {
System.out.println("actuator username: " + System.getenv("ACTUATOR_USERNAME"));
System.out.println("actuator password: " + System.getenv("ACTUATOR_PASSWORD"));
auth.inMemoryAuthentication()
.withUser("actuator").password("{noop}actuator123").roles("USER");
}
}
</code></pre>
<p>The only way I have found is to use <code>System.getenv("ACTUATOR_USERNAME")</code>, but I'm sure there is a better way to achieve this?</p>
| <p>I agree with @Kuikiker about getenv(). But one question: why do you want to store credentials in environment variables? Unless you have some special need, I believe you are better off storing them in your application.properties with encrypted values. I usually use Jasypt encryption for that (<a href="https://www.baeldung.com/spring-boot-jasypt" rel="nofollow noreferrer">https://www.baeldung.com/spring-boot-jasypt</a>).
Hope this helps.</p>
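<p>For completeness, a hedged sketch of how an encrypted value typically ends up in <code>application.properties</code> when using the jasypt-spring-boot Maven plugin (the goal and property names below are assumptions based on that project's tooling and require the plugin to be configured in your pom; double-check against the library's docs before relying on them):</p>
<pre><code># encrypt a value with a master password (both values here are placeholders)
mvn jasypt:encrypt-value -Djasypt.encryptor.password=myMasterPassword \
    -Djasypt.plugin.value=actuator123
# then reference the printed value in application.properties, e.g.
# security.user.password=ENC(encrypted-value-from-the-command-above)
</code></pre>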
|
<p>I am trying to access my cluster (running for example <code>kubectl get pods</code>) and I'm getting an error response with this:</p>
<blockquote>
<p>"errorMessage":"Provided refresh token is expired"</p>
</blockquote>
<p><a href="https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_refresh" rel="nofollow noreferrer">This documentation</a> says that you are provided a new refresh token when you run the <code>export KUBECONFIG=....</code> command to get your cluster configuration, which I have done mulitple times now. Is there something else I can do to get a new refresh token?</p>
| <p>Assuming you ran <code>ibmcloud ks cluster-config <cluster_name></code> again first to download the new config before you ran <code>export KUBECONFIG</code>?</p>
<p>I'd also try logging out/logging in again and making sure you're targeting the correct region with <code>ibmcloud target</code></p>
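<p>A minimal sketch of the full refresh sequence (the cluster name and region are placeholders):</p>
<pre><code>ibmcloud login
ibmcloud target -r us-south
ibmcloud ks cluster-config <cluster_name>
# then copy/paste the export KUBECONFIG=... line that the previous command prints
kubectl get pods
</code></pre>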
|
<p>Been searching for days for a Helm chart to install a PyPi server (version irrelavant at this point) to a Kubernetes cluster. Finding practically nothing.</p>
<p>Any and all help would be appreciated. How does one use a helm chart to install a PyPi server to a Kubernetes cluster?</p>
| <p>I didn't succeed in finding a Helm chart for Pypi either, but would it be an option to create a simple K8s deployment on your own using an existing docker image?
You could have a look at <a href="https://hub.docker.com/r/pypiserver/pypiserver" rel="nofollow noreferrer">this image</a> and try to deploy it similar to <a href="https://codeburst.io/getting-started-with-kubernetes-deploy-a-docker-container-with-kubernetes-in-5-minutes-eb4be0e96370" rel="nofollow noreferrer">this example</a></p>
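<p>As a starting point, a hedged sketch of running that image without a chart (the image name and port are taken from the pypiserver Docker Hub page, so verify them for the version you pull):</p>
<pre><code>kubectl create deployment pypiserver --image=pypiserver/pypiserver
kubectl expose deployment pypiserver --port=8080 --target-port=8080
kubectl port-forward svc/pypiserver 8080:8080
</code></pre>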
|
<p>I'm trying to setup letsencrypt cert-issuer on kubernetes cluster. My terraform looks like this: </p>
<pre><code>resource "helm_release" "cert_manager" {
keyring = ""
name = "cert-manager"
chart = "stable/cert-manager"
namespace = "kube-system"
depends_on = ["helm_release.ingress"]
set {
name = "webhook.enabled"
value = "false"
}
provisioner "local-exec" {
command = "kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml"
}
provisioner "local-exec" {
command = "kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} label namespace kube-system certmanager.k8s.io/disable-validation=\"true\" --overwrite"
}
provisioner "local-exec" {
command = <<EOT
cat <<EOF | kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} create -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt
http01: {}
EOF
EOT
}
}
</code></pre>
<p>I have simple test pod and service deployed. When I go to <code>http://<cluster-address>/apple</code> it responds with <code>apple</code>. So I try to create ingress for it: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
labels:
app: apple
heritage: Tiller
release: apple
spec:
rules:
- http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 5678
tls:
- hosts:
- my.domain.alias.to.cluster.address.io
secretName: my.domain.alias.to.cluster.address.io
</code></pre>
<p>But still, when I go to <code>https://my.domain.alias.to.cluster.address.io/apple</code> my browser warns me, and I can see the certificate is Kubernetes Ingress Controller Fake Certificate.</p>
<p>What am I missing? What should I do to have cert created by letsencrypt there? </p>
<p>UPDATE:</p>
<p>Logs from my cert-manager pod: </p>
<pre><code>I0220 16:34:49.071883 1 sync.go:180] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" is up to date
I0220 16:34:49.072121 1 controller.go:179] ingress-shim controller: Finished processing work item "default/example-ingress"
I0220 16:34:49.071454 1 controller.go:145] certificates controller: syncing item 'default/my.domain.alias.to.cluster.address.io'
I0220 16:34:49.073892 1 helpers.go:183] Setting lastTransitionTime for Certificate "my.domain.alias.to.cluster.address.io" condition "Ready" to 2019-02-20 16:34:49.073885527 +0000 UTC m=+889.175312552
I0220 16:34:49.074450 1 sync.go:263] Certificate default/my.domain.alias.to.cluster.address.io scheduled for renewal in 1438h47m42.92555861s
I0220 16:34:49.081224 1 controller.go:151] certificates controller: Finished processing work item "default/my.domain.alias.to.cluster.address.io"
I0220 16:34:49.081479 1 controller.go:173] ingress-shim controller: syncing item 'default/example-ingress'
I0220 16:34:49.081567 1 sync.go:177] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" already exists
I0220 16:34:49.081631 1 sync.go:180] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" is up to date
I0220 16:34:49.081672 1 controller.go:179] ingress-shim controller: Finished processing work item "default/example-ingress"
I0220 16:34:49.081743 1 controller.go:145] certificates controller: syncing item 'default/my.domain.alias.to.cluster.address.io'
I0220 16:34:49.082384 1 sync.go:263] Certificate default/my.domain.alias.to.cluster.address.io scheduled for renewal in 1438h47m42.917624001s
I0220 16:34:49.087552 1 controller.go:151] certificates controller: Finished processing work item "default/my.domain.alias.to.cluster.address.io"
I0220 16:35:04.571789 1 controller.go:173] ingress-shim controller: syncing item 'default/example-ingress'
</code></pre>
<p>And this is what <code>kubectl describe certificate my.domain.alias.to.cluster.address.io</code> returns:</p>
<pre><code>Name: my.domain.alias.to.cluster.address.io
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-02-20T16:34:49Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: example-ingress
UID: 709a55df-352d-11e9-bf9d-06ede39599be
Resource Version: 278211
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/my.domain.alias.to.cluster.address.io
UID: 709bf1bd-352d-11e9-b941-026486635030
Spec:
Acme:
Config:
Domains:
my.domain.alias.to.cluster.address.io
Http 01:
Ingress:
Ingress Class: nginx
Dns Names:
my.domain.alias.to.cluster.address.io
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt
Secret Name: my.domain.alias.to.cluster.address.io
Status:
Conditions:
Last Transition Time: 2019-02-20T16:34:49Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2019-05-21T15:22:32Z
Events: <none>
</code></pre>
<p>In the logs of ingress controller I can find this: </p>
<pre><code>I0220 16:22:34.428736 8 store.go:446] secret default/my.domain.alias.to.cluster.address.io was updated and it is used in ingress annotations. Parsing...
I0220 16:22:34.429898 8 backend_ssl.go:68] Adding Secret "default/my.domain.alias.to.cluster.address.io" to the local store
I0220 16:22:35.410950 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:22:35.522502 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:22:35 +0000]TCP200000.000
I0220 16:27:39.225810 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:27:39.226685 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"f2f0c9bd-345d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277488", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/example-ingress
I0220 16:27:39.336879 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:27:39 +0000]TCP200000.001
I0220 16:27:53.090686 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277520", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/example-ingress
I0220 16:27:53.091216 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:27:53.212854 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:27:53 +0000]TCP200000.000
I0220 16:28:04.566342 8 status.go:388] updating Ingress default/example-ingress status from [] to [{34.245.112.11 }]
I0220 16:28:04.576525 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277542", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/example-ingress
I0220 16:28:05.676217 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277546", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/example-ingress
I0220 16:28:07.909830 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:28:08.019070 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:28:08 +0000]TCP200000.000
I0220 16:28:22.557334 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:28:22.557490 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"cm-acme-http-solver-dmnqh", UID:"7f8f4be4-3461-11e9-b941-026486635030", APIVersion:"extensions/v1beta1", ResourceVersion:"277576", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/cm-acme-http-solver-dmnqh
I0220 16:28:22.662971 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:28:22 +0000]TCP200000.000
I0220 16:34:49.057385 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"709a55df-352d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"278207", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/example-ingress
I0220 16:34:49.057688 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:34:49.175039 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:34:49 +0000]TCP200000.000
I0220 16:35:04.565324 8 status.go:388] updating Ingress default/example-ingress status from [] to [{34.245.112.11 }]
I0220 16:35:04.572954 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"709a55df-352d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"278236", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/example-ingress
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:38:33 +0000] "\x05\x01\x00" 400 157 "-" "-" 0 0.751 [] - - - - e0aec2a9e3e71e136a1c62939e341b49
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:39:50 +0000] "\x04\x01\x00P\x05\xBC\xD2\x0C\x00" 400 157 "-" "-" 0 0.579 [] - - - - 7f825a3ef2f94e200b14fe3691e4fdde
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:41:30 +0000] "GET http://5.188.210.12/echo.php HTTP/1.1" 400 657 "https://www.google.com/" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36" 359 0.000 [] - - - - 1167890a763ddc360051046c84a47d21
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:19:46:35 +0000] "GET /apple HTTP/1.1" 308 171 "-" "Mozilla/5.0 (Linux; Android 8.0.0; ONEPLUS A3003) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.105 Mobile Safari/537.36" 555 0.000 [default-apple-service-5678] - - - - b1f1bb0da3e465c3a54e963663dffb61
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:20:38:39 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 157 "-" "-" 0 0.065 [] - - - - cd420e70b3f78bee069f8bac97918e36
</code></pre>
| <p>Basically, letsencrypt is not issuing the certificate for you so it's defaulting to the Fake cert. You need to make sure that <code>my.domain.alias.to.cluster.address.io</code> is publicly resolvable, say through a DNS server like <code>8.8.8.8</code> and then it needs to resolve to a publicly accessible IP address. You can debug what's happening by looking at the certmanager pod logs.</p>
<pre><code>$ kubectl logs <certmanagerpod>
</code></pre>
<p>You can also see the details about the certificates (and you might be able to see why it didn't get issued).</p>
<pre><code>$ kubectl get certificates
$ kubectl describe certificate <certificate-name>
</code></pre>
<p>Another possibility is that you are being rate-limited by <code>https://acme-v02.api.letsencrypt.org/directory</code>, which is their production environment. You could also try <code>https://acme-staging-v02.api.letsencrypt.org/directory</code>, which is their staging environment.</p>
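<p>A few additional checks that are often useful here (the hostname and secret name are the ones from the question):</p>
<pre><code># does the name resolve publicly to the load balancer / cluster address?
nslookup my.domain.alias.to.cluster.address.io 8.8.8.8
# what did cert-manager actually store in the TLS secret?
kubectl describe secret my.domain.alias.to.cluster.address.io
# is the ClusterIssuer registered and ready?
kubectl describe clusterissuer letsencrypt
</code></pre>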
|
<p>I have two network interface in my master node -</p>
<blockquote>
<p>192.168.56.118</p>
<p>10.0.3.15</p>
</blockquote>
<p>While doing kubeadm init on master node, I got following command to add workers</p>
<pre><code>kubeadm join --token qr1czu.5lh1nt34ldiauc1u 192.168.56.118:6443 --discovery-token-ca-cert-hash sha256:e5d90dfa0fff67589551559c443762dac3f1e5c7a5d2b4a630e4c0156ad0e16c
</code></pre>
<p>As you can see, it shows the 192.168.56.118 IP for workers to connect to.
But while executing that command on the worker node, I'm getting the following error.</p>
<pre><code>[root@k8s-worker ~]# kubeadm join --token qr1czu.5lh1nt34ldiauc1u 192.168.56.118:6443 --discovery-token-ca-cert-hash sha256:e5d90dfa0fff67589551559c443762dac3f1e5c7a5d2b4a630e4c0156ad0e16c
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.56.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.118:6443"
[discovery] Requesting info from "https://192.168.56.118:6443" again to validate TLS against the pinned public key
[discovery] Failed to request cluster info, will try again: [Get https://192.168.56.118:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate is valid for 10.96.0.1, 10.0.3.15, not 192.168.56.118]
</code></pre>
<p>I tried with the other IP, 10.0.3.15, but it returns a connection refused error, despite the fact that the firewall is disabled on the master.</p>
<pre><code>[root@k8s-worker ~]# kubeadm join --token qr1czu.5lh1nt34ldiauc1u 10.0.3.15:6443 --discovery-token-ca-cert-hash sha256:e5d90dfa0fff67589551559c443762dac3f1e5c7a5d2b4a630e4c0156ad0e16c
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "10.0.3.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.3.15:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.0.3.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.3.15:6443: connect: connection refused]
</code></pre>
<p>How can I force the certificate to treat 192.168.56.118 as valid? Or is there any other way I can resolve this issue?</p>
| <p>You need to provide an extra API server certificate SAN (<code>--apiserver-cert-extra-sans <ip_address></code>) and the API server advertise address (<code>--apiserver-advertise-address</code>) while initialising the cluster using <code>kubeadm init</code>. Your kubeadm init command will look like: </p>
<pre><code>kubeadm init --apiserver-cert-extra-sans 192.168.56.118 --apiserver-advertise-address 192.168.56.118
</code></pre>
<p>Once you initialise the cluster with the above command, you will not face the certificate issue while joining workers. Note that if the cluster has already been initialised, you will need to run <code>kubeadm reset</code> on the master (and re-join the workers afterwards) before re-running <code>kubeadm init</code>, because the API server certificates are only generated at init time.</p>
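<p>After the control plane comes up, you can verify that the new IP made it into the API server certificate with a standard openssl check (the certificate path is the default kubeadm location):</p>
<pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
</code></pre>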
|
<p>I am running docker for desktop with kubernetes enabled. I am using Windows containers (but have also been running Linux containers - I switched modes to Windows). Also, kubernetes is running, but has been using Linux, so I guess the single node in the cluster is using the Linux engine, even though I have switched Docker to use Windows containers. It appears that the local kubernetes cluster is not able to load the Windows image, even though docker is running in Windows container mode.</p>
<p>I am trying to solve the following error: </p>
<p><strong><code>Failed to pull image "iis-site": rpc error: code = Unknown desc = Error response from daemon: pull access denied for iis-site, repository does not exist or may require 'docker login'</code></strong></p>
<h1><strong>Steps to reproduce</strong></h1>
<p>I build a docker image as follows:</p>
<pre><code>FROM microsoft/iis
RUN powershell -NoProfile -Command Remove-Item -Recurse C:\inetpub\wwwroot\*
WORKDIR /inetpub/wwwroot
COPY content/ .
</code></pre>
<p>I have a directory structure like this: </p>
<pre><code>D:\TEMP\IIS
│ Dockerfile
│
└───content
index.html
</code></pre>
<p>index.html looks like this: </p>
<pre><code><html>
<body>
Hello World!
</body>
</html>
</code></pre>
<p>I run up the container as follows:</p>
<pre><code>docker build -t iis-site .
</code></pre>
<p>I can navigate to <a href="http://localhost:8000/" rel="nofollow noreferrer">http://localhost:8000/</a> and I can see my website! (SUCCESS)</p>
<h1><strong>See it in Kubernetes</strong></h1>
<p>But now I want to see it running in kubernetes (local cluster). </p>
<p>I do </p>
<pre><code>kubectl apply -f D:\Temp\windows-deployment.yaml
</code></pre>
<p><strong>D:\Temp\windows-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: iis-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: iis
spec:
containers:
- name: iis
image: iis-site
ports:
- containerPort: 80
</code></pre>
<p>Now id do:</p>
<p>kubectl get pods</p>
<pre><code>NAME READY STATUS RESTARTS AGE
iis-deployment-5768b4fb85-pfxjk 0/1 ImagePullBackOff 0 18m
sql-deployment-659d64d464-rss5c 1/1 Running 18 40d
streact-deployment-567cf9db9b-g5vkb 1/1 Running 18 39d
web-deployment-669595758-7zcdx 1/1 Running 45 39d
</code></pre>
<p>Now I do </p>
<pre><code>kubectl describe pod iis-deployment-5768b4fb85-pfxjk
</code></pre>
<p><strong><code>Failed to pull image "iis-site": rpc error: code = Unknown desc = Error response from daemon: pull access denied for iis-site, repository does not exist or may require 'docker login'</code></strong></p>
<h1>Additional info</h1>
<pre><code>kubectl describe node docker-for-desktop
Name: docker-for-desktop
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=docker-for-desktop
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Fri, 11 Jan 2019 10:14:26 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Wed, 20 Feb 2019 16:32:37 +0000 Wed, 20 Feb 2019 10:06:38 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 20 Feb 2019 16:32:37 +0000 Wed, 20 Feb 2019 10:06:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 20 Feb 2019 16:32:37 +0000 Wed, 20 Feb 2019 10:06:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 20 Feb 2019 16:32:37 +0000 Fri, 11 Jan 2019 10:14:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 20 Feb 2019 16:32:37 +0000 Wed, 20 Feb 2019 10:06:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.65.3
Hostname: docker-for-desktop
Capacity:
cpu: 2
ephemeral-storage: 61664044Ki
hugepages-2Mi: 0
memory: 2540888Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 56829582857
hugepages-2Mi: 0
memory: 2438488Ki
pods: 110
System Info:
Machine ID:
System UUID: 8776A14E-A225-4134-838E-B50A6ECAB276
Boot ID: 5a836f34-51a4-4adf-a32d-218a5df09b3c
Kernel Version: 4.9.125-linuxkit
OS Image: Docker for Windows
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.2
Kubelet Version: v1.10.11
Kube-Proxy Version: v1.10.11
ExternalID: docker-for-desktop
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default iis-deployment-5768b4fb85-pfxjk 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default sql-deployment-659d64d464-rss5c 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default streact-deployment-567cf9db9b-g5vkb 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default web-deployment-669595758-7zcdx 0 (0%) 0 (0%) 0 (0%) 0 (0%)
docker compose-74649b4db6-rm9zc 0 (0%) 0 (0%) 0 (0%) 0 (0%)
docker compose-api-fb7b8f78f-drllk 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system etcd-docker-for-desktop 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-docker-for-desktop 250m (12%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-docker-for-desktop 200m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-86f4d74b45-h2p5q 260m (13%) 0 (0%) 110Mi (4%) 170Mi (7%)
kube-system kube-proxy-p8vj7 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-docker-for-desktop 100m (5%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
810m (40%) 0 (0%) 110Mi (4%) 170Mi (7%)
Events: <none>
</code></pre>
<p><strong>Note:</strong>
The above output indicates the following: </p>
<pre><code> Operating System: linux
</code></pre>
<p>So it appears that even though the docker engine is running Windows containers, the local Kubernetes cluster (control plane) is running Linux. </p>
<p>I would greatly appreciate any ideas on how to fix the above error i.e. <strong>pull access denied</strong> so that I can run a local Kubernetes cluster on Windows 10 with <strong>Windows</strong> containers.</p>
| <p>The error you are seeing is not related to whether it can run or cannot run Windows containers, it's because it's not able to pull the image. It doesn't look like you pushed the image to <a href="https://hub.docker.com/" rel="nofollow noreferrer">docker hub</a>. After you build it you need to run:</p>
<pre><code>docker tag iis-site <yourdockerhubuser>/iis-site
docker push <yourdockerhubuser>/iis-site
</code></pre>
<p>You will need to set up your Docker Hub credentials as an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">imagePullSecret</a> on your deployment (an example of creating the secret is shown after the YAML below). Then in your K8s deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: iis-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: iis
spec:
containers:
      - name: iis
        image: <yourdockerhubuser>/iis-site
ports:
- containerPort: 80
imagePullSecrets:
- name: <your-docker-hub-user-secret>
</code></pre>
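<p>A hedged example of creating that pull secret (the secret name must match the one referenced under <code>imagePullSecrets</code>; the username, password and email are of course your own):</p>
<pre><code>kubectl create secret docker-registry your-docker-hub-user-secret \
  --docker-username=<yourdockerhubuser> \
  --docker-password=<yourpassword> \
  --docker-email=<youremail>
</code></pre>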
<p>Ok, even then this container will not run, because your node runs Linux; you need a Windows node, and for that you will have to <a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="nofollow noreferrer">set up the kubelet</a> on your Windows machine. (I believe Docker for Windows doesn't support a Windows kubelet, only a Linux kubelet running in a VM.)</p>
|