<p>I've implemented Traefik on Kubernetes following the <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="noreferrer">User Guide</a>. That gives me an ingress-controller and I was able to create an ingress and the traefik-ingress-service listening on 80 and 8080.</p>
<p>I've also set up a "gce" ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: fasedge-tls
  backend:
    serviceName: traefik-ingress-service
    servicePort: 80
</code></pre>
<p>What this did was create a GCE Load Balancer which terminates my TLS and should forward all requests to the traefik-ingress-service which is of type NodePort. </p>
<p>GCE Load Balancers require a healthcheck. The default is to the path "/". I thought traefik had a "/ping" so I changed the healthcheck to that. But no matter what I do, I can't get the healthcheck to pass and therefore my GCE Load Balancer is unhealthy and won't forward any requests to traefik.</p>
<p>Some references say to change the Traefik lb service from NodePort to LoadBalancer but that should be doing the exact same thing as my above Ingress except my Ingress handles the SSL termination.</p>
<p>Anyone else do it this way?</p>
| <p>The way to get the health check to work on the gce load balancer is to add arguments to the traefik container. The <code>--ping</code> and <code>--ping.entrypoint=http</code> flags let me set up a gce health check to the <code>/ping</code> path.</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik-ingress-controller
  template:
    metadata:
      labels:
        app: traefik-ingress-controller
      name: traefik-ingress-controller
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - name: traefik-ingress-controller
        image: "traefik:v1.5.2"
        args:
        - --api
        - --kubernetes
        - --ping
        - --ping.entrypoint=http
</code></pre>
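<p>If the GCE health check still defaults to <code>/</code>, adding a readiness probe on the Traefik container that points at <code>/ping</code> may also help, since the GCE ingress controller derives its backend health check from the pod's readiness probe. This is only a sketch under that assumption; the container port (80 for the http entrypoint here) may differ in your setup:</p>
<pre><code>        ports:
        - name: http
          containerPort: 80
        readinessProbe:
          httpGet:
            path: /ping
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
</code></pre>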
|
<p>When a Pod is evicted I'm supposed to find messages about the eviction at <code>/dev/termination-log</code> (or other path defined in <code>terminationMessagePath</code> property) inside the container filesystem.</p>
<p>I'm trying to track OOMKilled events, but after an OOMKilled occurred and the Pod was restarted, the <code>/dev/termination-log</code> is empty.<br />
I tried to set the <code>terminationMessagePolicy</code> property to <code>FallbackToLogsOnError</code>, hoping to find messages in the container log, but I did not find anything about the OOMKilled there.</p>
<p>By executing <code>kubectl describe pod podname</code> I can find out that the termination reason was OOMKilled, but I would like to be able to follow these events in order to integrate them with fluentd and logstash.</p>
<p>Our Kubernetes cluster version is v1.9.0 and it's running on-premise. The installation was done using kubeadm on top of CentOS 7.</p>
| <p>You can SSH to the machine the pod was running on and run <code>journalctl -u kubelet</code>.</p>
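<p>For example, a quick way to filter for OOM kills on the node, and to scan the corresponding cluster events (just a sketch; adjust the filtering to your logging setup):</p>
<pre><code># on the node that hosted the pod
journalctl -u kubelet | grep -i oom

# from anywhere with kubectl access
kubectl get events --all-namespaces | grep -i oom
</code></pre>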
|
<p>I've been studying Kubernetes for a few weeks now, and using the kube-lego NGINX examples (<a href="https://github.com/jetstack/kube-lego" rel="noreferrer">https://github.com/jetstack/kube-lego</a>) have successfully deployed services to Kubernetes cluster using Rancher on DigitalOcean.</p>
<p>I've deployed sample static sites, Wordpress, Laravel, Craft CMS, etc. All of which use custom Namespaces, Deployment, Secrets, Containers with external registries, Services, and Ingress Definitions.</p>
<p>Using the example (lego) NGINX Ingress Controller setup, I'm able to apply DNS to the exposed IP address of my K8s cluster, and have the resulting sites appear.</p>
<p>What I don't know, though, is how to allow for multiple hosts to have Ingress Controllers service the same deployments, and thus provide HA Ingress to the cluster. (by applying an external load balancer service, or geo-ip, or what-have-you).</p>
<p>Rancher (stable) allows me to add multiple hosts, I've spun up 3 to 5 at a time, and Kubernetes is configured and deployed across all Hosts. Furthermore, I'll define many replicas and/or deployments (listed above) and they will be spread over the cluster and accessible as would be expected. I've even specified multiple replicas of the Ingress Controller, but of course they all get scheduled on the same host, giving me only one IP address of Ingress.</p>
<p>So how do I allow multiple hosts (each with their own public facing IP address) to allow ingress into the cluster? I've also read about setting up multiple Ingress Controllers, but then you must specify what deployment/services are being serviced by what Ingress Controller, which then totally defeats the purpose.</p>
<p>Maybe I'm missing something, but if K8s multi-host is supposed to provide HA, and the Host with the Ingress Controller goes down, then the service will be rescheduled on the other Hosts, but the IP address that everything is pointing to will be dead, and thus an outage. Any way to have multiple IP Addresses to the same set of deployment/services?</p>
| <p>I investigated my setup a bit more today, and I think I found out why I was having difficulty. The "LoadBalancer" is often mentioned as for use with Cloud Providers (in both docs, and what @fiunchinho describes). I was using it with a Rancher setup, which auto creates an HA-Proxy LoadBalancer ingress for you on the hosts.</p>
<p>By default, it will just schedule it on one of the hosts. You can specify that you want it scheduled globally by providing an 'annotation' of <code>io.rancher.scheduler.global: "true"</code>.</p>
<p>Like so:</p>
<pre><code>annotations:
# Create load balancers on every host in the environment
io.rancher.scheduler.global: "true"
</code></pre>
<p><a href="http://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/" rel="noreferrer">http://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/</a></p>
<p>I preferred LoadBalancer over NodePort because I wanted the ability to send port 80 (and in the future port 443) to any of the Nodes, and have them successfully fulfil my request by inspecting the Host header, and directing as-needed.</p>
<p>These LBs can also be setup in the Rancher UI under the "Infrastructure Stack" menu. I have successfully removed the single LB, and re-added one with an "Always run one instance of this container on every host" option enabled.</p>
<p>After this was configured, I could make a request to any of the Hosts for any of the Ingresses, and get a response, no matter what host the container was scheduled on.</p>
<p><a href="https://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/" rel="noreferrer">https://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/</a></p>
<p>So cool!</p>
|
<p>(A very similar question was asked about 2 years ago, though it was specifically about secrets, I doubt the story is any different for configmaps... but at the least, I can present the use case and why the existing workarounds aren't viable for us.) </p>
<p>Given a simple, cut-down <code>deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        volumeMounts:
        - name: vol
          mountPath: /app/Configuration
      volumes:
      - name: vol
        configMap:
          name: configs
</code></pre>
<p>and the matching <code>configmap.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: configs
  labels:
    k8s-app: example
data:
  example1.json: |-
    {
      "key1": "value1"
    }
  example2.json: |-
    {
      "key2": "value2"
    }
</code></pre>
<p>the keys in <code>configmap.yaml</code>, whatever they may be, are simply created as files, without <code>deployment.yaml</code> needing to be modified or have any specifics other than the mountPath.</p>
<p>The problem is that the actual structure has subfolders to handle region-specific values that override the root ones:</p>
<pre><code>Configuration \ example1.json
Configuration \ example2.json
Configuration \ us \ example1.json
Configuration \ us \ ca \ example2.json
</code></pre>
<p>The number and nature of these could obviously vary, for as many different countries and regions imaginable and for each separately configured module. The intent was to provide a tool to the end user that would allow them to set up and manage these configurations, which would behind the scenes automatically generate the <code>configmap.yaml</code> and update it in kubernetes.</p>
<p>However, unless there's a trick I haven't found yet, this seems to be outside of kubernetes's abilities, in a couple ways.</p>
<p>First of all, there is no syntax that allows one to specify configmap keys that are directories, nor include a subdirectory path in a key:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
  # one possible approach (currently complains that it doesn't validate '[-._a-zA-Z0-9]+')
  /us/example1.json: |-
    {
      "key1": "value1"
    }
  # another idea; this obviously results in 'invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"'
  us:
    example2.json: |-
      {
        "key2": "value2"
      }
</code></pre>
<p>So what <strong><em>are</em></strong> our options to accomplish this?</p>
<p>Wellll, we could map the keys to specific locations using the <code>items: -key: path:</code> approach in the deployment.yaml's <code>volumes: -configMap:</code> node, </p>
<p>and/or generate several nodes in the deployment.yaml's <code>volumeMounts:</code> node, </p>
<p>using either <code>subPath:</code> (which is basically the same as using <code>items: -key: -path:</code> in the <code>volumes: configMap:</code>), </p>
<p>or individual separate configmaps for each subdirectory, and mounting them all as different <code>volumes</code> in the deployment.yaml.</p>
<p>All of these methods would require massive and incredibly verbose changes to the deployment.yaml, leaking out knowledge it shouldn't have any reason to know about, making it mutable and continually re-generated rather than static, complicating rolling out settings updates to deployed pods, etc. etc. etc. It's just Not Good. And all of that just to have mapped one directory, just because it contains subdirectories... </p>
<p>Surely this CAN'T be the way it's SUPPOSED to work? What am I missing? How should I proceed?</p>
| <p>From a "container-native" perspective, having a large file system tree of configuration files that the application processes at startup to arrive at its canonical configuration is an anti-pattern. Better to have a workflow that produces a single file, which can be stored in a ConfigMap and easily inspected in its final form. See, for instance, nginx ingress. </p>
<p>But obviously not everyone is rewriting their apps to better align with the kubernetes approach. The simplest way then to get a full directory tree of configuration files into a container at deploy time is to use initContainers and emptyDir mounts. </p>
<p>Package the config file tree into a container (sometimes called a "data-only" container), and have the container start script just copy the config tree into the emptyDir mount. The application can then consume the tree as it expects to. </p>
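<p>A minimal sketch of that pattern, assuming a hypothetical image <code>my-registry/example-config</code> that carries the tree under <code>/config</code> (the image names and paths are illustrative, not taken from the question):</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      initContainers:
      - name: copy-config
        image: my-registry/example-config:latest
        # copy the packaged tree, subfolders included, into the shared volume
        command: ["sh", "-c", "cp -r /config/. /shared/"]
        volumeMounts:
        - name: config
          mountPath: /shared
      containers:
      - name: example
        image: my-registry/example-app:latest
        volumeMounts:
        - name: config
          mountPath: /app/Configuration
      volumes:
      - name: config
        emptyDir: {}
</code></pre>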
|
<p>I'm trying to create an Ingress for my Kubernetes cluster on Google Compute Engine. It was working fine while I was using the <code>gke</code> controller class. But I had to change it to <code>nginx</code> controller to be able to specify the back end timeout. The problem is that my Ingress is not being provided with an external IP address.</p>
<h1>This is my Ingress manifest:</h1>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-router
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "1200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1200"
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "1200"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  tls:
  - secretName: nginxsecret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 8000
      - path: /cron/*
        backend:
          serviceName: esg
          servicePort: 8000
      - path: /task/*
        backend:
          serviceName: esg
          servicePort: 8000
      - path: /api/connections/update/*
        backend:
          serviceName: esg
          servicePort: 8000
      - path: /api/drive/scansheet/*
        backend:
          serviceName: esg
          servicePort: 8000
</code></pre>
<p>Is there any configuration missing?</p>
| <p>Found the issue. It was due to the fact I had not configured the Nginx controller for the Ingress.</p>
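<p>For reference, one common way to install the nginx ingress controller at the time was the stable Helm chart. This is only a sketch for Helm v2; the chart name, flags, and defaults may differ in your environment:</p>
<pre><code># Helm v2-era install of the nginx ingress controller
helm install stable/nginx-ingress --name nginx-ingress --set rbac.create=true
</code></pre>
<p>Once the controller is running, the <code>kubernetes.io/ingress.class: "nginx"</code> annotation on the Ingress makes it pick up the resource, and the controller's LoadBalancer Service is what receives the external IP.</p>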
|
<p>How do I change the IP when I run kubeadm init? I created the master node on Google Compute Engine and want to join nodes from AWS and Azure, but kubeadm uses the internal IP address, which is only reachable from within the Google Cloud Platform network. I tried to use --apiserver-advertise-address=external ip, but in that case kubeadm gets stuck at "[init] This might take a minute or longer if the control plane images have to be pulled." The firewalls are open.</p>
| <p>If I understand correctly what you are trying to do is using a GCP instance running kubeadm as the master and two nodes located on two other clouds.</p>
<p>What you need for this to work is to have a working load balancer with external IP pointing to your instance and forwarding the TCP packets back and forth.</p>
<p>First I created a static external IP address for my instance:</p>
<pre><code> gcloud compute addresses create myexternalip --region us-east1
</code></pre>
<p>Then I created a target pool for the LB and added the instance:</p>
<pre><code>gcloud compute target-pools create kubernetes --region us-east1
gcloud compute target-pools add-instances kubernetes --instances kubeadm --instances-zone us-east1-b
</code></pre>
<p>Add a forwarding rule serving on behalf of an external IP and port range that points to your target pool. You'll have to do this for the ports the nodes need to contact your kubeadm instance on. Use the external IP created before.</p>
<pre><code>gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region us-east1 --ports 22 --target-pool kubernetes
</code></pre>
<p>You can check now your forwarding rule which will look something like this:</p>
<pre><code>gcloud compute forwarding-rules describe kubernetes-forward
IPAddress: 35.196.X.X
IPProtocol: TCP
creationTimestamp: '2018-02-23T03:25:49.810-08:00'
description: ''
id: 'XXXXX'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: kubernetes-forward
portRange: 80-80
region: https://www.googleapis.com/compute/v1/projects/XXXX/regions/us-east1
selfLink: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/forwardingRules/kubernetes-forward
target: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/targetPools/kubernetes
</code></pre>
<p>Now you can go with the usual process to install kubeadm and set up your cluster in your instance; <code>kubeadm init</code> took around 50 seconds on mine.</p>
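<p>One detail worth checking (an assumption on my side, not something from the original setup): if the nodes will reach the API server through the load balancer's external IP, that IP should also be included in the API server certificate, e.g.:</p>
<pre><code>kubeadm init --apiserver-cert-extra-sans=35.196.X.X
</code></pre>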
<p>Afterwards, if the ports are correctly opened in your firewall and forwarded to your master, the nodes from AWS and Azure should be able to join.</p>
<p>Congratulations, now you have a multicloud kubernetes cluster! :)</p>
|
<p>What versions of Docker does Kubernetes <code>v1.9</code> support?
Is there a road map for these?</p>
<p>Trend here:</p>
<p><a href="https://stackoverflow.com/questions/48400574/docker-version-supported-in-kubernetes-1-8">Docker version supported in Kubernetes 1.8</a></p>
<p><a href="https://stackoverflow.com/questions/44657320/which-docker-versions-will-k8s-1-7-support">Which Docker versions will K8s 1.7 support?</a> </p>
| <p>The validated docker versions are the same as for <em>v1.8</em></p>
<ul>
<li><strong><code>1.11.2</code></strong> to <strong><code>1.13.1</code></strong></li>
<li><strong><code>17.03.x</code></strong></li>
</ul>
<p>Source: <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#external-dependencies" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#external-dependencies</a></p>
|
<p>I'm planning a kubernetes cluster (to be created on public cloud infrastructure). One important question:
Should worker nodes be exposed to internet (have public IP)? I suppose they shouldn't but I wonder what is the best practice.
Thanks.</p>
| <p>From a security perspective it is always wise to limit the number of publicly reachable nodes as much as possible and there are multiple ways to achieve that. </p>
<p>The most common and easiest way is that you could place your worker nodes behind a LB solution of your cloud provider. The LB is publicly reachable, but your nodes won't be.</p>
<p>If you cannot or don't want to use a LB of your provider, you could also assign some worker nodes as publicly reachable edge routers, which will run <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">ingress pods</a>. The ingresses then act as LBs to your private worker nodes. However, the edge router nodes should ideally only run the ingress pods, in order to have a better isolation of your main container infrastructure from publicly reachable nodes (this can be configured via <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">labels</a>). </p>
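<p>As a rough sketch of that label-based isolation (the node name and label key are placeholders): label the edge nodes, then pin the ingress controller pods to them with a <code>nodeSelector</code>:</p>
<pre><code>kubectl label node edge-node-1 node-role=edge
</code></pre>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        node-role: edge
</code></pre>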
|
<p>I have my application running on google kubernetes engine and currently uses pvc for data storage. I am just not able to decide which storage option should we use PVC or Disk? </p>
<p>In the case of PVC we can't have snapshots. Apart from this, is there any strong reason we should go for disk-based storage? What is advisable? In which scenarios should we consider using disks rather than PVC?</p>
| <p>You are mixing two different yet connected concepts here. Persistent Volume Claim and Volume.</p>
<p>Persistent Volume Claim is not a storage device / service. It is a declaration of a need for storage with particular characteristics. In a way, you could say it's an equivalent of an async programming <strong>promise</strong>. It should at some moment "return" storage in the form of a Persistent Volume that will satisfy the declared requirements. You don't know when exactly it will (usually asap) or if it will at all (error).</p>
<p>Persistent Volume is in turn an instance of a Volume, defined and instantiated with a typical Volume definition (ie. AWS EBS id, NFS server details, GlusterFS etc.).</p>
<p>Volume is the way to define some storage that is not a part of the image/container itself.</p>
<p>Now, the reason you can sometimes confuse a PVC for a PV/Volume is that PVs can be automatically created by a cloud provider or a 3rd-party provisioner if there is a matching storage class (ie. default, but not only).</p>
<p>In most cases when you need persistent storage for your pod, but you want the declaration to be cluster agnostic, you will use PVC and either depend on automated provisioning, or create matching PV in a way feasible for given infra. For example you can support PVC on dev cluster via <code>hostPath</code> volume, but with a central <code>GlusterFS</code> server on prod.</p>
<p>That said, the question PVC or Disk has no relevance, as a PVC can actually be a Disk. It's more of a question like "local storage (hostPath or emptyDir) vs network storage (cloud block device, fileserver etc.)". And the answer to that question is... "it depends".</p>
<p>If loss of stored data on pod rescheduling is not a problem, then maybe local storage is a good and fast solution (ie. I would consider it for cache storage); if not... well, then you can't use local storage. But that is going outside of the question's initial boundaries.</p>
|
<p>According to <a href="https://kubernetes.io/docs/reference/api-overview/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/api-overview/</a>, everything at <code>/api/v1</code> is the "legacy" API group. Although the docs don't go into any further detail than that, my natural inference from the use of the word "legacy" is that:</p>
<ul>
<li>I shouldn't use it in new code, and</li>
<li>All its functionality has been reimplemented in other API groups, possibly with a cleaner interface</li>
</ul>
<p>However, in <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-interactive/" rel="nofollow noreferrer">Module 2 of the Kubernetes tutorial</a>, I'm instructed to use the legacy API group to access my newly-created pod:</p>
<blockquote>
<p>Now we can make an HTTP request to the application running in that pod:</p>
<pre><code>curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/
</code></pre>
</blockquote>
<p>What gives? Does the "legacy" API group actually contain functionality that doesn't exist anywhere else, or is there another, non-legacy way to make requests to my pod that the tutorial just hasn't been updated to reflect?</p>
| <p>All newer Kubernetes API objects can be found in <em>named</em> API groups (like, for example, the <code>Deployment</code> object in the <code>apps/v1</code> API group) -- which is also the group that you use in the <code>apiVersion</code> field when declaring new objects.</p>
<p>Older API objects (like the <code>Pod</code> or <code>Service</code> object) are found in the <em>unnamed</em> API group (also called <em>core</em> or <em>legacy</em>). Their <code>apiVersion</code> is just <code>v1</code>, without any name.</p>
<p>In this context, <em>"legacy"</em> does not appear to mean "please do not use it any more". Instead, it seems to mean something like "please do not add any more new objects to this API group". From reading the documentation, it does not look like the "legacy" API group will be deprecated any time soon (after all, it's not like there's any other API that you could use to create a Pod or a Service).</p>
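<p>You can see this split directly from the API server; for example:</p>
<pre><code># the unnamed/core ("legacy") group, where Pod and Service live
kubectl get --raw /api/v1 | head

# a named group, e.g. the one containing Deployment
kubectl get --raw /apis/apps/v1 | head
</code></pre>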
<p>According to the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/api-group.md" rel="nofollow noreferrer">respective design proposal</a>, the goal is to...</p>
<blockquote>
<p>...[break] the monolithic v1 API into modular groups and allowing groups to be enabled/disabled individually. This allows [the Kubernetes developers] to break the monolithic API server to smaller components in the future. [...] For backward compatibility, v1 objects belong to the group with an empty name, so existing v1 config files will remain valid.</p>
</blockquote>
|
<p>I'm using <strong>gcsfuse</strong> to mount a volume in a container, and I need it to start my node.js application.</p>
<p>To mount the volume I'm using the <strong>lifecycle hooks of kubernetes</strong>, but it doesn't ensure that it will be executed before the entrypoint of my container.</p>
<p>I've been thinking about how I should check when the volume is mounted, and whether it goes down.</p>
<p>To check when it is mounted and unmounted I read and search the existence of the volume in <code>/proc/mounts</code>, and adding a watcher to it for changes.</p>
<p>Is there a simpler way to ensure that the volume is mounted in node.js, docker, or kubernetes?</p>
| <p>You can run this dockerfile in privileged mode:</p>
<pre><code>FROM ubuntu
RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-stretch main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN apt-get update && apt install curl -y
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install gcsfuse fuse -y
RUN mkdir -p /mnt/tmp
CMD gcsfuse [BUCKET NAME] /mnt/tmp && /bin/bash
</code></pre>
<p>This way you are sure that the bucket is mounted when the pod initializes.</p>
<p>On the other hand, I do not recommend this approach as there is a Node.js library for Google Cloud Storage [1].</p>
<p>Here is an example of bucket listing:</p>
<pre><code>// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Lists all buckets in the current project
storage
  .getBuckets()
  .then(results => {
    const buckets = results[0];

    console.log('Buckets:');
    buckets.forEach(bucket => {
      console.log(bucket.name);
    });
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
</code></pre>
<p>[1] <a href="https://github.com/googleapis/nodejs-storage/tree/master/samples" rel="nofollow noreferrer">https://github.com/googleapis/nodejs-storage/tree/master/samples</a></p>
|
<p>What is the difference between persistent volume (PV) and persistent volume claim (PVC) in Kubernetes/ Openshift by referring to documentation?</p>
<p>What is the difference between both in simple terms?</p>
| <p>From the <a href="https://docs.openshift.org/latest/architecture/additional_concepts/storage.html#lifecycle-of-a-volume-and-claim" rel="noreferrer">docs</a></p>
<blockquote>
<p>PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource.</p>
</blockquote>
<p>So a persistent volume (PV) is the "physical" volume on the host machine that stores your persistent data. A persistent volume claim (PVC) is a <em>request</em> for the platform to create a PV for you, and you attach PVs to your pods via a PVC.</p>
<p>Something akin to</p>
<pre><code>Pod -> PVC -> PV -> Host machine
</code></pre>
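<p>A small illustrative pair (the names, size, and hostPath are made up for the example): the PV is the actual piece of storage, and the PVC is the request that gets bound to it:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pv-example
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>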
|
<p>Say I have some (.NET Core) code running in a k8s pod and I ask for:</p>
<pre><code>DateTime.Now
</code></pre>
<p>I assume I will get the date-time from the host that the pod is running on. Is there any way of getting a date-time value that is consistent across the k8s cluster?</p>
| <p>The clock in a container is the same as the host machine because it’s controlled by the kernel. </p>
<p>The timezone is controlled by the OS layer though, and so may be different in the container. You can mount in the time zone file to overcome this.</p>
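<p>For example, a sketch of mounting the host's zoneinfo into the container (the image name and timezone path are assumptions; pick your own):</p>
<pre><code>    spec:
      containers:
      - name: app
        image: my-app:latest
        volumeMounts:
        - name: tz
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: tz
        hostPath:
          path: /usr/share/zoneinfo/Europe/Amsterdam
</code></pre>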
<p>Synchronization of clocks across machines is a complicated topic. You can go as simple as installing an NTP client on all the nodes or as complex as installing GPS hardware clocks on every node. </p>
<p>If your Kubernetes nodes are VMs this adds another layer of complexity. Most IaaS and hypervisors provide some way of synchronizing the VM clock with the host, but you still need to keep all your host machines in sync. </p>
|
<p>I'm attempting to create a cluster on Google Kubernetes Engine that runs nginx, RStudio server and two Shiny apps, following and adapting <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="noreferrer">this guide</a>.</p>
<p>I have 4 workloads that are all green in the UI, deployed via:</p>
<pre><code>kubectl run nginx --image=nginx --port=80
kubectl run rstudio --image gcr.io/gcer-public/persistent-rstudio:latest --port 8787
kubectl run shiny1 --image gcr.io/gcer-public/shiny-googleauthrdemo:latest --port 3838
kubectl run shiny5 --image=flaviobarros/shiny-wordcloud --port=80
</code></pre>
<p>They were then all exposed as node ports via:</p>
<pre><code>kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment rstudio --target-port=8787 --type=NodePort
kubectl expose deployment shiny1 --target-port=3838 --type=NodePort
kubectl expose deployment shiny5 --target-port=80 --type=NodePort
</code></pre>
<p>..that are all green in the UI.</p>
<p>I then deployed this Ingress backend</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: r-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /rstudio/
        backend:
          serviceName: rstudio
          servicePort: 8787
      - path: /shiny1/
        backend:
          serviceName: shiny1
          servicePort: 3838
      - path: /shiny5/
        backend:
          serviceName: shiny5
          servicePort: 80
</code></pre>
<p>The result is that the nginx routing works great, I can see "Welcome to nginx" webpage from home, but the three other paths I get:</p>
<ul>
<li>/rstudio/ - <code>Input/output error</code></li>
<li>/shiny1/ - Page not found (the Shiny 404 page)</li>
<li>/shiny5/ - Page not found (the Shiny 404 page)</li>
</ul>
<p>The RStudio and Shiny workloads both work when exposing via the single load balancer, mapped to 8787 and 3838 respectively.</p>
<p>Can anyone point to where I'm going wrong?</p>
<p>Qs:</p>
<ul>
<li>Do the Dockerfiles need to be adapted so they all give a 200 status on port 80 when requesting "/"? Do I need to change the health checker? I tried changing the RStudio login page (that 302 to /auth-sign-in if you are not logged in) but no luck</li>
<li>Both RStudio and Shiny need websockets - does this affect this?</li>
<li>Does session affinity need to be on? I tried adding that with IP but same errors. </li>
</ul>
| <p>As Radek suggested, <code>ingress.kubernetes.io/rewrite-target: /</code> is required to re-write your requests. However, this is not currently supported by the GKE ingress controller and is the reason that you're receiving 404 responses.</p>
<p>Instead, on GKE, you must use an <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="nofollow noreferrer">nginx ingress controller</a>.</p>
<p>You will then be able to configure ingress for your rstudio and shiny images that obeys the rewrite rule:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: r-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: rstudio
          servicePort: 8787
        path: /rstudio/*
      - backend:
          serviceName: shiny1
          servicePort: 3838
        path: /shiny1/*
      - backend:
          serviceName: shiny5
          servicePort: 80
        path: /shiny5/*
</code></pre>
|
<p>I am using Azure Container Services (AKS - not ACS) to stand up some API's - some of which are for public consumption, some of which are not.</p>
<p>For the public access route everything is as you might expect, a load-balancer service bound to a public IP is created, DNS zone contains our A record forwarding to the public IP, traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.</p>
<p>Currently the preview version assigns a new VNET to place the AKS resource group within, moving forwards I will place the AKS instance inside an already existing VNET which houses other components (App Services, on an App Service Environment).</p>
<p>My question is how to grant access to the private APIs to other components inside the same VNET, as well as components in other VNETS?</p>
<p>I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNETS? But what about where the components reside already inside the same VNET? </p>
<p>Thank you in advance!</p>
| <p>If you need to access these services from other services outside the AKS cluster, you still need an ILB to load balance across your service on the different nodes in your cluster. You can either use the ILB created by adding the annotation to your service, or alternatively use NodePort and then string up your own way to spread the traffic across all the nodes that host the endpoints.</p>
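<p>For reference, the annotation referred to is the Azure internal load balancer annotation on the Service (a minimal sketch; the service name and selector are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-api
</code></pre>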
<p>I would use an ILB instead of trying to make your own using NodePort service types. The only other option would perhaps be some type of API gateway VM inside your VNet where you can define the backend pool; that may be a solution if you are hosting APIs through a 3rd-party API gateway running on an Azure VM in the same VNet.</p>
<p>Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator</p>
|
<p>I followed the kubernetes-engine tutorial, using the local gcloud CLI in the terminal. It looks like everything is working, but I can't reach the exposed external IP <a href="http://104.197.4.162/" rel="nofollow noreferrer">http://104.197.4.162/</a> in my browser, as the tutorial says I should. Thank you!</p>
<p>$ kubectl get service</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-web LoadBalancer 10.11.245.151 104.197.4.162 80:30135/TCP 1m
</code></pre>
<p>$ kubectl get pods</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hello-web-7d4f9779bf-lw9st 1/1 Running 0 1m
</code></pre>
<p>$ kubectl describe svc hello-web</p>
<pre><code>Name: hello-web
Namespace: default
Labels: run=hello-web
Annotations: <none>
Selector: run=hello-web
Type: LoadBalancer
IP: 10.11.245.151
LoadBalancer Ingress: 104.197.4.162
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30135/TCP
Endpoints: 10.8.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>$ curl 104.197.4.162:80</p>
<pre><code>curl: (7) Failed to connect to 104.197.4.162 port 80: Connection refused
</code></pre>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></p>
| <p>I think you need to open the firewall and access your deployment on the Compute Engine instance via the instance's external IP address and port. You can use <code>curl ip:port</code> to check it.</p>
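<p>For example (a sketch; the rule name is illustrative, and 30135 is the NodePort from your service output):</p>
<pre><code>gcloud compute firewall-rules create allow-hello-web-nodeport --allow tcp:30135
curl http://NODE_EXTERNAL_IP:30135
</code></pre>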
|
<p>Is it possible to delete POD in kubernetes based on creation time or age? </p>
<p>Example: I would like to delete all PODs which are older than 1 day. These PODs are orphaned, therefore no new PODs will be created.</p>
| <p>This command will delete all PODs older than one day :</p>
<pre><code>kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}} {{.metadata.creationTimestamp}}{{"\n"}}{{end}}' | awk '$2 <= "'$(date -d 'yesterday' -Ins --utc | sed 's/+0000/Z/')'" { print $1 }' | xargs --no-run-if-empty kubectl delete pod
</code></pre>
<p>This command will delete all PODs older than 4 hours :</p>
<pre><code>kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}} {{.metadata.creationTimestamp}}{{"\n"}}{{end}}' | awk '$2 <= "'$(date -d'now-4 hours' -Ins --utc | sed 's/+0000/Z/')'" { print $1 }' | xargs --no-run-if-empty kubectl delete pod
</code></pre>
|
<p>I am trying to deploy the "cert-manager" (<a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager</a>) project which is the successor to "kube-lego". I'm finding that the certificates don't match what is being created, and I'm wondering if anybody else has tried this before.</p>
<p>I am creating a tls secretName with "monitoring-xxx-com", and in the ingress-nginx logs I find that it's trying to search for namespace/monitoring-xxx-com and not finding what it expects.</p>
<p>I am wondering whether this is because ingress-nginx is trying to use the pods namespace automatically and cert-manager is creating certs without a namespace, therefore that's why the cert can never be found.</p>
<pre><code>error obtaining PEM from secret kube-system/monitoring-xxx-com: error
retrieving secret kube-system/monitoring-xxx-com: secret
kube-system/monitoring-xxx-com was not found
</code></pre>
<p>and in the certificate created by "cert-manager":</p>
<pre><code>Issuer Ref:
  Kind:        ClusterIssuer
  Name:        letsencrypt-staging
Secret Name:   monitoring-xxx-com
</code></pre>
| <p>The secret and the nginx ingress controller are in a different namespace, there is an option where you can set the certificate from another namespace.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md</a></p>
<pre><code>--default-ssl-certificate string Name of the secret
that contains a SSL certificate to be used as default for a HTTPS catch-all server.
Takes the form <namespace>/<secret name>.
</code></pre>
<p>To find the namespace of your secret:</p>
<pre><code>kubectl describe secrets/monitoring-xxx-com
</code></pre>
<p>Using the default-ssl-certificate in the deployment template</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
        - "--default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate"
</code></pre>
|
<p>I am unable to get circuit breaking configuration to work on my elb through egress config.</p>
<p><strong>ELB</strong>
The ELB has a success rate of 25% (75% return a 500 error and 25% return status 200).
The ELB has 4 instances; only 1 returns a successful response, the other instances are configured to return a 500 error for testing purposes.</p>
<p><strong>Setup</strong></p>
<ul>
<li><p>k8s: v1.7.4</p></li>
<li><p>istio: 0.5.0</p></li>
<li>env: k8s on aws</li>
</ul>
<p><strong>Egress rule</strong></p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: elb-egress-rule
spec:
  destination:
    service: xxxx.us-east-1.elb.amazonaws.com
  ports:
  - port: 80
    protocol: http
</code></pre>
<p><strong>Destination Policy</strong></p>
<pre><code>kind: DestinationPolicy
metadata:
  name: elb-circuit-breaker
spec:
  destination:
    service: xxxx.us-east-1.elb.amazonaws.com
  loadBalancing:
    name: RANDOM
  circuitBreaker:
    simpleCb:
      maxConnections: 100
      httpMaxPendingRequests: 100
      sleepWindow: 3m
      httpDetectionInterval: 1s
      httpMaxEjectionPercent: 100
      httpConsecutiveErrors: 3
      httpMaxRequestsPerConnection: 10
</code></pre>
<p><strong>Route rules:</strong> not set</p>
<p><strong>Testing</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
</code></pre>
<p>.</p>
<pre><code>export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep bash
</code></pre>
<p>Sending requests in parallel from the pod</p>
<pre><code>#!/bin/sh
set -m # Enable Job Control
for i in `seq 100`; do # start 100 jobs in parallel
curl xxxx.us-east-1.elb.amazonaws.com &
done
</code></pre>
<p><strong>Response</strong>
<a href="https://i.stack.imgur.com/zQXmT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zQXmT.png" alt="enter image description here"></a></p>
| <p>Currently, Istio considers an <code>Egress Rule</code> to designate a single host. This single host will not be ejected due to the load balancer's panic threshold of Envoy (the sidecar proxy implementation of Istio). The default panic threshold of Envoy is 50%. This means that at least two hosts are required for one host to be ejected, so the single host of an <code>Egress Rule</code> will not be ejected.</p>
<p>This practically means that <code>httpConsecutiveErrors</code> does not affect the external services. This lack of functionality should be partially resolved with <code>External Services</code> of Istio that will replace the <code>Egress Rules</code>.</p>
<p>See documentation of the Istio <code>External Services</code> backed by multiple endpoints -<a href="https://github.com/istio/api/blob/master/routing/v1alpha2/external_service.proto#L113" rel="nofollow noreferrer">https://github.com/istio/api/blob/master/routing/v1alpha2/external_service.proto#L113</a></p>
|
<p>I deployed these two services to my cluster under namespace <code>prisma</code>: </p>
<pre><code>kubectl get services -n prisma
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
database ClusterIP 10.23.252.18 <none> 3306/TCP 3d
prisma NodePort 10.23.248.0 <none> 4466:31001/TCP 1d
</code></pre>
<p>But when I do cluster-info, it only shows the master and none of my services are showing up:</p>
<pre><code>kubectl cluster-info -n prisma
Kubernetes master is running at https://my-cluster-ip
</code></pre>
<p>Is this correct?</p>
<p>I expect to see something like:</p>
<pre><code>Kubernetes master is running at https://my-cluster-ip
Prisma is running at https://my-cluster-ip/api/v1/namespaces/prisma/services/prisma/proxy
...
</code></pre>
| <p>Yes, that's correct. kubectl cluster-info --help says:</p>
<pre><code>Display addresses of the master and services with label kubernetes.io/cluster-service=true
</code></pre>
<p>That label, under the kubernetes.io "namespace" (referring to the hierarchical syntax used in labels, not to namespace objects in clusters), has special meaning- it is used for services and other resources that are important parts of the cluster machinery, usually packaged and managed as addons. It is not to be used for ordinary tenant services. </p>
<p>If your services fall into the category of cluster infrastructure, you can add that label to them and their related resources. A good example of the use of this label is in the monitoring tool heapster-</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml</a></p>
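<p>If the prisma services really are cluster infrastructure and you want them listed, you could add the label yourself (shown as a sketch only; as noted above, this label is not meant for ordinary tenant services):</p>
<pre><code>kubectl label svc prisma -n prisma kubernetes.io/cluster-service=true
</code></pre>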
|
<p>I'm trying to install SAP Vora 2.0 on AWS Minikube but I'm getting this error: "Docker build failed." Have tried with version 2.1 of Vora but still encountering similar error. </p>
<p>This is the actual error log I'm getting:</p>
<pre><code>Downloading/unpacking kubernetes
Cleaning up...
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 290, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1266, in prepare_files
req_to_install.extras):
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2401, in requires
dm = self._dep_map
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2597, in _dep_map
self.__dep_map = self._compute_dependencies()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2621, in _compute_dependencies
parsed = next(parse_requirements(distvers))
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 49, in <lambda>
next = lambda o: o.next()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2721, in parse_requirements
"version spec")
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2697, in scan_list
raise ValueError(msg, line, "at", line[p:])
ValueError: ("Expected ',' or end-of-list in", 'websocket-client !=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0', 'at', '*,!=0.42.*,>=0.32.0')
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends python python-pip python-twisted krb5-user vim && pip install kubernetes urllib3==1.19.1' returned a non-zero code: 2
Docker build failed !
</code></pre>
| <p>Please follow the SAP note below to resolve the issue:
<a href="https://launchpad.support.sap.com/#/notes/2608651" rel="nofollow noreferrer">https://launchpad.support.sap.com/#/notes/2608651</a></p>
<p>Editing the post:</p>
<p>We need to edit 2 files to resolve this issue.</p>
<p>Locate the file below in the Vora installation package and edit it to pin the package versions:
{rootInstallDirectory}/images/vora-dqp/download_and_install_os_packages.sh</p>
<pre><code>if [ -f /usr/bin/apt-get ]; then
...
pip install kubernetes==4.0.0 protobuf==3.5.1 httplib2==0.10.3 oauth2client==4.1.2 PySocks==1.6.8
...
elif [ -f /usr/bin/yum ]; then
pip install kubernetes==4.0.0 protobuf==3.5.1 httplib2==0.10.3 oauth2client==4.1.2 PySocks==1.6.8
...
</code></pre>
<p>Modify the 2nd file: {rootInstallDirectory}/images/vora-security-operator/init-container/Dockerfile</p>
<pre><code>RUN apt-get update && \
...
pip install kubernetes==4.0.0 urllib3==1.19.1 PySocks==1.6.8
...
</code></pre>
<p>This issue will be fixed in next Vora patch.</p>
|
<p>I have been trying Docker Swarm and looking into other solutions, such as Kubernetes, but I just can't figure out what would be the best for my use-case, and I could use some help from experts, so your input is very welcome.</p>
<p>I have some requirements for the cloud I want to build, and it (obviously) should be done as cheap, simple and reliable as possible:</p>
<ul>
<li>Host stateless containers, such as web containers, with a production-quality loadbalancer with automatic HTTPS (Let's Encrypt)</li>
<li>Host stateful containers, such as MySQL, in both a non-clustered approach (so: I have only one replica (because I don't need scaling there), but if that server fails, it would be nice if that container moves to another host automatically, without data loss) and in a clustered approach (with Galera for instance).
<ul>
<li>These databases need fast storage, so preferably they would store their stuff locally at first, and when they move, the volume moves with them.</li>
</ul></li>
<li>Share volumes between hosts automatically with the fastest diskspeed possible for an open source solution.</li>
<li>Preferably stay with my current datacenter, where I host my VPSes (I have three of them, connected with 10GBit links) on SSD storage
<ul>
<li>Explanation: I want to stay with my current datacenter because I like their support, they have easy automatic (non-credit card, because I'm running a Dutch company) payment, they are affordable and they have great certifications (such as ISO 27001:2013, PCI DSS, ISO 9001:2008, NEN 7510 etc).</li>
<li>You may suggest moving to Azure, AWS or GCP, but I would rather not.</li>
<li>I don't mind building my own cluster, as long as it's doable, alone, at (relative) production quality.</li>
</ul></li>
</ul>
<p>Keep in mind that I'm not running a lot of services (only 10-20), but I do need production quality and high-availability.</p>
<p>Also, I prefer to run things that normally aren't run in containers, in the container ecosystem anyway to have more flexibility and having them be restarted when a host fails automatically.</p>
<hr>
<p><em>Things I have already considered</em>:</p>
<ul>
<li>Running Docker Swarm, with Traefik and Gluster: Traefik seems stable (most of the time), automatically get certificates, and Gluster can be used - easily - to share volumes. However: there is no supported production quality Gluster volume driver, and thus I have to use bind mounts, and I run into permission issues.</li>
<li>Running Kubernetes with some ingress controller (for instance cert-manager, but that's not for production, or Traefik, or Voyager?) and the Gluster system for storing volumes, but Kubernetes seems overcomplicated.</li>
<li>Going to AWS or GCP anyway, but their pricing is confusing and they pay by credit card.</li>
<li>Using something like Flynn, because I only have web applications anyway, and the Heroku style system seems fun, but they still have no HTTPS support in a stable release: <a href="https://github.com/flynn/flynn/issues/1995" rel="nofollow noreferrer">https://github.com/flynn/flynn/issues/1995</a></li>
</ul>
| <p>Honestly, from a pure ROI and operational load perspective, it doesn't sound like running a container platform at your current datacenter is the right solution for this problem.</p>
<p>From an operator perspective, a container system makes sense when there are tenancy and heterogeneity problems that are hard to solve with VMs/VPSes, and the plant is at minimum dozens of nodes in size. Running any container infrastructure in an HA manner is a lot of work, and there are a lot of corner cases that require dedicated, specialist attention. The need has to be large enough for it to make sense to make this investment.</p>
<p>The plant as described, with redundancy, can run on a handful/dozen VMs/VPSes. It needs some careful architecting to achieve desired levels of availability, but the patterns for managing databases and stateless apps on VMs for HA with, say, 3x scalability, are pretty well established. </p>
<p>There is still a lot of discovery happening in the container world. With Kubernetes especially, every quarter there is a whole new release with new corner cases to discover. </p>
<p>Of course, it's really fun to learn about it, but it's still at the state where it's marvelous to see it working, not boring. </p>
|
<h2>Objective</h2>
<p>Clarify the behaviour of K8S container cpu usage when limit is set far below available CPU, and confirm if the understanding how to set limit is correct.</p>
<h2>Background</h2>
<p>I have a node with 2 CPUs, hence 2000m is the upper limit. Every namespace is set with a LimitRange which limits CPU to 500m per container.</p>
<pre><code>kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
  - default:
      cpu: 500m
      memory: 2Gi
    defaultRequest:
      cpu: 100m
    type: Container
</code></pre>
<h2>Indication</h2>
<p>Even when 2 CPU are available (no other process/container waiting) and a container is runnable, it can only use 0.5 CPU, and 1.5 CPU will be left unused. Is this correct?</p>
<h2>How to set LimitRange</h2>
<p>I believe I can set the limit to something like 75-80% of the available 2 CPUs to better utilise the CPU resource, because in case there are multiple containers trying to claim more CPU than their requests, K8S will allocate the CPU among containers based on the request value of each container, as per the documentation (some of it from OpenShift, but I believe it is the same for K8S). Is this correct?</p>
<pre><code>kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
  - default:
      cpu: 1500m
      memory: 2Gi
    defaultRequest:
      cpu: 100m
    type: Container
</code></pre>
<p><a href="https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-cpu-requests" rel="noreferrer">CPU Requests</a></p>
<blockquote>
<p>The CPU request represents a minimum amount of CPU that your container may consume, but <strong>if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers</strong> on the system for how much CPU time the container may use.</p>
</blockquote>
<p><a href="https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-cpu-limits" rel="noreferrer">CPU Limits</a></p>
<blockquote>
<p>Each container in a pod can specify the amount of CPU it is limited to use on a node. <strong>CPU limits control the maximum amount of CPU that your container may use independent of contention on the node</strong>. If a container attempts to exceed the specified limit, the system will throttle the container. This allows the container to have a consistent level of service independent of the number of pods scheduled to the node.</p>
</blockquote>
<p><a href="https://stackoverflow.com/questions/42326645/kubernetes-understanding-cpu-resources-limits">kubernetes / understanding CPU resources limits</a></p>
<blockquote>
<p>The 6% of CPU means 6% (CPU requests) of the nodes CPU time is reserved for this pod. So it guaranteed that it always get at lease this amount of CPU time. <strong>It can still burst up to 12% (CPU limits), if there is still CPU time left</strong>.</p>
</blockquote>
<p><a href="https://stackoverflow.com/questions/37975366/how-are-kubernetes-cpu-limits-enforced">How are Kubernetes CPU limits enforced?</a></p>
<blockquote>
<p>Checking one of my containers, I see that Kubernetes set CpuPeriod, CpuQuota for them. The Docker docs (<a href="https://docs.docker.com/engine/reference/run/" rel="noreferrer">https://docs.docker.com/engine/reference/run/</a>) mention the CFS documentation (<a href="https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt" rel="noreferrer">https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt</a>), and from the information there, it seems that these are hard limits</p>
</blockquote>
| <h2><a href="https://rads.stackoverflow.com/amzn/click/com/1617293725" rel="nofollow noreferrer" rel="nofollow noreferrer">Kubernetes in Action 1st Edition by Marko Luksa</a><br></h2>
<p>14.1.3. Understanding how CPU requests affect CPU time sharing </p>
<blockquote>
<p>But <strong><em>if one container wants to use up as much CPU as it can</em></strong>, while the other one is sitting idle at a given moment, the first container will be allowed to use the whole CPU time (minus the small amount of time used by the second container, if any). <strong>After all, it makes sense to use all the available CPU if no one else is using it</strong>, right? As soon as the second container needs CPU time, it will get it and the first container will be throttled back.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/jNgNH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jNgNH.jpg" alt="enter image description here"></a></p>
<hr>
<blockquote>
<p>Even when 2 CPU are available (no other process/container waiting) and a container is runnable, it can only use 0.5 CPU, and 1.5 CPU will be left unused. Is this correct?</p>
</blockquote>
<p>So this is correct.</p>
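<p>For completeness, the same idea expressed per container instead of via a LimitRange (the image name and values are just examples): the request is the guaranteed share under contention, the limit is the hard cap even when the node is otherwise idle:</p>
<pre><code>    containers:
    - name: app
      image: my-app:latest
      resources:
        requests:
          cpu: 100m
        limits:
          cpu: 1500m
</code></pre>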
|
<p>My VPS provider doesn't offer a private network between VPSes, so the master and nodes are interconnected over the internet. Is this a safe enough practice? Or is it better to move to AWS?</p>
| <p>While the other answer states it is not safe I would strongly disagree on it.</p>
<p>1: It is perfectly fine to expose master on the public internet, as you would do with any other server. It is by design protected with authentication/cipher. Obviously a regular sec hardening should be in place, but that is a case for any internet facing system. Your masters will also run things like scheduler and controller-manager, all locally, so not really an issue.</p>
<p>2: The traffic between pods in usual kubernetes setup passes via an overlay network like ie. flannel, calico or weave. Speaking from experience, some of them, like in my case Weave Net, support traffic ciphering explicitly to make it safer for the overlay to communicate over public network.</p>
<p>3: The statement that <code>any pods that open ports are by default public</code> is fundamentally wrong. Each pod has its own network namespace, so even if it listens on 0.0.0.0 to capture any traffic, this happens only within that local namespace, so by no means is it exposed externally, until you configure a kubernetes Service of NodePort or LoadBalancer type to explicitly expose this service (and its backing pods' ports) to the internet. And you can control this even more by means of NetworkPolicies.</p>
<p>So yes, you can run kubernetes cluster over public network in a way that is safe.</p>
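<p>As an illustration of point 3, a default-deny NetworkPolicy for a namespace (a standard sketch, not specific to your setup), which blocks all incoming pod traffic unless another policy allows it:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</code></pre>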
|
<p>Trying to build k8s cluster on bare metal. I use CoreOS as host OS for my nodes. And I'm a bit confusing with the way I should install flannel for cluster networking.</p>
<p>I see from docs that I can either download it to my host machine and start it using <code>systemd</code> or use a <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">k8s DaemonSet manifest</a>.</p>
<p>Personally I like the idea of DaemonSet because k8s will take care of running flannel on each node. But are there any disadvantages in running flannel as k8s-DaemonSet?</p>
<p>Documentation also says that I should run flannel before any other pods. That's why my first idea was to put it onto the path specified for kubelet <code>--pod-manifest</code> parameter. But since the yaml for flannel contains not <code>Pod</code> but <code>DaemonSet</code> and some RBAC-related entities I get the following error using <code>--pod-manifest</code> dir for deploying flannel:</p>
<blockquote>
<p>Can't process manifest file "/etc/kubernetes/manifests/flannel.yaml": /etc/kubernetes/manifests/flannel.yaml: couldn't parse as pod(invalid pod: &rbac.ClusterRole{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"flannel", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Rules:[]rbac.PolicyRule{rbac.PolicyRule{Verbs:[]string{"get"}, APIGroups:[]string{""}, Resources:[]string{"pods"}, ResourceNames:[]string(nil), NonResourceURLs:[]string(nil)}, rbac.PolicyRule{Verbs:[]string{"list", "watch"}, APIGroups:[]string{""}, Resources:[]string{"nodes"}, ResourceNames:[]string(nil), NonResourceURLs:[]string(nil)}, rbac.PolicyRule{Verbs:[]string{"patch"}, APIGroups:[]string{""}, Resources:[]string{"nodes/status"}, ResourceNames:[]string(nil), NonResourceURLs:[]string(nil)}}, AggregationRule:(*rbac.AggregationRule)(nil)}), please check manifest file.</p>
</blockquote>
<p>So if I want to execute my flannel yaml manifest before other pods on a particular node, where should I put it? Or is the only option to execute <code>kubectl apply -f flannel.yaml</code>?</p>
| <p>That manifest includes a ClusterRole, a ClusterRoleBinding, a ConfigMap, and the DaemonSet resource (which implicitly defines the pod template), so it can't be run with --pod-manifest-path, which only accepts pod resource definitions. </p>
<p>I don't have comparative experience running flannel under systemd vs as a daemonset. </p>
<p>I would look at the AddOn manager as a way to ensure kubernetes objects are considered cluster services and are started early in cluster lifecycle:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md</a></p>
|
<p>To install kubernetes using flannel, one initially needs to run:</p>
<pre><code>kubeadm init --pod-network-cidr 10.244.0.0/16
</code></pre>
<p>Questions are:</p>
<ul>
<li>What is the purpose of "pod-network-cidr"?</li>
<li>What's the meaning of such IP "10.244.0.0/16"?</li>
<li>How flannel uses this afterwards?</li>
</ul>
| <p>pod-network-cidr is the virtual network that pods will use. That is, any created pod will get an IP inside that range. </p>
<p>The reason for setting this parameter when using flannel is the following: <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="noreferrer">https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml</a></p>
<p>Let us take a look at the configuration:</p>
<pre><code> net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
</code></pre>
<p>The kube-flannel yml file has 10.244.0.0/16 hardcoded as the network value. If you wanted to use another network (for example, the default that kubeadm uses), you would have to modify the yml to match that network. In this sense, it is easier to simply start kubeadm with 10.244.0.0/16 so the yml works out of the box.</p>
<p>With that configuration, flannel will configure the overlay in the different nodes accordingly. More details here: <a href="https://blog.laputa.io/kubernetes-flannel-networking-6a1cb1f8ec7c" rel="noreferrer">https://blog.laputa.io/kubernetes-flannel-networking-6a1cb1f8ec7c</a></p>
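<p>As an illustration only (the alternative CIDR and the edited file name below are arbitrary examples, not something from the original answer), the point is simply that the kubeadm flag and the flannel "Network" value must agree:</p>
<pre><code># keep kubeadm and flannel on the same range (works out of the box)
kubeadm init --pod-network-cidr 10.244.0.0/16
kubectl apply -f kube-flannel.yml

# or pick another range: change BOTH the flag and the "Network" value
# (e.g. "Network": "10.200.0.0/16") in your local copy of kube-flannel.yml
# before applying the edited copy (file name here is hypothetical)
kubeadm init --pod-network-cidr 10.200.0.0/16
kubectl apply -f kube-flannel-custom.yml
</code></pre>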
|
<p>I'm trying to wrap my head around how/if Kubernetes manages multiple Pods in terms of a clustered client model. Based on this documentation <a href="https://kubernetes-v1-4.github.io/docs/user-guide/pods/multi-container/" rel="nofollow noreferrer">Multi-container</a> it sounds as though Kubernetes is only concerned with the health of a pod and the containers within it. This means that a single Kubernetes instance could manage multiple client's pods, which contain containers running that client's applications, microservices etc. <a href="https://i.stack.imgur.com/Owza4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Owza4.jpg" alt="enter image description here"></a></p>
<p>Is this correct?</p>
<p>Please see my diagram for a clearer idea of what I'm asking.</p>
| <p>The diagram has the right idea, but not quite the right terminology. </p>
<p>The diagram would be more accurate if the "Pod" label was replaced with "Namespace", and the "Container" label was replaced with "Pod". </p>
<p>A single Kubernetes cluster is intended to be able to support multi-tenancy, where the workloads of individual clients can run with proper security, resource allocation, isolation, and other important tenancy management attributes. </p>
<p>The unit of tenancy, however, is the namespace (a logical layer of abstraction into which workloads, usually for an individual client, are deployed), not the pod; and the unit of replication for workload processing is the pod (comprising one or more containers), not an individual container.</p>
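<p>A minimal sketch of that model (the namespace and file names are hypothetical): create one namespace per client and deploy that client's workloads into it.</p>
<pre><code>kubectl create namespace client-a
kubectl create namespace client-b

# deploy the same application manifests separately into each tenant's namespace
kubectl apply -n client-a -f app.yaml
kubectl apply -n client-b -f app.yaml
</code></pre>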
|
<p>I have 3 nodes, running all kinds of pods. I would like to have a list of nodes and pods, for an example:</p>
<pre><code>NODE1 POD1
NODE1 POD2
NODE2 POD3
NODE3 POD4
</code></pre>
<p>How can this please be achieved?</p>
<p>Thanks.</p>
| <p>You can do that with <a href="https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns" rel="noreferrer">custom columns</a>:</p>
<pre><code>kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces
</code></pre>
<p>or just:</p>
<pre><code>kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces
</code></pre>
|
<p>I'd like to update a value config for a helm release on my cluster.</p>
<p>Something like </p>
<p><code>helm update -f new_values.yml nginx-controller</code></p>
| <pre><code>helm upgrade -f ingress-controller/values.yml nginx-ingress stable/nginx-ingress
</code></pre>
<p>Or more generally:</p>
<pre><code>helm upgrade -f new-values.yml {release name} {package name or path} --version {fixed-version}
</code></pre>
<p>The command above does the job. </p>
<p>Unless you manually specify the version with the <code>--version {fixed-version}</code> argument, <code>upgrade</code> will also update the chart version. You can find the current chart version with <code>helm ls</code>.</p>
<p>Docs: <a href="https://helm.sh/docs/helm/#helm-upgrade" rel="noreferrer">https://helm.sh/docs/helm/#helm-upgrade</a></p>
|
<p>Is there a way to retrieve the oms workspace ID and Key in Azure via the az cli or azure powershell?
I am deploying k8s clusters (in Azure) and want to deploy the OMS container agent automatically via helm. I need a workspace ID and key for that, and I don't want to create the workspace by hand and manually put the ID and key in my release job :).</p>
| <p>You could use Azure PowerShell to do this, for example:</p>
<pre><code>$rgname = "shuioms"
$omsname = "shuioms"
##get workspaceid
$oms=Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName shuioms -Name shuioms
$workspaceID = $oms.CustomerId
#get oms key
$key=Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName shuioms -Name shuioms
</code></pre>
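<p>If you would rather use the az CLI, recent versions expose the same data under the <code>az monitor log-analytics workspace</code> command group; treat the exact command names below as an assumption to verify against <code>az --help</code>, since availability depends on your CLI version:</p>
<pre><code># workspace ID (customerId); verify the command group exists in your az version
az monitor log-analytics workspace show \
  --resource-group shuioms --workspace-name shuioms --query customerId -o tsv

# primary shared key
az monitor log-analytics workspace get-shared-keys \
  --resource-group shuioms --workspace-name shuioms --query primarySharedKey -o tsv
</code></pre>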
|
<p>So I am fairly new to Kubernetes. I am a Windows user (sorry) and have installed Minikube. I am trying to learn Kubenetes using MiniKube. I have created very simple REST API that should work with port 5000 exposed where there is a simple route /Hello/{somestring} </p>
<p>I have created a POD/Deployment and Service for this successfully in MiniKube like this</p>
<pre><code>minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr
kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp:v1 --port=5000
kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service
kubectl get services simple-sswebapi-service
</code></pre>
<p>Which I can then grab the url from and paste into my browser like so</p>
<pre><code>minikube service simple-sswebapi-service --url
</code></pre>
<p>Which gives me this URL</p>
<p><strong><a href="http://192.168.0.29:32246" rel="nofollow noreferrer">http://192.168.0.29:32246</a></strong></p>
<p>Which I then try in the browser on my host, all is good my REST API is running as expected</p>
<p><a href="https://i.stack.imgur.com/J0hKp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J0hKp.png" alt="enter image description here"></a></p>
<p>But from what I have read, I believe I should be able to ALSO use a DNS name for the service rather than this url returned above.</p>
<p>In fact I am not sure what this IP address returned as part of the --url command is trying to tell me above. It is not one of the ones listed for the service endpoints, nor is it the POD's, from what I can tell from the Dashboard.</p>
<p>This is the service</p>
<p><a href="https://i.stack.imgur.com/2Gqjn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Gqjn.png" alt="enter image description here"></a></p>
<p>This is the POD</p>
<p><a href="https://i.stack.imgur.com/i6ZqV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i6ZqV.png" alt="enter image description here"></a></p>
<p>Shouldn't there be a DNS name available for the service that I should be able to use instead of this fairly hacky way of grabbing the url from the service I just created. Someone please let me know what this --url even represents. I am lost here</p>
<p>I have checked that the DNS add on is enabled in MiniKube it is, see <strong>kube-dns</strong> in list below</p>
<p><a href="https://i.stack.imgur.com/5DAeg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5DAeg.png" alt="enter image description here"></a></p>
<p>As I say this is also what I see for the service inside of the MiniKube Dashboard</p>
<p><a href="https://i.stack.imgur.com/oh44J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oh44J.png" alt="enter image description here"></a></p>
<p>This confused me even more as I cant seem to tie any of that back to the ONLY IP address that seems to actually work for me, which is the one I grabbed using this line from the service</p>
<pre><code>.\minikube.exe service simple-sswebapi-service --url
</code></pre>
<p>This Ip Address is not shown in the dashboard at all.</p>
<p>I thought the service should be available at DNS name something like:</p>
<p><strong>simple-sswebapi-service.default.svc.cluster.local</strong></p>
<p>Which is the </p>
<ul>
<li>The name of the service</li>
<li>The namespace</li>
<li>svc to tell it's a service</li>
</ul>
<p>Just for completeness this is me describing the service in command line</p>
<p><a href="https://i.stack.imgur.com/pUIio.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pUIio.png" alt="enter image description here"></a></p>
<p>What am I missing?</p>
<p>Is my mental model wrong? I should be able to see this service using a DNS name on the host too? Or is the DNS name ONLY available inside the PODS?</p>
| <p>kube-dns is internal DNS. You can only use the DNS name for a service from inside the cluster. </p>
<p>Since your service type is NodePort, you can connect to the service from outside the cluster using the IP of the minikube machine and that node port.</p>
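<p>A quick way to check the DNS name from inside the cluster is to start a throwaway pod and call the service by name; this is only a sketch, and the service port (5000 here, inherited from the exposed deployment) is an assumption to adjust:</p>
<pre><code>kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- \
  wget -qO- http://simple-sswebapi-service.default.svc.cluster.local:5000/Hello/world
</code></pre>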
|
<p>I've followed <a href="https://kubernetes.io/docs/getting-started-guides/kops/" rel="nofollow noreferrer">these instructions for setting up a kubernetes cluster on AWS using kops.</a> </p>
<p>I've then been able to run <code>kubectl create -f ...</code> commands to get an application running. </p>
<p>I can access (what I presume is) the API at <code>https://api.useast1.dev.example.com/</code>, in my browser. </p>
<p>This prompts for authentication, the credentials of which I get by running </p>
<pre><code>kubectl config view --minify
</code></pre>
<p>as per <a href="https://github.com/kubernetes/kops/blob/master/docs/addons.md" rel="nofollow noreferrer">these instructions from the kops github</a>. </p>
<p>The API then shows: </p>
<pre><code>{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
</code></pre>
<p>Now I'm trying to setup GitLab CI, which requests an API endpoint and a Service Token. </p>
<p>I created a service token using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">these instructions from kubernetes</a>, though I haven't done the imagePullSecret part. </p>
<p>However, when I try to do anything on the kubernetes cluster (install Helm Tiller, for example), it gives me: <code>Kubernetes error: Unauthorized</code>. I assume this is because I haven't given it any credentials to access that API. </p>
<p>How do I set up kubernetes cluster such that Gitlab doesn't require those credentials to access the API? </p>
| <p>Ok, there was a bit of confusion about what the token is. </p>
<p>Best way to retrieve the token is: </p>
<pre><code>kubectl get secrets
</code></pre>
<p>To list the secrets</p>
<p>Then</p>
<pre><code>kubectl describe secret SECRET_NAME
</code></pre>
<p>To retrieve the token. </p>
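<p>Alternatively, a one-liner sketch that prints the decoded token directly (replace SECRET_NAME with the name found in the previous step):</p>
<pre><code>kubectl get secret SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode
</code></pre>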
|
<p>Running on Google Cloud platform / Container Engine - How do I set it up to point to this Ingress in the following?</p>
<p>I have installed Nginx-ingress on Kubernetes with Helm and it works for the <code>default backend - 404</code>.</p>
<p>I want to be able to use different HTTP URI paths, like <code><domain.com>/v1</code>, <code><domain.com>/v2</code> and others.</p>
<p>For my own Chart that I want to use Ingress I have the following in <code>values.yaml</code>:</p>
<pre><code># Default values for app-go.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: gcr.io/<project name>/app-go
tag: latest
pullPolicy: IfNotPresent
service:
type: ClusterIP
port:
# kubernetes.io/tls-acme: "true",
ingress:
enabled: true
annotations: {
kubernetes.io/ingress.class: "nginx",
kubernetes.io/ingress.global-static-ip-name: "kube-ingress"
}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- <domain.com>
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>How do I specify annotations for Nginx-ingress for different paths?</p>
<pre><code>helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"}
</code></pre>
| <p>I went ahead and reproduced your use case. <br>
Assuming the installation of the nginx ingress controller through helm went smoothly and everything looks fine when listing resources, you need to specify the paths in the ingress yaml file, as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: test.demo.com
http:
paths:
- path: /path1
backend:
serviceName: s1
servicePort: 8080
- path: /path2
backend:
serviceName: s1
servicePort: 8080
- path: /path3
backend:
serviceName: s2
servicePort: 80
- host: demo.test.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80
</code></pre>
<p>Then, <code>curl -I -H 'Host: test.demo.com'</code> <a href="http://external-lb-ip/path1" rel="nofollow noreferrer">http://external-lb-ip/path1</a>, for example, should return 200.</p>
|
<p>New to Kubernetes, but I want to quickly run some Docker containers on different machines, e.g., containers 1, 2 and 3 on node 1 (physical machine 1) and containers 4, 5 and 6 on node 2 (physical machine 2). Can someone help me with the config files and commands to get it up and running, so that all containers can communicate with each other?</p>
<p>I found the example in <a href="https://gettech1.wordpress.com/2016/10/03/kubernetes-forcefully-run-pod-on-specific-node/" rel="nofollow noreferrer">https://gettech1.wordpress.com/2016/10/03/kubernetes-forcefully-run-pod-on-specific-node/</a> close to what I want, but there is only one pod. How do I do it with two pods (assuming that I can add more containers in each pod) and run the two pods together in one deployment (so that containers are within the same network, therefore, can communicate with each other)?</p>
<p>I also want to run a Docker container with a bind mount with "shared" bind-propagation, how can I specify it?</p>
<p>Personally, I found the Kubernetes documentation a little hard to navigate, with layers of concepts referencing each other. Anyone who can point to a clean tutorial would be a help too. I'd like to learn how to run containers on multiple machines, then how to autoscale by adding more containers in a pod, adding more pods on a node and adding more nodes in a cluster, and then the different types of networking and volume management.</p>
| <p>The simple way to assign <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer"><code>Pods</code> to <code>Nodes</code></a> is to use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">label selectors</a>. </p>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer"><code>Labels</code> and <code>Selectors</code></a> are a concept you will need to understand throughout Kubernetes. </p>
<p>First add labels to the nodes:</p>
<pre><code>kubectl label nodes node-a podwants=somefeatureon-nodea
kubectl label nodes node-b podwants=somefeatureon-nodeb
</code></pre>
<p>A <code>nodeSelector</code> can then be set in the Pod definition's <code>spec</code>. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: my-app
spec:
nodeSelector:
podwants: somefeatureon-nodea
  containers:
- name: nginx
image: nginx:1.8
ports:
- containerPort: 80
</code></pre>
<p>As the containers of a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer"><code>Pod</code></a> are always co-located in Kubernetes and can all access each other, <code>Pod</code> to <code>Pod</code> communication is done by exposing the <code>Pod</code> as a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer"><code>Service</code></a>. Note the <code>Service</code> also uses a label selector to find its <code>Pods</code>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: web-svc
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Then you can <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">discover the available <code>Services</code></a> in other <code>Pods</code> via environment variables or via DNS if you have <a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/" rel="nofollow noreferrer">added CoreDNS</a> to your cluster. </p>
<pre><code> WEB_SVC_SERVICE_HOST=x.x.x.x
WEB_SVC_SERVICE_PORT=80
</code></pre>
<p>You won't often define and schedule <code>Pods</code> themselves. You will probably use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> that describes your <code>Pods</code> and will help you scale them. </p>
<p>Once you've got the simple case down, the documentation follows on to describe <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Node affinity</a>, which allows you to define more complex rule sets, even down to the level of making scheduling decisions based on what <code>Pods</code> are currently scheduled on the <code>Node</code>.</p>
|
<p>I have a deployment object with the following <code>rake jobs:work</code> command:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: staging-delayed-job-deployment
namespace: staging
spec:
template:
metadata:
labels:
env: staging
name: delayed-job
spec:
containers:
- name: job
image: gcr.io/ej-gc-dev/redacted:<%= ENV['IMAGE_TAG'] %>
command: ["/bin/bash", "-l", "-c"]
args: ["RAILS_ENV=production bundle exec rake jobs:work"]
</code></pre>
<p>When I run <code>kubectl logs</code> I get nothing. How do I get the <code>rake jobs:work</code> output to show in <code>kubectl logs</code> ? i.e if run directly in the pod it gives output like this: </p>
<pre><code>[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Starting job worker
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=67) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=67) COMPLETED after 0.4903
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=68) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=68) COMPLETED after 0.9115
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=69) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=69) COMPLETED after 0.1752
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=70) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=70) COMPLETED after 0.4770
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] 4 jobs processed at 1.7649 j/s, 0 failed
</code></pre>
<p>I want that to show up when I run <code>kubectl logs</code> </p>
| <p>I needed to add an initializer: <code>config/initializers/delayed_job.rb</code> and set the worker's logger to stdout like so: </p>
<p><code>Delayed::Worker.logger = Logger.new(STDOUT)</code></p>
|
<p>I already have some environments set up with Docker and Docker Compose, so I have different containers with different services, all of them orchestrated by docker-compose, and it works OK in my local dev environment. Recently I've read about Kubernetes and Minikube, but apparently Kubernetes is more of a tool for managing servers/clusters of containers.</p>
<p>The question is, for a local dev environment (php, mysql and apache for example), what container orchestration mechanism should I use?</p>
| <p>There are lots of decision criteria. If there is a target environment for this application, use local tools that best approximate that target environment. If the target environment runs kubernetes, best to use minikube. If the target environment is some flavor of clustered docker, use compose. If there is no target environment, use whatever you're most comfortable with or want to learn. </p>
|
<p>I have successfully created a Kubernetes POD/Service using MiniKube on windows. But I would now like to ensure that DNS is working correctly.</p>
<p>The DNS service is shown as running</p>
<pre><code>.\kubectl get pod -n kube-system
</code></pre>
<p>Which shows me the kube-dns pod is running</p>
<p><a href="https://i.stack.imgur.com/bqce1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bqce1.png" alt="enter image description here"></a></p>
<p>I also have the DNS add on shown as running</p>
<p><a href="https://i.stack.imgur.com/lIIIE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lIIIE.png" alt="enter image description here"></a></p>
<p>So I then want to verify that DNS is working, Ideally I want to test that PODs that have a service on top of them can lookup the service by DNS name.</p>
<p>But I started simple like this, where I get my running POD</p>
<p><a href="https://i.stack.imgur.com/lwW0V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lwW0V.png" alt="enter image description here"></a></p>
<p>So now that I have my POD name, I want to try do simple DNS lookup in it using the following commmand</p>
<pre><code>.\kubectl exec simple-sswebapi-pod-v1-f7f8764b9-xs822 -- nslookup google.com
</code></pre>
<p>Where I am using the <strong>kubectl exec</strong> to try and run this nslookup in the POD that was found (running I should point out above).</p>
<p>But I get this error</p>
<p><a href="https://i.stack.imgur.com/Pvazu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pvazu.png" alt="enter image description here"></a></p>
<p>Why would it not be able to find <strong>nslookup</strong> inside POD. All the key things seem to be ok</p>
<ul>
<li>Kube-DNS pod is running (as shown above)</li>
<li>DNS AddOn is installed and running (as shown above)</li>
</ul>
<p>What am I missing, is there something else I need to enable for DNS lookups to work inside my PODs?</p>
| <p>To do it like this, the container image needs to include the command you want to run; your application image apparently does not ship nslookup.</p>
<p>Sidenote: <code>kubectl debug</code> is coming to kube in the near future (<a href="https://github.com/kubernetes/kubernetes/issues/45922" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/45922</a>), which will help solve things like this by enabling you to attach a custom container to an existing pod and debug in it.</p>
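<p>Until then, a common workaround is to run a throwaway pod from an image that does ship nslookup and test DNS from there; a minimal sketch (busybox:1.28 is just an example image known to include a working nslookup):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup simple-sswebapi-service.default.svc.cluster.local
</code></pre>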
|
<p>This is my deployment for the django app with rest framework:</p>
<pre><code>#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
    service: my-api-service
name: my-api-deployment
spec:
replicas: 1
template:
metadata:
labels:
name: my-api-selector
spec:
containers:
-
name: nginx
image: nginx
command: [nginx, -g,'daemon off;']
imagePullPolicy: IfNotPresent
volumeMounts:
-
name: shared-disk
mountPath: /static
readOnly: true
-
name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
ports:
-
name: nginx
containerPort: 80
-
env:
-
name: STATIC_ROOT
value: /src/static/
-
name: MEDIA_ROOT
value: /src/media/
-
name: CLIENT_ORIGIN
value: https://marketforce.platinumcredit.co.ke
-
name: DJANGO_SETTINGS_MODULE
value: config.production
-
name: DEBUG
value: "true"
image: localhost:5000/workforce-api:0.2.0
command: [ "./entrypoint.sh" ]
name: my-api-container
imagePullPolicy: IfNotPresent
ports:
-
name: my-api-port
containerPort: 9000
protocol: TCP
volumeMounts:
-
name: shared-disk
mountPath: /src/static
initContainers:
-
name: assets
image: localhost:5000/workforce-api:0.2.0
command: [bash, -c]
args: ["python manage.py collectstatic --noinput"]
command: [bash, -c]
args: ["sleep 10"]
command: [bash, -c]
args: ["cp -r static/* /data"]
imagePullPolicy: IfNotPresent
volumeMounts:
-
mountPath: /data
name: shared-disk
volumes:
-
name: shared-disk
emptyDir: {}
-
name: nginx-config
configMap:
name: nginx-config
</code></pre>
<p>My service:</p>
<pre><code># Service
apiVersion: v1
kind: Service
metadata:
name: my-api-service
labels:
label: my-api-service
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
name: my-api-selector
</code></pre>
<p>And here's my nginx configuration: </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
upstream api {
server 127.0.0.1:9000 fail_timeout=0;
}
server {
access_log /var/log/nginx/api.my.log;
error_log /var/log/nginx/api.my.mesozi.com-http-error.log;
listen 80;
server_name localhost;
location /static/ {
autoindex on;
alias /static/;
}
location /media/ {
alias /src/media/;
}
location = /favicon.ico {
access_log off;
log_not_found off;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:9000/;
}
}
}
</code></pre>
<p>As you can see in my deployment file I am running both nginx and my app in the same pod. Everything's running well in minikube except it's like the styles are not being applied. Looking at <code>network</code> in the browser all static file requests are OK <code>200</code>. What am I missing?</p>
<p>I guess displaying styles for my API is not that useful, but I just want to get it to work because it should.</p>
| <p>Looking at your static files section:</p>
<pre><code>location /static/ {
autoindex on;
alias /static/;
}
</code></pre>
<p>Is <code>/static/</code> really the absolute path to your static web files? If not, use the full path in the alias, since root has not been defined earlier.</p>
<p>Also, why is the autoindex flag turned on for this kind of files? It should be set to off, like:</p>
<pre><code>autoindex off;
</code></pre>
<p>Alternatively you could try using the root definition for static files:</p>
<pre><code>location /static/ {
root /var/;
autoindex off;
}
</code></pre>
<p>Then under <code>/var</code> have a static folder <code>/var/static</code>. The location part is appended to the path specified in the root.</p>
|
<p>We are using managed Kubernetes in Azure (AKS) and have run out of public IP addresses. We only need one, but AKS creates a new public IP every time we deploy a service and it does not delete it when the service is deleted. For example: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: somename
spec:
ports:
- port: 443
targetPort: 443
selector:
app: somename
# Also tried this to reuse public IP in AKS MC resource group
# https://learn.microsoft.com/en-my/azure/aks/static-ip
# loadBalancerIP: x.x.x.x
type: LoadBalancer
</code></pre>
<p>Every time this is deployed (<code>kubectl create -f svc.yml</code>) a new public IP is created. When it is deleted (<code>kubectl delete -f svc.yml</code>) the IP remains. Trying to reuse one of the existing IPs with loadBalancerIP as in the comments above fails: "Public ip address ... is referenced by multiple ipconfigs in resource ...".</p>
<p>We have created a service request but it takes ages, so I'm hoping this will be faster. I don't dare to just delete the public IPs in the AKS managed resource as that may cause issues down the line.</p>
<p>Is there a way to safely release or reuse the public IPs? We are using Kubernetes version 1.7.12. We have deleted the deployment referenced by the service as well, it makes no difference.</p>
| <p>AKS should delete the IPs after some time (five minutes tops), so the issue you are having looks like a bug. You can check the k8s events to find the underlying error.</p>
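<p>For example, the events attached to the Service (and the recent cluster events in general) usually contain the Azure API error behind the failure; a quick sketch:</p>
<pre><code>kubectl describe service somename
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>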
<p>Also, it's perfectly safe to delete the Azure resources; k8s won't freak out if they are gone.</p>
<p>Tested with k8s 1.9.1</p>
|
<p>I have multiple MSA on k8s on GKE. Each is on separate subdomain like:</p>
<ul>
<li>msa1.example.com</li>
<li>msa2.example.com</li>
</ul>
<p>I have it in single ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: main-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: lalala-ip-1
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- hosts:
- msa1.example.com
secretName: msa1-tls
backend:
serviceName: sink
servicePort: 80
rules:
- host: msa1.example.com
http:
paths:
- path: /.well-known/*
backend:
serviceName: letsencrypt
servicePort: 80
- path: /*
backend:
serviceName: lalala
servicePort: 80
- host: msa2.example.com
http:
paths:
- path: /*
backend:
serviceName: lalala2
servicePort: 80
</code></pre>
<p>... and all is nice. </p>
<p>The thing is, I want to have each MSA in a separate file. </p>
<p>The problem is this <code>kubernetes.io/ingress.global-static-ip-name: lalala-ip-1</code> line. If I have it in two ingresses, only the first one started is bound to the IP, but the other ones are not. </p>
<p>Is there a way to share an IP between two ingresses on the GKE ingress controller? </p>
| <p>A way around it could be to run your own nginx-ingress controller in your cluster and expose it via a LoadBalancer-type service. Then you would have one IP for your ingress traffic and could serve all ingresses via the nginx controller by adding the annotation <code>kubernetes.io/ingress.class: "nginx"</code> to each of them.</p>
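<p>A minimal sketch of what each per-MSA ingress file could then look like (only the first host is shown; the second file would be identical apart from the host and backend service):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: msa1-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: msa1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: lalala
          servicePort: 80
</code></pre>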
<p>Reference: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/</a></p>
|
<p>I have a Kubernetes node with a small root disk and a large disk plumbed into /var/lib/docker</p>
<p>Kubernetes does GC on disk images when the used space reaches X%. Whats it looking at though? I can't find it in the docs.</p>
<p>Just to clarify, if you've got / with 20GB of space and /var/lib/docker on a second disk of 100GB, and K8S looks at /, the % free is potentially much less than on /var/lib/docker; but / doesn't change much, whereas the mapped-in drive does.</p>
| <p>Configuration info for kubelet cleanup of unused images and containers is here:</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/</a></p>
<p>Default for X% in the question is 90%; default cleanup target is 80%.</p>
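<p>The thresholds themselves are kubelet flags; as a sketch, with the values mentioned above:</p>
<pre><code>kubelet ... --image-gc-high-threshold=90 --image-gc-low-threshold=80
</code></pre>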
|
<p>In <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">this</a> documentation of Kubernetes is says:</p>
<pre><code>To enable RBAC, start the apiserver with --authorization-mode=RBAC
</code></pre>
<p>How do you upgrade an existing cluster and/or how to see if RBAC is enabled?</p>
<p>I have created my cluster on Google k8 clusters and only have kubectl.</p>
<p>I have seen <a href="https://stackoverflow.com/questions/46552593/enabling-rbac-on-kubernetes-on-azure">this</a> but it kind of did not help.</p>
| <p>You could SSH to the master node(s) and edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. </p>
<p>You should see something like the following in the file:</p>
<pre><code>command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:2379"
- "--etcd-quorum-read=true"
- "--advertise-address=10.240.255.15"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--storage-backend=etcd2"
- "--v=4"
</code></pre>
<p>Just add <code>--authorization-mode=RBAC</code> and reboot the node and it should work.</p>
<p>Something like </p>
<pre><code>command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:2379"
- "--etcd-quorum-read=true"
- "--advertise-address=10.240.255.15"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--storage-backend=etcd2"
- "--v=4"
- "--authorization-mode=RBAC"
</code></pre>
<p>Note that this is surely not the official way, but it did work for me running Kubernetes 1.8.</p>
|
<p>I'm using k8s on GCP. It is required to set up an ingress for the TLS connection, so I set an ingress for my application and it works!<br>
BTW, what is an Ingress controller, such as the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Nginx Ingress Controller</a>? </p>
<p>I understand that the ingress creates a new load balancer and serves my application services over HTTP, but what is the point of setting up nginx controllers, where you can use annotations to apply small configuration tweaks to nginx?</p>
| <p>You may have some misunderstandings about Ingress.
An Ingress is a set of rules for traffic, indicating the destination a request will be routed to inside the cluster.
An Ingress controller is the implementation of those Ingress rules. GCE and Nginx controllers are both supported by k8s; they take care of the L4/L7 proxying.</p>
|
<p>Say I am running my app in GKE, and this is a multi-tenant application.</p>
<p>I create multiple Pods that hosts my application.</p>
<p>Now I want:
Customers 1-1000 to use Pod1
Customers 1001-2000 to use Pod2
etc.</p>
<p>If I have a gcloud global IP that points to my cluster, is it possible to route a request based on the incoming ipaddress/domain to the correct Pod that contains the customers data?</p>
| <p>You can guarantee session affinity with services, but not as you are describing. So, your customers 1-1000 won't use pod-1; they will use all the pods (as a service does simple load balancing), but each customer, when they come back to hit your service, will be redirected to the same pod.</p>
<p>Note: always within the time specified in the following field (default 10800 seconds):</p>
<pre><code>service.spec.sessionAffinityConfig.clientIP.timeoutSeconds
</code></pre>
<p>This would be the yaml file of the service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
sessionAffinity: ClientIP
</code></pre>
<p>If you want to specify time, as well, this is what needs to be added:</p>
<pre><code> sessionAffinityConfig:
clientIP:
timeoutSeconds: 10
</code></pre>
<hr>
<p>Note that the example above would work hitting a ClusterIP-type service directly (which is quite uncommon) or with a LoadBalancer-type service, but won't work with an Ingress backed by a NodePort-type service. This is because with an Ingress, the requests come from many, randomly chosen source IP addresses.</p>
|
<p>I successfully deployed my web app on kubernetes in Google Cloud. It is serving via HTTP. I followed all the guides on how to add an SSL certificate and it was added according to the Google Cloud console; however, it only works over HTTP. When you try to access the web app over HTTPS, the browser says "This site can’t be reached".</p>
<p>my ingress YAML looks like this</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: no-rules-map
spec:
tls:
- secretName: testsecret
backend:
serviceName: s1
servicePort: 80
</code></pre>
<p>for Secret</p>
<pre><code>apiVersion: v1
data:
tls.crt: [crt]
tls.key: [key]
kind: Secret
metadata:
name: testsecret
namespace: default
type: Opaque
</code></pre>
| <p>I used this command to upload my SSL certificate </p>
<pre><code>kubectl create secret tls tls-secret --key=/tmp/tls.key --cert=/tmp/tls.crt
</code></pre>
<p>instead of the Secret YAML file below, and it works better, at least for Google Cloud: </p>
<pre><code>apiVersion: v1
data:
tls.crt: [crt]
tls.key: [key]
kind: Secret
metadata:
name: testsecret
namespace: default
type: Opaque
</code></pre>
<p>Make sure, when you go to <code>Kubernetes Engine -> Configuration</code> in the Google Cloud Console, that your secret type is <code>Secret: kubernetes.io/tls</code> and not just <code>Secret</code>. When you create your secret using the YAML above, it is created as a plain secret and not as <code>kubernetes.io/tls</code>. </p>
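<p>A quick way to double-check which type you ended up with:</p>
<pre><code>kubectl get secret tls-secret -o jsonpath='{.type}'
# should print: kubernetes.io/tls
</code></pre>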
<p>For more information you can take a look at the following links:
<a href="https://github.com/kubernetes/ingress-gce#backend-https" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce#backend-https</a></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">enter link description here</a></p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks</a></p>
|
<p>I am trying to do some experiments with Kubernetes in google cloud.</p>
<p>I have docker image in google cloud registry and need to deploy that image to a kubernetes cluster.</p>
<p>Here are the steps I need to perform.</p>
<ol>
<li>Create a Kubernetes cluster.</li>
<li>Copy the image from GCR and deploy to Kubernetes cluster.</li>
<li>Expose the cluster to internet via load balancer.</li>
</ol>
<p>I know it is possible to do this via the Google Cloud SDK CLI. Is there a way to do these steps via Java/Node.js?</p>
| <p>There is a RESTful kubernetes-engine API:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/reference/api-organization" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/reference/api-organization</a></p>
<p>e.g. create a cluster:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.zones.clusters/create" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.zones.clusters/create</a></p>
<p>The container registry should be standard docker APIs.</p>
<p>Both Java and Node have kubernetes clients:</p>
<p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a>
<a href="https://github.com/godaddy/kubernetes-client" rel="nofollow noreferrer">https://github.com/godaddy/kubernetes-client</a></p>
|
<p>I used kubeadm to deploy my Kubernetes dashboard.
When I tried to deploy the <em>nginx-ingress-controller</em> in my dev namespace with the default service account, the <em>liveness</em> and readiness probes were failing.</p>
<p>nginx-ingress-controller image is</p>
<pre><code>gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
</code></pre>
<p>I get the same error in the test namespace also.
In my logs it's showing</p>
<pre><code> Received SIGTERM, shutting down
shutting down controller queues
I1201 00:19:48.745970 7 nginx.go:237] stopping NGINX process...
I1201 00:19:48.746923 7 shared_informer.go:112] stop requested
E1201 00:19:48.746985 7 listers.go:63] Timed out waiting for caches to sync
[notice] 22#22: signal process started
shutting down Ingress controller...
Handled quit, awaiting pod deletion
I NGINX process has stopped
Exiting with 0
</code></pre>
<p>Why am I getting failures in cluster scope; where is my failure?</p>
| <p>You're most likely giving it too few resources. Try removing the resource requests/limits for debugging.</p>
|
<p>I'm trying to deploy Nexus3 as a Kubernetes pod in IBM Cloud service. I am getting this error, probably because the PVC is mounted as read only for that user. I have had this problem other times in Postgres for example but I can't recall how to solve it:</p>
<pre><code>mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (No such file or directory)
Unable to update instance pid: Unable to create directory /nexus-data/instances
</code></pre>
<p>These are the PVC and POD yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nexus-pvc
annotations:
volume.beta.kubernetes.io/storage-class: "ibmc-file-retain-bronze"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
apiVersion: v1
kind: Pod
metadata:
name: nexus
labels:
name: nexus
spec:
containers:
- name: nexus
image: sonatype/nexus3
ports:
- containerPort: 8081
volumeMounts:
- name: nexus-data
mountPath: /nexus-data
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pvc
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Europe/Madrid
</code></pre>
| <p>The nexus3 Dockerfile is structured such that it runs as a non-root user. However, the NFS file storage requires root user to access and write to it. There are a couple of ways to fix this. One, you can restructure your Dockerfile to temporarily add the non-root user to root and change the volume mount permissions. Here are instructions for that: <a href="https://console.bluemix.net/docs/containers/cs_storage.html#nonroot" rel="nofollow noreferrer">https://console.bluemix.net/docs/containers/cs_storage.html#nonroot</a></p>
<p>Another option is to run an initContainer (<a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a>) that changes the mount path ownership before the main container runs. The initContainer would look something like this:</p>
<pre><code>initContainers:
- name: permissionsfix
image: ubuntu:latest
command: ["/bin/sh", "-c"]
args:
- >
chown 1000:1000 /mount;
volumeMounts:
- name: volume
mountPath: /mount
</code></pre>
|
<p>I'm encountering the following error for my ingress controller. </p>
<pre><code>Warning GCE googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0, quotaExceeded
</code></pre>
<p>My limit is set as 9, and this has previously worked so I'm not sure why this error is being encountered now. </p>
<p>I did delete the cluster and then created a new one, what do these backend services refer to? How could I remove any old ones that have not been deleted?</p>
| <p>You could also ask for a small quota increase on the <a href="https://console.cloud.google.com/iam-admin/quotas" rel="noreferrer">backend services quota page</a>.</p>
<p>If it's small enough it will get auto-accepted.</p>
|
<p>I'm getting below error while creating cassandra operator for kubernetes.</p>
<pre><code># kubectl create -f example-cassandra-cluster.yaml
error: unable to recognize "example-cassandra-cluster.yaml": no matches for cassandra.database.instaclustr.com/, Kind=CassandraCluster
</code></pre>
<p><code>example-cassandra-cluster.yaml</code> is:</p>
<pre><code>apiVersion: "cassandra.database.instaclustr.com/v1beta2"
kind: "CassandraCluster"
metadata:
name: "example-cassandra-cluster"
spec:
size: 3
version: "3.11"
pod:
resources:
limits:
memory: "512Mi"
</code></pre>
<p>Has anyone encountered this error before? </p>
| <p>The</p>
<pre><code>apiVersion: "cassandra.database.instaclustr.com/v1beta2"
</code></pre>
<p>indicates that the CassandraCluster kind</p>
<pre><code>kind: "CassandraCluster"
</code></pre>
<p>depends on a Custom Resource Definition.</p>
<p>That CRD has to be created in the cluster before instantiating its objects.</p>
<p><a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#create-a-customresourcedefinition" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#create-a-customresourcedefinition</a></p>
|
<p>Is there an option to connect to an external database cluster from a POD? I need to connect to Elasticsearch, ZooKeeper, Kafka and Couchbase; each of them has its own cluster. Per my understanding of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">the documentation</a>, I can define multiple external IPs, but I cannot find how k8s will behave if one of them is down. I am working with pure k8s 1.6 now, and we will migrate to 1.7 soon. Information about OpenShift 3.7 will also be welcome because I cannot find anything specific in <a href="https://docs.openshift.com/container-platform/3.6/dev_guide/integrating_external_services.html" rel="nofollow noreferrer">its documentation</a>.</p>
| <p>The k8s doc you link to has more info on exposing services running on k8s, but not external ones.</p>
<p>You generally want to expose your service using a DNS entry and manage the HA for that service separately. </p>
<p>For example you can create a single DNS entry <code>mykafka.mydomain.com</code> and then assign IP addresses to that entry:</p>
<pre><code>kafka1 ip
kafka2 ip
kafka3 ip
</code></pre>
<p>You can see that approach in the Openshift docs in the <code>USING AN EXTERNAL DOMAIN NAME</code> section. Admittedly, it's not clear from the docs whether k8s/openshift does a round robin over the multiple IPs for an external service, or whether it fails over automatically.</p>
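<p>If you want pods to keep using an in-cluster service name while the actual endpoints live outside, one sketch (the service name is hypothetical) is an ExternalName service that simply returns a CNAME to that DNS entry:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ExternalName
  externalName: mykafka.mydomain.com
</code></pre>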
<p>Hope it helps.</p>
|
<p>Why do k8s secrets need to be base64 encoded when configmaps do not?</p>
<p>When creating a ConfigMap you simply do something like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
SOME_KEY: a string value
</code></pre>
<p>But when you want to create a secret you have to
<code>echo -n "some secret string" | base64</code> and then put the result of that in a file looking something like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
SOME_KEY: c29tZSBzZWNyZXQgc3RyaW5n
</code></pre>
<p>I really wonder why there is this difference? Are kubernetes secrets simply base64 encoded strings? I would expect that secrets were stored encrypted in kubernetes. </p>
| <p>Secrets can contain binary data (the type is <code>map[string][]byte</code>), and byte arrays are base64-encoded in JSON serialization.</p>
<p>ConfigMaps only contain string data (the type is <code>map[string]string</code>), so the JSON serialization just outputs the string.</p>
<p>In 1.10, ConfigMaps have a new <code>binaryData</code> field that allows storing binary data, which is base64-encoded, just like secrets. <a href="https://github.com/kubernetes/kubernetes/pull/57938" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/57938</a></p>
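<p>As a side note, if the manual base64 step is the only annoyance, Secrets also accept a write-only <code>stringData</code> field with plain strings, which the API server encodes for you; a minimal sketch:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  SOME_KEY: some secret string
</code></pre>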
|
<p>I'm looking at Prometheus metrics in a Grafana dashboard, and I'm confused by a few panels that display metrics based on an ID that is unfamiliar to me. I assume that <code>/kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c</code> points to a single pod, and I assume that <code>/kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/<another-long-string></code> resolves to a container in the pod, but how do I resolve this ID to the pod name and a container i.e. how to do I map this ID to the pod name I see when I run <code>kubectl get pods</code>?</p>
<p>I already tried running <code>kubectl describe pods --all-namespaces | grep "99b2fe2a-104d-11e8-baa7-06145aa73a4c"</code> but that didn't turn up anything.</p>
<p>Furthermore, there are several subpaths in <code>/kubepods</code>, such as <code>/kubepods/burstable</code> and <code>/kubepods/besteffort</code>. What do these mean and how does a given pod fall into one or another of these subpaths?</p>
<p>Lastly, where can I learn more about what manages <code>/kubepods</code>?</p>
<p>Prometheus Query: </p>
<pre><code>sum (container_memory_working_set_bytes{id!="/",kubernetes_io_hostname=~"^$Node$"}) by (id)
</code></pre>
<p>/<a href="https://i.stack.imgur.com/NpQQL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NpQQL.png" alt="Grafana Screenshot"></a></p>
<p>Thanks for reading.</p>
<p>Eric</p>
| <p>OK, now that I've done some digging around, I'll attempt to answer all 3 of my own questions. I hope this helps someone else.</p>
<p><strong>How to do I map this ID to the pod name I see when I run kubectl get pods?</strong></p>
<p>Given the following, <code>/kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c</code>, the last bit is the pod UID, and can be resolved to a pod by looking at the <code>metadata.uid</code> property on the pod manifest:</p>
<pre><code>kubectl get pod --all-namespaces -o json | jq '.items[] | select(.metadata.uid == "99b2fe2a-104d-11e8-baa7-06145aa73a4c")'
</code></pre>
<p>Once you've resolved the UID to a pod, we can resolve the second UID (container ID) to a container by matching it with the <code>.status.containerStatuses[].containerID</code> in the pod manifest:</p>
<pre><code>~$ kubectl get pod my-pod-6f47444666-4nmbr -o json | jq '.status.containerStatuses[] | select(.containerID == "docker://5339636e84de619d65e1f1bd278c5007904e4993bc3972df8628668be6a1f2d6")'
</code></pre>
<p><strong>Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?</strong></p>
<p>Burstable, BestEffort, and Guaranteed are Quality of Service (QoS) classes that Kubernetes assigns to pods based on the memory and cpu allocations in the pod spec. More information on QoS classes can be found here <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a>.</p>
<p>To quote:</p>
<blockquote>
<p>For a Pod to be given a QoS class of Guaranteed:</p>
<ul>
<li><p>Every Container in the Pod must have a memory limit and a memory request, and they must be the same.</p></li>
<li><p>Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same.</p></li>
</ul>
<p>A Pod is given a QoS class of Burstable if:</p>
<ul>
<li><p>The Pod does not meet the criteria for QoS class Guaranteed.</p></li>
<li><p>At least one Container in the Pod has a memory or cpu request.</p></li>
</ul>
<p>For a Pod to be given a QoS class of BestEffort, the Containers in the
Pod must not have any memory or cpu limits or requests.</p>
</blockquote>
<p><strong>Lastly, where can I learn more about what manages /kubepods?</strong></p>
<p><code>/kubepods/burstable</code>, <code>/kubepods/besteffort</code>, and <code>/kubepods/guaranteed</code> are all a part of the cgroups hierarchy, which is located in /sys/fs/cgroup directory. Cgroups is what manages resource usage for container processes such as CPU, memory, disk I/O, and network. Each resource has its own place in the cgroup hierarchy filesystem, and in each resource sub-directory are /kubepods subdirectories. More info on cgroups and Docker containers here: <a href="https://docs.docker.com/config/containers/runmetrics/#control-groups" rel="noreferrer">https://docs.docker.com/config/containers/runmetrics/#control-groups</a></p>
|
<p>I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...</p>
<p>The first limitation I see is related to service discovery, since I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since minikube doesn't support it, so I have to fall back to NodePort.</p>
<p>Can you provide me with some info about that matter?</p>
| <p>There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that minikube has as a runtime, compared to production k8s. </p>
<p>So- though one <em>can</em> use the same single yaml file in different environments, typically that's not what one <em>wants</em>.</p>
<p>What one usually wants is to have the general architectural shape of the solution be the same across environments, have differences extracted into minimalist configuration, then rendered using templates into environment-specific files to be used at deployment time. </p>
<p>The tool most commonly used to support this kind of approach is helm:</p>
<p><a href="https://helm.sh/" rel="nofollow noreferrer">https://helm.sh/</a></p>
<p>Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With helm, you can use the same base set of resource files, extract environment differences into config files, and then use helm to deploy as appropriate to each environment.</p>
|
<p>I am taking a deep look inside Flink to see how I can use it on a project and had a question for the creators / high level thinkers... why does Flink use Yarn as the default resource manager? </p>
<p>Was Kubernetes considered? Or is it one of those things where we started on Yarn, it works pretty well...</p>
<p>I have come across many projects and articles that allow Kubernetes and Yarn to work together, including the Myriad project that allows Yarn to be deployed with Mesos (but I am on Kubernetes...)</p>
<p>I have a very large compute cluster 2000 or so nodes that I use and I want to use the super cool CEP features of Flink feeding off a Kafka infrastructure (also deployed on to this kubernetes environment).</p>
<p>I am looking to understand the reasons behind using Yarn as the resource manager underneath Flink and if would be possible (with some effort and contribution to the project) to make Kubernetes an option alongside Yarn.</p>
<p>Please note - I am new to Yarn - just reading up about it. Also new to Flink and learning about the deployment and scale-out architecture.</p>
| <p>Flink is not tied to YARN. It can also run on Apache Mesos and there are also users running it on Kubernetes. In the current version (Flink 1.4.1), there are a few things to consider when running Flink in Kubernetes (see this <a href="https://berlin-2017.flink-forward.org/kb_sessions/flink-in-containerland/" rel="nofollow noreferrer">talk by Patrick Lucas</a>).</p>
<p>The Flink community is also currently working on improving Flink's support for container setups. The effort is called <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077" rel="nofollow noreferrer">FLIP-6</a> and will be included in the next release (Flink 1.5.0).</p>
|
<p>I'm deploying K8s to the Oracle Cloud Infrastructure, where I can make sure that the public internet-facing IP stays static even when the instances are restarted. But for some reason the private IP of the instances always changes. Which brings me to the question: can Kubernetes work with nodes whose IP changes after restarts?</p>
<p>This could be quite a noob question but I did try to read up online and I couldn't find a conclusive answer.</p>
| <p>Yes, kubernetes can handle that case easily, and on OCI it works just fine. The individual worker nodes will (using the kubelet on that host) call to the master IP, which we would recommend using a load balancer to front to achieve a static IP and allow you to change, scale, and otherwise adjust your master kubernetes control plane nodes as you wish, without disrupting the workers.</p>
<p>You can get a pretty slick setup currently with the terraform tooling for kubernetes that is published here:</p>
<p><a href="https://github.com/oracle/terraform-kubernetes-installer" rel="nofollow noreferrer">https://github.com/oracle/terraform-kubernetes-installer</a></p>
|
<p>I have successfully created a Kubernetes POD/Service using MiniKube on windows. But I would now like to ensure that DNS is working correctly.</p>
<p>The DNS service is shown as running</p>
<pre><code>.\kubectl get pod -n kube-system
</code></pre>
<p>Which shows me the kube-dns pod is running</p>
<p><a href="https://i.stack.imgur.com/bqce1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bqce1.png" alt="enter image description here"></a></p>
<p>I also have the DNS add on shown as running</p>
<p><a href="https://i.stack.imgur.com/lIIIE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lIIIE.png" alt="enter image description here"></a></p>
<p>So I then want to verify that DNS is working. Ideally I want to test that PODs that have a service on top of them can look up the service by DNS name.</p>
<p>But I started simple like this, where I get my running POD</p>
<p><a href="https://i.stack.imgur.com/lwW0V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lwW0V.png" alt="enter image description here"></a></p>
<p>So now that I have my POD name, I want to try a simple DNS lookup in it using the following command:</p>
<pre><code>.\kubectl exec simple-sswebapi-pod-v1-f7f8764b9-xs822 -- nslookup google.com
</code></pre>
<p>Where I am using the <strong>kubectl exec</strong> to try and run this nslookup in the POD that was found (running I should point out above).</p>
<p>But I get this error</p>
<p><a href="https://i.stack.imgur.com/Pvazu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pvazu.png" alt="enter image description here"></a></p>
<p>Why would it not be able to find <strong>nslookup</strong> inside the POD? All the key things seem to be OK:</p>
<ul>
<li>Kube-DNS pod is running (as shown above)</li>
<li>DNS AddOn is installed and running (as shown above)</li>
</ul>
<p>What am I missing, is there something else I need to enable for DNS lookups to work inside my PODs?</p>
| <p>To dig into this further, I installed busybox into a POD to allow me to use nslookup, and this enabled me to do this:</p>
<p><a href="https://i.stack.imgur.com/8ULrL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ULrL.png" alt="enter image description here"></a></p>
<p>So this looks cool, but should I not be able to ping that service either by its IP address or by its DNS name, which seems to be resolving just fine as shown above?</p>
<p>If I <strong>ping google.com</strong> inside the busybox command prompt all is OK, but when I ping either the IP address of this service or its DNS name, it never gets anywhere.</p>
<p>DNS lookup is clearly working. What am I missing?</p>
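<p>For reference, the kind of throwaway busybox pod used for these lookups can be started in one line (assuming a service named <code>my-service</code> in the <code>default</code> namespace):</p>
<pre><code>kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup my-service.default.svc.cluster.local
</code></pre>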
|
<p>I'm implementing <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">this example</a> with <code>az aks</code>. I want to use ingress in order to use easily the reverse proxy as in the example with a container redirected to <code>/tea</code> and the other to <code>/coffee</code> based on the ingress simple rules.</p>
<pre><code> rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
<p>I follow the steps; however, Azure won't give an IP address to my ingress, as you can see.</p>
<pre><code>$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
demo-ingress demo.mydomain.com 80, 443 17h
</code></pre>
<p>I think Azure only gives addresses to the load balancers. Is there any workaround or solution for this? Can I somehow tell <code>azure</code> to give an IP to my ingress?</p>
<p>As some additional info, I also tried helm:</p>
<pre><code>helm install stable/nginx-ingress --set controller.publishService.enabled=true
</code></pre>
<p>However I'm a complete novice and it seems to do nothing.</p>
| <p>It took me a while to publish the answer. I wanted to understand a bit more what was happening in the container.</p>
<p>Anyway, the problem with the <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">implementation</a> I was testing is that an Azure-compatible ingress controller has to be installed (that's kind of my best guess, I'm not an expert though). The steps to install the ingress controller are <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md" rel="nofollow noreferrer">here</a>.</p>
<p>All the documents concerned are in the folder <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/install" rel="nofollow noreferrer">/install</a>. What I did in the end was a <code>makefile</code> containing the following instructions:</p>
<pre><code>kubectl apply -f install/common/ns-and-sa.yaml
kubectl apply -f install/common/default-server-secret.yaml
kubectl apply -f install/common/nginx-config.yaml
kubectl apply -f install/rbac/rbac.yaml
kubectl apply -f install/deployment/nginx-ingress.yaml
kubectl apply -f install/daemon-set/nginx-ingress.yaml
kubectl create -f install/service/nodeport.yaml
kubectl apply -f install/service/loadbalancer.yaml
kubectl get svc nginx-ingress --namespace=nginx-ingress
</code></pre>
<p>It's a bit cumbersome, but as long as it is only one command in my makefile, it's OK.</p>
<p>I hope this helps anyone who encounters the same problem.</p>
|
<p>I am trying to use the command below to curl the pod itself using the pod IP, as currently I don't want the kubelet to request my pod for the health check.</p>
<pre><code>livenessProbe:
exec:
command:
- curl
- $POD_IP:9990/admin/ping
initialDelaySeconds: 3
periodSeconds: 5
</code></pre>
<p>but the env variable $POD_IP is not recognized here,</p>
<pre><code>Could not resolve host: $POD_IP
</code></pre>
<p>How to configure this so that env var can be read by curl in the command.</p>
<p><strong>reference:</strong>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#execaction-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#execaction-v1-core</a></p>
| <p>Try the following:</p>
<pre><code>command:
- bash
- -c
- curl $POD_IP:9990/admin/ping
</code></pre>
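<p>Note that <code>$POD_IP</code> has to exist as an environment variable in the container for this to work; if it is not already defined, it can be exposed via the Downward API, for example:</p>
<pre><code>env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
</code></pre>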
|
<p>First time using Kubernetes. I have an API and a database, and I want the two pods to communicate with each other. </p>
<p>Based on the docs, I should create a service.</p>
<p>I have created a service for each of the two pods, though I am still not able to connect to the pod using the service's IP address.</p>
<p>For example if the MySQL service that is created has an IP address of 11.22.33.44, I can run the following command to try to connect to the pod of that service:</p>
<pre><code>mysql -h11.22.33.44 -uuser -ppassword foo
</code></pre>
<p>...and it will hang and eventually the connection will time out.</p>
<p>I create the pod and service like so:</p>
<pre><code>kubectl create -f ./mysql.yaml
</code></pre>
<p><strong>mysql.yaml</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
selector:
app: mysql
ports:
- protocol: TCP
port: 80
targetPort: 3306
---
apiVersion: v1
kind: Pod
metadata:
name: mysql
spec:
containers:
- name: mysql
image: my-custom-mysql-image:latest
ports:
- containerPort: 3306
protocol: TCP
name: mysql
env:
- name: MYSQL_DATABASE
value: "foo"
- name: MYSQL_USER
value: "user"
- name: MYSQL_ROOT_PASSWORD
value: "password"
- name: MYSQL_HOST
value: "127.0.0.1"
</code></pre>
| <p>Your service has a selector defined:</p>
<pre><code>selector:
app: mysql
</code></pre>
<p>yet your Pod has no labels whatsoever, hence the service cannot identify it as its backend and has no endpoints to direct traffic to for the ClusterIP. You should also stick to the standard port number on the service, like this:</p>
<pre><code>ports:
- protocol: TCP
port: 3306
targetPort: 3306
</code></pre>
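<p>For completeness, a sketch of the matching labels on the Pod itself (so the selector above has something to match):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
# ... rest of the pod spec unchanged
</code></pre>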
|
<p>I am running a Kubernetes 1.8 on Azure AKS. I have docker images deployed to a pod. I have created a Azure public ip(static) which I give to the NGinx ingress-controller when I create it. Kubectl get ingress shows that the ip has been assigned to it and /healthz endpoint returns 200 OK.</p>
<p>kubectl describe ingress shows:</p>
<pre><code> Rules:
Host Path Backends
service1.default.example.com / service1:80 (<none>)
</code></pre>
<p>When I request the host I get a 502 bad gateway. If I use ip, I get 404 not found. Same ip, which returns 200 ok from /healthz.</p>
<p>The containers and setup in general have been tested in Minikube and are working.</p>
<p>What could be the problem that my containers are not exposed through the ingress? Describes looks the same when compared to the Minikube's.</p>
| <p>What's happening is that the wiring between the ingress and the service backed by your pods is not correct. The logs from the ingress pod will probably help. There could be a few specific things wrong.</p>
<p>The reason that requesting the IP directly gives a response is that the IP does not match an ingress -> service rule, so you are just getting the default backend for the ingress, which gives you the 404.</p>
<p>So the ingress is fine. And the service and pods themselves are probably fine. But something is wrong with the wiring between the ingress and the service.</p>
|
<p>I am new to kubernetes administration. While trying to list & setup new cronjobs, one of the users is getting the following error: </p>
<blockquote>
<p>Error from server (Forbidden): cronjobs.batch is forbidden: User
cannot list cronjobs.batch in the namespace </p>
</blockquote>
<p>The role while creating this user:
</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: <user>
name: <user>-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>The role binding while creating this user:
</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: <user>-role-binding
namespace: <user>
subjects:
- kind: User
name: <user>
apiGroup: ""
roleRef:
kind: Role
name: <user>-role
apiGroup: ""
</code></pre>
<p>What could the issue possibly be?</p>
| <p>The <code>Cronjob</code> resource belongs to the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch" rel="nofollow noreferrer"><code>batch</code> API group</a>. </p>
<p>In your RBAC role, you have only granted access to the <code>core</code> (empty name), <code>extensions</code> and <code>apps</code> API groups.</p>
<p>To enable your user to access CronJob objects, add the <code>batch</code> API group to your RBAC role:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: <user>
name: <user>-role
rules:
- apiGroups: ["", "extensions", "apps", "batch"]
resources: ["*"]
verbs: ["*"]
</code></pre>
|
<p>When I run multiple instances of WordPress on Google Kubernetes Engine and drop session affinity I get weird behavior in the cart, items disappear and come back. And people get logged out. (When I use session affinity, 100% of my traffic gets sent to one pod).</p>
<p>It seemed to be an issue of session persistence, but from what I can tell, WordPress relies on cookies to store login and cart info rather than sessions, so this shouldn't be an issue. Locally, when I use Docker, destroy the container, and restart, my cart remains, so this seems to confirm that.</p>
<p>What is going on? And more importantly, what can I do to fix it?</p>
| <p>It looks like woocommerce uses PHP sessions for Cart info:</p>
<p><a href="https://woocommerce.github.io/code-reference/classes/WC-Cart.html#108" rel="nofollow noreferrer">https://woocommerce.github.io/code-reference/classes/WC-Cart.html#108</a>
<a href="https://woocommerce.github.io/code-reference/classes/WC-Cart-Session.html" rel="nofollow noreferrer">https://woocommerce.github.io/code-reference/classes/WC-Cart-Session.html</a></p>
<p>By default, that data would be stored on the specific pod file system. There are ways of telling PHP in multihost environments to use a common session store.</p>
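<p>As an illustration only (assuming the phpredis extension is installed and a Redis instance is reachable at a service named <code>redis</code>), PHP can be pointed at a shared session store via php.ini:</p>
<pre><code>; php.ini (sketch)
session.save_handler = redis
session.save_path = "tcp://redis:6379"
</code></pre>
<p>Memcached or a database-backed session handler would work just as well; the point is that session state must live somewhere all pods can reach.</p>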
|
<p>I have followed this <a href="https://blog.kublr.com/how-to-install-a-single-master-kubernetes-k8s-cluster-34ec16efefff" rel="nofollow noreferrer">tutorial</a> and this <a href="https://blog.alexellis.io/kubernetes-in-10-minutes/" rel="nofollow noreferrer">tutorial</a> and <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">this one</a> but am facing the same issue for last 3 days.</p>
<p>I am able to set up the master node correctly with the following steps:</p>
<pre><code>kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
</code></pre>
<p>and everything seems fine in </p>
<pre><code>kubectl get all --namespace=kube-system
</code></pre>
<p>then, </p>
<p>on the worker node:</p>
<pre><code>kubeadm join --token 864655.fdf6d0b389867b79 192.168.100.17:6443 --discovery-token-ca-cert-hash sha256:a2d840808b17b53b9612e6271ccde489f13dbede7d354f97188d0faa9e210af2
</code></pre>
<p>The output seems fine and is as below:</p>
<pre><code>[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.100.17:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.100.17:6443"
[discovery] Requesting info from "https://192.168.100.17:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.100.17:6443"
[discovery] Successfully established connection with API Server "192.168.100.17:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
</code></pre>
<p><em>BUT</em> as soon as I run this command, all hell breaks loose. The </p>
<pre><code>kubectl get all --namespace=kube-system
</code></pre>
<p>starts showing that all pods are kind of restarting all the time. The status keeps changing between Pending and Running, and at times some of the pods will even disappear and may have ContainerCreating status, etc.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
po/etcd-ubuntu 0/1 Pending 0 0s
po/kube-controller-manager-ubuntu 0/1 Pending 0 0s
po/kube-dns-6f4fd4bdf-cmcfk 3/3 Running 0 13m
po/kube-proxy-2chb6 1/1 Running 0 13m
po/kube-scheduler-ubuntu 0/1 Pending 0 0s
po/weave-net-ptdxr 2/2 Running 0 11m
</code></pre>
<p>I have also tried the second tutorial, with flannel, and get the exact same issue.</p>
<p><em>My Set Up</em></p>
<p>I created two new VMs with a fresh installation of Ubuntu 17.10 on VMware, with 2 processors/2 cores, 6 GB of RAM, and a 50 GB hard disk each. My physical machine is an i7-6700K with 32 GB of RAM.
I installed kubeadm, kubelet and docker on both of them and then followed the steps as mentioned above.</p>
<p>I have also tried switching between NAT and Bridge on VMware and nothing changed.</p>
<p>The initial IPs of the two VMs with the bridged network were 192.168.100.12 and 192.168.100.17.
The <code>hostname -I</code> for master:</p>
<pre><code>192.168.100.17 172.17.0.1 10.32.0.1 10.32.0.2
</code></pre>
<p>The <code>hostname -I</code> for worker-node:</p>
<pre><code>192.168.100.12 172.17.0.1 10.44.0.0 10.32.0.1
</code></pre>
<p><code>journalctl -xeu kubelet</code> shows the following:</p>
<p><a href="https://gist.github.com/saad749/9a771a3460bf88c274498b5bc4b7fd84" rel="nofollow noreferrer">https://gist.github.com/saad749/9a771a3460bf88c274498b5bc4b7fd84</a></p>
<p>While trying with flannel (and still the same issue), the result from</p>
<pre><code>kubectl describe nodes
</code></pre>
<p>is</p>
<p><a href="https://gist.github.com/saad749/d24c453c8b4e663e9abf572a0fb38bf4" rel="nofollow noreferrer">https://gist.github.com/saad749/d24c453c8b4e663e9abf572a0fb38bf4</a></p>
<p><strong>Am I missing any step before kubeadm init? Should I change the IP addresses (to what)? Are there any specific logs I should look into? Is there a more comprehensive tutorial for this?
All Issues start after kubeadm join on the worker node, I can deploy the kubernetes on the master node or any other stuff, and it works fine.</strong></p>
<p><strong>UPDATE:</strong></p>
<p>Even after applying the suggestions from errordeveloper, The same issue persists.</p>
<p>I add the following flag to kubeadm init:</p>
<pre><code>--apiserver-advertise-address 192.168.100.17
</code></pre>
<p>I updated the kubeadm.conf to the following and did a reload and restart:
<a href="https://gist.github.com/saad749/c7149c87ec3e75a40586f626cf04279a" rel="nofollow noreferrer">https://gist.github.com/saad749/c7149c87ec3e75a40586f626cf04279a</a></p>
<p>and also tried changing the cluster dns
<a href="https://gist.github.com/saad749/5fa66bebc22841e58119333e75600e40" rel="nofollow noreferrer">https://gist.github.com/saad749/5fa66bebc22841e58119333e75600e40</a></p>
<p>This the log from after initializing the master:</p>
<pre><code>kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-ubuntu 1/1 Running 0 22s 192.168.100.17 ubuntu
kube-system kube-apiserver-ubuntu 1/1 Running 0 29s 192.168.100.17 ubuntu
kube-system kube-controller-manager-ubuntu 1/1 Running 0 13s 192.168.100.17 ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 3/3 Running 0 1m 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 1m 192.168.100.17 ubuntu
kube-system kube-scheduler-ubuntu 1/1 Running 0 34s 192.168.100.17 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 32s 192.168.100.17 ubuntu
</code></pre>
<p>The hostname -i results:</p>
<pre><code>kube-master@ubuntu:~$ hostname -I
192.168.100.17 172.17.0.1 10.32.0.1 10.32.0.2 10.32.0.3 10.32.0.4 10.32.0.5 10.32.0.6 10.244.0.0 10.244.0.1
kube-master@ubuntu:~$ hostname -i
192.168.100.17
</code></pre>
<p>Results from:</p>
<pre><code>kubectl describe nodes
</code></pre>
<p><a href="https://gist.github.com/saad749/8f460650182a04d0ddf3158a52761a9a" rel="nofollow noreferrer">https://gist.github.com/saad749/8f460650182a04d0ddf3158a52761a9a</a></p>
<p>The Internal IP seems correct now.</p>
<p>After joining from second node, this happens:</p>
<pre><code>kube-master@ubuntu:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready master 49m v1.9.3
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-controller-manager-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 0/3 ContainerCreating 0 49m <none> ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 49m 192.168.100.17 ubuntu
kube-system kube-scheduler-ubuntu 1/1 Running 0 1s 192.168.100.17 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 48m 192.168.100.17 ubuntu
</code></pre>
<p>ifconfig -a results:</p>
<p><a href="https://gist.github.com/saad749/63a5a52bd3246ff72477b2aca7d158d0" rel="nofollow noreferrer">https://gist.github.com/saad749/63a5a52bd3246ff72477b2aca7d158d0</a></p>
<p>journalctl -xeu kubelet results</p>
<p><a href="https://gist.github.com/saad749/8a60870b35f93df8565e66cb208aff32" rel="nofollow noreferrer">https://gist.github.com/saad749/8a60870b35f93df8565e66cb208aff32</a></p>
<p>Sometimes, the pods' IP is shown as 192.168.100.12, which is the IP of the non-master second node.</p>
<pre><code>kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-apiserver-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-controller-manager-ubuntu 1/1 Running 0 0s 192.168.100.12 ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 2/3 Running 0 3h 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 3h 192.168.100.12 ubuntu
kube-system kube-scheduler-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system weave-net-fkgnh 2/2 Running 1 3h 192.168.100.17 ubuntu
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-dns-6f4fd4bdf-wfqhb 3/3 Running 0 3h 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 3h 192.168.100.12 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 3h 192.168.100.12 ubuntu
kubectl describe nodes
Name: ubuntu
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=ubuntu
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: node-role.kubernetes.io/master:NoSchedule
CreationTimestamp: Fri, 02 Mar 2018 08:21:47 -0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 11:28:25 -0800 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.100.12
Hostname: ubuntu
Capacity:
cpu: 4
memory: 6080832Ki
pods: 110
Allocatable:
cpu: 4
memory: 5978432Ki
pods: 110
System Info:
Machine ID: 59bf65b835b242a3aa182f4b8a542219
System UUID: 0C3C4D56-4747-D59E-EE09-F16F2793677E
Boot ID: 658b4a08-d724-425e-9246-2b41995ecc46
Kernel Version: 4.13.0-36-generic
OS Image: Ubuntu 17.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.9.3
Kube-Proxy Version: v1.9.3
ExternalID: ubuntu
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system kube-dns-6f4fd4bdf-wfqhb 260m (6%) 0 (0%) 110Mi (1%) 170Mi (2%)
kube-system kube-proxy-h4hz9 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-fkgnh 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
280m (7%) 0 (0%) 110Mi (1%) 170Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rebooted 12m (x814 over 2h) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 16efd500-a2a5-446f-ba25-1187857996e0
Normal NodeHasNoDiskPressure 10m kubelet, ubuntu Node ubuntu status is now: NodeHasNoDiskPressure
Normal Starting 10m kubelet, ubuntu Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet, ubuntu Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 10m kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 10m kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientMemory
Normal NodeNotReady 10m kubelet, ubuntu Node ubuntu status is now: NodeNotReady
Warning Rebooted 2m (x870 over 2h) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 658b4a08-d724-425e-9246-2b41995ecc46
Warning Rebooted 15s (x60 over 10m) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 16efd500-a2a5-446f-ba25-1187857996e0
</code></pre>
<p><strong>What am I doing wrong?</strong></p>
| <p>So after following the advice from @errordeveloper and still hitting the wall, I was able to solve the issue, which turned out to be pretty simple.</p>
<p>Both my VMs had the same hostname. </p>
<pre><code>hostname -f
</code></pre>
<p>would return </p>
<pre><code>ubuntu
</code></pre>
<p>on both, and that causes issues with Kubernetes, apparently.</p>
<p>I changed the name on my non-master node with </p>
<pre><code>hostnamectl set-hostname kminion
</code></pre>
<p>and in the following files:</p>
<pre><code>/etc/hostname
/etc/hosts
</code></pre>
<p>and everything went smooth onward!</p>
|
<p>All my <code>kubectl get</code> commands are failing with the following error <code>The connection to the server localhost:8080 was refused - did you specify the right host or port?</code></p>
<p>I checked the ip of my <code>/etc/kubernetes/admin.conf</code> and it has the following apart from other stuff: <code>server: https://10.23.23.19:6443</code></p>
<p>Since that is the IP of my machine, I thought of running the command like this <code>kubectl get pods --server=10.23.23.19:6443</code></p>
<p>Above gives me the error - <code>Unable to connect to the server: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"</code></p>
<p>What am I doing wrong? I have a 3 node cluster and all this is on the master. When 2 other nodes joined, they did join successfully is what I saw as the status on the screen though.</p>
<p>I used this tutorial(with slight deviations of my own to get started) - <a href="https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/" rel="nofollow noreferrer">https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/</a></p>
| <p><code>/etc/kubernetes/admin.conf</code> is the location that the provisioning system placed your config in, but it is not the one used by kubectl by default. The usual place that kubectl looks at for configuration is <code>~/.kube/config</code> so either put your config file there or hint to kubectl that it needs to look in a different place with <code>--kubeconfig <path></code> param.</p>
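<p>For example (assuming you run kubectl as a regular user on the master), either of these works:</p>
<pre><code># copy the admin config into kubectl's default location
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# or point kubectl at it explicitly
kubectl get pods --kubeconfig /etc/kubernetes/admin.conf
</code></pre>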
|
<p>I am having trouble trying to understand the Kubernetes authentication model, especially what "users" are.</p>
<p>Suppose I am on a computer which is inside a Kubernetes cluster. I want to make a request to the API server, using <code>kubectl</code>.</p>
<p>So:</p>
<ul>
<li>I need to have the public key from the api-server HTTPS port. So let's assume that is provided to me.</li>
<li>Then, in my request, is there a need for me to populate the "user" field?</li>
</ul>
<p>As per this part of the documentation, the user field is a method: <a href="https://kubernetes.io/docs/admin/authentication/#authentication-strategies" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/authentication/#authentication-strategies</a></p>
<p>But then here <a href="https://kubernetes.io/docs/admin/accessing-the-api/#authorization" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/accessing-the-api/#authorization</a> we read that actually kubernetes has no concept of a user.</p>
<p>So:</p>
<ul>
<li>What/where do I even put in the user field?</li>
<li>If, since I control the client request content, couldn't I simply enter any username there? Couldn't I just try guess any username repeatedly until I find one with the authorisation for what I want?</li>
</ul>
<p>Thanks.</p>
| <p>The user to be used depends on the kubeconfig you use (e.g. ~/.kube/config) and the current context. For example, if your ~/.kube/config is as below, kubernetes-admin is the user.</p>
<pre><code>apiVersion: v1
kind: Config
current-context: kubernetes-admin@kubernetes
preferences: {}
contexts:
- context: <---- Current context to identify which cluster, user, and namespace (*) to use.
cluster: kubernetes
user: kubernetes-admin <----- user for the context
name: kubernetes-admin@kubernetes
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://172.31.4.117:6443
name: kubernetes
users:
- name: kubernetes-admin <----
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p>You can add users. Please refer to Use Case 1: Create User With Limited Namespace Access in <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">Configure RBAC In Your Kubernetes Cluster</a>. This "User" is not "service account".</p>
<hr />
<h1 id="references-5we4">References</h1>
<ul>
<li><a href="https://docs.bitnami.com/tutorials/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">Configure RBAC in your Kubernetes Cluster</a></li>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Configure Access to Multiple Clusters</a></li>
</ul>
|
<p>According to the Kubernetes documentation, services of the <code>ClusterIP</code> type will populate a <code>DNS</code> <code>A</code> type record with the following schema:</p>
<pre><code>pod-ip-address.my-namespace.pod.cluster.local
</code></pre>
<p>I am having difficultly parsing this schema into a resolvable address for my application.</p>
<p>For example, suppose I have the following service:</p>
<pre><code>subway-explorer-gmaps-proxy-service ClusterIP 10.35.252.232 <none> 9000/TCP 19m
</code></pre>
<p>What will the corresponding DNS record be?</p>
| <p>If your application is in the same namespace as the service you wish to consume, you can use the servicename:</p>
<pre><code>subway-explorer-gmaps-proxy-service
</code></pre>
<p>as the DNS name. Kube dns will resolve to the service IP. </p>
<p>If your application is not in the same namespace as that service, services get a DNS name of</p>
<pre><code>$service.$namespace.svc.cluster.local
</code></pre>
<p>e.g. if the service was created in the default namespace, it will get </p>
<pre><code>subway-explorer-gmaps-proxy-service.default.svc.cluster.local
</code></pre>
<p>Those names are resolvable anywhere in the cluster.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
|
<p>I'm using the Kubernetes Jenkins plugin in order to create Jenkins slaves on demand. The slaves' job is to deploy and provision my apps to the Kubernetes cluster.</p>
<p>I created a pipeline project and wrote a very simple Jenkinsfile:</p>
<pre><code>podTemplate(label: 'jenkins-pipeline', containers: [
containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins', resourceRequestCpu: '200m', resourceLimitCpu: '300m', resourceRequestMemory: '256Mi', resourceLimitMemory: '512Mi'),
containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:v2.6.0', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.4.8', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'curl', image: 'appropriate/curl:latest', command: 'cat', ttyEnabled: true)
],
volumes:[
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath:
'/var/run/docker.sock'),
]){
node ('jenkins-pipeline') {
def pwd = pwd()
def chart_dir = "${pwd}/chart"
checkout([$class: 'SubversionSCM', additionalCredentials: [], excludedCommitMessages: '', excludedRegions: '', excludedRevprop: '', excludedUsers: '', filterChangelog: false, ignoreDirPropChanges: false, includedRegions: '', locations: [[credentialsId: '4041436e-e9dc-4060-95d5-b28be47b1a14', depthOption: 'infinity', ignoreExternalsOption: true, local: '.', remote: 'https://svn.project.com/repo/trunk/RnD/dev/server/src/my-app']], workspaceUpdater: [$class: 'CheckoutUpdater']])
stage ('deploy canary to k8s') {
container('helm') {
def version = params.${VERSION}
def environment = params.${ENVIRONMENT}
// Deploy using Helm chart
sh "helm upgrade --install ${version} ${chart_dir} --set imageTag=${version},replicas=1,environment=${environment} --namespace=dev"
}
}
}
}
</code></pre>
<p>The Jenkins slave spins up on Kubernetes but the job fails with this stack trace:</p>
<pre><code>[Pipeline] stage
[Pipeline] { (deploy canary to k8s)
[Pipeline] container
[Pipeline] {
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: java.util.Collections$UnmodifiableMap.$() is applicable for argument types: (org.jenkinsci.plugins.workflow.cps.CpsClosure2) values: [org.jenkinsci.plugins.workflow.cps.CpsClosure2@7d7d26fa]
Possible solutions: is(java.lang.Object), any(), get(java.lang.Object), any(groovy.lang.Closure), max(groovy.lang.Closure), min(groovy.lang.Closure)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:18)
at WorkflowScript.run(WorkflowScript:20)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor512.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
at com.cloudbees.groovy.cps.Next.step(Next.java:74)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
</code></pre>
<p>I understand that the error comes from a type mismatch but I'm having a hard time understanding in which part of the Jenkinsfile and what I should do about it.</p>
<p>Can anyone please help me?</p>
| <p>This</p>
<pre><code>def version = params.${VERSION}
def environment = params.${ENVIRONMENT}
</code></pre>
<p>Should be this</p>
<pre><code>def version = params."${VERSION}"
def environment = params."${ENVIRONMENT}"
</code></pre>
|
<p>So, I have a k8s cluster running on AWS, provisioned using kops. I have created a secret locally, using kubectl:</p>
<pre><code> kubectl create secret generic aws-es --from-file=./aws_key.txt --from-file=./aws_secret_key.txt
</code></pre>
<p>My service.yml has this env:</p>
<pre><code> - name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws-es
key: aws_key
</code></pre>
<p>And when i update the service in the cluster with:</p>
<pre><code> kubectl apply -f service.yml
</code></pre>
<p>I get the error running pod:</p>
<pre><code> Error: secrets "aws-es" not found
Error syncing pod
</code></pre>
<p>Obviously, my kops installation cannot see the locally created secret. Is there a way for me to propagate that secret to kops' S3 storage?</p>
| <p>Fixed it. The problem was that I created the secrets in the default namespace, while my pods were running in the kube-system namespace.</p>
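<p>For reference, a sketch of recreating the secret in the namespace the pods actually run in (kube-system in this case):</p>
<pre><code>kubectl create secret generic aws-es \
  --from-file=./aws_key.txt \
  --from-file=./aws_secret_key.txt \
  --namespace=kube-system
</code></pre>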
|
<p>I have created a Kubernetes cluster and deployed Jenkins with the following file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-ci
spec:
replicas: 1
template:
metadata:
labels:
run: jenkins-ci
spec:
containers:
- name: jenkins-ci
image: jenkins:2.32.2
ports:
- containerPort: 8080
</code></pre>
<p>and the service with:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins-cli-lb
spec:
type: NodePort
ports:
# the port that this service should serve on
- port: 8080
nodePort: 30000
# label keys and values that must match in order to receive traffic for this service
selector:
run: jenkins-ci
</code></pre>
<p>Now I can access the Jenkins UI in my browser without any problems. My issue: I ran into a situation in which I need to restart the Jenkins service manually. How can I do that?</p>
| <p>Just run <code>kubectl delete pods -l run=jenkins-ci</code>. This will delete all pods with this label (your Jenkins containers).
Since they are managed by a Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).</p>
<p>See <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></p>
|
<p>I'm looking to mount the docker socket from one container to another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of docker, so I set up docker within the container, which works okay. Now I need other docker containers to use the socket from the base container and not the host. Is there any way to achieve this (in kubernetes)?</p>
| <p>The only way that comes to mind is to use a hostPath volume with type Socket, and mount it into multiple containers:
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
<p>Even if it works, you will end up with "other containers" launching containers within your "newer docker" container, which is not a good practice. I would suggest spinning up another node with a newer docker, connecting it to your master, and scheduling the part of the load that requires access to the docker sock there. You can use nodeSelector to schedule properly:
<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration</a></p>
<p>You can port this onto k8s further by turning your control container into operator <a href="https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017" rel="nofollow noreferrer">https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017</a> (use k8s API instead of docker sock)</p>
|
<p>I want to create a customized docker image and be able to use kubernetes to pull my customized docker image from private docker registry. Here are my setups:</p>
<p>Environment:</p>
<ul>
<li>docker registry IP: 10.179.143.115</li>
<li>kubernetes master IP: 10.179.143.113</li>
</ul>
<ol>
<li>generate a certificate:</li>
</ol>
<blockquote>
<pre><code>curl -O https://raw.githubusercontent.com/driskell/log-courier/1.x/src/lc-tlscert/lc-tlscert.go
go build lc-tlscert.go
./lc-tlscert
mkdir certs
mv selfsigned.* certs/
</code></pre>
</blockquote>
<ol start="2">
<li>Create docker registry:</li>
</ol>
<blockquote>
<p>docker run -d --restart=always --name registry -v
`pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e
REGISTRY_HTTP_TLS_CERTIFICATE=/certs/selfsigned.crt -e
REGISTRY_HTTP_TLS_KEY=/certs/selfsigned.key -p 443:443 registry:2</p>
</blockquote>
<ol start="3">
<li>Create my customized docker image (just tag the image with another name for testing purposes)</li>
</ol>
<blockquote>
<pre><code>docker pull tomcat
docker tag tomcat 10.179.143.115/test-tomcat
docker push 10.179.143.115/test-tomcat
</code></pre>
</blockquote>
<ol start="4">
<li>On Kubernetes master:</li>
</ol>
<blockquote>
<pre><code>copy selfsigned.*(crt and key file) to /usr/local/share/ca-certificates/
sudo update-ca-certificates
sudo service docker restart
</code></pre>
<p>root@kubernetes-master:~# docker images</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/kube-apiserver-amd64 v1.9.3 360d55f91cbf 3 weeks ago 210 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.9.3 83dbda6ee810 3 weeks ago 138 MB
gcr.io/google_containers/kube-proxy-amd64 v1.9.3 35fdc6da5fd8 3 weeks ago 109 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.9.3 d3534b539b76 3 weeks ago 62.7 MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 5 weeks ago 44.6 MB
gcr.io/google_containers/etcd-amd64 3.1.11 59d36f27cceb 2 months ago 194 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.7 db76ee297b85 4 months ago 42 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.7 5d049a8c4eec 4 months ago 50.3 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.7 5feec37454f4 4 months ago 41 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 22 months ago 747 kB
root@kubernetes-master:~# docker pull 10.179.143.115/test-tomcat
Using default tag: latest
latest: Pulling from test-tomcat
f0f063e89695: Pull complete
d9b7671d4a80: Pull complete
6eb55822688c: Pull complete
a85cc2721f25: Pull complete
ee9e2e7b610a: Pull complete
562dd1fb5637: Pull complete
e8e2e3cceeee: Pull complete
86cbf3cde839: Pull complete
3678522c43a2: Pull complete
50ea7ae5efa3: Pull complete
e81b257a8ae8: Pull complete
5b298dc937bc: Pull complete
Digest: sha256:332fa1b89534f0b0e45c636a26edb8520b15bcdfc05ef5450efae3e71d1b1361
Status: Downloaded newer image for 10.179.143.115/test-tomcat:latest
</code></pre>
</blockquote>
<p>5. However, when I want to create a Kubernetes pod:</p>
<blockquote>
<pre><code> test.yaml:
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test
image: 10.179.143.115/test-tomcat
</code></pre>
</blockquote>
<pre><code>kubectl create -f test.yaml
root@kubernetes-master:~# kubectl describe pods test
Name: test
Namespace: default
Node: kubernetes-node/10.179.143.114
Start Time: Fri, 02 Mar 2018 15:02:20 -0500
Labels: <none>
Annotations: <none>
Status: Pending
IP:
Containers:
test:
Container ID:
Image: 10.179.143.115/test-tomcat
Image ID:
Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lvz9r (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-lvz9r:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lvz9r
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned test to kubernetes-node
Normal SuccessfulMountVolume 31s kubelet, kubernetes-node MountVolume.SetUp succeeded for volume "default-token-lvz9r"
Normal Pulling 9s (x2 over 21s) kubelet, kubernetes-node pulling image "10.179.143.115/test-tomcat"
Warning Failed 9s (x2 over 21s) kubelet, kubernetes-node Failed to pull image "10.179.143.115/test-tomcat": rpc error: code = Unknown desc = Error response from daemon: Get https://10.179.143.115/v1/_ping: x509: certificate signed by unknown authority
Warning Failed 9s (x2 over 21s) kubelet, kubernetes-node Error: ErrImagePull
Normal SandboxChanged 9s (x2 over 20s) kubelet, kubernetes-node Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p>6. The error message is:</p>
<pre><code>Failed to pull image "10.179.143.115/test-tomcat": rpc error: code = Unknown desc = Error response from daemon: Get https://10.179.143.115/v1/_ping: x509: certificate signed by unknown authority
</code></pre>
<p>Please bear with my bad formatting, and thanks in advance!</p>
| <p>Thanks for all the help guys! Here is a follow-up on how I made it work.</p>
<p>When I copied all the certs to the Kubernetes master, I was able to pull and push docker images from my private registry, but it was still not working when I wanted to create Kubernetes pods. I realized that I also needed to copy all the certs to my Kubernetes slave, because that is the place where Kubernetes actually pulls the images from the private docker registry. After I copied the certs to "/usr/local/share/ca-certificates/" and ran "sudo update-ca-certificates; sudo service docker restart", I am able to create pods now!</p>
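<p>For anyone following along, a sketch of those steps on each worker node (only the public certificate needs to be distributed, not the key):</p>
<pre><code># on each kubernetes worker node
sudo cp selfsigned.crt /usr/local/share/ca-certificates/selfsigned.crt
sudo update-ca-certificates
sudo service docker restart
</code></pre>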
|
<p>Is it possible in Kubernetes to create a pod template and reuse it later when specifying a pod within a deployment? For example:</p>
<p>Say I have pod template...</p>
<pre><code>apiVersion: v1
kind: PodTemplate
metadata:
name: my-pod-template
template:
metadata:
labels:
app: "my-app"
spec:
containers:
- name: my-app
image: jwaldrip/my-app:latest
</code></pre>
<p>Could I then use it in a deployment as so?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
spec:
template:
metadata:
name: my-pod-template
</code></pre>
<p>This would be super helpful when deploying something like Jobs, where I want to own the creation of a job with the given template.</p>
| <p>There is not. </p>
<p>Specifically in the case of Pods, there are PodPresets:</p>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/podpreset/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/podpreset/</a></p>
<p>But those don't apply to other objects. </p>
<p>One way to enforce the shape or attributes of arbitrary objects is to establish tooling that correctly creates those objects, then create credentials for that tooling, and use RBAC to only allow those credentials to create those objects. </p>
<p><a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/authorization/rbac/</a></p>
<p>Another way would be to create an Admission Controller to watch the attempted creation of the desired objects, and verify/reject those that don't meet the criteria:</p>
<p><a href="https://kubernetes.io/docs/admin/admission-controllers/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/admission-controllers/</a></p>
|
<p>I'm trying to setup Kubernetes locally.</p>
<p>I've just run <code>kubeadm init</code>, and it's provided me with a <code>join</code> command, however it seems I can't join the cluster on the same machine. As running the provided join command results in:</p>
<pre><code>[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
</code></pre>
<p>Is this possible? If so how?</p>
<p>This is for production purposes, I'm aware of minikube but I'd like to investigate if this is possible without it (less overhead of needing to run a VM also I'd imagine).</p>
| <p>I would suggest using <code>kubeadm</code> to provision the master and then remove the taint that is causing the master to be unschedulable for normal workloads. This gives you great flexibility for future expansion if you need it, and it's a well-established tool in the kube community.</p>
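<p>For reference, the taint that keeps normal workloads off the master can be removed like this (the command kubeadm's own docs suggest for single-machine clusters):</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>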
|
<p>AKS- Can't log into one of the worker nodes (VM). I assigned the public IP as per <a href="https://gist.github.com/tsaarni/624d5406e442f08fe11083169c059a68" rel="nofollow noreferrer">https://gist.github.com/tsaarni/624d5406e442f08fe11083169c059a68</a> but still no luck. I get the error below:</p>
<blockquote>
<p>JohnDoeMac:.kube john_doe$ ssh [email protected]
Permission denied (publickey).</p>
</blockquote>
<p>Here subscription ID looks like: e84ff951-xxxxxxxxxxxx</p>
| <p>If you create AKS from the Azure portal, you can specify the user name of the VM.</p>
<p>In that case, the user name is not <strong>azureuser</strong> any more.</p>
<p>You can find the user name and public key in the Azure portal:</p>
<p><a href="https://i.stack.imgur.com/GTelq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GTelq.jpg" alt="enter image description here" /></a></p>
|
<p>In Docker, I had two containers, <code>Mosquitto</code> and <code>userInfo</code>.</p>
<p><code>userInfo</code> is a container which performs some logic and then sends the result to the mosquitto container. The Mosquitto container then uses this information to send it to the IoT hub. To start these containers in Docker, I created a network and started both containers in the same network, so I could easily use the hostname of the <code>mosquitto</code> container inside the <code>userinfo</code> container to send data. I need to do the same in Kubernetes.</p>
<p>So in Kubernetes, what I did was deploy <code>Mosquitto</code> so its POD was created, then I created its service and used it inside the <code>userInfo</code> pod to send data to <code>mosquitto</code>. But this is not working.</p>
<p>I created the service by using </p>
<pre><code>kubectl expose deployment mosquitto
</code></pre>
<p>I need to send data of <code>userInfo</code> to <code>Mosquitto</code>. </p>
<ul>
<li><p>How can I achieve this?</p></li>
<li><p>Do I need to create a network as I was doing in <code>dockers</code>, or is there any other way?</p></li>
</ul>
<p>I also tried creating a pod with two containers i.e. <code>mosquitto</code> & <code>userInfo</code>, but this was also not working.</p>
<p>Thanks</p>
| <p>A Kubernetes pod may contain multiple containers. People generally run multiple containers in a pod when the two containers are tightly coupled, and it sounds like this is what you're looking for. These containers are guaranteed to be hosted on the same machine (they can contact each other via localhost), share the same port space, and can also use the same volumes.
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod</a></p>
|
<p>I have 2 pods running in my Kubernetes cluster. One is simply a WordPress application and the 2nd one contains a MySQL DB. Now WordPress is communicating with the MySQL DB.</p>
<p>I want to find these dependencies between pods. Is there any kubectl command, or any tool like Prometheus, by which I can find dependencies between pods inside the Kubernetes cluster?</p>
| <p>No, there is no native kubernetes primitive which can define dependencies between pods. An easy thing you can do is to define labels like <code>dependsOn</code> and attach them to the corresponding pod.</p>
<p>For example, your <code>wordpress</code> pod can have a label which says <code>dependsOn: mysql</code> where <code>mysql</code> can either be the name or another label of your <code>mysql</code> pod.</p>
<p>But this will only help a human reader understand what this pod is dependent on. Kubernetes works on the principle of eventual consistency. Even if mysql doesn't start before wordpress, eventually they will start working together and system will become consistent. The wordpress pod will crash when it cannot find mysql and Kubernetes will keep restarting crashing pods.</p>
<p>If you want to define dependencies between applications on Kubernetes and require deployments to happen in a particular order etc. you can take a look at tools like <a href="http://aptomi.io/" rel="nofollow noreferrer">Aptomi</a>.</p>
|
<p>I am trying to create a custom Docker image based on Rocker using Dockerfile. In the Dockerfile I am pulling my own R package from a custom GitLab server using:</p>
<p><code>RUN R -e "devtools::install_git('[custom gitlab server]', quiet = FALSE)"</code></p>
<p>Everything usually works, but I have noticed that when the GitLab server is down, or the machine running Docker is low on RAM memory, the package does not install correctly and returns an error message in the R console. This behavior is to be expected. However, Docker does not notice the error produced by R and continues evaluating the rest of the Dockerfile. I would like Docker to fail building the image when this occurs. In that way, I could ultimately prevent automatic deployment of the incomplete Docker container by Kubernetes.</p>
<p>So far I have thought of two potential solutions, but I am struggling with the execution:</p>
<ol>
<li><strong>R level</strong>: Wrap <code>tryCatch()</code> around <code>devtools::install_git</code> to catch the error. But then what? Use <code>stop</code>? Will this cause the Docker building process to stop as well? Could <code>withCallingHandlers()</code> be used?</li>
<li><strong>Dockerfile level</strong>: Use a shell command to check for errors? I cannot find the contents of <code>R --help</code> as I do not have a Linux machine at the moment. So I am not sure of what <code>R -e</code> actually does (execute I presume) and which other commands could be passed along with <code>R</code>.</li>
</ol>
<p>It seems that a similar issue is discussed <a href="https://github.com/bcgov/simple-R-shiny/issues/7" rel="noreferrer">here</a> and <a href="https://github.com/rocker-org/rocker/issues/24" rel="noreferrer">here</a>, but I do not understand how they have solved it.</p>
<p>Thus how to make sure no Docker image ends up running on the Kubernetes cluster without the custom package?</p>
| <p>The Docker build process should stop once one of the commands in the <code>Dockerfile</code> returns a non zero status.</p>
<p><code>install_git</code> doesn't seem to throw an error when the package wasn't installed successfully, so the execution keeps on.</p>
<p>An obvious way to go would be to wrap the installation inside a dedicated <code>R</code> script and throw an error if it didn't finish successfully, which would then stop the build.</p>
<p>So I would suggest something like this ...</p>
<p>Create installation script <code>install_gitlab.R</code>:</p>
<pre><code>### file install_gitlab.R
## change repo- and package name!!
repo <- '[custom gitlab server]'
pkgname <- 'testpackage'
devtools::install_git(repo, quiet = FALSE)
stopifnot(pkgname %in% installed.packages()[,'Package'])
</code></pre>
<p>Modify your <code>Dockerfile</code> accordingly (replace the <code>install_git</code> line):</p>
<pre><code>...
Add install_gitlab.R /runscripts/install_gitlab.R
RUN Rscript /runscripts/install_gitlab.R
...
</code></pre>
<p>One thing to keep in mind is, this approach assumes the package you're trying to install is <strong>NOT</strong> installed prior to calling the command.</p>
|
<p>Suppose I have 2 network policies, for nodes matching labels "app=database". Suppose:</p>
<ul>
<li>First policy has rules that block all ingress traffic.</li>
<li>Second policy has a rule allows ingress traffic on port 5660.</li>
</ul>
<p>Even though this is a simple example, how does Kubernetes decide which rule wins?
In more complex scenarios, with several overlapping rules possibly covering similar pods, how would this be managed? E.g., can we define priorities in network policies?</p>
<p>Thanks.</p>
| <p>Kubernetes network policies currently do not allow <em>deny</em> rules. There are only <em>allow</em> rules. You basically put together all the policies selecting the pod to get the set of allowed connections.</p>
<p>When there are one or more network policies on a pod, then all the connections <em>allowed by at least one of the network policies</em>, will be allowed.</p>
<p>So how does the default deny work? It simply declares that the set of allowed connections is empty.</p>
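<p>A minimal sketch of the scenario from the question (assuming the pods are labelled <code>app=database</code>): the first policy selects the pods and allows nothing, the second adds port 5660 to the allowed set, and the effective result is the union of the two.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-database
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress            # no ingress rules listed, so this policy allows nothing
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-database-5660
spec:
  podSelector:
    matchLabels:
      app: database
  ingress:
  - ports:
    - protocol: TCP
      port: 5660       # union of both policies: only 5660 is allowed in
</code></pre>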
<p>A more detailed explanation is available <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods" rel="noreferrer" title="this">here</a>.</p>
|
<p>In the basic example of the documentation for declaring a network policy:
<a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource</a></p>
<p>So this sets several rules, as per the documentation:</p>
<pre><code>So, the example NetworkPolicy:
- isolates “role=db” pods in the “default” namespace for both ingress
and egress traffic (if they weren’t already isolated)
- allows connections to TCP port 6379 of “role=db” pods in the “default”
namespace from any pod in the “default” namespace with the
label “role=frontend”
- allows connections to TCP port 6379 of “role=db” pods
in the “default” namespace from any pod in a namespace with
the label “project=myproject”
...
</code></pre>
<p>Does this means that the pods of "role=db" label can receive connections from:</p>
<ul>
<li>other pods with labels “role=frontend” AND namespace with label “project=myproject”; or</li>
<li>other pods with labels “role=frontend” OR namespace with label “project=myproject”.</li>
</ul>
<p>Thanks!</p>
| <p>The <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/10-allowing-traffic-with-multiple-selectors.md" rel="nofollow noreferrer">kubernetes network recipe "<strong>ALLOW traffic from apps using multiple selectors</strong>"</a> is clear:</p>
<blockquote>
<ul>
<li>Rules specified in <strong><code>spec.ingress.from</code></strong> are <strong>OR</strong>'ed.</li>
<li>This means the pods selected by the selectors are combined and whitelisted altogether.</li>
</ul>
</blockquote>
|
<p>below is the metrics returned by the kubelet summary endpoint</p>
<pre><code>"node":{
"nodeName":"shayeeb-virtualbox",
"systemContainers":[ ],
"startTime":"2018-03-05T04:52:39Z",
"cpu":{
"time":"2018-03-05T05:06:00Z",
"usageNanoCores":989865279,
"usageCoreNanoSeconds":861395314766
},
"memory":{
"time":"2018-03-05T05:06:00Z",
"availableBytes":697614336,
"usageBytes":1809657856,
"workingSetBytes":1378811904,
"rssBytes":935657472,
"pageFaults":56928,
"majorPageFaults":70
},
...
</code></pre>
<p>The CPU metric is returned in nanocores/nanoseconds, but I need to calculate the CPU usage from the above metrics, and furthermore I need to calculate the memory usage from the above memory metrics. I am stuck here; I couldn't find any details about these fields.</p>
| <p>As mentioned in <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#monitoring-compute-resource-usage" rel="nofollow noreferrer">K8s Managing Compute Resources for Containers/ "Monitoring compute resource usage"</a>:</p>
<blockquote>
<p>The resource usage of a Pod is reported as part of the Pod status.</p>
<p>If <strong><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/README.md" rel="nofollow noreferrer">optional monitoring</a></strong> is configured for your cluster, then Pod resource usage can be retrieved from the monitoring system.</p>
</blockquote>
<p>That optional monitoring system would be <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer"><strong><code>kubernetes/heapster</code></strong></a>, which enables Container Cluster Monitoring and Performance Analysis for Kubernetes (versions v1.0.6 and higher).<br />
It includes... <a href="https://github.com/kubernetes/heapster/blob/master/docs/overview.md" rel="nofollow noreferrer">a lot of metrics</a>.</p>
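<p>If you only have the summary endpoint to work with, the fields can be used directly: <code>usageNanoCores / 1e9</code> is the number of CPU cores currently in use (divide by the node's core count for a percentage), and <code>workingSetBytes</code> is the memory figure the kubelet itself uses for eviction decisions. A rough sketch via the API server proxy (node name taken from the question; <code>jq</code> assumed to be available):</p>
<pre><code>kubectl get --raw /api/v1/nodes/shayeeb-virtualbox/proxy/stats/summary \
  | jq '{cpu_cores_used: (.node.cpu.usageNanoCores / 1e9),
         memory_used_mib: (.node.memory.workingSetBytes / 1048576)}'
</code></pre>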
|
<p>I have set up a Kubernetes cluster. The cluster contains, among other things, a cluster and deployment surfacing an API webservice (based on the <a href="https://github.com/ResidentMario/subway-explorer-gmaps-proxy" rel="nofollow noreferrer"><code>subway-explorer-gmaps-proxy</code></a> container).</p>
<p>I've deployed the service externally, using the <code>LoadBalancer</code> service type (this is on GCP):</p>
<pre><code>$kubectl get svc subway-explorer-gmaps-proxy-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
subway-explorer-gmaps-proxy-service LoadBalancer 10.35.252.232 35.224.78.225 9000:31396/TCP 19h
</code></pre>
<p>My understanding (and correct me if I'm wrong!) is that this service should now be queryable outside of the cluster, by visiting <code>http://35.224.78.225</code> in the browser.</p>
<p>When running the Docker container locally, I can verify things are working correctly by navigating to the following URL:</p>
<pre><code>http://localhost:49161/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
</code></pre>
<p>Looking at the <code>kubectl get</code> output, I expect visiting the following URL in the browser will serve me the content I'm looking for:</p>
<pre><code>http://35.224.78.225:31396/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
</code></pre>
<p>But when I visit this URL, nothing gets served.</p>
<p>I suspect there is a non-fatal error in the deployment configuration. What is an effective way of debugging this problem? Are there access logs or a <code>stdout</code> stream somewhere I can check to see what's wrong?</p>
| <p>You can try running through the official docs on debugging services: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/</a></p>
<p>Beyond that, have you confirmed you're querying the load balancer on the right port? While I don't deploy on GCP, when launching a load balancer for a kubernetes service on AWS it'll accept traffic on port 80/443 and forward it to the NodePort of the service, which I'm guessing is 31396 for your case. What are the ports listed in <code>kubectl get svc subway-explorer-gmaps-proxy-service -o yaml</code>? </p>
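<p>As a quick concrete check (IP and ports taken from the <code>kubectl get svc</code> output above): on GKE, a Service of type <code>LoadBalancer</code> normally exposes the service port (9000 here) on the external IP rather than the NodePort (31396), so the first thing to try would be:</p>
<pre><code>curl "http://35.224.78.225:9000/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163"
</code></pre>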
|
<p>I need to load balance a cluster of Kubernetes API servers (version 1.7) on DigitalOcean, but the problem is that the Kubernetes API server seemingly only supports HTTPS and the DigitalOcean load balancer can only do <a href="https://developers.digitalocean.com/documentation/v2/#load-balancers" rel="noreferrer">HTTP or TCP health checks</a>.</p>
<p>Is there any way to perform health checks of the Kubernetes API server either via HTTP or TCP?</p>
| <p>Do a <code>kubectl proxy</code> and then use Postman or any other tool to send a GET request to
<a href="http://127.0.0.1:8001/healthz/poststarthook/apiservice-status-available-controller" rel="noreferrer">http://127.0.0.1:8001/healthz/poststarthook/apiservice-status-available-controller</a></p>
<p>You can use the other health endpoints too:</p>
<ul>
<li><code>/healthz</code>,</li>
<li><code>/healthz/autoregister-completion</code>,</li>
<li><code>/healthz/ping</code>,</li>
<li><code>/healthz/poststarthook/apiservice-registration-controller</code>,</li>
<li><code>/healthz/poststarthook/apiservice-status-available-controller</code>,</li>
<li><code>/healthz/poststarthook/bootstrap-controller</code>,</li>
<li><code>/healthz/poststarthook/ca-registration</code>,</li>
<li><code>/healthz/poststarthook/extensions/third-party-resources</code>,</li>
<li><code>/healthz/poststarthook/generic-apiserver-start-informers</code>,</li>
<li><code>/healthz/poststarthook/kube-apiserver-autoregistration</code>,</li>
<li><code>/healthz/poststarthook/start-apiextensions-controllers</code>,</li>
<li><code>/healthz/poststarthook/start-apiextensions-informers</code>,</li>
<li><code>/healthz/poststarthook/start-kube-aggregator-informers</code>,</li>
<li><code>/healthz/poststarthook/start-kube-apiserver-informers</code>,</li>
</ul>
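<p>A minimal command-line version of the same check (assuming the proxy runs on its default port 8001):</p>
<pre><code>kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/healthz
# expected output: ok
</code></pre>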
|
<p>I recently launched with GKE and Kubernetes in production.
I have regular outages with no obvious reason.
No event shows anything, and the pods are not restarting and seem stable.
I have a similar QA env that has no issues at all, even though it's way smaller.</p>
<p>Where can I find potential infos on the outage reason?</p>
| <p>Stackdriver makes you pay for it and configure it... Kubernetes comes with a tool for this... just use:</p>
<p><code>kubectl top nodes</code></p>
<pre><code>al@host:~/$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-learn-pool-1-10f60e0a-s44c 104m 11% 1008Mi 86%
</code></pre>
<p>You can also go under clusters -> Cluster -> nodes -> Node </p>
<p><a href="https://i.stack.imgur.com/EyHV2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EyHV2.png" alt="enter image description here"></a></p>
<p>Update: Stack Driver deprecated all load monitoring plugins. It's K8s or the highway now.</p>
|
<p>I'm getting</p>
<blockquote>
<p>Error: YAML parse error on myApp-infra/templates/my.yaml: error
converting YAML to JSON: yaml: line 20: found unexpected ':'</p>
</blockquote>
<p>Below is the <code>helm install --dry-run --debug ./myApp</code> output:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: spark-slave-service
labels:
app: spark-slave
spec:
selector:
app: spark-slave
clusterIP: None
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: spark-slave-deployment
spec:
selector:
matchLabels:
app: spark-slave
serviceName: "spark-slave-service"
replicas: 3 # tells deployment to run 2 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: spark-slave
spec:
containers:
- name: spark-slave-container
image: <image url>
command: [<mycommand>
volumeMounts:
- mountPath: "/tmp/data"
name: slave-pvc
volumeClaimTemplates:
- metadata:
labels:
app: spark-slave
name: slave-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: "rook-block"
</code></pre>
| <p>When helm encounters parse errors while processing multiple YAML documents in a single file (like your case), the error message can be a bit misleading. Although it says <code>line 20</code>, that number is counted from the beginning of one of the YAML documents in the file, not from the beginning of the file itself. With most parse errors, you should check the line it mentions as well as the previous line for issues. In your case, line 19 of the StatefulSet document is the <code>command:</code> line, whose flow sequence <code>[</code> is never closed; the <code>:</code> in <code>volumeMounts:</code> on the following line is then read inside that open sequence, which produces the <code>found unexpected ':'</code> error.</p>
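<p>A sketch of the fix (the actual command is a placeholder in the question, so the value below is hypothetical; the point is closing the bracket and quoting the items):</p>
<pre><code>        command: ["my-command", "arg1"]
</code></pre>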
|
<p>I'm trying to understand the Kubernetes design on how/if worker nodes communicate with each other. I found documentation on the <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/" rel="noreferrer">node-master communication</a> but nothing on node-node. I've understood that pods can communicate even if they are on different nodes unless a <code>NetworkPolicy</code> prevents this. What I'm wondering is if the information flow in the master-slave architecture is strictly between worker node and master or also between worker nodes.</p>
<p><strong>Question 1</strong>: Do worker nodes communicate with each other or does that only occur between pods? Or rather, do the nodes communicate even if their pods do not?</p>
<p><strong>Question 2</strong>: Say we have 2 worker nodes and that we have ssh:ed into one of the nodes, what information would be available about the other node or the master? </p>
<p>Thanks!</p>
| <blockquote>
<p>Do worker nodes communicate with each other or does that only occur
between pods? Or rather, do the nodes communicate even if their pods
do not?</p>
</blockquote>
<p>A worker node represents a collection of a few processes: <code>kubelet</code>, <code>kube-proxy</code>, and a container runtime (docker or rkt). If by communicate you are referring to sharing node state, health etc. as in a P2P system then <strong>no</strong>. </p>
<p>Pods communicate with pods (or services) and nodes are also able to reach pod and service ip addresses (this routing is handled by <code>kube-proxy</code> using <code>iptables</code>) and overlay networking.</p>
<p>However, in practice kubernetes relies on the distributed KV store <code>etcd</code> for keeping system-critical information. <code>etcd</code> may be deployed on the same nodes as the worker processes, which requires node-to-node communication.
<br></p>
<blockquote>
<p>Say we have 2 worker nodes and that we have ssh:ed into
one of the nodes, what information would be available about the other
node or the master?</p>
</blockquote>
<p>There is no information kept about the other worker node or master node.<br>
You could glean some information from the <code>kubelet</code> config files or see connection activity to the master node (<code>apiserver</code> component specifically) in the <code>kubelet</code> logs. </p>
<hr>
<p>In general, the master node(s) run an <code>apiserver</code> pod which is the access point to the kubernetes cluster state (stored in <code>etcd</code>). Pods, <code>kubectl</code>, etc. use the <code>apiserver</code> to get information as required.</p>
|
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Kubernetes Docs</a> say the following:</p>
<blockquote>
<p>In general, Pods do not disappear until someone destroys them. This
might be a human or a controller. The only exception to this rule is
that Pods with a phase of Succeeded or Failed for more than some
duration (determined by the master) will expire and be automatically
destroyed.</p>
</blockquote>
<p>What is the default value for this duration and how do I set it? My pods also never enter the Succeeded or Failed phase, rather they enter Completed or Error phase respectively. Is this to be expected; are the docs out of date?</p>
<p>I check the pod phases using <code>kubectl get pods --show-all</code>, where information about them seems to persist. Is there any additional cleanup necessary? Running <code>kubectl get pods</code> without <code>--show-all</code> does not show any pods after they destroyed.</p>
<p>I am creating pods with <code>kubectl apply -f k8/dummy-pod.yaml</code> and the following yaml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dummy.3
labels:
vara: a
role: idk
spec:
hostNetwork: true
restartPolicy: Never
containers:
- image: gcr.io/gv-test-196801/dummy:v2
name: dummy-1
</code></pre>
| <p>I believe this documentation is out of date.<br>
Pod garbage collection using TTL <a href="https://github.com/kubernetes/kubernetes/pull/12055" rel="nofollow noreferrer">was abandoned</a> in favor of a threshold number of terminated pods. <code>--terminated-pod-gc-threshold</code> on the kube controller manager (<a href="https://kubernetes.io/docs/reference/generated/kube-controller-manager/" rel="nofollow noreferrer">docs here</a>). </p>
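<p>For reference, a sketch of how that threshold is set (the value is just an example; on a kubeadm cluster this flag would go into the kube-controller-manager static pod manifest):</p>
<pre><code>kube-controller-manager --terminated-pod-gc-threshold=100 ...
</code></pre>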
<p>Currently deleting a <code>DaemonSet, Deployment, ReplicaSet or StatefulSet</code> will orphan its pods by default.<br>
You can work around this by <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#foreground-cascading-deletion" rel="nofollow noreferrer">enabling cascading deletes</a>
<br>
This behavior will change in 1.10</p>
<blockquote>
<p>Prior to apps/v1 the default garbage collection policy for Pods in a
DaemonSet, Deployment, ReplicaSet, or StatefulSet, was to orphan the
Pods. That is, if you deleted one of these kinds, the Pods that they
owned would not be deleted automatically unless cascading deletion was
explicitly specified</p>
</blockquote>
<p>see <a href="http://blog.kubernetes.io/" rel="nofollow noreferrer">kubernetes blog</a></p>
|
<p>We are migrating legacy java and .net applications from on-premises VMs to an on-premises Kubernetes cluster. </p>
<p>Many of these applications make use of windows file shares to transfer files from and to other existing systems. Deploying to Kubernetes has less priority than re-engineering all the solutions to avoid using samba shares, so if we want to migrate we will have to find a way of keeping many things as they are.</p>
<p>We have setup a 3-node cluster on 3 centos 7 machines using Kubeadm and Canal. </p>
<p>I could not find any actively maintained plugin or library to mount SMB except for azure volumes.</p>
<p>What I came up with was to mount the SMB shares on each centos node using the same mountpoint on all nodes, i.e.: "/data/share1", then I created a local PersistentVolume </p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: samba-share-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/data/share1"
</code></pre>
<p>and a claim, </p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: samba-share-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>and assigned the claim to the application.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: samba-share-deployment
spec:
replicas: 2
template:
metadata:
labels:
app: samba-share-deployment
tier: backend
spec:
containers:
- name: samba-share-deployment
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: samba-share-volume
volumes:
- name: samba-share-volume
persistentVolumeClaim:
claimName: samba-share-claim
</code></pre>
<p>it works from each replica, yet there are huge warnings about using local volumes in production. I do not know any other way to do this or what are the actual caveats of using this configuration.</p>
<p>Can I do it another way? Can this be ok if I monitor the mountpoints and disable the node in kubernetes if a mount fails?</p>
| <p>I asked the same question on r/kubernetes and a user commented with this. We are trying this now and it seems ok.</p>
<p><a href="https://www.reddit.com/r/kubernetes/comments/7wcwmt/accessing_windows_smbcifs_shares_from_pods/duzx0rs/" rel="noreferrer">https://www.reddit.com/r/kubernetes/comments/7wcwmt/accessing_windows_smbcifs_shares_from_pods/duzx0rs/</a></p>
<blockquote>
<p>We had to deal with a similar situation and I ended up developing a
custom Flexvolume driver to mount CIFS shares into pods from examples
I found online.</p>
<p>I have written a repo with the solution that works for my use case.</p>
<p><a href="https://github.com/juliohm1978/kubernetes-cifs-volumedriver" rel="noreferrer">https://github.com/juliohm1978/kubernetes-cifs-volumedriver</a></p>
<p>You still need to intall cifs-utils and jq on each Kubernetes host as
a pre-requisite. But it does allow you to create PersistentVoluems
that mount CIFS volumes and use them in your pods.</p>
<p>I hope it helps.</p>
</blockquote>
|
<p>I understand there are different ways to make a Docker image built on the local machine available to the Minikube VM.</p>
<p><a href="https://stackoverflow.com/questions/46065342/kubernetes-minikube-cant-get-docker-image-from-local-registry">(Kubernetes + Minikube) can't get docker image from local registry</a></p>
<p>All these examples are for Mac/Linux user.</p>
<p>I'm looking for an equivalent suggestion for Windows user. </p>
<p>What's the Windows equivalent of <code>eval $(minikube docker-env)</code>?</p>
| <p>I found a relatively easy way to point the Docker client (docker-machine) to Minikube’s Docker environment by running the commands below in PowerShell:</p>
<p><code>PS C:\Users\ABC> minikube docker-env</code></p>
<p><code>PS C:\Users\ABC> minikube docker-env | Invoke-Expression</code></p>
|
<p>Before creating an object in Kubernetes (Service, ReplicationController, etc.), I'd like to test that the JSON or YAML specification of the object is valid. But I don't want to actually create the object.</p>
<p>Is there some to do a "dry run" that would be equivalent to running <code>kubectl create --validate=true -f file.json</code>, but would just let me know that it passes validation, and not actually create it?</p>
<p>Ideally, it would be great if I could do this via API, and not require the use of kubectl. But I could make it work if it required me to use kubectl.</p>
<p>Thanks.</p>
| <p>This works for me (kubernetes 1.7 and 1.9):</p>
<pre><code>kubectl apply --validate=true --dry-run=client --filename=file.yaml
</code></pre>
|
<p>I have installed Heapster in my Kubernetes cluster. I can get resource usage from the command line, for example <code>kubectl top pods</code>, and from the Kubernetes web panel.</p>
<p>I'm trying to get resource usage from Heapster via a web API. Specifically, I'd like to get the resource usage (e.g. RAM and CPU) of a node, pod or namespace from a web API. </p>
<p>There is a web api <code>http://localhost:8001/swagger-2.0.0.json</code> in Kubernetes but there isn't any API for resource usage or Heapster data.</p>
<p>Is there any way to get resource usage via web API in Kubernetes ?</p>
<p>thanks</p>
| <p><strong>Question has been answered in the above comment section.</strong></p>
<p>In order to access the Node as well as Pod Metrics, It's better to use <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">Metrics_server</a> which is the successor of heapster.</p>
<p>The metrics server collects CPU and memory usage for nodes and pods by polling data from the kubelet. </p>
<p>View nodes and pods metrics:</p>
<p><code>kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"</code></p>
<p><code>kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"</code></p>
|
<p>I've built a Docker image within the Minikube VM. However, I don't understand why Kubernetes is not finding it.</p>
<pre><code>minikube ssh
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
diyapploopback latest 9590c4dc2ed1 2 hours ago 842MB
</code></pre>
<p>And if I describe the pod:</p>
<pre><code>kubectl describe pods abcxyz12-6b4d85894-fhb2p
Name: abcxyz12-6b4d85894-fhb2p
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 13:49:51 +0000
Labels: appId=abcxyz12
pod-template-hash=260841450
Annotations: <none>
Status: Pending
IP: 172.17.0.6
Controllers: <none>
Containers:
nginx:
Container ID:
Image: diyapploopback:latest
Image ID:
Port: 80/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
mariadb:
Container ID: docker://fe09e08f98a9f972f2d086b56b55982e96772a2714ad3b4c2adf4f2f06c2986a
Image: mariadb:10.3
Image ID: docker-pullable://mariadb@sha256:8d4b8fd12c86f343b19e29d0fdd0c63a7aa81d4c2335317085ac973a4782c1f5
Port:
State: Running
Started: Wed, 07 Mar 2018 14:21:00 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 07 Mar 2018 13:49:54 +0000
Finished: Wed, 07 Mar 2018 14:18:43 +0000
Ready: True
Restart Count: 1
Environment:
MYSQL_ROOT_PASSWORD: passwordTempXyz
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: abcxyz12
ReadOnly: false
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz12-6b4d85894-fhb2p to minikube
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
31m 29m 4 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
31m 16m 63 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
31m 6m 105 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
21s 21s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
20s 20s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
20s 20s 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
16s 16s 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
16s 15s 2 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
16s 15s 2 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
19s 1s 2 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
</code></pre>
<p>It seems I'm able to run it directly (only for debugging/diagnosis purposes):
<code>kubectl run abcxyz123 --image=diyapploopback --image-pull-policy=Never</code></p>
<p>If I describe the above deployment/container I get:</p>
<pre><code>Name: abcxyz123-6749977548-stvsm
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 14:26:33 +0000
Labels: pod-template-hash=2305533104
run=abcxyz123
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controllers: <none>
Containers:
abcxyz123:
Container ID: docker://c9b71667feba21ef259a395c9b8504e3e4968e5b9b35a191963f0576d0631d11
Image: diyapploopback
Image ID: docker://sha256:9590c4dc2ed16cb70a21c3385b7e0519ad0b1fece79e343a19337131600aa866
Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 07 Mar 2018 14:42:45 +0000
Finished: Wed, 07 Mar 2018 14:42:48 +0000
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz123-6749977548-stvsm to minikube
17m 17m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Pulled Container image "diyapploopback" already present on machine
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Created Created container
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Started Started container
16m 1m 66 kubelet, minikube spec.containers{abcxyz123} Warning BackOff Back-off restarting failed container
</code></pre>
| <pre><code>imagePullPolicy: IfNotPresent
</code></pre>
<p>The above was missing from the container spec in my deployment, and it is required here: with a <code>:latest</code> tag the pull policy defaults to <code>Always</code>, so Kubernetes keeps trying to pull the locally built image from a remote registry and fails with <code>ErrImagePull</code>/<code>ImagePullBackOff</code>.</p>
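<p>For completeness, a sketch of where the field goes in the deployment from the question (<code>Never</code> also works for images that only exist inside the Minikube VM):</p>
<pre><code>containers:
- name: nginx
  image: diyapploopback:latest
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 80
</code></pre>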
|
<p>I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.</p>
<p>Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using <code>kops validate cluster</code>, it fails with the following error:</p>
<blockquote>
<p>unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host</p>
</blockquote>
<p>I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.</p>
| <p>From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname <code>api.ucla.dt-api-k8s.com</code> is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).</p>
<p>A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer. </p>
|
<p>I was following this <a href="https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/#real-world-example-configuring-redis-using-a-configmap/" rel="nofollow noreferrer">tutorial</a> to set up a ConfigMap for a redis.conf. After I create the Redis deployment, I check that the redis.conf file is present in each of the pods, and it is there. The problem is that when I go into the redis-cli and check the configuration, the redis.conf values aren't used. The default values are being used, as if Redis did not start up with the redis.conf file. </p>
<p>redis.conf</p>
<pre><code>maxclients 2000
requirepass "test"
</code></pre>
<p>redis-config configmap</p>
<pre><code>{
"apiVersion": "v1",
"data": {
"redis-config": "maxclients 2000\nrequirepass \"test\"\n\n"
},
"kind": "ConfigMap",
"metadata": {
"creationTimestamp": "2018-03-07T15:28:19Z",
"name": "redis-config",
"namespace": "default",
"resourceVersion": "2569562",
"selfLink": "/api/v1/namespaces/default/configmaps/redis-config",
"uid": "29d250ea-221c-11e8-969f-06c0c8d545d2"
}
}
</code></pre>
<p>k8 redis manifest.json</p>
<pre><code>{
"kind" : "Deployment",
"apiVersion" : "extensions/v1beta1",
"metadata" : {
"name" : "redis-master",
"creationTimestamp" : null
},
"spec" : {
"replicas" : 2,
"template" : {
"metadata" : {
"creationTimestamp" : null,
"labels" : {
"app" : "redis",
"role" : "master",
"tier" : "backend"
}
},
"spec" : {
"hostNetwork" : true,
"nodeSelector" :{ "role": "cache"},
"containers" : [{
"name" : "master",
"image" : "redis",
"ports" : [{
"containerPort" : 6379,
"hostPort" : 6379,
"protocol" : "TCP"
}
],
"volumeMounts" : [{
"mountPath" : "/redis-master",
"name": "config"
}
],
"resources" : {},
"terminationMessagePath" : "/dev/termination-log",
"imagePullPolicy" : "IfNotPresent"
}],
"volumes" : [{
"name" : "config",
"configMap" : {
"name" : "redis-config",
"items": [{
"key": "redis-config",
"path": "redis.conf"
}]
}
}
],
"restartPolicy" : "Always",
"terminationGracePeriodSeconds" : 30,
"dnsPolicy" : "ClusterFirst",
"securityContext" : {}
}
}
},
"status" : {}
}
</code></pre>
<p>Now I know the tutorial uses a Pod kind, and I am using a Deployment kind, but I don't think that is the issue here.</p>
| <p>It looks like you are pulling the default redis container. If you check the redis Dockerfiles, for example <a href="https://github.com/docker-library/redis/blob/d53b982b387634092c6f11069401679034054ecb/4.0/alpine/Dockerfile" rel="nofollow noreferrer">https://github.com/docker-library/redis/blob/d53b982b387634092c6f11069401679034054ecb/4.0/alpine/Dockerfile</a>, at the bottom, they have:</p>
<pre><code>CMD ["redis-server"]
</code></pre>
<p>which will start redis with the default configuration.</p>
<p>Per redis documentation:
<a href="https://redis.io/topics/quickstart" rel="nofollow noreferrer">https://redis.io/topics/quickstart</a>
under "Starting Redis" section, if you want to provide a different configuration, you would need to start redis with:</p>
<pre><code>redis-server <config file>
</code></pre>
<p>Additionally, the example in the Kubernetes documentation uses a different redis container: </p>
<pre><code>image: kubernetes/redis
</code></pre>
<p>And from the Dockerfile: <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/image/Dockerfile" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/image/Dockerfile</a>, it seems like that one starts Redis with the provided configuration.</p>
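<p>In your own manifest, since the ConfigMap is mounted at <code>/redis-master</code> with the key projected as <code>redis.conf</code>, the most direct fix is to override the container command so Redis actually reads that file. An abbreviated sketch of the relevant part of your deployment:</p>
<pre><code>"containers" : [{
    "name" : "master",
    "image" : "redis",
    "command" : ["redis-server", "/redis-master/redis.conf"],
    ...
}]
</code></pre>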
|
<p>We have a couple of clusters running on GKE and up until now I've only been maintaining a CPU request/limit for pods. We've recently run into issues where the cluster autoscaling isn't responding when pods begin to be evicted for low memory, and we can visibly see in the GKE console that there is memory pressure on at least one of the nodes. </p>
<p>I was hoping someone could tell me: is there some sort of calculation that we can make as a starting point for how much memory we should request/limit per pod of each of our services, or is that more trial and error? Is there some statistics service that can track what's currently being used in the cluster?</p>
<p>Thanks!</p>
| <p>There is no magic trick for calculating limits. You need to start with reasonable limits and refine using trial and error.</p>
<p>I can suggest a video from YouTube that explains quite well a method to refine your limits: <a href="https://youtu.be/-lsJyni7EQA" rel="noreferrer">https://youtu.be/-lsJyni7EQA</a></p>
<p>Basically, it suggests starting with low limits and load testing your application (one pod instance) until it breaks.
Then raise the limits and load test again until you find good values.</p>
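<p>Whatever numbers you end up with, they go into the pod spec of each deployment like this (the values below are placeholders, not recommendations):</p>
<pre><code>resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
</code></pre>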
|
<p>I have a frontend application built with React and backend on nodejs.
Both have a separate Docker image and therefore a separate deployment on k8s (gce).</p>
<p>Each deployment has a corresponding k8s service, let's say <code>fe-serice</code> and <code>be-service</code>.</p>
<p>I am trying to setup an Ingress so that both services are exposed on a single domain in the following manner:</p>
<ul>
<li><code>/api/*</code> - are routed to <code>be-service</code></li>
<li>everything else is routed to <code>fe-service</code></li>
</ul>
<p>Here is my yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: my-host
http:
paths:
- path: /*
backend:
serviceName: fe-service
servicePort: 80
- path: /api/*
backend:
serviceName: be-service
servicePort: 5000
</code></pre>
<p>Here is what I get with curl:</p>
<p><code>curl [ip] --header "Host: my-host"</code> -> React app (as expected)</p>
<p><code>curl [ip]/foo --header "Host: my-host"</code> -> nginx 404 (why?)</p>
<p><code>curl [ip]/api --header "Host: my-host"</code> -> nginx 404 (why?)</p>
<p><code>curl [ip]/api/ --header "Host: my-host"</code> -> nodejs app</p>
<p><code>curl [ip]/api/foo --header "Host: my-host"</code> -> nodejs app</p>
<p>As far as I can see a part with <code>api/</code> works fine, but I can't figure out everything else, I tried different combinations with/without wildcards, but it still does not work in the way I want it to work.</p>
<p>What am I missing? Is this even possible?
Thanks in advance!</p>
| <p>I can't explain why <code>/foo</code> is not working, but the <code>/api</code> case is expected:</p>
<p><code>/api/*</code> does not cover <code>/api</code> itself; it only covers paths after <code>/api/</code>.</p>
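<p>A common workaround on the GCE ingress controller is to list the bare path alongside the wildcard (a sketch based on the manifest in the question; other ingress controllers may behave differently):</p>
<pre><code>      paths:
      - path: /api
        backend:
          serviceName: be-service
          servicePort: 5000
      - path: /api/*
        backend:
          serviceName: be-service
          servicePort: 5000
      - path: /*
        backend:
          serviceName: fe-service
          servicePort: 80
</code></pre>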
|
<p>I have an environment variable called <code>GOOGLE_MAPS_DIRECTIONS_API_KEY</code>, populated by a Kubernetes secret <code>YAML</code>: </p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: google-maps-directions-api-secret
type: Opaque
data:
GOOGLE_MAPS_DIRECTIONS_API_KEY: QUl...QbUpqTHNJ
</code></pre>
<p>The secret was created by copy-pasting the result of running <code>echo -n "AIz..." | base64</code> on my API key. I've provided the beginning and the end of the key in this code snippet, to show that there is no newline in the key included in the secret file.</p>
<p>Here is what I see when I run <code>cat google-maps-directions-api-key-secret.yaml | hexdump -C</code>:</p>
<pre><code>00000000 61 70 69 56 65 72 73 69 6f 6e 3a 20 76 31 0a 6b |apiVersion: v1.k|
00000010 69 6e 64 3a 20 53 65 63 72 65 74 0a 6d 65 74 61 |ind: Secret.meta|
00000020 64 61 74 61 3a 0a 20 20 6e 61 6d 65 3a 20 67 6f |data:. name: go|
00000030 6f 67 6c 65 2d 6d 61 70 73 2d 64 69 72 65 63 74 |ogle-maps-direct|
00000040 69 6f 6e 73 2d 61 70 69 2d 73 65 63 72 65 74 0a |ions-api-secret.|
00000050 74 79 70 65 3a 20 4f 70 61 71 75 65 0a 64 61 74 |type: Opaque.dat|
00000060 61 3a 0a 20 20 47 4f 4f 47 4c 45 5f 4d 41 50 53 |a:. GOOGLE_MAPS|
00000070 5f 44 49 52 45 43 54 49 4f 4e 53 5f 41 50 49 5f |_DIRECTIONS_API_|
00000080 4b 45 59 3a 20 51 55 6c 36 59 56 4e 35 51 7a 68 |KEY: QUl6YVN5Qzh|
...
000000b0 51 62 55 70 71 54 48 4e 4a |QbUpqTHNJ|
000000b9
</code></pre>
<p>But! When I step into a Node.JS interpreter inside of the pod, I see the following:</p>
<pre><code>> process.env.GOOGLE_MAPS_DIRECTIONS_API_KEY
'AIz...jLsI\n'
</code></pre>
<p>There is an auxiliary newline character appended to the end of the string!</p>
<p>This is, frankly, extremely frustrating. I have several questions on this subject.</p>
<ul>
<li>Can you spot my error? E.g. at what point in the secret propagation pipeline am I accidentally inserting that newline?</li>
<li>What Unix command should I use to print a newline character to console in such a way that it is interpreted literally (as a <code>\n</code>), so that I can <em>actually see it</em>?</li>
<li>Is it considered bad practice to inject code removing trailing newlines from environment variables into my container image? I know this is not technically correct, but this hurts like hell.</li>
</ul>
| <p>If you previously created the secret without the <code>-n</code> option to <code>echo</code>, verify the Secret persisted in the API (<code>kubectl get secret/google-maps-directions-api-secret -o yaml</code>) matches the secret in your yaml file, and also verify the consuming app has been redeployed since the secret was updated with the correct value</p>
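<p>One way to see a trailing newline directly (assuming GNU coreutils, where <code>cat -A</code> marks each line end with <code>$</code>):</p>
<pre><code>kubectl get secret google-maps-directions-api-secret \
  -o jsonpath='{.data.GOOGLE_MAPS_DIRECTIONS_API_KEY}' | base64 --decode | cat -A
</code></pre>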
|
<p>I have two docker images, <code>Mosquitto</code> & <code>user-info</code>. <code>user-info</code> is a container which performs some logic and then sends the result to <code>mosquitto</code>. Mosquitto then uses this information to send it to the IoT hub. Inside <code>user-info</code> I have configured <code>hostname=mosquitto</code> so that <code>user-info</code> sends all its data to mosquitto.</p>
<p>I started first by creating a pods with these 2 containers. So I wrote a yaml file with <code>kind: Pod</code> and everything went ok. As these container were inside the same pod, so they were easily able to communicate to each other and hence <code>user-info</code> was able to send data to <code>mosquitto</code>.</p>
<p>Now, going forward, I do not want to create bare pods and would like to go with <code>kind: Deployment</code>, but I am not sure how to handle multiple pods with deployments. If I create two deployment files, <code>mosquitto-deployment.yaml</code> & <code>user-info-deployment.yaml</code>, they will each create their own pods. So how can I make these pods communicate?</p>
<p>I read that pods can communicate using a Service, but I am having a hard time with services. If I create a service for mosquitto, do I need to create a service for <code>user-info</code>, or can it communicate with the mosquitto service directly? Also, is it possible to use a single deployment.yaml file for creating all the pods rather than using 2-3 deployment.yaml files?</p>
| <p>A quote from the book "Up and running with Kubernetes" should give you a hint;</p>
<blockquote>
<p>In general, the right question to ask yourself when designing Pods is,
“Will these containers work correctly if they land on different
machines?” If the answer is “no,” a Pod is the correct grouping for
the containers. If the answer is “yes,” multiple Pods is probably the
correct solution.</p>
</blockquote>
<p>From what I've read about your project, those two containers would be more of a fit for a single pod - and that way they can communicate through localhost.</p>
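<p>A rough sketch of what that looks like as a single Deployment (image names and ports are placeholders based on your description); both containers share the pod's network namespace, so <code>user-info</code> can reach mosquitto on <code>localhost:1883</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mqtt-stack
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt-stack
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto      # placeholder image
        ports:
        - containerPort: 1883
      - name: user-info
        image: user-info:latest       # placeholder image
</code></pre>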
<p>In general, you will never need a Kubernetes manifest of kind Pod on its own; that's just for experimental purposes.</p>
<p>For a typical app you will need a Deployment and a Service object.</p>
<p>Tom</p>
|
<p>I'm setting up my Jenkins server, and on simple requests in the web interface, like creating a folder, a pipeline, a job, etc., I periodically get the following error:</p>
<pre><code>HTTP ERROR 403
Problem accessing /job/Mgmt/createItem. Reason:
No valid crumb was included in the request
</code></pre>
<p>The server is using the Jenkins/Jenkins container, orchestrated by Kubernetes on a cluster on AWS created with kops. It sits behind a classic ELB.</p>
<p>Why might I be experiencing this? I thought the crumb was to combat certain CSRF requests, but all I'm doing is using the Jenkins web interface.</p>
| <p>Enabling proxy compatibility may help to solve this issue.
Go to Settings -> Security -> <strong>Enable proxy compatibility</strong> in CSRF Protection section</p>
<blockquote>
<p>Some HTTP proxies filter out information that the default crumb issuer uses to calculate the nonce value. If an HTTP proxy sits between your browser client and your Jenkins server and you receive a 403 response when submitting a form to Jenkins, checking this option may help. Using this option makes the nonce value easier to forge.</p>
</blockquote>
|
<p>I have a PHP daemon script downloading remote images and storing them local temporary before uploading to object storage.</p>
<p>PHP internal memory usage remains stable but the memory usage reported by Docker/Kubernetes keeps increasing.</p>
<p>I'm not sure if this is related to PHP, Docker or expected Linux behavior.</p>
<p>Example to reproduce the issue:</p>
<p>Docker image: <code>php:7.2.2-apache</code></p>
<pre><code><?php
for ($i = 0; $i < 100000; $i++) {
$fp = fopen('/tmp/' . $i, 'w+');
fclose($fp);
unlink('/tmp/' . $i);
unset($fp);
}
</code></pre>
<p>Calling <code>free -m</code> inside container before executing the above script:</p>
<pre><code> total used free shared buff/cache available
Mem: 3929 2276 139 38 1513 1311
Swap: 1023 167 856
</code></pre>
<p>And after executing the script:</p>
<pre><code> total used free shared buff/cache available
Mem: 3929 2277 155 38 1496 1310
Swap: 1023 167 856
</code></pre>
<p>Apparently the memory is released, but calling <code>docker stats php-apache</code> from the host indicates otherwise:</p>
<pre><code>CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ccc19719078f php-apache 0.00% 222.1MiB / 3.837GiB 5.65% 1.21kB / 0B 1.02MB / 4.1kB 7
</code></pre>
<p>The initial memory usage reported by <code>docker stats php-apache</code> was 16.04MiB.</p>
<p>What is the explanation? How do I free the memory?</p>
<p>Having this container running in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.</p>
| <p>Yes, a similar issue has been reported <a href="https://github.com/moby/moby/issues/10824#issuecomment-347499296" rel="noreferrer">here</a>.</p>
<p><a href="https://github.com/moby/moby/issues/10824#issuecomment-347725280" rel="noreferrer">Here's the answer</a> of coolljt0725, one of the contributors, answering why a <code>RES</code> column in <code>top</code> output shows something different, than <code>docker stats</code> (I'm just gonna quote him as is):</p>
<blockquote>
<p>If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top</p>
</blockquote>
<p>What a user suggested <a href="https://github.com/moby/moby/issues/10824#issuecomment-84771847" rel="noreferrer">here</a> might actually help you to see the real memory consumption:</p>
<blockquote>
<p>Try set the param of <code>docker run --memory</code>,then check your
<code>/sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes</code>
It should be right.</p>
</blockquote>
<p><code>--memory</code> or <code>-m</code> is described <a href="https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources" rel="noreferrer">here</a>:</p>
<blockquote>
<p><code>-m</code>, <code>--memory=""</code> - Memory limit (format: <code><number>[<unit>]</code>). Number is a positive integer. Unit can be one of <code>b</code>, <code>k</code>, <code>m</code>, or <code>g</code>. Minimum is <code>4M</code>.</p>
</blockquote>
<p>And now, how to avoid the unnecessary memory consumption. Just as you posted, unlinking a file in PHP does not necessarily drop the memory cache immediately. Instead, when running the Docker container in privileged mode (with the <code>--privileged</code> flag), it is possible to call <code>echo 3 > /proc/sys/vm/drop_caches</code> or <code>sync && sysctl -w vm.drop_caches=3</code> periodically to clear the memory pagecache.</p>
<p>And as a bonus, using <code>fopen('php://temp', 'w+')</code> and storing the file temporarily in memory avoids the entire issue.</p>
|