<p>I'm a fairly novice GCP user and would like to ask a question:</p>
<p>I have two private clusters in the same region with internal LB (all in one VPC), currently pods from both clusters are able to communicate with each other over HTTP.</p>
<p>As far as I understand from the documentation - internal LB is a regional product, therefore if the private clusters were located in different regions the above scenario wouldn't be possible.</p>
<p>What do I need to do in order to make pods of two private clusters which are located on different regions to be able to communicate with each other?</p>
<p>My guess is that I have to define external LB for both of those clusters and using firewall rules allow communication only cluster to cluster via external IP and block all communication from the outside world.</p>
| <p>Since these are different <a href="https://cloud.google.com/vpc/docs/vpc#ip-ranges" rel="nofollow noreferrer">IP ranges</a> (at least in auto mode), it may not help that the VPC is global. If that is the case, you'd have to add a <a href="https://cloud.google.com/vpn/docs/how-to/creating-route-based-vpns" rel="nofollow noreferrer">VPN tunnel</a> in order to route these network segments. Also consider adding two tunnels: one for ingress and one for egress traffic.</p>
<p>An alternative to VPN tunnels might be <a href="https://cloud.google.com/vpc/docs/vpc-peering" rel="nofollow noreferrer">VPC Network Peering</a>, where the main difference is:</p>
<blockquote>
<p>Peered VPC networks remain administratively separate. Routes, firewalls, VPNs, and other traffic management tools are administered and applied separately in each of the VPC networks.</p>
</blockquote>
|
<p>I found in my pod that if a container fails or is killed due to failing liveness probe, the container is restarted, but the pod is not.</p>
<p>This means that <code>initContainers</code> are not run again in a case of a crashed container.</p>
<p>In my case, I do need to run one of the <code>initContainers</code> every time the main container fails.</p>
<p>Is this possible? Am I missing something?</p>
| <p>Currently, this is simply not supported: <code>livenessProbe</code> is a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">"container level" probe</a>, and if this probe fails only that container is restarted, not the whole Pod.</p>
<p>Though, you'll be glad to know that this behaviour is currently being worked on in this PR: <a href="https://github.com/kubernetes/community/pull/2342" rel="nofollow noreferrer">https://github.com/kubernetes/community/pull/2342</a>.</p>
<p>As a workaround until that lands and you eventually upgrade, you'd have to rethink why you really need your <code>initContainers</code> in the first place, and consider a different coordination between your pod containers (be they initContainers or not), for example through a shared volume, depending on your use case.</p>
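<p>As one hedged illustration of the shared-volume idea (a sketch only; the images, script path and start command below are placeholders, not taken from your setup), the init logic can be handed over via an <code>emptyDir</code> so the main container re-runs it on every restart:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
  - name: shared
    emptyDir: {}
  initContainers:
  - name: init
    image: my-init:latest        # placeholder: your existing init image
    # stage the init script into the shared volume
    command: ["sh", "-c", "cp /scripts/init.sh /shared/init.sh"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: app
    image: my-app:latest         # placeholder image
    # re-run the init logic on every (re)start of the main container, then start the app
    command: ["sh", "-c", "/shared/init.sh && exec /app/server"]
    volumeMounts:
    - name: shared
      mountPath: /shared
</code></pre>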
|
<p>Service showing pending status after exposing the deployment.</p>
<pre><code>packet@ubuntu:/home/gss$ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          22h
wms1         LoadBalancer   10.106.19.103   <pending>     8000:32461/TCP   17h
</code></pre>
<p>Installed kubeadm with one master and 4 worker nodes. Created the deployment with the command:</p>
<pre><code>sudo docker run -p 8000:8000 w1
</code></pre>
<p>here w1 is my image name.</p>
<p>created service with the command:</p>
<pre><code>kubectl expose deployment wms1 --type=LoadBalancer --port=8000
</code></pre>
| <p>To get an external IP for your application, your Kubernetes cluster has to run on a cloud provider that offers external load balancers, such as Google Kubernetes Engine or Amazon Web Services. On a bare-metal kubeadm cluster there is no such provider, which is why the <code>LoadBalancer</code> service stays in the <code><pending></code> state.</p>
<p>Please check:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers</a></p>
<p>Otherwise you can use type <code>NodePort</code>; in that case the Kubernetes master allocates a port from the range specified by the <code>--service-node-port-range</code> flag (default: 30000-32767), and each node proxies that port (the same port number on every node) into your Service.</p>
<p>For detailed information: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
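<p>For example, you could recreate the service from the question as a NodePort (a quick sketch; the node port Kubernetes allocates will differ in your cluster):</p>
<pre><code>kubectl delete service wms1
kubectl expose deployment wms1 --type=NodePort --port=8000
kubectl get service wms1   # note the allocated port in the 30000-32767 range
</code></pre>
<p>The application is then reachable at <code>http://<any-node-ip>:<node-port></code>.</p>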
|
<p>I configured a Kubernetes cluster with one master and 4 worker nodes using the kubeadm tool locally. All nodes are running fine. I deployed an app and am able to access it from the browser. I have tried many ways to create a dashboard using kubectl, but I have failed.</p>
<p><strong>TRY1:</strong> tried directly with the below command:</p>
<pre><code>$ sudo kubectl proxy --address="172.20.22.101" -p 8001
</code></pre>
<p>tried to access the dashboard using the url <a href="http://172.20.22.101:8001/api/v1" rel="nofollow noreferrer">http://172.20.22.101:8001/api/v1</a>, but it is saying unauthorized.</p>
<p><strong>TRY2:</strong> created dashboard-admin.yaml file with the below content:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
</code></pre>
<p>And run the below command:</p>
<pre><code>$ kubectl create -f dashboard-admin.yaml
</code></pre>
<p>It's shown me: <code>clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created.</code></p>
<p>Running the below command: </p>
<pre><code>$ sudo kubectl proxy --address="172.20.22.101" -p 443
</code></pre>
<p>It's running fine. I am accessing the <a href="http://172.20.22.101:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://172.20.22.101:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a> URL from the browser, and it's showing the same unauthorized error.</p>
| <p>Run the <code>kubectl proxy</code> command with the <strong>--accept-hosts</strong> option:</p>
<pre><code> kubectl proxy --address="172.20.22.101" -p 8001 --accept-hosts="^*$"
</code></pre>
<p>and it will work fine.</p>
<p><strong>Note: this is not recommended for production grade kubernetes clusters, since you're accessing the dashboard through plain http.</strong></p>
<p>A more secure alternative is to access the dashboard through an SSH tunnel, like this:</p>
<p>In one terminal run:</p>
<pre><code>kubectl proxy
</code></pre>
<p>In another terminal run an SSH tunnel to localhost:8001 (the default <code>kubectl proxy</code> port):</p>
<pre><code>ssh -NT -l SSH_USER -p SSH_PORT K8S_CONTROLLER_IP_ADDR -L 8001:localhost:8001
</code></pre>
|
<p>Kubernetes provides a <code>activeDeadlineSeconds</code> field for both <code>JobSpec</code> and <code>PodSpec</code></p>
<p>What is the difference between the two? I've put together a little job with <code>activeDeadlineSeconds</code> set to 20, and in its Pod definition I have set the <code>activeDeadlineSeconds</code> field to 45. These are kind of arbitrary but meant to be spaced out. When I create/apply the Job then run <code>kubectl get pods -a --watch</code>, I can see that the 20 deadline isn't having any effect but the second one is (I see the <code>DeadlineExceeded</code> output).</p>
<p>Just to be extra certain, I added <code>terminationGracePeriodSeconds: 10</code> in the PodSpec and see the same thing.</p>
<p>What is the purpose of the <code>activeDeadlineSeconds</code> in the Job? It doesn't seem to be sending any signal to my container.</p>
<p>Note: I'm simply running the <code>sleep</code> command on an <code>ubuntu</code> image. This command should exit when receiving the TERM signal sent by Kubernetes (so I expect a TERM signal at 20 seconds then the pod to die shortly thereafter)</p>
<p>Condensed YAML definition:</p>
<pre><code>apiVersion: batch/v2alpha1 # K8s 1.7.x
kind: CronJob
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow
  jobTemplate:
    spec: # JobSpec
      activeDeadlineSeconds: 20 # This needs to be shorter than the cron interval ## TODO - NOT WORKING!
      parallelism: 1
      template: # PodTemplateSpec
        spec:
          activeDeadlineSeconds: 45
          terminationGracePeriodSeconds: 10
          containers:
          - name: ubuntu-container
            image: ubuntu
            command: ['bash', '-c', 'sleep 500000']
</code></pre>
<p>References: </p>
<ul>
<li><a href="https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch" rel="noreferrer">https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch</a></li>
<li><a href="https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#podspec-v1-core" rel="noreferrer">https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#podspec-v1-core</a></li>
<li><a href="https://unix.stackexchange.com/questions/429594/why-sigterm-is-able-to-kill-sleep">https://unix.stackexchange.com/questions/429594/why-sigterm-is-able-to-kill-sleep</a> </li>
</ul>
| <p>Community wiki answer for future reference:</p>
<p>As per @Clorichel this issue was fixed in k8s v1.8
<a href="https://github.com/kubernetes/kubernetes/issues/32149" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/32149</a></p>
<p>My advice is to upgrade your cluster to the latest version, if possible, to get access to the newest features and bug fixes.</p>
|
<p>I was trying to use io.fabric8 api to create a few resources in kubernetes using a pod-spec.yaml.</p>
<pre><code>Config config = new ConfigBuilder()
        .withNamespace("ag")
        .withMasterUrl(K8_URL)
        .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    LOGGER.info("Master: " + client.getMasterUrl());
    LOGGER.info("Loading File : " + args[0]);
    Pod pod = client.pods().load(new FileInputStream(args[0])).get();
    LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}
</code></pre>
</code></pre>
<p>The above code works if the resource type is Pod, and it works fine for other resource types as well. But if the YAML has multiple resource types, like a Pod and a Service in the same file, how can I use the fabric8 API?</p>
<p>I was trying to use <code>client.load(new FileInputStream(args[0])).createOrReplace();</code> but it is crashing with the below exception:</p>
<pre><code>java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3042)
at java.net.URI.<init>(URI.java:588)
at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
</code></pre>
<p>Yaml file used</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  generateName: zep-ag-pod
  annotations:
    kubernetes.io/psp: restricted
    spark-app-name: Zeppelin-spark-shared-process
  namespace: ag
  labels:
    app: zeppelin
    int-app-selector: shell-123
spec:
  containers:
  - name: ag-csf-zep
    image: bcmt-registry:5000/zep-spark2.2:9
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args: ["-c","echo Hi && sleep 60 && echo Done"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
  securityContext:
    fsGroup: 2000
    runAsUser: 1510
  serviceAccount: csfzeppelin
  serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
  name: zeppelin-service
  namespace: ag
  labels:
    app: zeppelin
spec:
  type: NodePort
  ports:
  - name: zeppelin-service
    port: 30099
    protocol: TCP
    targetPort: 8080
  selector:
    app: zeppelin
</code></pre>
| <p>You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:</p>
<pre><code>// Load the YAML (which may contain multiple documents) into a list of Kubernetes resources
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply all of the resources; "namespace" is the target namespace string
client.resourceList(result).inNamespace(namespace).createOrReplace();
</code></pre>
|
<p>I cannot find a way to remove GPU (accelerator resource) from Google Kubernetes Engine (GKE) cluster. There is no official documentation on how to make change to it. Can you suggest a proper way to do so? The UI is gray out and it cannot allow me to make change from the console. </p>
<p>Here is the screenshot when I click to edit cluster.
<a href="https://i.stack.imgur.com/BlxR5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BlxR5.png" alt="screenshot of GKE edit node"></a></p>
<p>Thank you</p>
| <p>You cannot edit settings of a Node Pool once it is created.</p>
<p>You should create a new node pool with the settings you want (GPU, machine type etc) and delete the old node pool.</p>
<p>There's a tutorial on how to migrate to a new node pool smoothly here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool</a> If you don't care about pods terminating gracefully, you can create a new pool and just delete the old one.</p>
<p>You can find more content about this at <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a>.</p>
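<p>As a rough sketch of the <code>gcloud</code> commands involved (the cluster name, pool names and machine type below are placeholders; you may also need to pass your zone or region):</p>
<pre><code># create a replacement pool without the accelerator
gcloud container node-pools create new-pool --cluster my-cluster --machine-type n1-standard-2

# once the workloads have been drained/migrated, remove the old GPU pool
gcloud container node-pools delete gpu-pool --cluster my-cluster
</code></pre>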
|
<p>I think this is just a quick sanity check; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.</p>
<p>My <code>main.tf</code> calls just two modules, <code>gke</code> for the Google Kubernetes Engine and <code>storage</code>, which creates a persistent volume on the cluster created previously.</p>
<p>Module <code>gke</code> has an <code>outputs.tf</code> which outputs the following:</p>
<pre><code>output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
</code></pre>
<p>Then in the <code>main.tf</code> for the storage module, I have:</p>
<pre><code>client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
</code></pre>
<p>Then in the root <code>main.tf</code> I have the following:</p>
<pre><code>client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
</code></pre>
<p>From what I see, it looks right. The values for the certs, key and host variables should be output from the <code>gke</code> module by <code>outputs.tf</code>, picked up by the root <code>main.tf</code>, and then delivered to <code>storage</code> as regular variables.</p>
<p>Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.</p>
<p>When I run a plan, I get prompted for the variables that are not being filled.</p>
<p>EDIT:</p>
<p>Adding some additional information including my code.</p>
<p>If I manually add dummy entries for the variables it's asking for I get the following error:</p>
<pre><code>Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
</code></pre>
<p>It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But it shouldn't: project isn't a valid attribute for that resource. It is for the provider, and it's already filled out for the provider.</p>
<p>Code below:</p>
<p>Folder structure:</p>
<pre><code>root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
</code></pre>
<p>root-folder/gke/main.tf:</p>
<pre><code>provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
</code></pre>
<p>root-folder/gke/outputs.tf:</p>
<pre><code>output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
</code></pre>
<p>root-folder/gke/variables.tf:</p>
<pre><code>variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
</code></pre>
<p>/root-folder/storage/main.tf:</p>
<pre><code>provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
</code></pre>
<p>/root-folder/storage/variables.tf:</p>
<pre><code>variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
</code></pre>
<p>/root-folder/main.tf:</p>
<pre><code>module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
</code></pre>
<p>/root-folder/variables.tf:</p>
<pre><code>variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
</code></pre>
<p>I won't paste the contents of my <code>staging.json</code> and <code>terraform.tfvars</code> for obvious reasons :)</p>
| <p>In your <code>/root-folder/variables.tf</code>, delete the following entries:</p>
<pre><code>variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
</code></pre>
<p>Those are not variables that the Terraform code at the root level needs. Instead, they are being passed directly as one module's output --> the second module's input.</p>
|
<p>I'm trying to create a read only user. I want the user to be able to list nodes and pods and view the dashboard. I got the certs created and can connect but I'm getting the following error.</p>
<pre><code>$ kubectl --context minikube-ro get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "erst-operation" cannot list pods at the cluster scope
</code></pre>
<p>My cluster role...</p>
<pre><code>$ cat helm/namespace-core/templates/pod-reader-cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
</code></pre>
<p>My cluster role binding...</p>
<pre><code>$ cat helm/namespace-core/templates/pod-reader-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: erst-operation
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>I'm aware the above shouldn't grant permissions to see the dashboard but how do I get it to just list the pods?</p>
| <p>Your cluster role should include the core API group (the empty string <code>""</code>), as the <code>pods</code> resource is in the core group, not in <code>extensions</code> or <code>apps</code>.</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
</code></pre>
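<p>Once the corrected role is applied, you can verify the permissions with a quick check using impersonation (the <code>--as</code> flag requires that your own credentials are allowed to impersonate users):</p>
<pre><code>kubectl auth can-i list pods --all-namespaces --as=erst-operation
</code></pre>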
|
<p>I installed <code>minikube</code> and started <code>Jenkins</code> inside a <code>pod</code>. I am able to create a new job and execute it inside a dynamically created <code>maven container</code>. However, I have a folder on my <code>host</code> machine, which is <code>Mac</code> based, and I need that folder inside this dynamically created <code>pod</code> when the job is started. How can I achieve that?</p>
| <p>Option 1.</p>
<p><a href="https://kubernetes.io/docs/setup/minikube/#interacting-with-your-cluster" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/minikube/#interacting-with-your-cluster</a></p>
<p>configure kubectl on your MAC, then use <code>kubectl cp <hostdir> <podname>:<dir></code></p>
<p>Option 2.</p>
<p>Use the <code>hostPath</code> volume option for the pod, as in this post:</p>
<p><a href="https://stackoverflow.com/questions/48534980/mount-local-directory-into-pod-in-minikube">Mount local directory into pod in minikube</a></p>
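<p>A hedged example of option 1, with placeholder paths and names (adjust them to your Mac folder and to the dynamically created pod; note that <code>kubectl cp</code> needs <code>tar</code> inside the target container):</p>
<pre><code>kubectl cp /Users/me/shared-folder <maven-pod-name>:/workspace -c <container-name>
</code></pre>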
|
<p>I'm not sure what the difference is between the CNI plugin and the Kube-proxy in Kubernetes. From what I get out of the documentation I conclude the following:</p>
<p>Kube-proxy is responsible for communicating with the master node and routing.</p>
<p>CNI provides connectivity by assigning IP addresses to pods and services, and reachability through its routing deamon. </p>
<p>the routing seems to be an overlapping function between the two, is that true? </p>
<p>Kind regards,
Charles</p>
| <p><strong>OVERLAY NETWORK</strong></p>
<p>Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).</p>
<p>All other Kubernetes networking stuff relies on the overlay networking working correctly.</p>
<p>There are a lot of overlay network backends (calico, flannel, weave) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities:</p>
<ol>
<li>Make sure your pods can send network requests outside your cluster</li>
<li>Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.</li>
</ol>
<p><strong>KUBE-PROXY</strong></p>
<p>Just to understand kube-proxy, here's how Kubernetes services work. A service is a collection of pods, each of which has its own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6).</p>
<ol>
<li>Every Kubernetes service gets an IP address (like 10.23.1.2)</li>
<li>kube-dns resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)</li>
<li>kube-proxy sets up iptables rules in order to do random load balancing between them.</li>
</ol>
<p>So when you make a request to my-svc.my-namespace.svc.cluster.local, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.</p>
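<p>If you want to see those generated rules on a node, something along these lines works (the chain names vary by kube-proxy mode and version):</p>
<pre><code>sudo iptables-save | grep KUBE-SERVICES | head
</code></pre>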
<p>In short, the <code>overlay network</code> defines the underlying network used for communication between the various components of Kubernetes, while <code>kube-proxy</code> is the tool that generates the iptables magic that lets you connect to any pod (using services) in Kubernetes, no matter on which node that pod exists.</p>
<p>Parts of this answer were taken from this blog:</p>
<blockquote>
<p><a href="https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/" rel="noreferrer">https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/</a></p>
</blockquote>
<p>Hope this gives you a brief idea about Kubernetes networking.</p>
|
<p>I'm trying to get used to Helm, and I'm having a problem while using helm upgrade. The question is: is it possible to have zero downtime?</p>
<p>The thing I noticed is that Helm/K8s removes the old pod before the new one is ready/live. Is it possible to remove the old one only when the new one is up and running? It seems a very logical case, but I can't figure out how to do it.</p>
<p>NB: right now I'm forcing pod recreation with <code>helm upgrade --recreate-pods notes notes/</code>; could this maybe be the cause?</p>
| <p>Your applications should have <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">liveness and readiness probes</a>; that way Kubernetes shuts down the old pods only once the readiness probe on the new ones has passed. Also take a look at the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="noreferrer">Deployment Strategy</a>.</p>
<p>Liveness and readiness probes are a must when architecting applications for Kubernetes, as described in <a href="https://www.digitalocean.com/community/tutorials/architecting-applications-for-kubernetes#implementing-readiness-and-liveness-probes" rel="noreferrer">this article</a>.</p>
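<p>A minimal sketch of what that looks like in a Deployment (the probe path, port and surge/unavailable numbers below are illustrative assumptions, not taken from your chart):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: notes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: notes
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old pod before a new one is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: notes
    spec:
      containers:
      - name: notes
        image: notes:latest        # placeholder image
        readinessProbe:
          httpGet:
            path: /healthz         # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
</code></pre>
<p>With <code>maxUnavailable: 0</code>, an old pod is only removed after a new pod has passed its readiness probe, which is what gives you the zero-downtime rollout. Also note that <code>--recreate-pods</code> deletes pods directly, which works against this rolling behaviour.</p>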
|
<p>In <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">kube-state-metrics</a> there is a metric for pods - <code>kube_pod_status_ready</code> that has 3 attributes</p>
<pre><code>pod=<pod-name>
namespace=<pod-namespace>
condition=<true|false|unknown>
</code></pre>
<p>What does the <code>condition</code> attribute stand for? I can't find its definition anywhere in the docs. I can guess what it means, but it would be great to get a definition or explanation of how it's calculated.</p>
| <p>That's documented in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#podcondition-v1-core" rel="nofollow noreferrer">API reference</a>. In essence it's the condition in the <code>status</code> field for <code>"type": "Ready"</code> for a given pod. For example in the following output:</p>
<pre><code>$ kubectl get pod <your-pod> -o=json | jq .status.conditions
[
...
{
"lastProbeTime": null,
"lastTransitionTime": "2018-11-20T22:45:27Z",
"status": "True",
"type": "Ready"
},
...
]
</code></pre>
<p>In this case, the sub-field <code>"status": "True"</code> represents <code>condition=true</code> in your metrics. Alternatively, <code>"status": "False"</code> would represent <code>condition=false</code> and <code>"status": "Unknown"</code> would represent <code>condition=unknown</code>.</p>
|
<p>Basically, I am trying to deploy a docker image on K8s in a concourse pipeline. I want to use this <code>resource</code> to deploy.
<a href="https://github.com/jcderr/concourse-kubernetes-resource#installing" rel="nofollow noreferrer">https://github.com/jcderr/concourse-kubernetes-resource#installing</a></p>
<p>However, I couldn't exactly figure out the values of</p>
<pre><code>cluster_ca: _base64 encoded CA pem_
admin_key: _base64 encoded key pem_
admin_cert: _base64 encoded certificate_
</code></pre>
<p>For finding the <code>cluster_ca</code>, I tried to execute a command like the following:</p>
<pre><code>kubectl config view --raw -o json | jq -r '.clusters[0].cluster."certificate-authority-data"' | tr -d '"' | base64 --decode
</code></pre>
<p>And for the <code>admin_ca</code>, I logged into one of the containers in the cluster and <code>cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code></p>
<p>I am not sure if these two values are correct. Also, I am not sure what <code>admin_key</code> is. </p>
<p>Could someone help me to figure this out?</p>
| <p>You can use the following three commands to identify the <code>cluster_ca</code>, <code>admin_cert</code> and <code>admin_key</code>, assuming the current context is set to kubernetes-admin:</p>
<pre><code>[root@ip-10-0-1-13]# kubectl config current-context
kubernetes-admin@kubernetes
</code></pre>
<p>Command for cluster_ca (Output will be Encoded in base64) </p>
<pre><code>kubectl config view --minify --raw -o json | jq -r '.clusters[].cluster."certificate-authority-data"'
</code></pre>
<p>Command for admin_cert (Output will be Encoded in base64)</p>
<pre><code>kubectl config view --minify --raw -o json | jq -r '.users[].user."client-certificate-data"'
</code></pre>
<p>Command for admin_key (Output will be Encoded in base64)</p>
<pre><code>kubectl config view --minify --raw -o json | jq -r '.users[].user."client-key-data"'
</code></pre>
|
<p>I am using the NGINX Ingress Controller in a Kubernetes cluster and need to hide the Nginx version information from client responses. Since the Nginx configuration file is generated dynamically, what is the best way to include the line below in the nginx.conf file?</p>
<pre><code>server_tokens off
</code></pre>
<p>Thanks
SR</p>
| <p>If you look at the <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens" rel="noreferrer">configs</a> you'll see that <code>server_tokens</code> can be set in the <code>http</code>, <code>server</code>, or <code>location</code> contexts in your <code>nginx.conf</code>. So, on the nginx ingress controller, it really depends on where you want to add that setting (and how):</p>
<ul>
<li><p>The http context applies to all configs in the ingress controller, so you'd have to change the nginx ingress controller ConfigMap using the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#http-snippet" rel="noreferrer">http snippet</a> option (a sketch of this follows below the list).</p></li>
<li><p>server context can be done either through the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#server-snippet" rel="noreferrer">server-snippet</a> ConfigMap option or the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#server-snippet" rel="noreferrer">server-snippet annotation</a> on a per Ingress basis.</p></li>
<li><p>location context can be done either through the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#location-snippet" rel="noreferrer">location snippet</a> ConfigMap option or the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet" rel="noreferrer">configuration snippet</a> on a per Ingress basis.</p></li>
</ul>
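<p>For instance, a hedged ConfigMap sketch for the http-context route (the ConfigMap name and namespace depend on how your ingress controller was installed, so treat them as placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # placeholder: use your controller's ConfigMap name
  namespace: ingress-nginx    # placeholder namespace
data:
  http-snippet: |
    server_tokens off;
</code></pre>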
|
<p>I've had the opportunity to install k8s clusters on CentOS VMs. In most cases, I used flanneld as the overlay. In some other cases, though, I noticed flannel pods in the kube-system namespace. IMHO, we need not have both flanneld and flannel pods for the underlying CNI to function properly with Kubernetes.</p>
<p>I have read plenty of documentation on how the flannel overlay fits into the Kubernetes ecosystem. However, I haven't found the answers to some questions. Hope somebody can provide pointers.</p>
<ol>
<li>What is the basis for choosing flanneld or flannel pod?</li>
<li>Are there any differences in functionality between flanneld and flannel pod?</li>
<li>How does the flannel pod provide CNI functionality? My understanding is the pod populates etcd with IP address k/v pairs but how is this info really used?</li>
<li>Do most CNI plugins have a choice between running as daemon or pod?</li>
</ol>
| <p>You are right, you don't need both of them because they do the same job. There is no difference in functionality between them, only in where the daemon runs: in an isolated container (the flannel pod) or on the host as a regular daemon (flanneld). All CNI plugins are based on the CNI library and route the traffic. Flannel uses etcd as its key-value storage: if you have etcd inside the Kubernetes cluster it will use that, and if etcd is external it will use the external one. The choice is simply whatever you prefer; for example, if you are running external etcd, people usually run flannel as a daemon on the host.</p>
|
<p>I have read about the Appdynamics in Kubernetes but I got confused.</p>
<p>The scenario is: I have EC2 instances on which Kubernetes is running, with pods, and under one pod multiple containers are running.</p>
<p>Where do I have to install the machine agent? On EC2 or in a DaemonSet?</p>
<p>And where do I have to install the app agent? Do I have to add the app agent to each container's Dockerfile?</p>
<p>And lastly, what would be my hostName and uniqueHostId?</p>
| <p>As stated on the AppD docs regarding <a href="https://docs.appdynamics.com/display/CLOUD/Kubernetes+and+AppDynamics+APM" rel="nofollow noreferrer">Kubernetes and AppDynamics APM</a></p>
<p><a href="https://i.stack.imgur.com/qieEb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qieEb.jpg" alt="enter image description here" /></a></p>
<blockquote>
<p>Install a Standalone Machine Agent (1) in a Kubernetes node.</p>
<p>Install an APM Agent (2) inside each container in a pod you want to monitor.</p>
<p>The Standalone Machine Agent then collects hardware metrics for each monitored container, as well as Machine and Server metrics for the host (3), and forwards the metrics to the Controller.</p>
</blockquote>
<p>ContainerID and UniqueHostID can be taken from <code>/proc/self/cgroup</code></p>
<blockquote>
<p>ContainerID <code>cat /proc/self/cgroup | awk -F '/' '{print $NF}' | head -n 1</code></p>
<p>UniqueHostID <code>sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p' /proc/self/cgroup</code></p>
</blockquote>
|
<p>I am trying to setup AlertManager for my Kubernetes cluster. I have followed this document (<a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md</a>) -> Everything Ok. </p>
<p>For setting AlertManager, I am studying this document (<a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md</a>)</p>
<p>I am getting the <code>CrashLoopBackOff</code> for <code>alertmanager-example-0</code>. Please check the log attached:</p>
<p>1st image : <code>$ kubectl logs -f prometheus-operator-88fcf6d95-zctgw -n monitoring</code></p>
<p>2nd image : <code>$ kubectl describe pod alertmanager-example-0</code></p>
<p><a href="https://i.stack.imgur.com/YS5rZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YS5rZ.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/8njiB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8njiB.png" alt="enter image description here"></a></p>
<p>Can anyone point out what am I doing wrong? Thanks in advance.</p>
| <p>Sounds like you have an <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> issue, where the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="nofollow noreferrer">Service Account</a> (<code>system:serviceaccount:monitoring:prometheus-operator</code>) used by your Alertmanager pods doesn't have enough permissions to talk to the kube-apiserver.</p>
<p>In your case, the Prometheus Operator has a ClusterRoleBinding <code>prometheus-operator</code> that looks like this:</p>
<pre><code>$ kubectl get clusterrolebinding prometheus-operator -o=yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: prometheus-operator
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: monitoring
</code></pre>
<p>More importantly, the <code>ClusterRole</code> should look something like this:</p>
<pre><code>$ kubectl get clusterrole prometheus-operator -o=yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: prometheus-operator
  name: prometheus-operator
rules:
- apiGroups:
  - extensions
  resources:
  - thirdpartyresources
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanager
  - alertmanagers
  - prometheus
  - prometheuses
  - service-monitor
  - servicemonitors
  - prometheusrules
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - delete
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - create
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - list
  - watch
</code></pre>
|
<p>I'm hosting my <a href="https://hub.docker.com/r/prismagraphql/prisma/" rel="nofollow noreferrer">Prisma docker container</a> on a Kubernetes cluster that requires a health check path to determine whether the container is alive. Since I don't have any control over the endpoints in this container, how would I add a health check route that Kubernetes could hit?</p>
| <p>Add the probes below to your Prisma container manifest. If you deploy Prisma with a <code>deployment</code>, run:</p>
<pre><code>$ kubectl edit deployment <prisma_deployment_name> --namespace <namespace>
</code></pre>
<p>And put the following probe spec in the prisma container spec.</p>
<hr>
<pre><code>livenessProbe:
  httpGet:
    path: /
    port: 4466
  # Number of seconds after the container has started before probes are initiated.
  initialDelaySeconds: 120
  # How often (in seconds) to perform the probe.
  periodSeconds: 10
  # Number of seconds after which the probe times out.
  timeoutSeconds: 60
readinessProbe:
  httpGet:
    path: /
    port: 4466
  # Number of seconds after the container has started before probes are initiated.
  initialDelaySeconds: 120
  # How often (in seconds) to perform the probe.
  periodSeconds: 10
  # Number of seconds after which the probe times out.
  timeoutSeconds: 60
</code></pre>
|
<p>Our application consists of circa 20 modules. Each module contains a (Helm) chart with several deployments, services and jobs. Some of those jobs are defined as Helm pre-install and pre-upgrade hooks. Altogether there are probably about 120 yaml files, which eventualy result in about 50 running pods.</p>
<p>During development we are running Docker for Windows version 2.0.0.0-beta-1-win75 with Docker 18.09.0-ce-beta1 and Kubernetes 1.10.3. To simplify management of our Kubernetes yaml files we use Helm 2.11.0. Docker for Windows is configured to use 2 CPU cores (of 4) and 8GB RAM (of 24GB).</p>
<p><strong>When creating the application environment for the first time, it takes more than 20 minutes to become available. This seems far too slow; we are probably making an important mistake somewhere. We have tried to improve the (re)start time, but to no avail. Any help or insights to improve the situation would be greatly appreciated.</strong></p>
<p>A simplified version of our startup script:</p>
<pre><code>#!/bin/bash
# Start some infrastructure
helm upgrade --force --install modules/infrastructure/chart
# Start ~20 modules in parallel
helm upgrade --force --install modules/module01/chart &
[...]
helm upgrade --force --install modules/module20/chart &
await_modules()
</code></pre>
<p>Executing the same startup script again later to 'restart' the application still takes about 5 minutes. As far as I know, unchanged objects are not modified at all by Kubernetes. Only the circa 40 hooks are run by Helm.</p>
<p>Running a single hook manually with <code>docker run</code> is fast (~3 seconds). Running that same hook through Helm and Kubernetes regularly takes 15 seconds or more.</p>
<p>Some things we have discovered and tried are listed below.</p>
<h3>Linux staging environment</h3>
<p>Our staging environment consists of Ubuntu with native Docker. Kubernetes is installed through minikube with <code>--vm-driver none</code>.</p>
<p>Contrary to our local development environment, the staging environment retrieves the application code through a (deprecated) <code>gitRepo</code> volume for almost every deployment and job. Understandably, this only seems to worsen the problem. Starting the environment for the first time takes over 25 minutes; restarting it takes about 20 minutes.</p>
<p>We tried replacing the <code>gitRepo</code> volume with a sidecar container that retrieves the application code as a TAR. Although we have not modified the whole application, initial tests indicate this is not particularly faster than the <code>gitRepo</code> volume.</p>
<p>This situation can probably be improved with an alternative type of volume that enables sharing of code between deployments and jobs. We would rather not introduce more complexity, though, so we have not explored this avenue any further.</p>
<h3>Docker run time</h3>
<p>Executing a single empty alpine container through <code>docker run alpine echo "test"</code> takes roughly 2 seconds. This seems to be overhead of the setup on Windows. That same command takes less 0.5 seconds on our Linux staging environment.</p>
<h3>Docker volume sharing</h3>
<p>Most of the containers - including the hooks - share code with the host through a <code>hostPath</code>. The command <code>docker run -v <host path>:<container path> alpine echo "test"</code> takes 3 seconds to run. Using volumes seems to increase the runtime by approximately 1 second.</p>
<h3>Parallel or sequential</h3>
<p>Sequential execution of the commands in the startup script does not improve startup time. Neither does it drastically worsen.</p>
<h3>IO bound?</h3>
<p>Windows taskmanager indicates that IO is at 100% when executing the startup script. Our hooks and application code are not IO intensive at all. So the IO load seems to originate from Docker, Kubernetes or Helm. We have tried to find the bottleneck, but were unable to pinpoint the cause.</p>
<h3>Reducing IO through ramdisk</h3>
<p>To test the premise of being IO bound further, we exchanged <code>/var/lib/docker</code> with a ramdisk in our Linux staging environment. Starting the application with this configuration was not significantly faster.</p>
| <p>To compare Kubernetes with Docker, you need to consider that Kubernetes runs more or less the same Docker command as a final step. Before that happens, many other things take place: the authentication and authorization processes, creating objects in etcd, locating the correct nodes for the pods, scheduling them, provisioning storage, and more. Helm itself also adds overhead to the process, depending on the size of the chart.</p>
<p>I recommend reading <a href="https://techbeacon.com/one-year-using-kubernetes-production-lessons-learned" rel="nofollow noreferrer">One year using Kubernetes in production: Lessons learned</a>. The author explains what they achieved by switching to Kubernetes, as well as the differences in overhead:</p>
<blockquote>
<h3>Cost calculation</h3>
<p>Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.</p>
<p>For larger deployments, it’s easy to save a lot on server costs. The overhead of running etcd and a master node aren’t significant in these deployments. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. When running Kubernetes sounds great, but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.</p>
</blockquote>
|
<p>Is it possible to build Kubernetes from source code on a windows machine?</p>
<p>As per development environment setup mentioned in <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/development.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/development.md</a> only supported OSs are Linux and MAC</p>
<p>Running build/run.sh shows below:</p>
<pre><code>Unsupported host OS. Must be Linux or Mac OS X.
</code></pre>
| <p>The simple answer is yes. Kubernetes source is in <a href="https://golang.org/" rel="nofollow noreferrer">Go</a> and there is <a href="https://dl.google.com/go/go1.11.2.windows-amd64.msi" rel="nofollow noreferrer">Go compiler</a> for Windows. </p>
<p>Another question would be: can it be built easily? And that would be a 'no' (as of this writing), as you have already seen by running <code>build/run.sh</code>. So it's not officially supported by K8s.</p>
|
<p>I have a console application which does some operations when run, and I generate an image of it using Docker. Now I would like to deploy it to Kubernetes and run it every hour; is it possible to do that in K8s?</p>
<p>I have read about CronJobs, but they are only offered from version 1.4 onwards.</p>
| <p>The short answer. Sure, you can do it with a <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">CronJob</a> and yes it does create a Pod. You can configure <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">Job History Limits</a> to control how many failed, completed pods you want to keep before Kubernetes deletes them.</p>
<p>Note that CronJob is a subset of the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> resource.</p>
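<p>A minimal sketch of such a CronJob running every hour (the image name is a placeholder for your console application's image):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hourly-task
spec:
  schedule: "0 * * * *"          # every hour, on the hour
  successfulJobsHistoryLimit: 3  # keep a few completed pods around for inspection
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: console-app
            image: myregistry/console-app:latest   # placeholder image
</code></pre>
<p>Each scheduled run creates a Job, which in turn creates a Pod; the two history limits above control how many finished pods are kept.</p>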
|
<p>I have the following, relatively complicated piece of YAML:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.tf-eks-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre>
<p>I now want to translate this into a Terraform-Kubernetes compatible resource, which consists exclusively of strings, lists and maps. How do I do this? I found an example of how a LimitRange would look <a href="https://www.terraform.io/docs/providers/kubernetes/guides/getting-started.html" rel="nofollow noreferrer">here</a>, using hyphens to signify a list:</p>
<pre><code>resource "kubernetes_limit_range" "example" {
metadata {
name = "terraform-example"
}
spec {
limit {
type = "Pod"
max {
cpu = "200m"
memory = "1024M"
}
}
limit {
type = "PersistentVolumeClaim"
min {
storage = "24M"
}
}
limit {
type = "Container"
default {
cpu = "50m"
memory = "24M"
}
}
}
}
</code></pre>
<p>My current try for the mapRoles block is as follows:</p>
<pre><code>mapRole { rolearn = "${aws_iam_role.tf-eks-node.arn}"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]}
</code></pre>
<p>This is not correct because mapRoles is a list (because its singular element is a list element as demonstrated by the hyphen); but if it is a list, and everything after the hyphen is the list element, what is the list title?</p>
<p>Using the idea mentioned in an answer, I tried to put the mapRoles content in a data block beforehand:</p>
<pre><code>data "template_file" "map_roles" {
template = <<EOF
- rolearn: ${var.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
vars {
arn = "${var.kube-arn}"
}
}
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${data.template_file.map_roles.template}"
}
}
</code></pre>
| <p>So when you have this in YAML:</p>
<pre><code>data:
  mapRoles: |
</code></pre>
<p>it means that everything after the <code>|</code> is a string that is the value of <code>mapRoles</code>. I'm not really sure that's what you want. If you want lists of lists you would want something like this:</p>
<pre><code>data:
  mapRoles:
    - rolearn: ${aws_iam_role.tf-eks-node.arn}
    - username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:bootstrappers
        - system:nodes
</code></pre>
<p>This would translate to HCL like this:</p>
<pre><code>"data" = {
  "mapRoles" = {
    "rolearn" = "${aws_iam_role.tf-eks-node.arn}"
  }
  "mapRoles" = {
    "username" = "system:node:{{EC2PrivateDNSName}}"
  }
  "mapRoles" = {
    "groups" = ["system:bootstrappers", "system:nodes"]
  }
}
</code></pre>
|
<p>I am looking for a tool that will present a microservices diagram from a YAML file or create a file that I can import into something like Visio.</p>
| <p>How about converting to JSON first with something like <a href="https://codebeautify.org/yaml-to-json-xml-csv" rel="nofollow noreferrer">https://codebeautify.org/yaml-to-json-xml-csv</a></p>
<p>and then use <a href="https://www.npmjs.com/package/json-to-plantuml" rel="nofollow noreferrer"><code>json-to-plantuml</code></a>. You can test the output with <a href="http://plantuml.com/" rel="nofollow noreferrer">http://plantuml.com/</a>. I don't think there is a silver bullet so you might have to tweak the output to get what you want.</p>
|
<p>I get the following error when creating a PVC and I have no idea what it means. </p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalExpanding 1m (x3 over 5m) volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external
controller to process this PVC.
</code></pre>
<p>My PV for it is there and seems to be fine.</p>
<p>Here is the spec for my PV and PVC.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    app: projects-service
    app-guid: design-center-projects-service
    asset: service
    chart: design-center-projects-service
    class: projects-service-nfs
    company: mulesoft
    component: projects
    component-asset: projects-service
    heritage: Tiller
    product: design-center
    product-component: design-center-projects
    release: design-center-projects-service
  name: projects-service-nfs
  selfLink: /api/v1/persistentvolumes/projects-service-nfs
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: projects-service-nfs-block
    namespace: design-center
    resourceVersion: "7932052"
    uid: d114dd38-f411-11e8-b7b1-1230f683f84a
  mountOptions:
  - nfsvers=3
  - hard
  - sync
  nfs:
    path: /
    server: 1.1.1.1
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: projects-service
    app-guid: design-center-projects-service
    asset: service
    chart: design-center-projects-service
    company: mulesoft
    component: projects
    component-asset: projects-service
    example: test
    heritage: Tiller
    product: design-center
    product-component: design-center-projects
    release: design-center-projects-service
  name: projects-service-nfs-block
  selfLink: /api/v1/namespaces/design-center/persistentvolumeclaims/projects-service-nfs-block
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      class: projects-service-nfs
  storageClassName: ""
  volumeMode: Block
  volumeName: projects-service-nfs
</code></pre>
<p>Version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>Looks like at some point you <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/expand/expand_controller.go#L185" rel="nofollow noreferrer">updated/expanded the PVC</a>, which calls:</p>
<pre><code>func (expc *expandController) pvcUpdate(oldObj, newObj interface{})
...
</code></pre>
<p>Then, in that function, it tries to find a plugin for expansion and can't find one, in this check:</p>
<pre><code>volumePlugin, err := expc.volumePluginMgr.FindExpandablePluginBySpec(volumeSpec)
if err != nil || volumePlugin == nil {
err = fmt.Errorf("didn't find a plugin capable of expanding the volume; " +
"waiting for an external controller to process this PVC")
...
return
}
</code></pre>
<p>If you look at this <a href="https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/" rel="nofollow noreferrer">document</a>, it shows that the following volume types support PVC expansion with in-tree plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD. NFS is not one of them, which is why you are seeing the event. It may be supported in the future, or it could be supported with a custom plugin.</p>
<p>If you haven't updated the PVC I would recommend using the same capacity for the PV and PVC as described <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="nofollow noreferrer">here</a></p>
|
<p>Could there be any reason why a webapp which loads up perfectly fine starts giving a <code>HTTP 400 Bad request - The plain HTTP request was sent to HTTPS port</code> error after the webapp's ingress has been edited manually, or through an automated job which updates the ingress to modify the whitelisted IPs?</p>
<p>Apparently, this issue gets fixed when we redeploy the webapp after <em>purging the webapp deployment</em>... </p>
<p>Any pointers would be great, as this happens on our PROD env and is not reproducible on any lower envs.
Points to note:</p>
<ul>
<li>The Nginx Ingress controller setup is the same across the lower envs and the Prod env.</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/ingress.allow-http: "false"
ingress.kubernetes.io/proxy-body-size: 20M
ingress.kubernetes.io/secure-backends: "true"
ingress.kubernetes.io/whitelist-source-range: xxx.yy.zz.pp/32, yyy.ss.dd.kkl/32
ingress.kubernetes.io/whitelist-source-range-status: unlocked
creationTimestamp: 2018-11-29T15:34:05Z
generation: 5
labels:
app: xyz-abc-pqr
name: xxxx-webapp-yyyy-zzzz
namespace: nspsace-name
resourceVersion: "158237270"
selfLink: /apis/extensions/v1beta1/namespaces/nspsace-name/ingresses/xxxx-webapp-yyyy-zzzz
uid: 348f892e-f3ec-11e8-aa6f-0a0340041348
spec:
rules:
- host: ssssssss.wwwwweewwerwerw.co.uk
http:
paths:
- backend:
serviceName: xxxx-webapp-yyyy-zzzz
servicePort: 443
path: /
- host: xxxx-webapp-yyyy-zzzz.bbbbv.lasdfkla.ajksdfh.ioohhsaf.pp
http:
paths:
- backend:
serviceName: xxxx-webapp-yyyy-zzzz
servicePort: 443
path: /
tls:
- hosts:
- ssssssss.wwwwweewwerwerw.co.uk
- xxxx-webapp-yyyy-zzzz.bbbbv.lasdfkla.ajksdfh.ioohhsaf.pp
secretName: xxxx-webapp-yyyy-zzzz-server-tls
status:
loadBalancer:
ingress:
- {}
</code></pre>
| <p>There may be something wrong with the ingress controller and how it updates its configuration. I'm assuming you are using a <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a>, so you can inspect the configs before and after:</p>
<pre><code>$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf nginx.conf.before
$ kubectl edit ingress <your-ingress>
$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf nginx.conf.after
$ diff nginx.conf.before nginx.conf.after
</code></pre>
<p>You can see that this may happen with nginx because of something like this: <a href="https://stackoverflow.com/questions/8768946/dealing-with-nginx-400-the-plain-http-request-was-sent-to-https-port-error">Dealing with nginx 400 "The plain HTTP request was sent to HTTPS port" error</a>.</p>
|
<p>I want to create a CI/CD pipeline using Jenkins wherein I want to add a K8s master node as a slave to my Jenkins master setup. My question is is it a good idea? I am planning to create a username Jenkins on one of the masters, copy the kubeconfig to the user, connect the master to this node and then run all the Kubernetes deployments from this node. </p>
<p>Is there a better way to do this?</p>
| <p>Sure, why not. If you are looking at running tests that don't require a production-like system, I'd recommend using something like <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">minikube</a> on your Jenkins slave. </p>
<p>Make sure that minikube is configured as the <code>jenkins</code> user and also make sure you don't have any other standalone Kubernetes component (kube-proxy, kube-apiserver, kube-controller-manager, etc) on your server that might conflict with the minikube installation.</p>
|
<p>So I have 2 similar deployments on k8s that pull the same image from GitLab. Apparently this resulted in my second deployment going into a <code>CrashLoopBackOff</code> error, and I can't seem to connect to the port to check on the <code>/healthz</code> of my pod. Logging the pod shows that the pod received an interrupt signal, while describing the pod shows the following message.</p>
<pre><code> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
29m 29m 1 default-scheduler Normal Scheduled Successfully assigned java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr to 172.18.14.110
29m 29m 1 kubelet, 172.18.14.110 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-m4m55"
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Pulled Container image "..../consul-image:0.0.10" already present on machine
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Created Created container
29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Started Started container
28m 28m 1 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Killing Killing container with id docker://java-kafka-rest-development:Container failed liveness probe.. Container will be killed and recreated.
29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Created Created container
29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Started Started container
29m 27m 10 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Readiness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
28m 24m 13 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Liveness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
29m 19m 8 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Pulled Container image "r..../java-kafka-rest:0.3.2-dev" already present on machine
24m 4m 73 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning BackOff Back-off restarting failed container
</code></pre>
<p>I have tried to redeploy the deployments under different images and it seems to work just fine. However, I don't think this will be efficient as the images are the same throughout. How do I go about this?</p>
<p>Here's what my deployment file looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: "java-kafka-rest-kafka-data-2-development"
labels:
repository: "java-kafka-rest"
project: "java-kafka-rest"
service: "java-kafka-rest-kafka-data-2"
env: "development"
spec:
replicas: 1
selector:
matchLabels:
repository: "java-kafka-rest"
project: "java-kafka-rest"
service: "java-kafka-rest-kafka-data-2"
env: "development"
template:
metadata:
labels:
repository: "java-kafka-rest"
project: "java-kafka-rest"
service: "java-kafka-rest-kafka-data-2"
env: "development"
release: "0.3.2-dev"
spec:
imagePullSecrets:
- name: ...
containers:
- name: java-kafka-rest-development
image: registry...../java-kafka-rest:0.3.2-dev
env:
- name: DEPLOYMENT_COMMIT_HASH
value: "0.3.2-dev"
- name: DEPLOYMENT_PORT
value: "7533"
livenessProbe:
httpGet:
path: /healthz
port: 7533
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /healthz
port: 7533
timeoutSeconds: 1
ports:
- containerPort: 7533
resources:
requests:
cpu: 0.5
memory: 6Gi
limits:
cpu: 3
memory: 10Gi
command:
- /envconsul
- -consul=127.0.0.1:8500
- -sanitize
- -upcase
- -prefix=java-kafka-rest/
- -prefix=java-kafka-rest/kafka-data-2
- java
- -jar
- /build/libs/java-kafka-rest-0.3.2-dev.jar
securityContext:
readOnlyRootFilesystem: true
- name: consul
image: registry.../consul-image:0.0.10
env:
- name: SERVICE_NAME
value: java-kafka-rest-kafka-data-2
- name: SERVICE_ENVIRONMENT
value: development
- name: SERVICE_PORT
value: "7533"
- name: CONSUL1
valueFrom:
configMapKeyRef:
name: consul-config-...
key: node1
- name: CONSUL2
valueFrom:
configMapKeyRef:
name: consul-config-...
key: node2
- name: CONSUL3
valueFrom:
configMapKeyRef:
name: consul-config-...
key: node3
- name: CONSUL_ENCRYPT
valueFrom:
configMapKeyRef:
name: consul-config-...
key: encrypt
ports:
- containerPort: 8300
- containerPort: 8301
- containerPort: 8302
- containerPort: 8400
- containerPort: 8500
- containerPort: 8600
command: [ entrypoint, agent, -config-dir=/config, -join=$(CONSUL1), -join=$(CONSUL2), -join=$(CONSUL3), -encrypt=$(CONSUL_ENCRYPT) ]
terminationGracePeriodSeconds: 30
nodeSelector:
env: ...
</code></pre>
| <p>To those having this problem: I've found the cause and the solution. The problem was in my <code>service.yml</code>, where the <code>targetPort</code> pointed to a port different from the one opened in my docker image. Make sure the service's <code>targetPort</code> matches the container port that the image actually listens on.</p>
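<p>As an illustration (the selector labels and the service port below are assumptions based on my deployment above), the important part is that <code>targetPort</code> matches the port the container actually listens on, <code>7533</code> in this case:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: java-kafka-rest-kafka-data-2-development
spec:
  selector:
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
  ports:
  - port: 80          # port the service exposes
    targetPort: 7533  # must match the containerPort the app listens on
</code></pre>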
<p>Hope this helps.</p>
|
<p>I have the following, relatively complicated piece of YAML:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: ${aws_iam_role.tf-eks-node.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
<p>I now want to translate this into a Terraform-Kubernetes compatible resource, which consists exclusively of String, Lists and Maps. How do I do this?
I found an example of how a LimitRange would look <a href="https://www.terraform.io/docs/providers/kubernetes/guides/getting-started.html" rel="nofollow noreferrer">here</a> using hyphens to signify a list:</p>
<pre><code>resource "kubernetes_limit_range" "example" {
metadata {
name = "terraform-example"
}
spec {
limit {
type = "Pod"
max {
cpu = "200m"
memory = "1024M"
}
}
limit {
type = "PersistentVolumeClaim"
min {
storage = "24M"
}
}
limit {
type = "Container"
default {
cpu = "50m"
memory = "24M"
}
}
}
}
</code></pre>
<p>My current try for the mapRoles block is as follows:</p>
<pre><code>mapRole { rolearn = "${aws_iam_role.tf-eks-node.arn}"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]}
</code></pre>
<p>This is not correct because mapRoles is a list (because its singular element is a list element as demonstrated by the hyphen); but if it is a list, and everything after the hyphen is the list element, what is the list title?</p>
<p>using the idea mentioned in an answer, I tried to put the metadata block in a data-block beforehand:</p>
<pre><code>data "template_file" "map_roles" {
template = <<EOF
- rolearn: ${var.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
vars {
arn = "${var.kube-arn}"
}
}
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${data.template_file.map_roles.template}"
}
}
</code></pre>
| <p>The first step is to find a Terraform resource type that matches what you're looking for. The <a href="https://www.terraform.io/docs/providers/kubernetes/index.html" rel="nofollow noreferrer">Terraform Kubernetes provider</a> has historically been a little sparse but now does include basic objects like Deployments and <a href="https://www.terraform.io/docs/providers/kubernetes/r/config_map.html" rel="nofollow noreferrer">ConfigMaps</a>. (Not DaemonSets, though.)</p>
<p>As @Rico notes in their answer, the <code>data:</code> of a ConfigMap is just a map from string name to string value, and your example uses a YAML multi-line string syntax. HCL has a different syntax that looks like shell here-documents. To do the interpolation you also need to feed it through a <a href="https://www.terraform.io/docs/providers/template/d/file.html" rel="nofollow noreferrer">template</a>. So you should be able to translate this to:</p>
<pre class="lang-hcl prettyprint-override"><code>data "template_file" "map_roles" {
template = <<EOF
- rolearn: ${arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
vars {
    arn = "${aws_iam_role.tf-eks-node.arn}"
}
}
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
    mapRoles = "${data.template_file.map_roles.rendered}"
}
}
</code></pre>
|
<p>I have an ingress that is configured like such:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: connect-app
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: it.tufts.edu
http:
paths:
- path: "/"
backend:
serviceName: connect-it
servicePort: 80
</code></pre>
<p>and the nginx controller has a configmap that looks like this:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
data:
ssl-redirect: "false"
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-config","namespace":"nginx-ingress"}}
creationTimestamp: 2018-11-13T20:56:49Z
name: nginx-config
namespace: nginx-ingress
resourceVersion: "3633400"
selfLink: /api/v1/namespaces/nginx-ingress/configmaps/nginx-config
uid: a3ec70bc-e786-11e8-be24-005056a401f6
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>According to the documentation, this should disable the redirect from http to https, but it doesn't seem to work out. What am I doing wrong?</p>
<p>Thanks</p>
| <p>I believe this is either:</p>
<ul>
<li><p>A problem with your nginx ingress controller not updating the configs which you can check with:</p>
<pre><code>$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf .
$ cat nginx.conf
</code></pre></li>
<li><p>A problem with your apache config redirecting to HTTPS from HTTP (port 80)</p></li>
</ul>
|
<p>I call this command to send my image to a repo.</p>
<p><code>docker push gcr.io/my-project/my-images:v1</code></p>
<p>It succeeds, as in fact I can apply a "Deployment" yaml and my new service is available at GKE.</p>
<p>My question: How do I list the images (tags) at that gcr.io repo address, to confirm that mine is there? </p>
<p><code>docker image list</code> gives me the local list, but not the remote list.</p>
<p><code>gcloud --project=my-project container images list</code> gives an empty result. (Yet, as stated, my image <em>is</em> out there.)</p>
<p>How can I get this list?</p>
| <p>Use <code>--repository</code> flag</p>
<pre><code> --repository=REPOSITORY
The name of the repository. Format: *.gcr.io/repository. Defaults to
gcr.io/<project>, for the active project.
</code></pre>
<p>This example will return all the available images:</p>
<pre><code>gcloud container images list --repository=gcr.io/your-project
NAME
gcr.io/your-project/your-image
gcr.io/your-project/another-image
gcr.io/your-project/one-more-image
</code></pre>
<p>If you want to list all the <strong>tags</strong> for the specified image, run</p>
<pre><code>gcloud container images list-tags gcr.io/your-project/your-image
DIGEST TAGS TIMESTAMP
0109315b26cf 5a9ad92 2018-11-15T13:24:56
98e2d1475275 343fca4 2018-11-15T11:35:52
df58b7269b89 d96aa6c 2018-11-14T17:11:18
47e93cb3a33f 7a9ff9d 2018-11-13T16:27:06
</code></pre>
|
<p>I am using microk8s and have an nginx frontend service connecting to a headless web application (ClusterIP = None). However, the nginx service is refused a connection to the backend service.</p>
<p>nginx configuration:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
nginx.config: |
user nginx;
worker_processes auto;
# set open fd limit to 30000
#worker_rlimit_nofile 10000;
error_log /var/log/nginx/error.log;
events {
worker_connections 10240;
}
http {
log_format main
'remote_addr:$remote_addr\t'
'time_local:$time_local\t'
'method:$request_method\t'
'uri:$request_uri\t'
'host:$host\t'
'status:$status\t'
'bytes_sent:$body_bytes_sent\t'
'referer:$http_referer\t'
'useragent:$http_user_agent\t'
'forwardedfor:$http_x_forwarded_for\t'
'request_time:$request_time';
access_log /var/log/nginx/access.log main;
rewrite_log on;
upstream svc-web {
server localhost:8080;
keepalive 1024;
}
server {
listen 80;
access_log /var/log/nginx/app.access_log main;
error_log /var/log/nginx/app.error_log;
location / {
proxy_pass http://svc-web;
proxy_http_version 1.1;
}
}
}
$ k get all
NAME READY STATUS RESTARTS AGE
pod/blazegraph-0 1/1 Running 0 19h
pod/default-http-backend-587b7d64b5-c4rzj 1/1 Running 0 19h
pod/mysql-0 1/1 Running 0 19h
pod/nginx-7fdcdfcc7d-nlqc2 1/1 Running 0 12s
pod/nginx-ingress-microk8s-controller-b9xcd 1/1 Running 0 19h
pod/web-0 1/1 Running 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.152.183.94 <none> 80/TCP 19h
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 22h
service/svc-db ClusterIP None <none> 3306/TCP,9999/TCP 19h
service/svc-frontend NodePort 10.152.183.220 <none> 80:32282/TCP,443:31968/TCP 12s
service/svc-web ClusterIP None <none> 8080/TCP,8443/TCP 15s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 19h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 19h
deployment.apps/nginx 1 1 1 1 12s
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-587b7d64b5 1 1 1 19h
replicaset.apps/nginx-7fdcdfcc7d 1 1 1 12s
NAME DESIRED CURRENT AGE
statefulset.apps/blazegraph 1 1 19h
statefulset.apps/mysql 1 1 19h
statefulset.apps/web 1 1 15s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot <unknown>/55% 1 1 0 19h
$ k describe pod web-0
Name: web-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: khteh-t580/192.168.86.93
Start Time: Fri, 30 Nov 2018 09:19:53 +0800
Labels: app=app-web
controller-revision-hash=web-5b9476f774
statefulset.kubernetes.io/pod-name=web-0
Annotations: <none>
Status: Running
IP: 10.1.1.203
Controlled By: StatefulSet/web
Containers:
web-service:
Container ID: docker://b5c68ba1d9466c352af107df69f84608aaf233d117a9d71ad307236d10aec03a
Image: khteh/tomcat:tomcat-webapi
Image ID: docker-pullable://khteh/tomcat@sha256:c246d322872ab315948f6f2861879937642a4f3e631f75e00c811afab7f4fbb9
Ports: 8080/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Fri, 30 Nov 2018 09:20:02 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/web/html from web-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s6bpp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
web-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: web-persistent-storage-web-0
ReadOnly: false
default-token-s6bpp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s6bpp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/web-0 to khteh-t580
Normal Pulling 11m kubelet, khteh-t580 pulling image "khteh/tomcat:tomcat-webapi"
Normal Pulled 11m kubelet, khteh-t580 Successfully pulled image "khteh/tomcat:tomcat-webapi"
Normal Created 11m kubelet, khteh-t580 Created container
Normal Started 11m kubelet, khteh-t580 Started container
$ k describe svc svc-frontend
Name: svc-frontend
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"svc-frontend","namespace":"default"},"spec":{"ports":[{"name":"ht...
Selector: app=nginx,tier=frontend
Type: NodePort
IP: 10.152.183.159
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30879/TCP
Endpoints: 10.1.1.204:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31929/TCP
Endpoints: 10.1.1.204:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p><code>curl <nodportIP>:32282/webapi/greeting</code> would hang.</p>
<p><code>curl <pod IP>:8080/webapi/greeting</code> WORKS.</p>
<p><code>curl <endpoint IP>:80/webapi/greeting</code> results in "Bad Gateway":</p>
<pre><code>$ curl http://10.1.1.204/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
</code></pre>
<p>Inside the nginx container:</p>
<pre><code>root@nginx-7fdcdfcc7d-nlqc2:/var/log/nginx# tail -f app.error_log
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost:32282"
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://[::1]:8080/", host: "localhost:32282"
$ k get endpoints
NAME ENDPOINTS AGE
default-http-backend 10.1.1.246:80 6d20h
kubernetes 192.168.86.93:6443 6d22h
svc-db 10.1.1.248:9999,10.1.1.253:9999,10.1.1.248:3306 + 1 more... 5h48m
svc-frontend 10.1.1.242:80,10.1.1.242:443 6h13m
svc-web 10.1.1.245:8443,10.1.1.245:8080 6h13m
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2950 $ curl 10.1.1.242:80/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2951 $
</code></pre>
| <p>Fix the upstream configuration by using the name of the upstream service (<code>svc-web</code>) instead of <code>localhost</code>, so that nginx resolves the backend pods through cluster DNS, and curl using <a href="http://clusterip/" rel="nofollow noreferrer">http://clusterip/</a>...</p>
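<p>A sketch of the corrected block from the ConfigMap above; <code>svc-web</code> is the headless service name and resolves through cluster DNS since the nginx pod runs in the same namespace:</p>
<pre><code>upstream svc-web {
    server svc-web:8080;
    keepalive 1024;
}
</code></pre>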
|
<p>I am running kubectl version 1.7</p>
<p>I am trying to add an init container to my deployment via <code>kubectl patch</code> but no matter how I try it it simply returns "not patched". </p>
<p><code>kubectl patch deployment my-deployment --patch "$(cat ./init-patch.yaml)"</code>
<strong>deployment "my-deployment" not patched</strong></p>
<p><code>spec:
template:
spec:
initContainers:
- name: my-mount-init
image: "my-image"
command:
- "sh"
- "-c"
- "mkdir /mnt/data && chmod -R a+rwx /mnt/data"
volumeMounts:
- name: "my-volume"
mountPath: "/mnt/data"
securityContext:
runAsUser: 0
resources:
limits:
cpu: "0.2"
memory: "256Mi"
requests:
cpu: "0.1"
memory: "128Mi"</code></p>
<p>This is to allow a custom linux user rights to read and write to the volume instead of needing to be the root user.</p>
<p>Wish there was a better response as to why it is not being patched..</p>
| <p><code>kubectl patch</code> is not idempotent: if the object already contains exactly what the patch would apply, <code>kubectl patch</code> fails with "not patched". </p>
<p>The solution can be read in Natim's comment, but it took me a while to realise that was indeed my problem.</p>
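<p>A quick way to check whether the deployment already contains the initContainer you are trying to patch in (a sketch, using the deployment name from the question):</p>
<pre><code>kubectl get deployment my-deployment \
  -o jsonpath='{.spec.template.spec.initContainers[*].name}'
</code></pre>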
|
<p>I have set up kube-prometheus in my cluster (<a href="https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus" rel="noreferrer">https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus</a>). It contains some default alerts like "CoreDNSdown", etc. How do I create my own alert?</p>
<p>Could any one provide me sample example to create an alert that will send an email to my gmail account?</p>
<p>I followed this <a href="https://stackoverflow.com/questions/49472375/alert-when-docker-container-pod-is-in-error-or-carshloopbackoff-kubernetes">Alert when docker container pod is in Error or CarshLoopBackOff kubernetes</a>. But I couldn't make it work.</p>
| <p>To send an alert to your gmail account, you need to set up the alertmanager configuration in a file, say <code>alertmanager.yaml</code>:</p>
<pre><code>cat <<EOF > alertmanager.yaml
route:
group_by: [Alertname]
# Send all notifications to me.
receiver: email-me
receivers:
- name: email-me
email_configs:
- to: $GMAIL_ACCOUNT
from: $GMAIL_ACCOUNT
smarthost: smtp.gmail.com:587
auth_username: "$GMAIL_ACCOUNT"
auth_identity: "$GMAIL_ACCOUNT"
auth_password: "$GMAIL_AUTH_TOKEN"
EOF
</code></pre>
<p>Now, as you're using kube-prometheus, you will already have a secret named <code>alertmanager-main</code> that holds the default configuration for <code>alertmanager</code>. You need to recreate the <code>alertmanager-main</code> secret with the new configuration using the following command:</p>
<pre><code>kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml -n monitoring
</code></pre>
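<p>Since a default kube-prometheus install already ships an <code>alertmanager-main</code> secret, <code>kubectl create</code> will complain that it exists; one way (a sketch) is to delete the old one first and then create it again:</p>
<pre><code>kubectl delete secret alertmanager-main -n monitoring
kubectl create secret generic alertmanager-main --from-file=alertmanager.yaml -n monitoring
</code></pre>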
<p>Now your alertmanager is set to send an email whenever it receives an alert from prometheus.</p>
<p>Next you need to set up an alert that will actually be fired. You can set up the DeadMansSwitch alert, which always fires and is used to check that the entire alerting pipeline is functional:</p>
<pre><code>groups:
- name: meta
rules:
- alert: DeadMansSwitch
expr: vector(1)
labels:
severity: critical
annotations:
description: This is a DeadMansSwitch meant to ensure that the entire Alerting
pipeline is functional.
summary: Alerting DeadMansSwitch
</code></pre>
<p>After that the <code>DeadMansSwitch</code> alert will fire and should send an email to your address.</p>
<p>Reference link:</p>
<blockquote>
<p><a href="https://coreos.com/tectonic/docs/latest/tectonic-prometheus-operator/user-guides/configuring-prometheus-alertmanager.html" rel="noreferrer">https://coreos.com/tectonic/docs/latest/tectonic-prometheus-operator/user-guides/configuring-prometheus-alertmanager.html</a></p>
</blockquote>
<p>EDIT:</p>
<p>The deadmanswitch alert should go in a config-map which your prometheus is reading. I will share the relevant snaps from my prometheus here:</p>
<pre><code>"spec": {
"alerting": {
"alertmanagers": [
{
"name": "alertmanager-main",
"namespace": "monitoring",
"port": "web"
}
]
},
"baseImage": "quay.io/prometheus/prometheus",
"replicas": 2,
"resources": {
"requests": {
"memory": "400Mi"
}
},
"ruleSelector": {
"matchLabels": {
"prometheus": "prafull",
"role": "alert-rules"
}
},
</code></pre>
<p>The above config is from my prometheus spec (prometheus.json); it has the name of the alertmanager to use and the <code>ruleSelector</code>, which selects the rules based on the <code>prometheus</code> and <code>role</code> labels. So I have my rule configmap like:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: prometheus-rules
namespace: monitoring
labels:
role: alert-rules
prometheus: prafull
data:
alert-rules.yaml: |+
groups:
- name: alerting_rules
rules:
- alert: LoadAverage15m
expr: node_load15 >= 0.50
labels:
severity: major
annotations:
summary: "Instance {{ $labels.instance }} - high load average"
description: "{{ $labels.instance }} (measured by {{ $labels.job }}) has high load average ({{ $value }}) over 15 minutes."
</code></pre>
<p>Replace the example rule in the above ConfigMap with the <code>DeadMansSwitch</code> rule shown earlier if you want to test the pipeline end to end.</p>
|
<p>I'm trying to get gitlab-runner "run" on a kubernetes cluster, after following the official doc -> <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a> (using kubernetes executor) I'm getting an error once I deploy:</p>
<blockquote>
<p>Error: failed to start container "gitlab-runner": Error response from
daemon: error while creating mount source path
'/usr/share/ca-certificates/mozilla': mkdir
/usr/share/ca-certificates/mozilla: read-only file system</p>
</blockquote>
<p>I'm using the examples from that page and can't figure out why it isn't allowed to create that dir (as I understand it, the default user is root).</p>
<p>Here my config-map.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner
namespace: gitlab
data:
config.toml: |
concurrent = 1
[[runners]]
name = "Kubernetes Runner"
url = "URL"
token = "TOKEN"
executor = "kubernetes"
[runners.kubernetes]
namespace = "gitlab"
</code></pre>
<p>and this is the deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab
spec:
replicas: 1
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
containers:
- args:
- run
image: gitlab/gitlab-runner:alpine-v11.5.0
imagePullPolicy: Always
name: gitlab-runner
volumeMounts:
- mountPath: /etc/gitlab-runner
name: config
- mountPath: /etc/ssl/certs
name: cacerts
readOnly: true
restartPolicy: Always
volumes:
- configMap:
name: gitlab-runner
name: config
- hostPath:
path: /usr/share/ca-certificates/mozilla
name: cacerts
</code></pre>
<p>Here is the complete list of events initializing the pod:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned gitlab-runner-5b689c7cbc-hw6r5 to gke-my-project-dev-default-pool-0d32b263-6skk
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "cacerts"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "default-token-6hr2h"
Normal Pulling 23s (x2 over 28s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk pulling image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Pulled 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Successfully pulled image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Created 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Created container
Warning Failed 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Error: failed to start container "gitlab-runner": Error response from daemon: error while creating mount source path '/usr/share/ca-certificates/mozilla': mkdir /usr/share/ca-certificates/mozilla: read-only file system
Warning BackOff 14s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Back-off restarting failed container
</code></pre>
<p>Any clue will be appreciated</p>
<p>Thanks</p>
| <p>From the logs, I am guessing you are using <strong>GKE</strong>. Google mounts parts of the node's filesystem read-only for security (see <a href="https://cloud.google.com/container-optimized-os/docs/concepts/security#filesystem" rel="nofollow noreferrer">here</a>). That's why you are getting this error.</p>
<p>Try it by enabling <code>privileged</code> mode of the container:</p>
<pre><code>containers:
- name: gitlab-runner
  # ...
  securityContext:
    privileged: true
</code></pre>
<p>If that does not work, then change <code>/usr/share/ca-certificates/mozilla</code> to <code>/var/SOMETHING</code> (not sure this is a good solution). If there are files in <code>/usr/share/ca-certificates/mozilla</code>, then move/copy them to <code>/var/SOMETHING</code>.</p>
|
<p>I've got some strange looking behavior.</p>
<p>When a <code>job</code> is run, it completes successfully but one of the containers says it's not (or was not..) ready:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default **********-migration-22-20-16-29-11-2018-xnffp 1/2 Completed 0 11h 10.4.5.8 gke-******
</code></pre>
<p>job yaml:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: migration-${timestamp_hhmmssddmmyy}
labels:
jobType: database-migration
spec:
backoffLimit: 0
template:
spec:
restartPolicy: Never
containers:
- name: app
image: "${appApiImage}"
imagePullPolicy: IfNotPresent
command:
- php
- artisan
- migrate
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=${SQL_INSTANCE_NAME}=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
</code></pre>
<p>What may be the cause of this behavior? There are no readiness or liveness probes defined on the containers.</p>
<p>If I do a describe on the pod, the relevant info is:</p>
<pre><code>...
Command:
php
artisan
migrate
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 29 Nov 2018 22:20:18 +0000
Finished: Thu, 29 Nov 2018 22:20:19 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
...
</code></pre>
| <p>A Pod with a <code>Ready</code> status means it <em>"is able to serve requests and should be added to the load balancing pools of all matching Services"</em>, see <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions</a></p>
<p>In your case, you don't want to serve requests, but simply to execute <code>php artisan migrate</code> once, and done. So you don't have to worry about this status, the important part is the <code>State: Terminated</code> with a <code>Reason: Completed</code> and a zero exit code: your command did whatever and then exited successfully.</p>
<p>If the result of the command is not what you expected, you'd have to investigate the logs from the container that ran this command with <code>kubectl logs your-pod -c app</code> (where <code>app</code> is the name of the container you defined), and/or you would expect the <code>php artisan migrate</code> command to NOT issue a zero exit code.</p>
|
<p>Cannot find how to do so in the docs. After draining the node with <code>--ignore-daemonsets --force</code> pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet are lost. How should I move such pods prior to issuing the drain command? I want to preserve the local data on these pods. </p>
| <p>A good practice is to always start a Pod as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> with <code>specs.replicas: 1</code>. It's very easy as the Deployment <code>specs.template</code> literally takes in your Pod <code>specs</code>, and quite convenient as the deployment will make sure your Pod is always running.</p>
<p>Then, assuming you'll only have 1 replica of your Pod, you can simply use a PersistentVolumeClaim and attach it to the pod as a volume; you do not need a StatefulSet in that case. Your data will be stored in the PVC, and whenever your Pod is moved across nodes for whatever reason it will reattach the volume automatically without losing any data. A minimal sketch of that setup follows.</p>
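<p>Something like this (all names and sizes are illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image
        volumeMounts:
        - name: data
          mountPath: /the/path      # the path your app writes its local data to
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-app-data
</code></pre>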
<p>Now, if it's too late for you, and your Pod hasn't got a volume pointing to a PVC, you can still get ready to change that by implementing the Deployment/PVC approach, and manually copy data out of your current pod:</p>
<pre><code>kubectl cp theNamespace/thePod:/the/path /somewhere/on/your/local/computer
</code></pre>
<p>Before copying it back to the new pod:</p>
<pre><code>kubectl cp /somewhere/on/your/local/computer theNamespace/theNewPod:/the/path
</code></pre>
<p>This time, just make sure <code>/the/path</code> (to reuse the example above) is actually a Volume mapped to a PVC so you won't have to do that manually again!</p>
|
<p>How can I run a daemonset on all nodes of a kubernetes cluster (including master) without overriding the taints of any nodes?</p>
| <p>If you want to run a daemonset and make sure it will get scheduled onto all nodes in the cluster regardless of taints, add the matching tolerations to its pod spec. For example, in a GKE cluster running Google’s Stackdriver logging agent, the fluentd-gcp daemonset has the following toleration to make sure it gets past any node taint:</p>
<pre><code>tolerations:
- operator: Exists
  effect: NoExecute
- operator: Exists
  effect: NoSchedule
</code></pre>
<p>This way you can schedule the daemonset on the master even if it has <code>NoSchedule</code> taints.</p>
|
<p><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning" rel="nofollow noreferrer">Kubernetes Dynamic Volume Provisioning</a> gives a handy way to supply pods with dynamically-allocated storage volumes. For example, <a href="https://github.com/kubernetes-incubator/nfs-provisioner" rel="nofollow noreferrer">NFS Provisioner</a> transparently spins up an NFS server and exposes that storage to client pods with Kubernetes volume interface, on-demand.</p>
<p>But how efficient is that? Does provisioner introduce another network protocol layer to communicate with client pod/container, in addition to NFS client-server communication? Or client pod/container talks directly to NFS server once the persistent volume claim was fulfilled?</p>
| <p>As mentioned in the official <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">documentation</a>, when you want to allocate <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent volumes</a> to the Pods in the cluster, you need to specify a <code>StorageClass</code> so that the appropriate provisioner (volume plugin) for the storage provider can be found. A <code>StorageClass</code> (API <code>apiVersion: storage.k8s.io/v1</code>) defines all the parameters that have to be passed to the storage provider and which <code>provisioner:</code> should be used for the successful creation of a <code>PersistentVolume</code> that corresponds to the <code>PersistentVolumeClaim</code> request.
Find a list of the provisioners supported internally by Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">here</a>. </p>
<p>However, you are not limited to the internal volume plugins shipped under the <code>kubernetes.io</code> provisioner prefix; there are a lot of external provisioners which can be used for specific scenarios. Take a look at the <a href="https://github.com/kubernetes-incubator/external-storage" rel="nofollow noreferrer">kubernetes-incubator/external-storage</a> project.</p>
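<p>As a concrete sketch of the mechanics described above, an external provisioner such as the NFS one from the question is selected through a <code>StorageClass</code> like the following; the <code>provisioner</code> value (here <code>example.com/nfs</code>) has to match whatever name the external provisioner was deployed with:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs        # must match the provisioner's configured name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
</code></pre>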
|
<p>I'm currently trying to configure a Logstash cluster with Kubernetes and I would like to have each of the logstash nodes mount a volume as read-only with the pipelines. This same volume would then be mounted as read/write on a single management instance where I could edit the configs.</p>
<p>Is this possible with K8s and GCEPersistentDisk?</p>
| <p>By Logstash I believe you mean an ELK cluster. Logstash is just a log forwarder and not an endpoint for storage.</p>
<p>Not really. It's not possible with a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer"><code>GCEPersistentDisk</code></a>. This is more of a GCE limitation: a disk can only be mounted read-write by a single instance at a time.</p>
<p>Also, as you can see in the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">docs</a>, it supports <code>ReadWriteOnce</code> and <code>ReadOnlyMany</code>, but not at the same time.</p>
<blockquote>
<p>Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.</p>
</blockquote>
<p>You could achieve this by using a single volume on a single K8s node and then partitioning it to be used by different Elasticsearch pods on the same node, but this would defeat the purpose of having a distributed cluster.</p>
<p>Elasticsearch works fine if you have your nodes in different Kubernetes nodes and each of them has a separate volume.</p>
|
<p>We ran some stateful applications (e.g. databases) on AWS on-demand/reserved EC2 instances in the past; now we are considering moving those apps to a k8s statefulset with PVCs.</p>
<p>My question is: is it recommended to run a k8s statefulset on spot instances to reduce cost? Since we can use kube-spot-termination-notice-handler to taint the node and move the pods elsewhere before the spot instance is terminated, it looks like it should be no problem as long as the statefulset has multiple replicas to prevent the service from being interrupted.</p>
| <p>There is probably not one and only answer to this question: it really depends on what it is as a workload you want to run, and how tolerant your application is to failures. When a spot instance is to be interrupted (higher bidder, no more available...), a well-done StatefulSet or any other appropriate controller will indeed do its job as expected and usually pretty quickly (seconds).</p>
<p>But be aware that it is wrong to assert that:</p>
<ul>
<li>you'll receive an interruption notice each and every time,</li>
<li>and that the notice will always come in 2 minutes before a spot instance is interrupted</li>
</ul>
<p>See AWS documentation itself <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html#using-spot-instances-managing-interruptions" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html#using-spot-instances-managing-interruptions</a> and here's the excerpt <em>"[...] it is possible that your Spot Instance is terminated before the warning can be made available"</em>.</p>
<p>So the real question is: <strong>how tolerant is your application to unprepared resources removal?</strong></p>
<p>If you have just 2 EC2s running hundreds of pods each, you'll most likely NOT want to use spot instances as your service will be highly degraded if one of the 2 instances is interrupted, until a new one spins up or k8s redispatches the load (assuming the other instance is big enough). Hundreds of EC2s with few pods each and slightly over-provisioning autoscaling rules? You might as well just go for it and use the spot cost savings!</p>
<p>You'll also want to double-check your clients' behaviour: assuming you run an API on k8s and pods are stopped before responding, make sure your clients handle the scenario and fire another request, or at the very least fail gracefully.</p>
<p>But you spoke of databases: so how about replication? Is it fast and automated? Are there multiple replicas of the data to allow for the loss of 1 to n replicas?</p>
<p>In other words: it just needs some good planning and thorough testing at scale. Good news is it's easy to do: run a load-test and voluntarily crash an instance, answers will meet you there!</p>
|
<p>We ran some stateful applications (e.g. databases) on AWS on-demand/reserved EC2 instances in the past; now we are considering moving those apps to a k8s statefulset with PVCs.</p>
<p>My question is: is it recommended to run a k8s statefulset on spot instances to reduce cost? Since we can use kube-spot-termination-notice-handler to taint the node and move the pods elsewhere before the spot instance is terminated, it looks like it should be no problem as long as the statefulset has multiple replicas to prevent the service from being interrupted.</p>
| <p>IMO, I would not recommend running a critical StatefulSet on Spot Instances. For example, a critical database. This is some of what would/could happen in these examples:</p>
<ul>
<li><p>Mysql master/slave/clustered. Any node going down would lead to unpredictable errors and/or downtime before recovering, or nodes coming back up (with different IP addresses!)</p></li>
<li><p>Cassandra. Any node going up/down would cause your cluster to rebalance. If you have these going up and down, then they will constantly be rebalancing! Not to mention the fact that if you had all your nodes in Spot Instances you have the chance of most of them going down.</p></li>
</ul>
<p>Spots are great for large one-time batch jobs that are not critically time bound. These can be anything from data processing to, for example, creating or updating an M/L model.</p>
<p>They are also great for stateless services, meaning an application that sits behind a load balancer and uses state store that is not in a spot instance (Mysql, Cassandra, CloudSQL, RDS, etc)</p>
<p>Spots are also great for test/dev environments, again not necessarily time-bound jobs/workloads.</p>
|
<p>While running a DAG which runs a jar using a docker image,<br>
<strong>xcom_push=True</strong> is given which creates another container along with the docker image in a single pod.</p>
<p>DAG : </p>
<pre><code>jar_task = KubernetesPodOperator(
namespace='test',
image="path to image",
image_pull_secrets="secret",
image_pull_policy="Always",
node_selectors={"d-type":"na-node-group"},
cmds=["sh","-c",..~running jar here~..],
secrets=[secret_file],
env_vars=environment_vars,
labels={"k8s-app": "airflow"},
name="airflow-pod",
config_file=k8s_config_file,
resources=pod.Resources(request_cpu=0.2,limit_cpu=0.5,request_memory='512Mi',limit_memory='1536Mi'),
in_cluster=False,
task_id="run_jar",
is_delete_operator_pod=True,
get_logs=True,
xcom_push=True,
dag=dag)
</code></pre>
<p>Here are the errors, even though the JAR is executed successfully:</p>
<pre><code> [2018-11-27 11:37:21,605] {{logging_mixin.py:95}} INFO - [2018-11-27 11:37:21,605] {{pod_launcher.py:166}} INFO - Running command... cat /airflow/xcom/return.json
[2018-11-27 11:37:21,605] {{logging_mixin.py:95}} INFO -
[2018-11-27 11:37:21,647] {{logging_mixin.py:95}} INFO - [2018-11-27 11:37:21,646] {{pod_launcher.py:173}} INFO - cat: can't open '/airflow/xcom/return.json': No such file or directory
[2018-11-27 11:37:21,647] {{logging_mixin.py:95}} INFO -
[2018-11-27 11:37:21,647] {{logging_mixin.py:95}} INFO - [2018-11-27 11:37:21,647] {{pod_launcher.py:166}} INFO - Running command... kill -s SIGINT 1
[2018-11-27 11:37:21,647] {{logging_mixin.py:95}} INFO -
[2018-11-27 11:37:21,702] {{models.py:1760}} ERROR - Pod Launching failed: Failed to extract xcom from pod: airflow-pod-hippogriff-a4628b12
Traceback (most recent call last):
File "/usr/local/airflow/operators/kubernetes_pod_operator.py", line 126, in execute
get_logs=self.get_logs)
File "/usr/local/airflow/operators/pod_launcher.py", line 90, in run_pod
return self._monitor_pod(pod, get_logs)
File "/usr/local/airflow/operators/pod_launcher.py", line 110, in _monitor_pod
result = self._extract_xcom(pod)
File "/usr/local/airflow/operators/pod_launcher.py", line 161, in _extract_xcom
raise AirflowException('Failed to extract xcom from pod: {}'.format(pod.name))
airflow.exceptions.AirflowException: Failed to extract xcom from pod: airflow-pod-hippogriff-a4628b12
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1659, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/airflow/operators/kubernetes_pod_operator.py", line 138, in execute
raise AirflowException('Pod Launching failed: {error}'.format(error=ex))
airflow.exceptions.AirflowException: Pod Launching failed: Failed to extract xcom from pod: airflow-pod-hippogriff-a4628b12
[2018-11-27 11:37:21,704] {{models.py:1789}} INFO - All retries failed; marking task as FAILED
</code></pre>
| <p>If <code>xcom_push</code> is True then <code>KubernetesPodOperator</code> creates one more sidecar container (<code>airflow-xcom-sidecar</code>) in the Pod along with the base container (the actual worker container).
This sidecar container reads the data from <code>/airflow/xcom/return.json</code> and returns it as the xcom value.
So in your base container you need to write the data you want to return to the <code>/airflow/xcom/return.json</code> file, for example:</p>
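<p>A sketch based on the <code>cmds</code> style from the question (the JSON content written to the file is just an illustration, it only has to be valid JSON):</p>
<pre><code>jar_task = KubernetesPodOperator(
    ...
    cmds=["sh", "-c",
          "..~running jar here~.. "
          "&& mkdir -p /airflow/xcom/ "
          "&& echo '{\"my_key\": \"my_value\"}' > /airflow/xcom/return.json"],
    xcom_push=True,
    ...
    )
</code></pre>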
|
<p>I recently upgraded my cluster and I am continuously seeing these kinds of errors:</p>
<pre><code>Error from server (NotFound): scalingpolicies.scalingpolicy.kope.io "fluentd-gcp-scaling-policy" not found
E
2018-11-30T19:00:20,400581557+00:00 fluentd-gcp-scaling-policy not found in namespace kube-system, using defaults.
I
Error from server (NotFound): daemonsets.extensions "fluentd-gcp-v3.0.0" not found
E
</code></pre>
<p>I think the problem is that my fluentd has a different version, but I am not sure how to change this in the scaling policy</p>
| <p>Looks like the <code>ScalingPolicy</code> resource named <code>fluentd-gcp-scaling-policy</code> is missing. It could also be that something in the CRD changed when you did the upgrade.</p>
<p>You can check with:</p>
<pre><code>$ kubectl -n kube-system get scalingpolicy
</code></pre>
<p>And the CRD:</p>
<pre><code>$ kubectl describe crd scalingpolicies.scalingpolicy.kope.io
</code></pre>
<p>It should match something like <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-gcp/scaler-policy.yaml" rel="nofollow noreferrer">this</a>. You might also consider applying everything from <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-gcp" rel="nofollow noreferrer">here</a>.</p>
|
<p>I am using helm right now. My project is like that:</p>
<p><strong>values.yaml:</strong></p>
<pre><code>environmentVariables:
KEY1: VALUE1
KEY2: VALUE2
</code></pre>
<p><strong>configmap.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "myproject.fullname" . }}
data:
{{- range $k, $v := .Values.environmentVariables }}
{{ $k }}: {{ $v | quote }}
{{- end }}
</code></pre>
<p><strong>deployment.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "myproject.fullname" . }}
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- range $k, $v := .Values.environmentVariables }}
- name: {{ $k }}
valueFrom:
configMapKeyRef:
name: {{ template "myproject.fullname" $ }}
key: {{ $k }}
{{- end }}
...
</code></pre>
<p>But right now, I'm really confused. Do I really need this configmap? Is there any benefit to using a configmap for environment variables?</p>
| <p>Aside from the points about separation of config from pods, one advantage of a ConfigMap is it lets you make the values of the variables accessible to other Pods or apps that are not necessarily part of your chart. </p>
<p>It does add a little extra complexity though and there can be a large element of preference about when to use a ConfigMap. Since your ConfigMap keys are the names of the environment variables <a href="https://gist.github.com/troyharvey/4506472732157221e04c6b15e3b3f094" rel="noreferrer">you could simplify your Deployment a little by using 'envFrom'</a></p>
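<p>A sketch of what that looks like in the deployment template, keeping the ConfigMap exactly as it is and dropping the per-variable <code>env</code> block:</p>
<pre><code>      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          envFrom:
            - configMapRef:
                name: {{ template "myproject.fullname" . }}
</code></pre>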
|
<p>I have installed both kubernetes and docker on Ubuntu in an effort to have the similar dev environment that I have on my windows 10 machine so I can debug a problem with extra \r\n on my kubernetes secrets.</p>
<p>How do you perform <a href="https://stackoverflow.com/questions/51072235/does-kubernetes-come-with-docker-by-default">this</a> step on Ubuntu?</p>
<p>I think I need something like <code>kubectl config use-context docker-for-desktop</code> which doesn't work on Ubuntu or configure kubectl to point to the right docker port.</p>
<p>How do I get kubernetes configured?</p>
<p>I am on Ubuntu 18.10.
Docker version (Installed with directions from <a href="https://docs.docker.com/v17.09/engine/installation/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">here</a>):</p>
<pre><code>$ docker version
Client:
Version: 18.09.0
API version: 1.38 (downgraded from 1.39)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.4
Git commit: e68fc7a
Built: Mon Oct 1 14:25:33 2018
OS/Arch: linux/amd64
Experimental: false
</code></pre>
<p>Kubectl version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
| <p>Docker Enterprise Edition(EE) for Ubuntu is the only container platform with a built-in choice of orchestrators (Docker Swarm and Kubernetes), operating systems, (Windows and multiple Linux distributions), and supported infrastructure (bare metal, VMs, cloud, and more) -<a href="https://store.docker.com/editions/enterprise/docker-ee-server-ubuntu" rel="nofollow noreferrer">https://store.docker.com/editions/enterprise/docker-ee-server-ubuntu</a> </p>
<p><a href="https://forums.docker.com/t/is-there-a-built-in-kubernetes-in-docker-ce-for-linux/54374" rel="nofollow noreferrer">Here’s</a> an answer confirming the same</p>
<blockquote>
<p>Docker’s Community Edition engine for Linux does not include built-in
kubernetes capabilities. We say</p>
<p>We have added Kubernetes support in both Docker Desktop for Mac and
Windows and in Docker Enterprise Edition (EE).</p>
<p>You can build a Kubernetes cluster yourself on top of one or more CE
engines, though. For some guidance, you can visit the setup
documentation at <a href="https://kubernetes.io/docs/setup/scratch/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/scratch/</a></p>
</blockquote>
|
| <p>I'm coming from AWS and not sure how to do this with GCP. Previously I asked a more general question about service accounts; this one is specific to GKE.</p>
<p>In AWS I can create an ECS service role. I attach policies to that role to give it the access it needs. Then I attach the role to an ECS service. So I can deploy multiple services to the same ECS cluster and give them different access with No static keys being used, no secrets being passed around.</p>
<p>How do I do this with gke? How do I attach a gcp iam service account to a gke deployment/service, etc? Can you used annotations in the deployment yaml to attach a service account?</p>
<p>I want to have multiple deployments and services on the same gke cluster using different service accounts implicitly (no keys being used)</p>
| <p><strong>Introduction:</strong></p>
<p>A Google Cloud Kubernetes Cluster consists of Compute Engine VM instances. When you create a cluster a default service account is attached to each VM instance. These credentials are stored in the instance metadata and can be accessed by using a default application <code>Client()</code> instantiation (Application Default Credentials) or by specifying the credential location.</p>
<p>ADC Credentials Search:</p>
<pre><code>from google.cloud import storage
client = storage.Client()
</code></pre>
<p>OR only from metadata:</p>
<pre><code>from google.auth import compute_engine
from google.cloud import storage
credentials = compute_engine.Credentials()
client = storage.Client(credentials=credentials, project=project)
</code></pre>
<p><strong>[Update]</strong></p>
<p>I do not want to promote poor security practices. The above techniques should be blocked on secure production Kubernetes clusters.</p>
<ol>
<li>Use a minimally privileged service account for the Kubernetes cluster.</li>
<li>Disable legacy metadata server APIs and use metadata concealment.</li>
<li>Use a Pod Security Policy.</li>
<li>Use separate service accounts for node pools.</li>
<li>Restrict traffic between pods with a Network Policy.</li>
</ol>
<p><strong>[End Update]</strong></p>
<p><strong>The Google Kubernetes Method:</strong></p>
<p>The recommended technique for Kubernetes is to create a separate service account for each application that runs in the cluster and reduce the scopes applied to the default service account. The roles assigned to each service account vary based upon the permissions that the applications require.</p>
<p>Service account credentials are downloaded as a Json file and then stored in Kubernetes as a <code>Secret</code>. You would then mount the volume with the secret (the credentials). The application running in the container will then need to load the credentials when creating Google Application Clients such as to access Cloud Storage.</p>
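<p>For completeness, a sketch of creating such a service account and downloading its key with gcloud (project id, account name and role are placeholders):</p>
<pre><code>gcloud iam service-accounts create my-app-sa --display-name "my-app"
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:my-app-sa@my-project.iam.gserviceaccount.com \
    --role roles/storage.objectViewer
gcloud iam service-accounts keys create /secrets/credentials.json \
    --iam-account my-app-sa@my-project.iam.gserviceaccount.com
</code></pre>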
<p>This command will store the downloaded credentials file into a Kubernetes secret volume as the secret named <code>service-account-credentials</code>. The credentials file inside Kubernetes is named <code>key.json</code>. The credentials are loaded from the file that was downloaded from Google Cloud, named <code>/secrets/credentials.json</code>.</p>
<pre><code>kubectl create secret generic service-account-credentials --from-file=key.json=/secrets/credentials.json
</code></pre>
<p>In your deployment file add the following to mount the volume.</p>
<pre><code>spec:
volumes:
- name: google-cloud-key
secret:
secretName: service-account-credentials
  containers:
  - name: my-app              # your application container (name is illustrative)
    # ...
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
</code></pre>
<p>Inside the container, the credentials are loaded from <code>/var/secrets/google/key.json</code></p>
<p>Python Example:</p>
<pre><code>from google.cloud import storage
client = storage.Client.from_service_account_json('/var/secrets/google/key.json')
</code></pre>
<p>This document provides step-by-step details on service account credentials with Kubernetes.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="noreferrer">Authenticating to Cloud Platform with Service Accounts</a></p>
|
<p>I'm trying to get gitlab-runner "run" on a kubernetes cluster, after following the official doc -> <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a> (using kubernetes executor) I'm getting an error once I deploy:</p>
<blockquote>
<p>Error: failed to start container "gitlab-runner": Error response from
daemon: error while creating mount source path
'/usr/share/ca-certificates/mozilla': mkdir
/usr/share/ca-certificates/mozilla: read-only file system</p>
</blockquote>
<p>I'm using the examples from that page and can't figure out why it isn't allowed to create that dir (as I understand it, the default user is root)</p>
<p>Here my config-map.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner
namespace: gitlab
data:
config.toml: |
concurrent = 1
[[runners]]
name = "Kubernetes Runner"
url = "URL"
token = "TOKEN"
executor = "kubernetes"
[runners.kubernetes]
namespace = "gitlab"
</code></pre>
<p>and this is the deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab
spec:
replicas: 1
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
containers:
- args:
- run
image: gitlab/gitlab-runner:alpine-v11.5.0
imagePullPolicy: Always
name: gitlab-runner
volumeMounts:
- mountPath: /etc/gitlab-runner
name: config
- mountPath: /etc/ssl/certs
name: cacerts
readOnly: true
restartPolicy: Always
volumes:
- configMap:
name: gitlab-runner
name: config
- hostPath:
path: /usr/share/ca-certificates/mozilla
name: cacerts
</code></pre>
<p>Here is the complete list of events initializing the pod:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned gitlab-runner-5b689c7cbc-hw6r5 to gke-my-project-dev-default-pool-0d32b263-6skk
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "cacerts"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 29s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk MountVolume.SetUp succeeded for volume "default-token-6hr2h"
Normal Pulling 23s (x2 over 28s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk pulling image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Pulled 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Successfully pulled image "gitlab/gitlab-runner:alpine-v11.5.0"
Normal Created 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Created container
Warning Failed 19s (x2 over 24s) kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Error: failed to start container "gitlab-runner": Error response from daemon: error while creating mount source path '/usr/share/ca-certificates/mozilla': mkdir /usr/share/ca-certificates/mozilla: read-only file system
Warning BackOff 14s kubelet, gke-my-project-dev-default-pool-0d32b263-6skk Back-off restarting failed container
</code></pre>
<p>Any clue will be appreciated</p>
<p>Thanks</p>
| <p>Finally, I got it working. Here is what I use to <strong>register</strong> and <strong>run</strong> the gitlab-runner on GKE:</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner-cm
namespace: gitlab
data:
config.toml: |
concurrent = 4
check_interval = 30
entrypoint: |
#!/bin/bash
set -xe
cp /scripts/config.toml /etc/gitlab-runner/
# Register the runner
/entrypoint register --non-interactive \
--url $GITLAB_URL \
--tag-list "kubernetes, my_project" \
--kubernetes-image "alpine:latest" \
--kubernetes-namespace "gitlab" \
--executor kubernetes \
--config "/etc/gitlab-runner/config.toml" \
--locked=false \
--run-untagged=true \
--description "My Project - Kubernetes Runner" \
--kubernetes-privileged
# Start the runner
/entrypoint run --user=gitlab-runner \
--working-directory=/home/gitlab-runner \
--config "/etc/gitlab-runner/config.toml"
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab
spec:
replicas: 1
selector:
matchLabels:
app: gitlab-runner
template:
metadata:
labels:
app: gitlab-runner
spec:
containers:
- name: gitlab-runner
image: gitlab/gitlab-runner:latest
command: ["/bin/bash", "/scripts/entrypoint"]
env:
- name: GITLAB_URL
value: "URL"
- name: REGISTRATION_TOKEN
value: "TOKEN"
- name: KUBERNETES_NAMESPACE
value: gitlab
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
imagePullPolicy: Always
volumeMounts:
- name: config
mountPath: /scripts
- name: google-cloud-key
mountPath: /var/secrets/google
restartPolicy: Always
volumes:
- name: config
configMap:
name: gitlab-runner-cm
- name: google-cloud-key
secret:
secretName: gitlab-runner-sa
</code></pre>
<p>And Autoscaling:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: gitlab-runner-hpa
namespace: gitlab
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: gitlab-runner
minReplicas: 1
maxReplicas: 3
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
</code></pre>
<p>I hope this helps someone trying to run a Gitlab Runner in a Kubernetes Cluster on Google Kubernetes Engine</p>
|
<p>I have started to learn kubernetes. So, I have installed minikube, and I don't know how to push my locally built image to kubernetes. I'm doing the following: </p>
<pre><code>minikube start
eval $(minikube docker-env)
docker build -t some_image
</code></pre>
<p>But when I run docker images, there isn't my image. So can you help me, how to push and deploy my local docker image?</p>
| <p>The answer shared by @fatcook is a good reference point; a simpler solution is the following.</p>
<p>Before building the docker image, just execute: <code>eval $(minikube docker-env)</code>.</p>
<p>It will export the following variables. </p>
<pre><code>export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/username/.minikube/certs"
export DOCKER_API_VERSION="x.yy"
# Run this command to configure your shell:
# eval $(minikube docker-env)
</code></pre>
<p>Now you can build images <code>docker build</code> and it will be available on minikube instance.</p>
<p>It's a simple, quick-and-dirty way to go.</p>
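<p>One caveat worth mentioning (this assumes a pod spec referencing your locally built image, which is not shown in the question): since the image only exists in minikube's Docker daemon and not in any registry, you may need to set <code>imagePullPolicy</code> so Kubernetes does not try to pull it. A minimal sketch, reusing the <code>some_image</code> tag from the question:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-local-image-pod     # hypothetical name
spec:
  containers:
  - name: app
    image: some_image          # the tag you built against minikube's daemon
    imagePullPolicy: Never     # or IfNotPresent, so no registry pull is attempted
</code></pre>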
|
<p>I am trying to execute a command inside postgres container from shell script. This is what I have so far:</p>
<p><code>kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select count from table where name='FOO';'"</code></p>
<p>I am getting the following error:</p>
<p><code>ERROR: column "foo" does not exist
LINE 1: select count from table where name=FOO;
^</code></p>
<p>The query runs fine inside the container so there must be something wrong with the way I am passing the command. I did try another query:</p>
<pre><code>kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select * from table;'"
</code></pre>
<p>This runs fine. So, I am guessing that it's something to do with the way I am passing the where clause <code>where name='FOO'</code>. How can I get this to work? Kindly help me out.</p>
<p><strong>Update:</strong></p>
<p>Tried escaping using:</p>
<p><strong>1: Double Quotes</strong></p>
<p><code>kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select count from table where name=\"FOO\";'"</code></p>
<pre><code>ERROR: column "FOO" does not exist
LINE 1: select count from table where name="FOO";
^
</code></pre>
<p><strong>2: Single Quotes</strong></p>
<p><code>kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select count from table where name=\'FOO\';'"</code></p>
<pre><code>bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
command terminated with exit code 1
</code></pre>
| <p>I have used a <code>$$</code> <strong><a href="https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING" rel="noreferrer">dollar-quoted string</a></strong> in the where clause and escaped the dollar signs with <code>\$</code>.</p>
<pre><code>kubectl exec -it <postgres_pod> -n <deployment> -- bash -c "psql -U postgres -d database -c 'select count from table where name=\$\$FOO\$\$;'"
</code></pre>
|
<p>I have the following, relatively complicated piece of YAML:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: ${aws_iam_role.tf-eks-node.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
<p>I now want to translate this into a Terraform-Kubernetes compatible resource, which consists exclusively of String, Lists and Maps. How do I do this?
I found an example of how a LimitRange would look <a href="https://www.terraform.io/docs/providers/kubernetes/guides/getting-started.html" rel="nofollow noreferrer">here</a> using hyphens to signify a list:</p>
<pre><code>resource "kubernetes_limit_range" "example" {
metadata {
name = "terraform-example"
}
spec {
limit {
type = "Pod"
max {
cpu = "200m"
memory = "1024M"
}
}
limit {
type = "PersistentVolumeClaim"
min {
storage = "24M"
}
}
limit {
type = "Container"
default {
cpu = "50m"
memory = "24M"
}
}
}
}
</code></pre>
<p>My current try for the mapRoles block is as follows:</p>
<pre><code>mapRole { rolearn = "${aws_iam_role.tf-eks-node.arn}"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]}
</code></pre>
<p>This is not correct because mapRoles is a list (because its singular element is a list element as demonstrated by the hyphen); but if it is a list, and everything after the hyphen is the list element, what is the list title?</p>
<p>using the idea mentioned in an answer, I tried to put the metadata block in a data-block beforehand:</p>
<pre><code>data "template_file" "map_roles" {
template = <<EOF
- rolearn: ${var.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
vars {
arn = "${var.kube-arn}"
}
}
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${data.template_file.map_roles.template}"
}
}
</code></pre>
| <p>Managed to make it work with some changes to David Maze's answer:</p>
<pre><code>resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = <<EOF
- rolearn: ${var.kube-arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
EOF
}
}
</code></pre>
<p>The idea of putting the |-input into an EOF-block is what I was missing. Terraform can resolve variable input directly in that block, so I didn't even need a data block to set the template up first.</p>
|
<p>I'm using <code>kubectl apply</code> to update my Kubernetes pods:</p>
<pre><code>kubectl apply -f /my-app/service.yaml
kubectl apply -f /my-app/deployment.yaml
</code></pre>
<p>Below is my service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app
labels:
run: my-app
spec:
type: NodePort
selector:
run: my-app
ports:
- protocol: TCP
port: 9000
nodePort: 30769
</code></pre>
<p>Below is my deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
selector:
matchLabels:
run: my-app
replicas: 2
template:
metadata:
labels:
run: my-app
spec:
containers:
- name: my-app
image: dockerhubaccount/my-app-img:latest
ports:
- containerPort: 9000
protocol: TCP
imagePullSecrets:
- name: my-app-img-credentials
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
</code></pre>
<p>This works fine the first time, but on subsequent runs, my pods are not getting updated.</p>
<p>I have read the suggested workaround at <a href="https://github.com/kubernetes/kubernetes/issues/33664" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/33664</a> which is:</p>
<pre><code>kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
<p>I was able to run the above command, but it did not resolve the issue for me.</p>
<p>I know that I can trigger pod updates by manually changing the image tag from "latest" to another tag, but I want to make sure I get the latest image without having to check Docker Hub.</p>
<p>Any help would be greatly appreciated.</p>
| <p>If nothing changes in the deployment spec, the pods will not be updated for you. This is one of many reasons it is not recommended to use <code>:latest</code>, as the other answer explains in more detail. The <code>Deployment</code> controller is very simple and pretty much just does <code>DeepEquals(old.Spec.Template, new.Spec.Template)</code>, so you need some actual change, such as you have with the PATCH call that sets a label to the current datetime.</p>
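<p>A minimal sketch of what such an "actual change" could look like, assuming you tag each build uniquely (the <code>v1.0.1</code> tag below is hypothetical):</p>
<pre><code># push a uniquely tagged image instead of :latest
docker push dockerhubaccount/my-app-img:v1.0.1

# point the deployment at the new tag; the pod template changes, so a rollout is triggered
kubectl set image deployment/my-app my-app=dockerhubaccount/my-app-img:v1.0.1
</code></pre>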
|
<p>I'm running deployments on GKE,</p>
<p>using <code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0</code> image as nginx-ingress-controller</p>
<p>I'm trying to increase <code>proxy_send_timeout</code> and <code>proxy_read_timeout</code> following this <a href="https://github.com/kubernetes/ingress-nginx/blob/nginx-0.12.0/docs/user-guide/annotations.md" rel="noreferrer">link</a></p>
<p>here is my ingress config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: production
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "360s"
nginx.ingress.kubernetes.io/proxy-send-timeout: "360s"
nginx.ingress.kubernetes.io/proxy-read-timeout: "360s"
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/client-body-buffer-size: 100m
spec:
rules:
- host: app.my.com
http:
paths:
- backend:
serviceName: front-app
servicePort: 80
- host: api.my.com
http:
paths:
- backend:
serviceName: backend-app
servicePort: 80
- host: api.old.com
http:
paths:
- backend:
serviceName: backend-app
servicePort: 80
tls:
- hosts:
- app.my.com
- api.my.com
secretName: tls-secret-my-com
- hosts:
- api.old.com
secretName: tls-secret-old-com
</code></pre>
<p>still this does not change the <code>proxy_send_timeout</code> and <code>proxy_read_timeout</code></p>
<p>requests which take longer than 60s (default nginx timeout) are closed</p>
<p>I see this log:</p>
<pre><code>[error] 20967#20967: * upstream prematurely closed connection while reading response header from upstream, client: 123.456.789.12, server: api.my.com, request: "GET /v1/example HTTP/2.0", upstream: "http://11.22.3.44:4000/v3/example", host: "api.my.com", referrer: "https://app.my.com/"
</code></pre>
<p>when I go into the nginx pod:</p>
<pre><code>> kubectl exec -it nginx-ingress-controller-xxxx-yyyy -n ingress-nginx -- bash
> cat /etc/nginx/nginx.conf
</code></pre>
<p>output:</p>
<pre><code>server {
server_name _ ;
listen 80 default_server backlog=511;
location / {
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
</code></pre>
<p><code>proxy_send_timeout</code> and <code>proxy_read_timeout</code> are set to <code>60s</code> and not <code>360s</code> as I configured on the ingress</p>
<p>So I tried manually changing the timeouts in the <code>nginx</code> conf, and then I no longer got the timeout on the client, but every time nginx is restarted the values are reset to the default <code>60s</code>.</p>
<p>How can I configure the timeouts correctly on the ingress? </p>
| <p>Based on: <a href="https://github.com/kubernetes/ingress-nginx/issues/2007" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/issues/2007</a></p>
<p>Try changing the values in the annotations to '360'. The value needs to be a plain number of seconds, not a duration string like '360s'. </p>
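<p>Applied to the ingress from the question, the annotations would look like this (a sketch, assuming only the timeout values change):</p>
<pre><code>annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "360"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "360"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "360"
</code></pre>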
|
<p>I have not installed minikube on my local Windows machine. I have only used Kubernetes in Docker: in the Docker settings I checked "Enable Kubernetes". Everything is OK; I created a Pod and a Service successfully. Finally, I need an <strong>ip</strong> <strong>different from localhost</strong> to access it through the browser.</p>
<p><a href="http://I_need_an_ip:31515" rel="nofollow noreferrer">http://I_need_an_ip:31515</a></p>
<p>What is the equivalent of <strong>minikube ip</strong> when using <strong>kubernetes in docker</strong> for Windows, without <strong>minikube</strong>?</p>
| <p>"kubectl describe node docker-for-desktop" gives you the internal IP address that the docker for desktop node is running on (see also <a href="https://stackoverflow.com/questions/51209870/minikube-vs-kubernetes-in-docker-for-windows#comment89466771_51213918">Minikube vs Kubernetes in Docker for Windows</a> )</p>
|
<p>I created a PV and claimed it through a PVC. I see that the PV is created, but the PVC binding status is stuck in pending. When I looked at the describe pvc output, I see "no persistent volumes available for this claim and no storage class is set". From the documentation I understand that a storage class isn't mandatory, so I am unsure what's missing in the PVC file.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-ghost
labels:
pv: pv-ghost
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 3Gi
hostPath:
path: /ghost/data
--------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-ghost
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
selector:
matchLabels:
pv: pv-ghost
</code></pre>
<p>Out of describe PV and PVC</p>
<pre><code> kubectl describe pv pv-ghost
Name: pv-ghost
Labels: pv=pv-ghost
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 3Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /ghost/data
HostPathType:
Events: <none>
kubectl describe pvc pvc-ghost
Name: pvc-ghost
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 8m44s (x8 over 10m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Normal FailedBinding 61s (x5 over 2m3s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: <none>
</code></pre>
| <p>You need to specify the volume source manually.</p>
<p>ReadWriteMany is only available for <code>AzureFile</code>, <code>CephFS</code>, <code>Glusterfs</code>, <code>Quobyte</code>, <code>NFS</code> and <code>PortworxVolume</code>.
<code>Flexvolume</code> also supports it depending on the driver, and <code>VsphereVolume</code> works when the pods are co-located.
You can read all about it in the Kubernetes docs on <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Modes</a>.</p>
<p>An example PV for aws would look like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-volume
spec:
capacity:
storage: 15Gi # Doesn't really matter, as EFS does not enforce it anyway
volumeMode: Filesystem
accessModes:
- ReadWriteMany
mountOptions:
- hard
- nfsvers=4.1
- rsize=1048576
- wsize=1048576
- timeo=300
- retrans=2
nfs:
path: /
server: fs-XXX.efs.eu-central-2.amazonaws.com
</code></pre>
|
<p>Can someone please let me know why the kubernetes pod uses the none network instead of the bridge network on the worker node?</p>
<p>I Setup a kubernetes cluster by use kubo. </p>
<pre><code>The worker node by default will have 3 docker network.
NETWORK ID NAME DRIVER
30bbbc954768 bridge bridge
c8cb510d1646 host host
5e6b770e7aa6 none null
</code></pre>
<p>The docker default network is bridge. Running <code>docker network inspect bridge</code> gives:</p>
<pre><code>"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
</code></pre>
<p>But if I use kubectl run command to start a pod</p>
<pre><code>kubectl run -it --image nginx bash
</code></pre>
<p>on the work node there will be two container start </p>
<pre><code>7cf6061fe0b8 40960efd7b8f "nginx -g 'daemon off" 33 minutes ago
Up 33 minutes k8s_bash_bash-325157647-ns4xj_default_9d5ea60e-cf74-11e7-9ae8-00505686d000_2
37c51d605b16 gcr.io/google_containers/pause-amd64:3.0 "/pause"
35 minutes ago Up 35 minutes k8s_POD_bash-325157647-ns4xj_default_9d5ea60e-cf74-11e7-9ae8-00505686d000_0
</code></pre>
<p>If we run <code>docker inspect 37c51d605b16</code> we can see it will use the "none" network:</p>
<pre><code>"Networks": {
"none": {
"IPAMConfig": null,
"Links": null,
</code></pre>
<p>So why kubernetes will use the none network for communication?</p>
| <p>Kubernetes uses an overlay network to manage pod-to-pod communication on the same or different hosts. Each pod gets a single IP address for all containers in that pod. A <code>pause</code> container is created to hold the network namespace and thus reserve the IP address, which is useful when containers restart, as they get the same IP.</p>
<p>The pod has its own ethernet adapter, say <code>eth0</code>, which is mapped to a virtual ethernet adapter on the host, say <code>veth0xx</code>, in the root network namespace, which in turn is connected to a network bridge <code>docker0</code> or <code>cbr0</code>.</p>
<p>In my Kubernetes setup, with Project Calico as the overlay network CNI plugin, Calico creates an ethernet adapter in each pod and maps it to a virtual adapter on the host (name format <code>calic[0-9a-z]</code>). This virtual adapter is connected to a Linux ethernet bridge. IP table rules filter packets to this bridge and then on to the CNI plugin provider, in my case Calico, which is able to redirect the packet to the correct pod.</p>
<p>So your containers are in the <code>none</code> docker network as docker networking is disabled in your Kubernetes setup, as it's using the overlay network via a CNI plugin. Kubernetes doesn't handle networking but delegates it to the underlying CNI plugin.</p>
|
<p>Is it possible to specify CPU ID list to the Kubernetes cpumanager? The goal is to make sure pods get CPUs from a single socket (0). I brought all the CPUs on the peer socket offline as mentioned <a href="https://www.kernel.org/doc/html/latest/core-api/cpu_hotplug.html?highlight=cpuhotplug" rel="nofollow noreferrer">here</a>, for example:</p>
<pre><code>$ echo 0 > /sys/devices/system/cpu/cpu5/online
</code></pre>
<p>After doing this, the Kubernetes master indeed sees the remaining online CPUs</p>
<pre><code>kubectl describe node foo
Capacity:
cpu: 56 <<< socket 0 CPU count
ephemeral-storage: 958774760Ki
hugepages-1Gi: 120Gi
memory: 197524872Ki
pods: 110
Allocatable:
cpu: 54 <<< 2 system reserved CPUs
ephemeral-storage: 958774760Ki
hugepages-1Gi: 120Gi
memory: 71490952Ki
pods: 110
System Info:
Machine ID: 1155420082478559980231ba5bc0f6f2
System UUID: 4C4C4544-0044-4210-8031-C8C04F584B32
Boot ID: 7fa18227-748f-496c-968c-9fc82e21ecd5
Kernel Version: 4.4.13
OS Image: Ubuntu 16.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.3
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
</code></pre>
<p>However, cpumanager still seems to think there are 112 CPUs (socket0 + socket1).</p>
<pre><code>cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0-111"}
</code></pre>
<p>As a result, the kubelet system pods are throwing the following error:</p>
<pre><code>kube-system kube-proxy-nk7gc 0/1 rpc error: code = Unknown desc = failed to update container "eb455f81a61b877eccda0d35eea7834e30f59615346140180f08077f64896760": Error response from daemon: Requested CPUs are not available - requested 0-111, available: 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110 762 36d <IP address> foo <none>
</code></pre>
| <p>I was able to get this working. Posting this as an answer so that someone in need might benefit.</p>
<p>It appears the CPU set is read from <code>/var/lib/kubelet/cpu_manager_state</code> file and it is not updated across kubelet restarts. So this file needs to be removed before restarting kubelet.</p>
<p>The following worked for me:</p>
<pre><code># On a running worker node, bring desired CPUs offline. (run as root)
$ cpu_list=`lscpu | grep "NUMA node1 CPU(s)" | awk '{print $4}'`
$ chcpu -d $cpu_list
$ rm -f /var/lib/kubelet/cpu_manager_state
$ systemctl restart kubelet.service
# Check the CPU set seen by the CPU manager
$ cat /var/lib/kubelet/cpu_manager_state
# Try creating pods and check the syslog:
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122466 8070 state_mem.go:84] [cpumanager] updated default cpuset: "0,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110"
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122643 8070 policy_static.go:198] [cpumanager] allocateCPUs: returning "2,4,6,8,58,60,62,64"
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122660 8070 state_mem.go:76] [cpumanager] updated desired cpuset (container id: 356939cdf32d0f719e83b0029a018a2ca2c349fc0bdc1004da5d842e357c503a, cpuset: "2,4,6,8,58,60,62,64")
</code></pre>
<p>I have reported a <a href="https://github.com/kubernetes/kubernetes/issues/71622" rel="nofollow noreferrer">bug here</a> as I think the CPU set should be updated after kubelet restarts.</p>
|
<p>I have a small <strong>java</strong> webapp comprising of three microservices - <strong>api-service</strong>,<strong>book-service</strong> and <strong>db-service</strong> all of which are deployed on a kubernetes cluster locally using minikube.</p>
<p>I am planning to keep separate UIs for <strong>api-service</strong> and <strong>book-service</strong> , with the common static files served from a separate pod, probably an <code>nginx:alpine</code> image.</p>
<p>I was able to create a front end that serves the static files from <code>nginx:alpine</code> referring to this <a href="https://www.linkedin.com/pulse/serve-static-files-from-docker-via-nginx-basic-example-arun-kumar" rel="noreferrer">tutorial.</a> </p>
<p>I would like to use <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer"><code>ingress-nginx</code></a> controller for routing requests to the two services.</p>
<p>The below diagram crudely shows where I am now.</p>
<p>I am confused as to where I should place the pod that serves the static content, and how to connect it to the ingress resource. I guess that keeping a front end pod before ingress defeats the purpose of the ingress-nginx controller. What is the best practice to serve static files? Appreciate any help. Thanks.</p>
<p><a href="https://i.stack.imgur.com/i86IZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/i86IZ.png" alt="enter image description here"></a> </p>
| <p>It looks like there is a confusion here: it is the users, browsing online, who trigger standard requests both to "download" your static content <strong>and</strong> to use your 2 APIs (book and api). It's not the NGINX service serving the static content that is accessing your APIs, but the users' browsers/applications, and they do that exactly the same way for both static content and APIs (the former has more/specific headers and data, like auth...).</p>
<p>On your diagram you'll want to put your new <code>static-service</code> at the exact same level as your <code>book-service</code> and <code>api-service</code>, ie <em>behind</em> the ingress. But your <code>static-service</code> won't have a link with the <code>db-service</code> like the other 2. Then just complete your ingress rules, with the static-service at the end as in this example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: your-global-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /book-service
backend:
serviceName: book-service
servicePort: 80
- path: /api-service
backend:
serviceName: api-service
servicePort: 80
- path: /
backend:
serviceName: static-service
servicePort: 80
</code></pre>
<p>You'll have to adjust your services names and ports, and pick the paths you want your users to access your APIs, in the example above you'd have:</p>
<ul>
<li><code>foo.bar.com/book-service</code> for your book-service</li>
<li><code>foo.bar.com/api-service</code> for the api-service</li>
<li><code>foo.bar.com/</code> ie everything else going to the static-service</li>
</ul>
|
<p>I was trying to find some alternative for docker-swarm <code>rollback</code> command, which allows you to specify rollback strategy in the deployment file.
In k8s ideally it should use <code>readinessProbe</code>, and if didn't pass <code>failureThreshold</code> it should rollback, before starting deployment of the next pod (to avoid downtime).</p>
<p>Currently, in my deployment script, I'm using hook <code>kubectl rollout status deployment $DEPLOYMENT_NAME || kubectl rollout undo deployment $DEPLOYMENT_NAME</code>, which works, but it's not ideal because first rollout command will trigger error way after unhealthy pod will be deployed and a healthy one will be destroyed which will cause downtime.</p>
<p>Ideally, it shouldn't even kill current pod before a new one will pass <code>readinessProbe</code></p>
| <p>There is no specific rollback strategy in a Kubernetes deployment. You could try a combination of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">RollingUpdate</a> with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">max unavailable</a> (aka <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">Proportional Scaling</a>), then at some point <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">pause your deployment</a>, resume it if everything looks good, and roll back if something went wrong.</p>
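<p>A minimal sketch of that first option, assuming your container exposes a health endpoint for the readiness probe (the <code>/healthz</code> path, port and image below are placeholders): with <code>maxUnavailable: 0</code> the old pod is only terminated once a new pod has passed its readiness probe, which avoids the downtime you describe.</p>
<pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a ready pod away before a replacement is ready
      maxSurge: 1
  template:
    spec:
      containers:
      - name: app
        image: my-image:tag        # placeholder
        readinessProbe:
          httpGet:
            path: /healthz         # placeholder endpoint
            port: 8080
          failureThreshold: 3
          periodSeconds: 10
</code></pre>
<p>Note that this only blocks a bad rollout from taking traffic; it does not roll back by itself, so you would still combine it with <code>kubectl rollout undo</code> or one of the canary approaches described below.</p>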
<p>The recommended way is really to use another deployment as <a href="https://medium.com/google-cloud/kubernetes-canary-deployments-for-mere-mortals-13728ce032fe" rel="nofollow noreferrer">canary</a> split the traffic through a load balancer between canary and non-canary, then if everything goes well upgrade the non-canary and shut down the canary. If something goes wrong shutdown the canary and keep the non-canary until the issue is fixed.</p>
<p>Another strategy is to use something like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> that facilitates <a href="https://istio.io/blog/2017/0.1-canary/" rel="nofollow noreferrer">canary deployments</a>.</p>
|
<p>I am scraping the kubernetes metrics from prometheus and would need to extract the number of running pods.</p>
<p>I can see the container_last_seen metric, but how should I get the number of pods running? Can someone help with this?</p>
| <p>If you need to get the number of running pods, you can use a metric from the list of pod metrics at https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md (to get the info purely on pods, it makes sense to use pod-specific metrics).</p>
<p>For example, to get the number of pods per namespace:
<code>count(kube_pod_info{namespace="$namespace_name"}) by (namespace)</code></p>
<p>To get the number of all pods running on the cluster, just do:
<code>count(kube_pod_info)</code></p>
|
<p>My Traefik Ingress DaemonSet shows some awkard metrics in its dashboard. </p>
<p>Is it correct? I really doubt that my average response time is beyond minutes.</p>
<p>I think I'm doing something wrong but I have no idea what it is.</p>
<p><a href="https://i.stack.imgur.com/72SCs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72SCs.png" alt="Traefik Dashboard"></a></p>
| <p>Answered in comments: Traefik's stats are very literal, and when using WebSockets it counts each one as a single HTTP connection (because it technically is) that lasts for minutes or hours.</p>
|
<p>I have installed my kubernetes cluster(1master, 3worker)in the digital ocean.<br>
The problem is I installed both flannel and calico in this cluster.<br>
I want to remove the calico completely from my cluster or is it ok to have multiple CNI?</p>
| <p>Use the following two commands to remove calico from your node:</p>
<pre><code>kubectl delete -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl delete -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
</code></pre>
|
<p>I used to define my custom resources using apiserver-builder, which is currently deprecated and recommended to use kubebuilder instead.</p>
<p>I tried to generate my resources using kubebuilder, but i found <code>sigs.k8s.io/controller-runtime</code> version in Gopkg.toml is v0.1.1, which is based on k8s 1.10.1 .</p>
<p>Then i searched in controller-runtime repository, the latest version is v0.1.7, which is based on k8s 1.11.2 .</p>
<p>I am wondering when could kubebuilder/controller-runtime update to k8s 1.12.x? Or if kubebuilder still maintained?</p>
| <p>Yes, the <a href="https://github.com/kubernetes-sigs/controller-runtime" rel="nofollow noreferrer">kubernetes-sigs/controller-runtime</a> is still actively being developed and maintained.</p>
<blockquote>
<p>I am wondering when could kubebuilder/controller-runtime update to k8s 1.12.x?</p>
</blockquote>
<p>The best way to ask the maintainers to make this happen would be to ask on the #kubebuilder channel on the Kubernetes Slack, or create an issue on the repo.</p>
|
<p>While using kubectl port-forward function I was able to succeed in port forwarding a local port to a remote port. However it seems that after a few minutes idling the connection is dropped. Not sure why that is so.</p>
<p>Here is the command used to portforward:</p>
<pre><code>kubectl --namespace somenamespace port-forward somepodname 50051:50051
</code></pre>
<p>Error message:</p>
<pre><code>Forwarding from 127.0.0.1:50051 -> 50051
Forwarding from [::1]:50051 -> 50051
E1125 17:18:55.723715 9940 portforward.go:178] lost connection to pod
</code></pre>
<p>Was hoping to be able to keep the connection up</p>
| <p>Setting the kubelet's <code>streaming-connection-idle-timeout</code> to 0 should be the right solution, but if you don't want to change anything, you can use a while-do loop.</p>
<p>Format: <code>while true; do <<YOUR COMMAND HERE>>; done</code></p>
<p>So just entering in the CLI: <code>while true; do kubectl --namespace somenamespace port-forward somepodname 50051:50051; done</code> should keep kubectl reconnecting when the connection is lost.</p>
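<p>For reference, if you do want to change the timeout, the setting lives in the kubelet configuration (a sketch, assuming you manage the kubelet config file yourself and can restart the kubelet):</p>
<pre><code># KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
streamingConnectionIdleTimeout: 0s   # 0 disables the idle timeout
</code></pre>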
|
<p>I have an app/pod <code>app1</code> with 2 containers, <code>service1</code> and <code>service2</code>. These services write logs to /var/log/app1Service1.log and /var/log/aapp1Service2.log. I'd like to tail the logs from my Mac's CLI. I tried the command below but it did not work.</p>
<pre><code>~ $ kubectl exec app1-6f6749ccdd-4ktwf -c app1Service1 "tail -f -n +1 /var/log/app1Service1.log"
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"tail -f -n +1 /var/log/app1Service1.log\": stat tail -f -n +1 /var/log/app1Service1.log: no such file or directory"
command terminated with exit code 126
~ $
</code></pre>
<p>below command works:</p>
<pre><code>kubectl exec app1-6f6749ccdd-4ktwf -c app1Service1 ls
kubectl exec app1-6f6749ccdd-4ktwf -c app1Service1 "ls"
</code></pre>
<p>Seeing failures when I pass arguments to command. </p>
| <p>Add <code>bash -c</code>, or if your container has <code>sh</code>, add <code>sh -c</code>:</p>
<pre><code>kubectl exec app1-6f6749ccdd-4ktwf -c app1Service1 -- bash -c "tail -f -n +1 /var/log/app1Service1.log"
</code></pre>
<p>Hope this'll help</p>
|
<p>How do you set the following label in an already applied deployment?</p>
<pre><code>kubectl label deployments my-deployment-v1 app=my-deployment
</code></pre>
<p>Is setting:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deployment-v1
labels:
app: my-deployment
</code></pre>
<p>And I need the following for a service to find it:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deployment-v1
spec:
template:
metadata:
labels:
app: my-deployment
</code></pre>
| <p>You need to <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#use-a-json-merge-patch-to-update-a-deployment" rel="nofollow noreferrer">patch</a> your resource like this:</p>
<pre><code>kubectl patch deployments/my-deployment-v1 \
-p '{"spec":{"template":{"metadata":{"labels":{"app":"my-deployment"}}}}}'
</code></pre>
|
<p>I'm running into the following issue with tiller:</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 18s (x15 over 1m) replicaset-controller Error creating: pods "tiller-deploy-6f65cf89f-" is forbidden: error looking up service account k8s-tiller/k8s-tiller: serviceaccount "k8s-tiller" not found
</code></pre>
<p>However a <code>k8s-tiller</code> service account exists (in the default namespace).</p>
<p>How can I investigate this further? Is it possibly looking in the <code>k8s-tiller</code> namespace, and if so could I just create the service account manually then?</p>
| <p>I faced issues with helm until I performed the actions below:</p>
<pre><code> curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
 helm init --service-account tiller
 # or, if you have already run helm init before:
 helm init --service-account tiller --upgrade
</code></pre>
<p>Hope this helps you.</p>
|
<p>I get the following error message in my Gitlab CI pipeline and I can't do anything with it. Yesterday the pipeline still worked, but I didn't change anything in the <em>yml</em> and I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.</p>
<pre><code>$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
</code></pre>
<blockquote>
<p>Error from server (NotFound): deployments.extensions "ft-backend" not
found</p>
</blockquote>
<p><strong>.gitlab-ci.yml</strong></p>
<pre><code>image: docker:latest
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay
SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
- build
- package
- deploy
maven-build:
image: maven:3-jdk-8
stage: build
script: "mvn package -B"
artifacts:
paths:
- target/*.jar
docker-build:
stage: package
script:
- docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
- docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
k8s-deploy:
image: google/cloud-sdk
stage: deploy
script:
- echo "$GOOGLE_KEY" > key.json
- gcloud auth activate-service-account --key-file key.json
- gcloud config set compute/zone europe-west3-a
- gcloud config set project projectX
- gcloud config unset container/use_client_certificate
- gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
- kubectl apply -f deployment.yml
</code></pre>
| <p>I suppose that when you invoke the command:</p>
<p><code>kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend</code></p>
<p>the deployment <code>ft-backend</code> does not exist in your cluster. Does the command <code>kubectl get deployment ft-backend</code> return the same result?</p>
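<p>If it does exist but in a different namespace than the one your kubectl context points to, that would also explain the error. A quick way to check (just a suggestion, not part of your pipeline):</p>
<pre><code>kubectl get deployments --all-namespaces | grep ft-backend
</code></pre>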
|
<pre><code>Base OS : CentOS (1 master 2 minions)
K8S version : 1.9.5 (deployed using KubeSpray)
</code></pre>
<p>I am new to Kubernetes Ingress and am setting up 2 different services, each reachable with its own path.</p>
<p>I have created 2 deployments :</p>
<pre><code>kubectl run nginx --image=nginx --port=80
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
</code></pre>
<p>I have also created their corresponding services :</p>
<pre><code>kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment echoserver --target-port=8080 --type=NodePort
</code></pre>
<p>My <code>svc</code> are :</p>
<pre><code>[root@node1 kubernetes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver NodePort 10.233.48.121 <none> 8080:31250/TCP 47m
nginx NodePort 10.233.44.54 <none> 80:32018/TCP 1h
</code></pre>
<p>My NodeIP address is <code>172.16.16.2</code> and I can access both pods using</p>
<pre><code>http://172.16.16.2:31250 &
http://172.16.16.2:32018
</code></pre>
<p>Now on top of this I want to deploy an Ingress so that I can reach both pods not using 2 IPs and 2 different ports BUT 1 IP address with different paths.</p>
<p>So my Ingress file is :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fanout-nginx-ingress
spec:
rules:
- http:
paths:
- path: /nginx
backend:
serviceName: nginx
servicePort: 80
- path: /echo
backend:
serviceName: echoserver
servicePort: 8080
</code></pre>
<p>This yields :</p>
<pre><code>[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name: fanout-nginx-ingress
Namespace: development
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/nginx nginx:80 (<none>)
/echo echoserver:8080 (<none>)
Annotations:
Events: <none>
</code></pre>
<p>Now when I try accessing the Pods using the NodeIP address (172.16.16.2), I get nothing.</p>
<pre><code>http://172.16.16.2/echo
http://172.16.16.2/nginx
</code></pre>
<p>Is there something I have missed in my configs ? </p>
| <p>I had the same issue on my bare metal installation - or rather something close to that (a kubernetes virtual cluster - a set of virtual machines connected via a Host-Only Adapter). Here is a link to my <a href="https://github.com/ldynia/vagrant-k8s-centos" rel="noreferrer">kubernetes vlab</a>.</p>
<p>First of all, make sure that you have an ingress controller installed. Currently there are two ingress controllers worth trying: the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">kubernetes nginx ingress controller</a> and the <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">nginx kubernetes ingress controller</a> - I installed the first one.</p>
<h3>Installation</h3>
<p>Go to the <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">installation instructions</a> and execute the first step. </p>
<pre><code># prerequisite-generic-deployment-command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
</code></pre>
<p>Next, get the IP addresses of the cluster nodes.</p>
<pre><code>$ kubectl get nodes -o wide
NAME STATUS ROLES ... INTERNAL-IP
master Ready master ... 192.168.121.110
node01 Ready <none> ... 192.168.121.111
node02 Ready <none> ... 192.168.121.112
</code></pre>
<p>Further, create an <code>ingress-nginx</code> service of type <code>LoadBalancer</code>. I do this by downloading the <code>NodePort</code> service template from the installation tutorial and making the following adjustments in the <code>svc-ingress-nginx-lb.yaml</code> file.</p>
<pre><code>$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml > svc-ingress-nginx-lb.yaml
# my changes svc-ingress-nginx-lb.yaml
type: LoadBalancer
externalIPs:
- 192.168.121.110
- 192.168.121.111
- 192.168.121.112
externalTrafficPolicy: Local
# create ingress- service
$ kubectl apply -f svc-ingress-nginx-lb.yaml
</code></pre>
<h3>Verification</h3>
<p>Check that <code>ingress-nginx</code> service was created.</p>
<pre><code>$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.110.127.9 192.168.121.110,192.168.121.111,192.168.121.112 80:30284/TCP,443:31684/TCP 70m
</code></pre>
<p>Check that <code>nginx-ingress-controller</code> deployment was created.</p>
<pre><code>$ kubectl get deploy -n ingress-nginx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-ingress-controller 1 1 1 1 73m
</code></pre>
<p>Check that <code>nginx-ingress</code> pod is running.</p>
<pre><code>$ kubectl get pods --all-namespaces -l
app.kubernetes.io/name=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-5cd796c58c-lg6d4 1/1 Running 0 75m
</code></pre>
<p>Finally, check ingress controller version. <strong>Don't forget to change pod name!</strong></p>
<pre><code>$ kubectl exec -it nginx-ingress-controller-5cd796c58c-lg6d4 -n ingress-nginx -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
</code></pre>
<h2>Testing</h2>
<p>Test that the ingress controller is working by executing the steps in this <a href="https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82" rel="noreferrer">tutorial</a> - of course, you will omit the <code>minikube</code> part. </p>
<p>Successful execution of all steps will create an ingress resource that should look like this. </p>
<pre><code>$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
ingress-tutorial myminikube.info,cheeses.all 192.168.121.110,192.168.121.111,192.168.121.112 80 91m
</code></pre>
<p>And pods that looks like this.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cheddar-cheese-6f94c9dbfd-cll4z 1/1 Running 0 110m
echoserver-55dcfbf8c6-dwl6s 1/1 Running 0 104m
stilton-cheese-5f6bbdd7dd-8s8bf 1/1 Running 0 110m
</code></pre>
<p>Finally, test that request to <code>myminikube.info</code> propagates via ingress load balancer. </p>
<pre><code>$ curl myminikube.info
CLIENT VALUES:
client_address=10.44.0.7
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myminikube.info:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=myminikube.info
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=myminikube.info
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.32.0.1
x-request-id=b2fb3ee219507bfa12472c7d481d4b72
x-scheme=http
BODY:
</code></pre>
<p>It was a long journey to make ingress work in a bare-metal-like environment. Thus, I will include the relevant links that helped me along. </p>
<ul>
<li><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">reproducable tutorial</a> </li>
<li><a href="https://linuxhint.com/install-minikube-ubuntu/" rel="noreferrer">installation of minikube on ubuntu</a></li>
<li><a href="https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e" rel="noreferrer">ingress I</a></li>
<li><a href="https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82" rel="noreferrer">ingress II</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/1750" rel="noreferrer">digging</a></li>
<li><a href="https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress" rel="noreferrer">reverse engineering on ingress in kubernetes</a></li>
</ul>
|
<p>I have multiple helm charts creating a single deployment each (usually creating one pod each).</p>
<p>The requirement is to serialise the deployment of pods ie before the second pod can be deployed the first pod needs to be in a running state.
(coz second pod reads values from the first pod). The third pod again should only come up with the second pod is up and running or completed.</p>
<p>I tried using Umbrella helm hooks for this but hooks are evaluated on a chart object level rather than a collection of charts.</p>
<p>I was looking into having an init container that regularly checks the readiness probe of the first pod (not sure if this can be done) before running the second pod? Not sure -- ideas, please...</p>
| <p><strong>Init Containers</strong></p>
<p>If you don't mind letting your previous services run to completion before running the next ones you can take advantage of the Init Containers feature: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<blockquote>
<p>They run to completion before any app Containers start, whereas app
Containers run in parallel, so Init Containers provide an easy way to
block or delay the startup of app Containers until some set of
preconditions are met.</p>
</blockquote>
<p><strong>Behavior</strong></p>
<blockquote>
<p>During the startup of a Pod, the Init Containers are started in order,
after the network and volumes are initialized. Each Container must
exit successfully before the next is started. If a Container fails to
start due to the runtime or exits with failure, it is retried
according to the Pod restartPolicy. However, if the Pod restartPolicy
is set to Always, the Init Containers use RestartPolicy OnFailure.</p>
<p>A Pod cannot be Ready until all Init Containers have succeeded. The
ports on an Init Container are not aggregated under a service. A Pod
that is initializing is in the Pending state but should have a
condition Initializing set to true.</p>
<p>If the Pod is restarted, all Init Containers must execute again.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior</a></p>
<p><strong>Caveats</strong></p>
<p>Please review the differences and limitations in the documentation before deciding to use this feature.</p>
<p>ie.</p>
<blockquote>
<p>Differences from regular Containers</p>
<p>Init Containers support all the fields and features of app Containers,
including resource limits, volumes, and security settings. However,
the resource requests and limits for an Init Container are handled
slightly differently,</p>
</blockquote>
|
<p>I have set up a a two node cluster on Raspberry pi. I was wondering if it would be possible to make a hard drive connected to the master node as the persistent volume ? I am unable to find any options in the documentation.</p>
<p>Any suggestions or directions is much appreciated.thanks</p>
| <p><strong>hostPath volume Type</strong></p>
<p>The <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume type is single-node only, meaning that a pod on one node cannot access the hostPath volume on another node. One way to get around this limitation may be to either create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> or Daemonset which could force pods to always deploy to the same node(s), or force a deployment's pods to always be deployed to the same node via a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a>.</p>
<p><strong>If you are circumventing the hostPath single-node limitation via nodeSelector while using multiple pods on the same node beware of the following issue:</strong></p>
<p><a href="https://stackoverflow.com/questions/46738296/multiple-kubernetes-pods-sharing-the-same-host-path-pvc-will-duplicate-output">Multiple Kubernetes pods sharing the same host-path/pvc will duplicate output</a></p>
<p><strong>Alternative Volume Types</strong></p>
<p>If you do not wish to circumvent the limitation of the hostPath volume type, you should look into other volume types such as NFS or Gluster, both of which you can setup locally, but require some additional configuration and setup.</p>
<p>If you have only one drive which you can attach to one node, I think you should use the basic NFS volume type as it does not require replication.</p>
<p>If however, you can afford another drive to plug in to the second node, you can take advantage of GlusterFS's replication feature.</p>
<p><strong>Volume Types</strong></p>
<p>NFS: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#nfs</a></p>
<p>GlusterFS: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs</a></p>
<p><strong>Converting a drive to a volume:</strong></p>
<p>As for making your hard-drive become a persistent volume, I would separate that into 2 tasks.</p>
<ol>
<li><p>You need mount your physical drive to make it available at a specific path within your operating system.</p></li>
<li><p>Refer to the path of the mounted drive when configuring NFS, GlusterFS, or hostPath.</p></li>
</ol>
|
<p>I have successfully followed the documentation <a href="https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine" rel="nofollow noreferrer">here</a> and <a href="https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine#deploy_backend" rel="nofollow noreferrer">here</a> to deploy an API spec and GKE backend to Cloud Endpoints.</p>
<p>This has left me with a deployment.yaml that looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: esp-myproject
spec:
ports:
- port: 80
targetPort: 8081
protocol: TCP
name: http
selector:
app: esp-myproject
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: esp-myproject
spec:
replicas: 1
template:
metadata:
labels:
app: esp-myproject
spec:
containers:
- name: esp
image: gcr.io/endpoints-release/endpoints-runtime:1
args: [
"--http_port=8081",
"--backend=127.0.0.1:8080",
"--service=myproject1-0-0.endpoints.myproject.cloud.goog",
"--rollout_strategy=managed",
]
ports:
- containerPort: 8081
- name: myproject
image: gcr.io/myproject/my-image:v0.0.1
ports:
- containerPort: 8080
</code></pre>
<p>This creates a single replica of the app on the backend. So far, so good...</p>
<p>I now want to update the yaml file to <strong>declaratively</strong> specify auto-scaling parameters to enable multiple replicas of the app to run alongside each other when traffic to the endpoint justifies more than one.</p>
<p>I have read around (O'Reilly book: Kubernetes Up & Running, GCP docs, K8s docs), but there are two things on which I'm stumped:</p>
<ol>
<li>I've read a number of times about the HorizontalPodAutoscaler and it's not clear to me whether the deployment <strong>must</strong> make use of this in order to enjoy the benefits of autoscaling?</li>
<li>If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml as shown below - but how would I combine this with my existing deployment.yaml?</li>
</ol>
<p>HorizontalPodAutoscaler example (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">from the docs</a>):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
</code></pre>
<p>Thanks in advance to anyone who can shed some light on this for me.</p>
| <blockquote>
<ol>
<li>I've read a number of times about the HorizontalPodAutoscaler and it's not clear to me whether the deployment must make use of this in order to enjoy the benefits of autoscaling?</li>
</ol>
</blockquote>
<p>Doesn't have to, but it's recommended and it's already built in. You can build your own automation that scales up and down but the question is why since it's already supported with the HPA.</p>
<blockquote>
<ol start="2">
<li>If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml as shown below - but how would I combine this with my existing deployment.yaml?</li>
</ol>
</blockquote>
<p>It should be straightforward. You basically reference your deployment in the HPA definition:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: my-esp-project-hpa
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: esp-myproject <== here
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
</code></pre>
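<p>To keep things declarative you can either save this as its own file (e.g. <code>hpa.yaml</code>, a name assumed here) or append it to your existing <code>deployment.yaml</code> separated by <code>---</code>, the same way your Service and Deployment are already combined. Then:</p>
<pre><code>kubectl apply -f deployment.yaml
kubectl apply -f hpa.yaml
</code></pre>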
|
<p>I have docker-compose.yml</p>
<pre><code>version: '3.5'
services:
container-name:
image: container-image
ports:
- 80:80
- 443:443
</code></pre>
<p>And it creates the container with port forwarding to the host machine.
docker inspect container-name</p>
<pre><code>[...]
NetworkSettings: {
[...]
Ports: {
443/tcp: [{ HostIp: 0.0.0.0, HostPort: 443 }]
80/tcp: [{ HostIp: 0.0.0.0, HostPort: 80 }]
}
[...]
}
[...]
</code></pre>
<p>But in kubernetes, the following pod.yml creates the container without ports.</p>
<pre><code>kind: Pod
metadata:
name: pod-name
spec:
containers:
- image: container-image
name: container-name
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
[...]
</code></pre>
<p>In short, I need to forward the container (pod) port to the host machine (node).</p>
<p>I found out that it's better to expose it, but that doesn't work for me.</p>
| <p>The "kubernetes way" to prefer would be to expose your pod through a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="noreferrer">service</a> and control it with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">deployment</a>.</p>
<p>If you want for some reason use the port-forwarding this is how you do:</p>
<pre><code>kubectl port-forward pod/pod-name 8080:80 8443:443 -n default
</code></pre>
<p>This is going to bind on your host ports <code>8080</code> and <code>8443</code>, forwarding the traffic to ports <code>80</code> and <code>443</code> respectively of the pod named <code>pod-name</code>. I have included the namespace (<code>default</code>) explicitly; if you omit it, <code>kubectl</code> will use the namespace from your <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="noreferrer">current-context</a> </p>
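<p>If you instead go the service route mentioned above, a minimal sketch of a NodePort Service for that pod could look like this (note: the <code>app: pod-name</code> label is an assumption, it has to match a label you add to your pod's metadata, and NodePort exposes the service on ports in the 30000-32767 range on every node rather than on 80/443 directly):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod-name-svc
spec:
  type: NodePort
  selector:
    app: pod-name        # assumed label on the pod
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
</code></pre>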
|
<p><strong>I would like to know if it is possible to send a notification using the yaml config if a kubernetes job fails.</strong> </p>
<p>For example, I have a kubernetes job which runs once every day. Currently I have been running a jenkins job to check and send a notification if the job fails. <strong><em>Do we have any options to get a notification from kubernetes jobs directly if they fail? It should be something we can add in the job yaml.</em></strong></p>
| <p>I'm not sure about any built in notification support. That seems like the kind of feature you can find in external dedicated monitoring/notification tools such as Prometheus or Logstash output.</p>
<p>For example, you can try this tutorial to leverage the prometheus metrics generated by default in many kubernetes clusters: <a href="https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511" rel="noreferrer">https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511</a></p>
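<p>As a rough sketch of what that looks like in practice, assuming you run the prometheus-operator together with kube-state-metrics (the names, labels and thresholds below are placeholders), an alerting rule on failed jobs could be:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: job-failure-alerts        # placeholder name
  labels:
    role: alert-rules             # must match your Prometheus ruleSelector
spec:
  groups:
    - name: kubernetes-jobs
      rules:
        - alert: KubeJobFailed
          expr: kube_job_status_failed > 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Job {{ $labels.namespace }}/{{ $labels.job_name }} has failed"
</code></pre>
<p>Alertmanager would then route the alert to email/Slack/etc. as you configure it.</p>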
<p>Or you can theoretically setup Logstash and monitor incoming logs sent by filebeat and conditionally send alerts as part of the output stage of the pipelines via the "email output plugin"</p>
<p>Other methods exist as well as mentioned in this similar issue: <a href="https://stackoverflow.com/questions/34138765/how-to-send-alerts-based-on-kubernetes-docker-events/34139082">How to send alerts based on Kubernetes / Docker events?</a></p>
<p>For reference, you may also wish to read this request as discussed in github: <a href="https://github.com/kubernetes/kubernetes/issues/22207" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/22207</a></p>
|
<p>We have upgraded our GKE cluster to 1.11.x and although the process finished successfully the cluster is not working. There are multiple pods that crash or stay pending, and it all points to the calico network not working:</p>
<pre><code>calico-node-2hhfz 1/2 CrashLoopBackOff 5 6m
</code></pre>
<p>Its log shows this info:</p>
<p><code>kubectl -n kube-system logs -f calico-node-2hhfz calico-node</code></p>
<p>Notice the errors (<code>could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)</code>)at the end:</p>
<pre><code>2018-12-04 11:22:39.617 [INFO][10] startup.go 252: Early log level set to info
2018-12-04 11:22:39.618 [INFO][10] startup.go 268: Using NODENAME environment for node name
2018-12-04 11:22:39.618 [INFO][10] startup.go 280: Determined node name: gke-apps-internas-apps-internas-4c-6r-ecf8b140-9p8x
2018-12-04 11:22:39.619 [INFO][10] startup.go 303: Checking datastore connection
2018-12-04 11:22:39.626 [INFO][10] startup.go 327: Datastore connection verified
2018-12-04 11:22:39.626 [INFO][10] startup.go 100: Datastore is ready
2018-12-04 11:22:39.632 [INFO][10] startup.go 1052: Running migration
2018-12-04 11:22:39.632 [INFO][10] migrate.go 866: Querying current v1 snapshot and converting to v3
2018-12-04 11:22:39.632 [INFO][10] migrate.go 875: handling FelixConfiguration (global) resource
2018-12-04 11:22:39.637 [INFO][10] migrate.go 875: handling ClusterInformation (global) resource
2018-12-04 11:22:39.637 [INFO][10] migrate.go 875: skipping FelixConfiguration (per-node) resources - not supported
2018-12-04 11:22:39.637 [INFO][10] migrate.go 875: handling BGPConfiguration (global) resource
2018-12-04 11:22:39.637 [INFO][10] migrate.go 600: Converting BGP config -> BGPConfiguration(default)
2018-12-04 11:22:39.644 [INFO][10] migrate.go 875: skipping Node resources - these do not need migrating
2018-12-04 11:22:39.644 [INFO][10] migrate.go 875: skipping BGPPeer (global) resources - these do not need migrating
2018-12-04 11:22:39.644 [INFO][10] migrate.go 875: handling BGPPeer (node) resources
2018-12-04 11:22:39.651 [INFO][10] migrate.go 875: skipping HostEndpoint resources - not supported
2018-12-04 11:22:39.651 [INFO][10] migrate.go 875: skipping IPPool resources - these do not need migrating
2018-12-04 11:22:39.651 [INFO][10] migrate.go 875: skipping GlobalNetworkPolicy resources - these do not need migrating
2018-12-04 11:22:39.651 [INFO][10] migrate.go 875: skipping Profile resources - these do not need migrating
2018-12-04 11:22:39.652 [INFO][10] migrate.go 875: skipping WorkloadEndpoint resources - these do not need migrating
2018-12-04 11:22:39.652 [INFO][10] migrate.go 875: data converted successfully
2018-12-04 11:22:39.652 [INFO][10] migrate.go 866: Storing v3 data
2018-12-04 11:22:39.652 [INFO][10] migrate.go 875: Storing resources in v3 format
2018-12-04 11:22:39.673 [INFO][10] migrate.go 1151: Failed to create resource Key=BGPConfiguration(default) error=resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)
2018-12-04 11:22:39.673 [ERROR][10] migrate.go 884: Unable to store the v3 resources
2018-12-04 11:22:39.673 [INFO][10] migrate.go 875: cause: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)
2018-12-04 11:22:39.673 [ERROR][10] startup.go 107: Unable to ensure datastore is migrated. error=Migration failed: error storing converted data: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)
2018-12-04 11:22:39.673 [WARNING][10] startup.go 1066: Terminating
Calico node failed to start
</code></pre>
<p>Any idea how we can fix the cluster?</p>
| <p>There was a problem with the GKE upgrade process that leaves calico pods unable to start due to the lack of a custom resource definition for BGPConfiguration.</p>
<p>After applying the corresponding crd to the cluster problem solved:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
</code></pre>
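<p>For reference, applying it is just a matter of saving the manifest above (e.g. as <code>bgpconfiguration-crd.yaml</code>, any filename will do) and running:</p>
<pre><code>kubectl apply -f bgpconfiguration-crd.yaml
</code></pre>
<p>after which the crashing calico-node pods should be able to complete their datastore migration once they restart.</p>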
|
<p>I have a requirement that involves putting together uptime metrics for some of the pods in my Kubernetes cluster.</p>
<p>I am thinking of using the Kubernetes readiness checks and was curious if anyone has done anything similar?</p>
<p>Basically I am trying to generate reports that say this pod has had 95% uptime over the last week/month.</p>
| <p>The best option is to use a time-series database which can store uptime metrics. You can use Grafana, which comes as a first-class citizen with many k8s cluster roll-outs, if you need visualization.</p>
<p>We use Wavefront to store and visualize these uptime metrics along with tons of other metrics. Once you have uptime values available, you will see sudden drops during pod/container downtime; Prometheus/Wavefront/Grafana all allow you to apply time-series functions to compute (total uptime / total container downtime) over a specific period of time (a month in your case) to reflect what you need.</p>
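<p>As a hedged illustration of the Prometheus route, assuming kube-state-metrics is installed and your pods match a name pattern like the placeholder below, a query for "percentage of time the pod reported Ready over the last 30 days" could look like:</p>
<pre><code>avg_over_time(kube_pod_status_ready{condition="true", pod=~"my-app-.*"}[30d]) * 100
</code></pre>
<p>Keep in mind that pods get new names when they are recreated, so for Deployment-managed workloads you typically aggregate over a label selector rather than a single pod name.</p>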
|
<p>I'm trying to setup kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to official AWS tutorial. Then I want to run a pod with some app which communicates with Postgres (RDS in different VPC). </p>
<p>But unfortunately the app doesn't connect to the database.</p>
<p>What I have:</p>
<ol>
<li>EKS cluster with its own VPC (CIDR: 192.168.0.0/16)</li>
<li>RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)</li>
<li>Peering connection initiated from the RDS VPC to the EKS VPC</li>
<li>Route table for 3 public subnets of EKS cluster is updated: route with destination 172.30.0.0/16 and target — peer connection from the step #3 is added.</li>
<li>Route table for the RDS is updated: route with destination 192.168.0.0/16 and target — peer connection from the step #3 is added.</li>
<li>The RDS security group is updated, new inbound rule is added: all traffic from 192.168.0.0/16 is allowed</li>
</ol>
<p>After all these steps I execute kubectl command:</p>
<pre><code>kubectl exec -it my-pod-app-6vkgm nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
</code></pre>
<p>Then I connect to one of the 3 nodes and execute a command:</p>
<pre><code>getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
</code></pre>
<p>What did I miss in the EKS setup in order to have access from pods to RDS?</p>
<p><strong>UPDATE:</strong></p>
<p>I tried to fix the problem by <code>Service</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
type: ExternalName
externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
</code></pre>
<p>So I created this service in EKS, and then tried to refer to <code>postgres-service</code> as DB URL instead of direct RDS host address.</p>
<p>This fix does not work :(</p>
| <p>Have you tried to enable "dns propagation" in the peering connection? It looks like you are not getting the internally routable dns. You can enable it by going into the settings for the peering connection and checking the box for dns propagation. I generally do this with all of the peering connections that I control.</p>
|
<p>I created a private Docker registry without a username and password, only with TLS. When Kubernetes tries to pull an image from it I get:</p>
<pre><code>rpc error: code = Unknown desc = Error response from daemon: Get https://<my-domain>/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 14m (x4 over 16m) kubelet, ip-172-10-10-157 Error: ErrImagePull
</code></pre>
<p>I tried with </p>
<pre><code>kubectl create secret docker-registry docker-registry-dev --docker-server=<my-domain>
</code></pre>
<p>And I get this error:</p>
<pre><code>required flag(s) "docker-password", "docker-username" not set
</code></pre>
<p>From the command line with docker I can pull correct my docker image.</p>
<p>Any ideas for Kubernetes?</p>
| <p>This really should work out of the box without <code>imagePullSecrets</code>, provided your registry has valid, signed certs. It looks more like your node cannot connect to <code>https://<my-domain>/v2/</code>. A couple of things you can check:</p>
<ol>
<li><p>Your registry's <a href="https://docs.docker.com/registry/configuration/#http" rel="nofollow noreferrer">http</a> section is configured to listen on https on the right port (see the config sketch after this list).</p></li>
<li><p>Check with something like <code>curl https://registry-name/v2/</code> from one of your Kubernetes nodes and that you have connectivity.</p></li>
</ol>
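<p>Regarding the first point, a sketch of what the registry's <code>config.yml</code> http section typically looks like when serving TLS directly (the port and cert paths are placeholders):</p>
<pre><code>http:
  addr: :443
  tls:
    certificate: /certs/domain.crt
    key: /certs/domain.key
</code></pre>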
|
<p>Consider the below .yaml file :</p>
<pre><code>application/guestbook/redis-slave-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: redis-slave
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: slave
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 6379
</code></pre>
<p>The resources section isn't clear to me! If I have 16G RAM and a 4-core CPU, each core at 2GHz, then how much of those resources is requested above? </p>
| <p>So you have a total of 4 CPU cores and 16GB RAM. This Deployment will start two Pods (replicas) and each will start with 0.1 cores and 0.1GB reserved on the Node on which it starts. So in total 0.2 cores and 0.2GB will be reserved, leaving up to 15.8GB and 3.8 cores. However, the actual usage may exceed the reservation as this is only the requested amount. To specify an upper limit you use a limits section. </p>
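<p>For example, extending the container spec from the question with an upper bound could look like this (the limit values here are just an illustration):</p>
<pre><code>resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 500m
    memory: 256Mi
</code></pre>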
<p>It can be <a href="https://github.com/kubernetes/kubernetes/issues/60787" rel="nofollow noreferrer">counter-intuitive that CPU allocation is based on cores rather than GHz</a> - there's a <a href="https://www.google.co.uk/amp/s/gweb-cloudblog-publish.appspot.com/products/gcp/kubernetes-best-practices-resource-requests-and-limits/amp/" rel="nofollow noreferrer">fuller explanation in the GCP docs</a> and more on the arithmetic <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit" rel="nofollow noreferrer">in the official kubernetes docs</a></p>
|
<p>I'm trying to create a <code>Kubernetes</code> cluster for learning purposes. So, I created 3 virtual machines with <code>Vagrant</code> where the master has IP address of <code>172.17.8.101</code> and the other two are <code>172.17.8.102</code> and <code>172.17.8.103</code>.</p>
<p>It's clear that we need <code>Flannel</code> so that our containers in different machines can connect to each other without port mapping. And for <code>Flannel</code> to work, we need <code>Etcd</code>, because flannel uses this <code>Datastore</code> to put and get its data. </p>
<p>I installed <code>Etcd</code> on master node and put <code>Flannel</code> network address on it with command <code>etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'</code></p>
<p>To enable <code>ip masquerading</code> and also using the private network interface in the virtual machine, I added <code>--ip-masq --iface=enp0s8</code> to <code>FLANNEL_OPTIONS</code> in <code>/etc/sysconfig/flannel</code> file.</p>
<p>In order to make <code>Docker</code> use <code>Flannel</code> network, I added <code>--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}'</code> to <code>OPTIONS</code> variable in <code>/etc/sysconfig/docker</code> file. Note that the values for <code>FLANNEL_SUBNET</code> and <code>FLANNEL_MTU</code> variables are the ones set by <code>Flannel</code> in <code>/run/flannel/subnet.env</code> file.</p>
<p>After all these settings, I installed <code>kubernetes-master</code> and <code>kubernetes-client</code> on the master node and <code>kubernetes-node</code> on all the nodes. For the final configurations, I changed <code>KUBE_SERVICE_ADDRESSES</code> value in <code>/etc/kubernetes/apiserver</code> file to <code>--service-cluster-ip-range=10.33.0.0/16</code>
and <code>KUBELET_API_SERVER</code> value in <code>/etc/kubernetes/kubelet</code> file to <code>--api-servers=http://172.17.8.101:8080</code>.</p>
<p>This is the link to <a href="https://github.com/yubar45/k8s-tutorial" rel="nofollow noreferrer">k8s-tutorial project</a> repository with the complete files.</p>
<p>After all these efforts, all the services start successfully and work fine. It's clear that there are 3 nodes running when I use the command <code>kubectl get nodes</code>. I can successfully create a <code>nginx</code> pod with command <code>kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx"</code> and create a service with <code>kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod"</code> command.</p>
<p>The command <code>kubectl describe service service-pod</code> outputs the following results:</p>
<pre><code>Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
</code></pre>
<p>The challenge is that when I try to connect to the created service with <code>curl 10.33.79.222:8000</code> I get <code>curl: (7) Failed connect to 10.33.72.2:8000; Connection refused</code> but if I try <code>curl 10.33.72.2:80</code> I get the default <code>nginx</code> page. Also, I can't ping to <code>10.33.79.222</code> and all the packets get lost. </p>
<p>Some suggested to stop and disable <code>Firewalld</code>, but it wasn't running at all on the nodes. As <code>Docker</code> changed <code>FORWARD</code> chain policy to <code>DROP</code> in <code>Iptables</code> after version 1.13 I changed it back to <code>ACCEPT</code> but it didn't help either. I eventually tried to change the <code>CIDR</code> and use different IP/subnets but no luck.</p>
<p>Does anybody know where am I going wrong or how to figure out what's the problem that I can't connect to the created service?</p>
| <p>The only conflict I can see is between the Pod CIDR and the CIDR that you are using for the services.</p>
<p>The Flannel network: <code>'{"Network": "10.33.0.0/16"}'</code>. Then on the kube-apiserver <code>--service-cluster-ip-range=10.33.0.0/16</code>. That's the same range and it should be different: you have kube-proxy setting up services for <code>10.33.0.0/16</code> and at the same time your overlay thinks it needs to route to pods running on <code>10.33.0.0/16</code>. I would start by choosing completely non-overlapping CIDRs for your pods and services.</p>
<p>For example on my cluster (I'm using Calico) I have a podCidr of <code>192.168.0.0/16</code> and I have a service Cidr of <code>10.96.0.0/12</code></p>
<p>Note: you wouldn't be able to ping <code>10.33.79.222</code> since ICMP is not allowed in this case.</p>
|
<p>I'm having an issue trying to get Istio working on my cluster. My infrastructure looks like this:</p>
<p>I have a Magento store with varnish as a front cache. It was working before the Istio installation. I have already enabled envoy injection. Varnish is deployed in a pod, has its own service, and redirects uncached requests to the magento service.</p>
<p>The problem comes when I try to curl from varnish to magento.</p>
<p>If I curl magento service from varnish I get a redirect to magento URL (which is the expected behavior)</p>
<pre><code>root@varnish-6468d5958d-dvxhx:/# curl -v store-es
* Rebuilt URL to: store-es/
* Trying 10.32.97.229...
* TCP_NODELAY set
* Connected to store-es (10.32.97.229) port 80 (#0)
> GET / HTTP/1.1
> Host: store-es
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< server: envoy
< date: Wed, 07 Nov 2018 11:08:47 GMT
< content-type: text/html
< content-length: 185
< location: https://store-sta.xxxxx.yyy/
< myheader: store-es-6484d46d66-952xj
< x-envoy-upstream-service-time: 4
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.13.8</center>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host store-es left intact
</code></pre>
<p>But when I try the same request using a Host header to skip the magento redirection, I get a 404:</p>
<pre><code>root@varnish-6468d5958d-dvxhx:/# curl -v -H "Host: store-sta.xxxxx.yyy" store-es
* Rebuilt URL to: store-es/
* Trying 10.32.97.229...
* TCP_NODELAY set
* Connected to store-es (10.32.97.229) port 80 (#0)
> GET / HTTP/1.1
> Host: store-sta.xxxxx.yyy
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Wed, 07 Nov 2018 11:11:59 GMT
< server: envoy
< content-length: 0
<
* Curl_http_done: called premature == 0
* Connection #0 to host store-es left intact
</code></pre>
<p>I have checked the logs, and requests with the Host header never arrive at the magento service (store-es in my example). And when I check, it is varnish's envoy sidecar that returns the 404:</p>
<pre><code>[2018-11-07T11:11:50.548Z] "GET /HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.52.1" "e754b17f-ae2e-4734-beb4-f7a2d6e412da" "store-sta.xxxxx.yyy" "-" - - 10.32.97.229:80 10.32.67.5:45540
</code></pre>
<p>Do you know why this is happening? Why does the request with the Host header return a 404 without ever reaching the magento service?</p>
<p>I also want to mention that magento is working without varnish and is able to connect to redis (in the same cluster) and mysql (outside the cluster), so I have ruled out a problem with magento itself.</p>
| <p>I have a similar issue where the Host header hitting Envoy is an external domain which does not match the internal service entry name (it never would). As it does not match, it gets a 404. I think there must be a way to configure this with a VirtualService, but I am not sure how yet.</p>
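<p>As a hedged, untested sketch of that direction, a VirtualService that maps the external Host header onto the internal service (names taken from the question) would look roughly like this; depending on your Istio version you may additionally need a ServiceEntry for the external hostname:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: store-es-external-host
spec:
  hosts:
    - store-sta.xxxxx.yyy   # the external Host header varnish forwards
  http:
    - route:
        - destination:
            host: store-es   # the internal Kubernetes service
            port:
              number: 80
</code></pre>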
|
<p>Currently I have an OKD/openshift template which exposes port 1883 on a specific container:</p>
<pre><code>ports:
- name: 1883-tcp
port: 1883
containerPort: 1883
protocol: TCP
hostPort: ${MQTT-PORT}
</code></pre>
<p>Is it possible to have an if/else clause depending on parameters. For example:</p>
<pre><code>ports:
- name: 1883-tcp
port: 1883
containerPort: 1883
protocol: TCP
{{ if ${MQTT-PORT} != 0 }}
hostPort: ${MQTT-PORT}
{{ /if }}
</code></pre>
<p>By doing this, I can have the same template in all my environments (e.g.: development/testing/production) but based on the parameters given by creation, some ports are available for debugging or testing without having to forward them each time using the oc command.</p>
| <p>You can't do this kind of conditional processing at the template level. </p>
<p>But, to achieve your desired outcome, you can do one of 2 things.</p>
<p><strong>Option 1</strong>
Pass all the parameters required for the decision, like <code>MQTT-PORT</code>, at the template level and map the correct port number when building your service.
This might be the correct approach, as templates are designed to be as logic-less as possible; you do all the decision making at a much lower level.</p>
<p><strong>Option 2</strong>
If you can relax the "same template" constraint, we could have 2 flavors of the same template: one with the specific port and another with the parameterized port. The only issue with this option is having to change 2 templates every time you change your app/service specs, which violates the <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY principle</a>.</p>
<p><strong>Update</strong></p>
<p>Using Helm with OpenShift might be the best option here. You can templatize your artifacts using Helm's conditionals and deploy a Helm app to OpenShift. Here's a <a href="https://github.com/sclorg/nodejs-ex/tree/master/helm/nodejs/templates" rel="nofollow noreferrer">repository</a> which has a Helm chart tailored for OpenShift.
Also, you need to point to the right namespace for Tiller to use Helm with OpenShift. You can find more details about it <a href="https://blog.openshift.com/getting-started-helm-openshift/" rel="nofollow noreferrer">here</a>.</p>
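<p>As a rough illustration of what the Helm route buys you, the conditional from the question translates almost directly into a template snippet like this (assuming a values key such as <code>mqttPort</code>, which is a name made up for this sketch):</p>
<pre><code>ports:
  - name: 1883-tcp
    containerPort: 1883
    protocol: TCP
    {{- if .Values.mqttPort }}
    hostPort: {{ .Values.mqttPort }}
    {{- end }}
</code></pre>
<p>Each environment then just supplies a different values file, e.g. <code>helm install ./mychart -f values-dev.yaml</code>.</p>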
|
<pre><code>Error: configmaps is forbidden: User "system:serviceaccount:k8s-tiller:k8s-tiller" cannot list configmaps in the namespace "k8s-tiller": clusterrole.rbac.authorization.k8s.io "tiller" not found
</code></pre>
<p>Can someone explain this error? The <code>"k8s-tiller": clusterrole.rbac.authorization.k8s.io "tiller" not found</code> does not make sense to me. What is this meant to indicate?</p>
<p>Please ignore how to actually solve the error, I'm just looking for an explanation of it.</p>
| <p>This error is about <strong>RBAC</strong> (to know more about RBAC, see <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">here</a>). </p>
<p>The <strong>Serviceaccount</strong> <code>k8s-tiller</code> in namespace <code>k8s-tiller</code> has no permission to list <code>configmaps</code> in namespace <code>k8s-tiller</code>. In addition, the <strong>ClusterRole</strong> <code>tiller</code> does not exist in your cluster: the ClusterRoleBinding or RoleBinding you created for your serviceaccount <code>k8s-tiller</code> references ClusterRole <code>tiller</code> as its <code>roleRef</code>, but that ClusterRole does not exist.</p>
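<p>To illustrate what the error is pointing at (not how to fix it), the binding it complains about is essentially of this shape, with a <code>roleRef</code> naming a ClusterRole called <code>tiller</code> that simply isn't there (the binding name below is an assumption):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-tiller            # assumed name
subjects:
  - kind: ServiceAccount
    name: k8s-tiller
    namespace: k8s-tiller
roleRef:
  kind: ClusterRole
  name: tiller                # this ClusterRole does not exist in the cluster
  apiGroup: rbac.authorization.k8s.io
</code></pre>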
|
<p>I'm having some issues when using a path to point to a different
Kubernetes service. </p>
<p>I'm pointing to a secondary service using the path <strong>/secondary-app</strong> and I can see through my logs that I am correctly reaching that service. </p>
<p>My issue is that any included resource on the site, let's say <strong>/css/main.css</strong> for example, ends up not found, resulting in a 404.</p>
<p>Here's a slimmed down version of my ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/rewrite-target: /
name: my-app
spec:
rules:
- host: my-app.example.com
http:
paths:
- backend:
path: /
serviceName: my-app
servicePort: http
- backend:
path: /secondary-app
serviceName: secondary-app
servicePort: http
</code></pre>
<p>I've tried a few things and haven't yet been able to make it work. Do I maybe need to do some apache rewrites?</p>
<p>Any help would be appreciated.</p>
<h2>Edit - Solution</h2>
<p>Thanks to some help from @mk_sta I was able to get my secondary service application working by using the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> annotation like so:</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
if ($request_uri = '/?%secondary-app') { rewrite /(.*) secondary-app/$1 break; }
</code></pre>
<p>It still needs a bit of tweaking for my specific app but that worked exactly how I needed it to.</p>
| <p>I guess the <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> annotation in your <code>Ingress</code> configuration doesn't bring any success for multi-path rewrite targets; read more <a href="https://stackoverflow.com/questions/49514702/kubernetes-ingress-with-multiple-target-rewrite">here</a>. However, you can consider using the <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Plus Ingress controller</a>, which ships with the <code>nginx.org/rewrites:</code> annotation and can be used for pointing URI paths to multiple services, as described in this <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/rewrites" rel="nofollow noreferrer">example</a>.</p>
<p>You can also think about using the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet" rel="nofollow noreferrer">annotation</a> on the existing <code>Ingress</code>, which can add rewrite rules to the Nginx location, something like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite /first-app/(.*) $1 break;
rewrite /secondary-app/(.*) /$1 break;
name: my-app
spec:
rules:
- host: my-app.example.com
http:
paths:
- backend:
path: /first-app
serviceName: my-app
servicePort: http
- backend:
path: /secondary-app
serviceName: secondary-app
servicePort: http
</code></pre>
|
<p>I have an existing POD containing a DB. I have a script containing executable queries in that container. I need to schedule the execution of the script. How do I go about doing this?</p>
| <p>OpenShift has a "cronjob" resource type which can schedule a job to run at specific intervals. You can read more about it <a href="https://docs.okd.io/3.11/dev_guide/cron_jobs.html" rel="nofollow noreferrer">here</a>.</p>
<p>You can create a custom image which contains the client to connect to your DB and supply it with the credentials mapped as secrets. This can run your executable queries at the interval you've set for the job.</p>
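<p>A hedged sketch of such a CronJob; the image, schedule, command and secret below are all placeholders for whatever your DB and client actually look like:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: db-maintenance
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: run-queries
              image: registry.example.com/db-client:latest   # hypothetical client image
              command: ["/bin/sh", "-c", "psql -h my-db -f /scripts/queries.sql"]
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: db-credentials                   # hypothetical secret
                      key: password
</code></pre>
<p>You would create it with <code>oc create -f cronjob.yaml</code> and OpenShift runs a new job pod on the schedule you set.</p>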
|
<p>I have created a <code>Certificate</code> and a <code>ClusterIssuer</code>.</p>
<p>I see the following in the cert-manager pod :</p>
<pre><code>I1205 10:43:33.398387 1 setup.go:73] letsencrypt: generating acme account private key "letsencrypt-key"
I1205 10:43:33.797808 1 logger.go:88] Calling GetAccount
I1205 10:43:34.622715 1 logger.go:83] Calling CreateAccount
I1205 10:43:34.826902 1 setup.go:181] letsencrypt: verified existing registration with ACME server
I1205 10:43:34.826932 1 helpers.go:147] Setting lastTransitionTime for ClusterIssuer "letsencrypt" condition "Ready" to 2018-12-05 10:43:34.826925858 +0000 UTC m=+8919.950996321
I1205 10:43:34.833364 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I1205 10:43:43.797372 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I1205 10:43:43.797637 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient.
I1205 10:43:43.797667 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I1205 11:07:17.492578 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I1205 11:07:17.493076 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient.
I1205 11:07:17.493107 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I1205 11:49:10.390864 1 controller.go:171] certificates controller: syncing item 'staging/acme-crt'
I1205 11:49:10.391909 1 helpers.go:194] Setting lastTransitionTime for Certificate "acme-crt" condition "Ready" to 2018-12-05 11:49:10.391887695 +0000 UTC m=+12855.515958147
I1205 11:49:10.399460 1 controller.go:185] certificates controller: Finished processing work item "staging/acme-crt"
I1205 11:49:12.400064 1 controller.go:171] certificates controller: syncing item 'staging/acme-crt'
I1205 11:49:12.400270 1 controller.go:185] certificates controller: Finished processing work item "staging/acme-crt"
</code></pre>
<p>There is no 'certificate' being generated - I assume this is because the secret referenced in my <code>Certificate</code> resource is not generated.</p>
<p>How can I diagnose this further?</p>
| <p>Run <code>kubectl get certificates</code> and then one can do a <code>kubectl describe</code> on the certificate for extra information.</p>
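<p>Based on the log above the certificate is <code>acme-crt</code> in the <code>staging</code> namespace, so concretely:</p>
<pre><code>kubectl get certificates -n staging
kubectl describe certificate acme-crt -n staging
</code></pre>
<p>The Conditions and Events sections in the describe output usually say which step cert-manager is stuck on.</p>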
|
<p>I manage a K8s-cluster, managed by terraform :</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I wanted to delete a stack, so I removed the code and applied. It threw an error due to a timeout. I retried and it went through successfully.</p>
<p>But now I still have 2 replication controllers (that are empty):</p>
<pre><code>portal-api 0 0 0 2h
portal-app 0 0 0 2h
</code></pre>
<p>no more service, no more horizontal_pod_scheduler; but still my replication_controller.</p>
<p>I tried to remove them :</p>
<pre><code>$ kubectl delete rc portal-api
error: timed out waiting for "portal-api" to be synced
</code></pre>
<p>Same if I want to force the deletion :</p>
<pre><code>$ kubectl delete rc portal-api --cascade=false --force=true
$
$ kubectl get rc
[...]
portal-api 0 0 0 2h
portal-app 0 0 0 2h
[...]
</code></pre>
<p>I also still can see its configuration (filled with a <code>deletionTimestamp</code>) :</p>
<pre><code>$ kubectl edit rc portal-api
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: ReplicationController
metadata:
creationTimestamp: 2018-12-05T14:00:15Z
deletionGracePeriodSeconds: 0
deletionTimestamp: 2018-12-05T15:22:00Z
finalizers:
- orphan
generation: 3
labels:
App: portal-api
name: portal-api
namespace: default
resourceVersion: "32590661"
selfLink: /api/v1/namespaces/default/replicationcontrollers/portal-api
uid: 171f605e-f896-11e8-b761-02d4b8553a0e
spec:
replicas: 0
selector:
App: portal-api
template:
metadata:
creationTimestamp: null
labels:
App: portal-api
spec:
automountServiceAccountToken: false
containers:
- env:
- name: AUTHORITY_MGR
value: http://system-authority-manager-service
image: gitlab.********************:4567/apps/portal/api:prd
imagePullPolicy: Always
name: portal-api
ports:
- containerPort: 3300
protocol: TCP
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: 500m
memory: 256Mi
terminationGracePeriodSeconds: 30
status:
replicas: 0
</code></pre>
<p>Could someone help me on this ? Any idea ? </p>
<p>thanks,</p>
| <p>Using <code>kubectl edit rc portal-api</code>, remove the <code>finalizers</code> part from the resource:</p>
<pre><code>finalizers:
- orphan
</code></pre>
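<p>If the interactive edit does not stick (which can happen with objects already marked for deletion), the same change can be attempted non-interactively; a sketch:</p>
<pre><code>kubectl patch rc portal-api --type=merge -p '{"metadata":{"finalizers":null}}'
</code></pre>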
|
<p>I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.</p>
<p>Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.</p>
<p>I was observing some performance bottlenecks and decided to switch to a message queue system. The idea is that now, once the user hits the URL, a request is published on internal message queue waiting for it to be consumed. This consumer will process and publish back on a queue and this will happen quite a few times until finally, the same node servicing the user will receive back the processed response to be delivered to the user.</p>
<p>An asynchronous "fire-and-forget" pattern is now being used. But my question is, how can the node servicing a particular person remember who it was servicing once the processed result arrives back and without blocking (i.e. it can handle several requests until the response is received)? If it makes any difference, my stack looks a little like this: TomCat, Spring, Kubernetes and RabbitMQ.</p>
<p>In summary, how can the request node (whose job is to push items on the queue) maintain an open connection with the client who requested a JSON response (i.e. client is waiting for JSON response) and receive back the data of the correct client?</p>
| <p>You have few different scenarios according to how much control you have on the client.</p>
<p>If the client behaviour cannot be changed, you will have to keep the session open until the request has been fully processed. This can be achieved by employing a pool of workers (futures/coroutines, threads or processes) where each worker keeps the session open for a given request. </p>
<p>This method has a few drawbacks and I would keep it as a last resort. Firstly, you will only be able to serve a limited number of concurrent requests, proportional to your pool size. Secondly, as your processing is behind a queue, your front-end won't be able to estimate how long it will take for a task to complete. This means you will have to deal with long-lasting sessions which are prone to fail (what if the user gives up?).</p>
<p>If the client behaviour can be changed, the most common approach is to use a fully asynchronous flow. When the client initiates a request, it is placed within the queue and a Task Identifier is returned. The client can use the given <code>TaskId</code> to poll for status updates. Each time the client requests updates about a task you simply check if it was completed and you respond accordingly. A common pattern when a task is still in progress is to let the front-end return to the client the estimated amount of time before trying again. This allows your server to control how frequently clients are polling. If your architecture supports it, you can go the extra mile and provide information about the progress as well. </p>
<p>Example response when task is in progress:</p>
<pre><code>{"status": "in_progress",
"retry_after_seconds": 30,
"progress": "30%"}
</code></pre>
<p>A more complex yet elegant solution would consist of using HTTP callbacks. In short, when the client makes a request for a new task it provides a tuple (URL, Method) the server can use to signal that the processing is done. It then waits for the server to send the signal to the given URL. You can see a better explanation <a href="https://bttn.freshdesk.com/support/solutions/articles/5000668897-using-http-callbacks" rel="noreferrer">here</a>. In most cases this solution is overkill, yet I think it's worth mentioning.</p>
|
<p>I am trying to use <code>Helm charts</code> to install applications in Kubernetes clusters. Can someone please suggest what would be a better solution to manage secrets: would using <code>helm secrets</code> be a good idea, or <code>Hashicorp Vault</code>?</p>
| <p>Vault is technically awesome, but it can be an administrative burden. You can get strong protection of "secrets", whatever they may be; you can avoid ever sharing magic secrets like the your central database password by generating single-use passwords; if you need something signed or encrypted, you can ask Vault to do that for you and avoid ever having to know the cryptographic secret yourself. The big downsides are that it's a separate service to manage, getting secrets out of it is not totally seamless, and you occasionally need to have an administrator party to unseal it if you need to restart the server.</p>
<p>Kubernetes secrets are really just ConfigMaps with a different name. With default settings it's very easy for an operator to get out the value of a Secret (<code>kubectl get secret ... -o yaml</code>, then base64 decode the strings), so they're not actually that secret. If you have an interesting namespace setup, you generally can't access a Secret in a different namespace, which could mean being forced to copy around Secrets a lot. Using only native tools like <code>kubectl</code> to manage Secrets is also a little clumsy.</p>
<p>Pushing credentials in via Helm is probably the most seamless path – it's very easy to convert from a Helm value to a Secret object to push into a container, and very easy to push in values from somewhere like a CI system – but also the least secure. In addition to being able to dump out the values via <code>kubectl</code> you can also <code>helm get values</code> on a Helm release to find out the values.</p>
<p>So it's a question of how important keeping your secrets really secret is, and how much effort you want to put in. If you want seamless integration and can limit access to your cluster to authorized operators and effectively use RBAC, a Helm value might be good enough. If you can invest in the technically best and also most complex solution and you want some of its advanced capabilities, Vault works well. Maintaining a plain Kubernetes secret is kind of a middle ground, it's a little more secure than using Helm but not nearly as manageable.</p>
|
<p>I created a <code>deployment.yaml</code> to create a kubernetes deployment.</p>
<p>Here are my tries:</p>
<p><code>apiVersion: apps/v1</code></p>
<p>get <code>error: unable to recognize "./slate-master/deployment.yaml": no matches for kind "Deployment" in version "apps/v1"</code></p>
<p><code>apiVersion: extensions/v1beta1</code> and <code>apiVersion: apps/v1beta1</code></p>
<p>both of them, get <code>Error from server (BadRequest): error when creating "./slate-master/deployment.yaml": Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment: ...</code></p>
<p>here is my kubernetes version: </p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-05-12T04:12:12Z", GoVersion:"go1.9.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>So, why did the kubernetes deployment creation fail?</p>
| <p>First, check the <code>apiVersion</code> you are using for the Deployment. The possible <code>apps</code> group versions are:</p>
<pre><code>apps/v1
apps/v1beta1
apps/v1beta2
</code></pre>
<p>Note that your server is 1.8.7 and <code>apps/v1</code> only became available in 1.9, which explains the "no matches for kind Deployment" error; on 1.8 use <code>apps/v1beta1</code>, <code>apps/v1beta2</code> or <code>extensions/v1beta1</code>.</p>
<p>Second, check the <code>env</code> section: all the env variable values should be strings (the "cannot be handled as a Deployment" unmarshalling error usually means a field has the wrong type), so add <code>quote</code>, e.g.:</p>
<pre><code>- name: POSTGRES_PORT
value: {{ .Values.db.env.POSTGRES_PORT | quote }}
</code></pre>
|
<p>I'm still new to Kubernetes so please excuse if this is a silly question.</p>
<p>I'm architecting a system which includes: </p>
<ul>
<li>an MQTT broker</li>
<li>a set of (containerized) microservices that publish and subscribe to it</li>
<li>a Redis cache that the microservices read and write to. </li>
</ul>
<p>We will certainly need multiplicity of all of these components as we scale.</p>
<p>There is a natural division in the multiplicity of each of these things: they each pertain to a set of intersections in a city. A publishing or subscribing microservice will handle 1 or more intersections. The MQTT broker instance and the Redis instance each could be set up to handle n intersections. </p>
<p>I am wondering if it makes sense to try to avoid unnecessary network hops in Kubernetes by trying to divide things up by intersection and put all containers related to a given set of intersections on one node. Would this mean putting them all on a single pod, or is there another way? </p>
<p>(By the way, there will still be other publishers and subscribers that need to access the MQTT broker that are not intersection-specific.)</p>
| <p>This is more of an opinion question.</p>
<blockquote>
<p>Would this mean putting them all on a single pod, or is there another way?</p>
</blockquote>
<p>I would certainly avoid putting them all in one Pod. In theory, you can put anything in a single pod, but the general practice is to add lightweight sidecars that handle a very specific function. </p>
<p>IMO an MQTT broker, a Redis datastore and a subscribe/publish app seem like a lot to put in a single pod.</p>
<p>Possible Disadvantages:</p>
<ul>
<li>Harder to debug because you may not know where the failure comes from.</li>
<li>A publish/subscriber is generally more of a stateless application, while MQTT & Redis would be stateful. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> are recommended for stateless services and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> are recommended for stateful services.</li>
<li>Maybe networking latency. But you can use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Node Affinity and Pod Affinity</a> to mitigate that.</li>
<li>Too much clutter in a single pod.</li>
</ul>
<p>Possible Advantages:</p>
<ul>
<li>All services sharing the same IP/Context.</li>
</ul>
<p>It would be cleaner if you had:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> for your sub/pub app.</li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> with its own storage for your Redis server.</li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Statefulset</a> with its own storage for your MQTT.</li>
</ul>
<p>Each one of these workload resources would create separate pods and you can scale independently up/down.</p>
|
<p>I am aware that client affinity is possible for a LoadBalancer type service in Kubernetes. The thing is that this affinity doesn't prevent two different clients from accessing the same pod. </p>
<p>Is it possible to associate a pod exclusively always to the same client?</p>
<p>Thanks in advance and have a really nice day!</p>
| <p>To only allow a specific external client/s to access a specific Pod/Deployment you can use whitelisting/source ranges. Restrictions can be <a href="https://stackoverflow.com/questions/43849285/whitelist-filter-incoming-ips-for-https-load-balancer">applied to LoadBalancers</a> as <code>loadBalancerSourceRanges</code>. You add a section to the Service like:</p>
<pre><code> loadBalancerSourceRanges:
- 130.211.204.1/32
- 130.211.204.2/32
</code></pre>
<p>But <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/" rel="nofollow noreferrer">not all cloud providers currently support it</a>. </p>
<p>Alternatively you could expose the Pod with an Ingress and <a href="https://medium.com/@maninder.bindra/using-nginx-ingress-controller-to-restrict-access-by-ip-ip-whitelisting-for-a-service-deployed-to-bd5c86dc66d6" rel="nofollow noreferrer">apply whitelisting on the Ingress</a>. For whitelisting with an nginx Ingress you can add annotation to the Ingress such as <code>nginx.ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32</code></p>
|
<p>Seems that <code>kubectl logs</code> doesn't support cronjob. It says</p>
<blockquote>
<p>error: cannot get the logs from *v1beta1.CronJob: selector for *v1beta1.CronJob not implemented</p>
</blockquote>
<p>Currently I check the logs of all relative jobs one by one.</p>
<p>Is there any simple command or tool to get these logs?</p>
<hr>
<p>I did some research on bash script and modified <a href="https://stackoverflow.com/questions/53647683/how-to-get-logs-of-jobs-created-by-a-cronjob/53648331#53648331">edbighead's answer</a> to better suit my needs.</p>
<pre><code># cronJobGetAllLogs.sh: Get all logs of a cronjob.
# example:
# ./cronJobGetAllLogs.sh [Insert name of cronJob]
jobs=( $(kubectl get jobs --no-headers -o custom-columns=":metadata.name" | awk "/$1-[0-9]+/{print \$1}" | sort -r ) )
for job in "${jobs[@]}"
do
echo Logs from job $job
pod=$(kubectl get pods -l job-name=$job --no-headers -o custom-columns=":metadata.name")
kubectl logs $pod
done
</code></pre>
<p></p>
<pre><code># cronJobGetLatestLog.sh: Get log of latest job initiated by a cronjob.
# example:
# ./cronJobGetLateestLog.sh [Insert name of cronJob]
job=$(kubectl get jobs --no-headers -o custom-columns=":metadata.name" | awk "/$1-[0-9]+/{print \$1}" | sort -r | head -1)
pod=$(kubectl get pods -l job-name=$job --no-headers -o custom-columns=":metadata.name")
kubectl logs $pod
</code></pre>
| <p>From documentation of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">CronJobs</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">Jobs</a></p>
<blockquote>
<p>A Cron Job creates Jobs on a time-based schedule</p>
<p>...</p>
<p>A job creates one or more pods and ensures that a specified number of them successfully terminate.</p>
</blockquote>
<p>All you need is to view logs for a pod that was created for the job.</p>
<ol>
<li><p>Find your job with <code>kubectl get jobs</code>. This will return your CronJob name with a timestamp</p>
</li>
<li><p>Find pod for executed job <code>kubectl get pods -l job-name=your-job-@timestamp</code></p>
</li>
<li><p>Use <code>kubectl logs your-job-@timestamp-id</code> to view logs</p>
</li>
</ol>
<p>Here's an example of bash script that does all the above and outputs logs for every job's pod.</p>
<pre><code>jobs=( $(kubectl get jobs --no-headers -o custom-columns=":metadata.name") )
for job in "${jobs[@]}"
do
pod=$(kubectl get pods -l job-name=$job --no-headers -o custom-columns=":metadata.name")
kubectl logs $pod
done
</code></pre>
|
<p>Since yesterday, I have been struggling with this strange issue: node "kmaster" not found.
<a href="https://i.stack.imgur.com/VIqtp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VIqtp.png" alt="img_1"></a></p>
<p>I tried multiple combinations of installing kubernetes on a Jetstream instance.</p>
<ol>
<li>using calico in ubuntu</li>
<li>using flannel in centos</li>
<li>and few other ways</li>
</ol>
<p>I looked online and found that many people have the same issue:
<a href="https://github.com/kubernetes/kubernetes/issues/61277" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/61277</a>
<a href="https://i.stack.imgur.com/dAupU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dAupU.png" alt="[kubeissue_2.jpg]"></a></p>
<p>If someone has run into a similar issue, please let me know what steps need to be taken to resolve it.</p>
<p>Thanks.</p>
| <p>I would recommend bootstrapping a Kubernetes cluster from scratch, and will share some helpful links with steps on how to proceed: </p>
<ul>
<li><a href="https://stackoverflow.com/questions/52720380/kubernetes-api-server-is-not-starting-on-a-single-kubeadm-cluster">Kubernetes cluster install on Ubuntu with Calico CNI</a></li>
<li><a href="https://stackoverflow.com/questions/53383994/error-marking-master-timed-out-waiting-for-the-condition-kubernetes/53410468#53410468">Kubernetes cluster install on Centos with Flannel CNI</a></li>
</ul>
<p>Keep in mind to fulfill system <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#before-you-begin" rel="nofollow noreferrer">requirements</a> before you start with <code>kubeadm</code> installation procedure.</p>
<p>You can also take a look at the general <code>kubeadm</code> installation or runtime <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/" rel="nofollow noreferrer">troubleshooting guide</a>.</p>
|
<p>When creating deployments, I am currently trying to find a reason why one should externalize the environment variables for a container into a configmap.
So instead of defining environment variables with </p>
<pre><code> env:
- name: LANGUAGE
value: "English"
</code></pre>
<p>in the deployment.yaml use</p>
<pre><code> env:
- name: LANGUAGE
valueFrom:
configMapKeyRef:
name: language
key: LANGUAGE
</code></pre>
<p>or </p>
<pre><code> envFrom:
- configMapRef:
name: env-configmap
</code></pre>
<p>with an additional configmap.yaml like so: </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: env-configmap
data:
LANGUAGE: English
</code></pre>
<p>Of course, when using confidential values, they should be read from a secret, but that does not apply to non-confidential variables. The only advantage I see is that I can reuse these configmaps, but apart from that it only makes the chart more complicated as I now have to ensure that the pods are restarted etc...</p>
<p>So: What are other advantages when using ConfigMaps to read the environment variables?</p>
| <p>As you point out, you can re-use the ConfigMap so that other parts of your chart can <a href="https://stackoverflow.com/a/53571968/9705485">easily re-use the same environment variables</a>. How useful this is can depend on how many variables you have and how many places they are used in. </p>
<p>A ConfigMap is also available as an Object in the cluster that other Pods can make use of, including ones that are not part of your chart. This could mean your configmap getting referenced by other apps getting installed in the same cluster, or it could be that you choose to publish your chart and then it might get packaged as a dependency within another chart. If your chart is to be used as a dependency in another chart then it makes things a bit easier/cleaner for the chart that is building on top of yours to reference parts of your configuration from a ConfigMap. So the usefulness can also depend on how you intend your chart to be used. The <a href="https://github.com/helm/charts" rel="noreferrer">official charts</a> use a lot of ConfigMaps but they do sometimes use environment variables directly and they <a href="https://hackernoon.com/the-art-of-the-helm-chart-patterns-from-the-official-kubernetes-charts-8a7cafa86d12" rel="noreferrer">use ConfigMaps in a variety of ways for different purposes</a>.</p>
|
<p>I am trying to set up a multi-path ingress; the problem I encounter is that one path is completely ignored. The service at /blog never gets hit. I tried duplicating the host entry but it's the same result.
Any help is more than welcome as I've been hitting my head against the wall for the past 10 hours with this.</p>
<p>This is the ingress.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: 'my-ingress-ip'
spec:
tls:
- secretName: my-ingress-tls
rules:
- host: www.example.com
http:
paths:
- path: /blog
backend:
serviceName: blog
servicePort: 81
- path: /*
backend:
serviceName: www
servicePort: 80
- host: graphql.example
http:
paths:
- path: /*
backend:
serviceName: example-graphql
servicePort: 80
</code></pre>
| <p>This is how your ingress should look if you want to have multiple services on one host: </p>
<pre><code> apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: 'my-ingress-ip'
spec:
tls:
- secretName: my-ingress-tls
rules:
- host: www.example.com
http:
paths:
- path: /blog/*
backend:
serviceName: blog
servicePort: 81
- path: /*
backend:
serviceName: www
servicePort: 80
- host: graphql.example
http:
paths:
- path: /*
backend:
serviceName: example-graphql
servicePort: 80
</code></pre>
|
<p>I had a "stuck" namespace that I deleted showing in this eternal "terminating" status.</p>
| <p>Assuming you've already tried to force-delete resources like:
<a href="https://stackoverflow.com/q/35453792">Pods stuck at terminating status</a>, and your at your wits' end trying to recover the namespace...</p>
<p>You can force-delete the namespace (perhaps leaving dangling resources):</p>
<pre><code>(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
</code></pre>
<ul>
<li><p>This is a refinement of the answer <a href="https://stackoverflow.com/a/52412965/86967">here</a>, which is based on the comment <a href="https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-408599873" rel="noreferrer">here</a>.</p></li>
<li><p>I'm using the <code>jq</code> utility to programmatically delete elements in the finalizers section. You could do that manually instead.</p></li>
<li><p><code>kubectl proxy</code> creates the listener at <code>127.0.0.1:8001</code> <em>by default</em>. If you know the hostname/IP of your cluster master, you may be able to use that instead.</p></li>
<li><p>The funny thing is that this approach seems to work even though making the same change with <code>kubectl edit</code> has no effect.</p></li>
</ul>
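<p>Afterwards, a quick sanity check that the namespace is really gone might look like this (it should eventually report NotFound):</p>
<pre><code># hypothetical namespace name from the snippet above
kubectl get namespace your-rogue-namespace
</code></pre>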
|
<p>I have created a sample node.js app and other required files (deployment.yml, service.yml) but I am not able to access the external IP of the service. </p>
<pre><code>#kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23h
node-api LoadBalancer 10.7.254.32 35.193.227.250 8000:30164/TCP 4m37s
#kubectl get pods
NAME READY STATUS RESTARTS AGE
node-api-6b9c8b4479-nclgl 1/1 Running 0 5m55s
#kubectl describe svc node-api
Name: node-api
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=node-api
Type: LoadBalancer
IP: 10.7.254.32
LoadBalancer Ingress: 35.193.227.250
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 30164/TCP
Endpoints: 10.4.0.12:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 6m19s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 5m25s service-controller Ensured load balancer
</code></pre>
<p>When I try to curl the external IP, I get connection refused:</p>
<pre><code>curl 35.193.227.250:8000
curl: (7) Failed to connect to 35.193.227.250 port 8000: Connection refused
</code></pre>
<p>I have also exposed port 8000 in the Dockerfile. Let me know if I am missing anything.</p>
| <p>Looking at your description in this thread, the setup itself seems fine.
Here is what you can try: </p>
<ol>
<li><p>SSH to the GKE node where the pod is running. You can get the node name by running the same command you used with the "-o wide" flag.</p>
<p>$ kubectl get pods -o wide</p></li>
</ol>
<p>After SSHing in, try to curl the service's cluster IP as well as the pod endpoint IP to see whether you get a response. </p>
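<p>For example (using the pod endpoint and service cluster IP from your <code>kubectl describe svc</code> output above; adjust if yours differ), the checks from the node might look like:</p>
<pre><code># from the GKE node, hit the pod endpoint and the service's cluster IP directly
curl 10.4.0.12:8000     # pod endpoint listed under Endpoints
curl 10.7.254.32:8000   # service cluster IP
</code></pre>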
<ol start="2">
<li><p>Try to SSH to the pod</p>
<p>$ kubectl exec -it <pod-name> -- /bin/bash</p></li>
</ol>
<p>After that, curl localhost inside the pod to see whether you get a response:</p>
<pre><code>$ curl localhost
</code></pre>
<p>If you do get a response from the above troubleshooting steps, the underlying issue could be on the GKE side. You can file a defect report <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">here</a>. </p>
<p>If you do not get any response while trying the above steps, it is possible that you have misconfigured the cluster somewhere. </p>
<p>This seems to me a good starting point for troubleshooting your use case.</p>
|
<p>We have a lot of services exposed via NodePort and available externally via <code><node_ip>:<node_port></code>.</p>
<p>It should be a common requirement: I would like to control access to certain services, so that requests from some IPs can reach them while others cannot.</p>
<p>We'd like to use <code>iptables</code> to meet this requirement, which causes a lot of <strong>confusion</strong> since Kubernetes uses it to set up communication as well.
Is there any high-level <code>guidance</code> on designing/creating iptables rules to control Kubernetes services?</p>
<p>Specifically, I am confused in below areas:</p>
<ol>
<li>Which table should I append rules into? I find that lots of rules in <code>nat and filter</code> are created by K8s</li>
<li>If I want to disable access to a service from one external IP to a certain node, such as
<code>telnet <node_ip>:<node_port></code>
should I REJECT on <code>FORWARD</code> or <code>INPUT</code>, or <code>PREROUTING</code> directly?</li>
<li>Do these rules depend on the specific network plugin (e.g. flannel or weave)? Do different plugins have different ways to configure such rules?</li>
</ol>
<p>For my scenarios, I have below rules to be set up:</p>
<ol>
<li>all nodes in the cluster should have <code>full access</code> to each other</li>
<li>some core services (API) should only be <strong>ACCEPT</strong> by <code>certain</code> IPs</li>
<li>certain services in a port range can be <strong>ACCEPT</strong> by <code>all</code> IPs</li>
<li><strong>REJECT</strong> the access to <code>any other</code> services from all IPs (outside of cluster)</li>
</ol>
<p>k8s version: 1.9.5
network plugin: weave</p>
<p>Best Regards!</p>
| <p>Although you can change iptables on your K8s nodes, I wouldn't recommend making any changes, since K8s (kube-proxy) is constantly changing the rules dynamically. In other words, Kubernetes (combined with the overlay) manages iptables for you.</p>
<p>To block traffic, I would strongly suggest using <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">NetworkPolicies</a>. And/or, if you are using an overlay, you can use what that overlay provides; for example, Calico has its own <a href="https://docs.projectcalico.org/v3.3/reference/calicoctl/resources/networkpolicy" rel="nofollow noreferrer">Network Policy</a>.</p>
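<p>As a rough sketch of that approach (the label, port and CIDR below are illustrative), a NetworkPolicy that only allows a given IP range to reach your API pods could look like this. Keep in mind that for NodePort traffic the client source IP is often SNATed by kube-proxy unless the Service sets <code>externalTrafficPolicy: Local</code>, so IP-based rules may not see the original address:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-api-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: core-api              # hypothetical label on the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24 # example CIDR allowed to reach the API
      ports:
        - protocol: TCP
          port: 8080             # example container port
</code></pre>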
<p>Another way of controlling traffic in/out is to use a service-mesh like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a>.</p>
|
<p>I have a Spring Boot application backed by MongoDB. Both are deployed on a Kubernetes cluster on Azure. My application throws "Caused by: java.net.UnknownHostException: mongo-dev-0 (pod): Name or service not known" when it tries to connect to MongoDB. </p>
<p>I am able to connect to the mongo-dev-0 pod and run queries on the MongoDB, so there is no issue with Mongo itself, and it looks like Spring Boot is able to connect to the Mongo service and discover the pod behind it.</p>
<p>How do I ensure the pods are discoverable by my Spring Boot Application?
How do I go about debugging this issue?</p>
<p>Any help is appreciated. Thanks in advance.</p>
<p>Here is my config:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: mongo-dev
labels:
name: mongo-dev
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo-dev
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo-dev
spec:
serviceName: "mongo-dev"
replicas: 3
template:
metadata:
labels:
role: mongo-dev
environment: dev
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo-dev
image: mongo:3.4
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--auth"
- "--bind_ip"
- 0.0.0.0
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-dev-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo-dev,environment=dev"
- name: KUBERNETES_MONGO_SERVICE_NAME
value: "mongo-dev"
volumeClaimTemplates:
- metadata:
name: mongo-dev-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "devdisk"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: devdisk
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Premium_LRS
location: abc
storageAccount: xyz
</code></pre>
| <p>To be able to reach your MongoDB pod via its service from your Spring Boot application, you have to start the MongoDB pod and the corresponding service first, and then start your Spring Boot application pod (let's name it sb-pod).</p>
<p>You can enforce this order by using an initContainer in your sb-pod that waits for the database service to be available before the main container starts. Something like:</p>
<pre><code>initContainers:
- name: init-mongo-dev
image: busybox
command: ['sh', '-c', 'until nslookup mongo-dev; do echo waiting for mongo-dev; sleep 2; done;']
</code></pre>
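<p>For context, that fragment belongs under the pod spec of your Spring Boot workload, next to the main <code>containers</code> list; a minimal, illustrative Deployment sketch (name and image are assumptions) could be:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sb-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sb-app
    spec:
      initContainers:
        - name: init-mongo-dev
          image: busybox
          command: ['sh', '-c', 'until nslookup mongo-dev; do echo waiting for mongo-dev; sleep 2; done;']
      containers:
        - name: sb-app
          image: my-registry/sb-app:latest   # hypothetical image
          ports:
            - containerPort: 8080
</code></pre>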
<p>If you connect to your sb-pod using:</p>
<pre><code>kubectl exec -it sb-pod bash
</code></pre>
<p>and run the <code>env</code> command, make sure you can see the environment variables <code>MONGO_DEV_SERVICE_HOST</code> and <code>MONGO_DEV_SERVICE_PORT</code>.</p>
|