Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I'm trying to verify that shutdown is completing cleanly on Kubernetes, with a .NET Core 2.0 app.</p>
<p>I have an app which can run in two "modes" - one using ASP.NET Core and one as a kind of worker process. Both modes produce logger output - to the console and as JSON (which ends up in Elasticsearch via a Filebeat sidecar container) - indicating startup and shutdown progress.</p>
<p>Additionally, I have console output which writes directly to stdout when a SIGTERM or Ctrl-C is received and shutdown begins.</p>
<p>Locally, the app works flawlessly - I get the direct console output, then the logger output flowing to stdout on Ctrl+C (on Windows).</p>
<p>My experiment scenario:</p>
<ul>
<li>App deployed to GCS k8s cluster (using <code>helm</code>, though I imagine that doesn't make a difference)</li>
<li>Using <code>kubectl logs -f</code> to stream logs from the specific container</li>
<li>Killing the pod from GCS cloud console site, or deleting the resources via <code>helm delete</code></li>
<li>Dockerfile is <code>FROM microsoft/dotnet:2.1-aspnetcore-runtime</code> and has <code>ENTRYPOINT ["dotnet", "MyAppHere.dll"]</code>, so not wrapped in a <code>bash</code> process or anything</li>
<li>Not specifying a <code>terminationGracePeriodSeconds</code>, so I assume it defaults to 30 seconds</li>
<li>Observing output returned</li>
</ul>
<p>Results:</p>
<ul>
<li>The API pod log streaming showed just the immediate console output, "[SIGTERM] Stop signal received", but not the other Console logger output about the shutdown process</li>
<li>The worker pod log streaming showed a little more - the same console output and some Console logger output about the shutdown process</li>
<li>The JSON logs didn't seem to pick up any of the shutdown log output</li>
</ul>
<p>My conclusions:</p>
<ol>
<li>I don't know if Kubernetes is allowing the process to complete before terminating it, or just issuing SIGTERM and then killing things very quickly. I think it should be waiting, but then why isn't the console logger output complete?</li>
<li>I don't know whether the stdout log stream is cut off at some point before the process finally terminates</li>
<li>I would guess that the JSON logs don't come through to Elasticsearch because Filebeat, running in the sidecar, terminates even if there is still outstanding data in the files to send</li>
</ol>
<p>I would like to know:</p>
<ul>
<li>Can anyone advise on points 1 and 2 above?</li>
<li>Any ideas for allowing a little extra time or leeway for the sidecar to send its remaining data, such as a pod container termination order, a delay on shutdown for that container, etc.?</li>
</ul>
| Kieren Johnstone | <p><code>SIGTERM</code> does indeed signal termination. The less obvious part is that when the <code>SIGTERM</code> handler returns, everything is considered finished.</p>
<p>The fix is to not return from the <code>SIGTERM</code> handler until the app has finished shutting down. For example, using a <code>ManualResetEvent</code> and <code>Wait()</code>ing it in the handler.</p>
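<p>For illustration, a minimal sketch of that pattern in a .NET Core console entry point could look like the following (the structure and names are assumptions, not the poster's actual code):</p>
<pre><code>// Block the SIGTERM handler until shutdown work has finished,
// so the process isn't torn down before logs are flushed.
using System;
using System.Threading;

class Program
{
    static readonly ManualResetEventSlim ShutdownComplete = new ManualResetEventSlim(false);

    static void Main(string[] args)
    {
        AppDomain.CurrentDomain.ProcessExit += (sender, e) =>
        {
            Console.WriteLine("[SIGTERM] Stop signal received");
            // Returning from this handler ends the process, so wait here
            // until the application signals that shutdown has completed.
            ShutdownComplete.Wait();
        };

        // ... build and run the host / worker here ...

        // After the host has stopped and all cleanup and logging is done:
        ShutdownComplete.Set();
    }
}
</code></pre>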
| Kieren Johnstone |
<p>I initialized the master node and joined worker nodes to the cluster with <code>kubeadm</code>. According to the logs, the worker nodes successfully joined the cluster.</p>
<p>However, when I list the nodes in master using <code>kubectl get nodes</code>, worker nodes are absent. What is wrong?</p>
<pre><code>[vagrant@localhost ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready master 12m v1.13.1
</code></pre>
<p>Here are the <code>kubeadm</code> logs (from the Ansible playbook run):</p>
<pre><code>PLAY[
Alusta kubernetes masterit
]**********************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n1
]TASK[
kubeadm reset
]***********************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:01.078073",
"end":"2019-01-05 07:06:59.079748",
"rc":0,
"start":"2019-01-05 07:06:58.001675",
"stderr":"",
"stderr_lines":[
],
...
}TASK[
kubeadm init
]************************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
"delta":"0:01:05.163377",
"end":"2019-01-05 07:08:06.229286",
"rc":0,
"start":"2019-01-05 07:07:01.065909",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"stdout_lines":[
"[init] Using Kubernetes version: v1.13.1",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Activating the kubelet service",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[apiclient] All control plane components are healthy after 19.504023 seconds",
"[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes master has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
" mkdir -p $HOME/.kube",
" sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
" sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
" https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"You can now join any number of machines by running the following on each node",
"as root:",
"",
" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
]
}TASK[
set_fact
]****************************************************************
ok:[
k8s-n1
]=>{
"ansible_facts":{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
},
"changed":false
}TASK[
debug
]*******************************************************************
ok:[
k8s-n1
]=>{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
}TASK[
Aseta ymparistomuuttujat
]************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"cp /etc/kubernetes/admin.conf /home/vagrant/ && chown vagrant:vagrant /home/vagrant/admin.conf && export KUBECONFIG=/home/vagrant/admin.conf && echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc",
"delta":"0:00:00.008628",
"end":"2019-01-05 07:08:08.663360",
"rc":0,
"start":"2019-01-05 07:08:08.654732",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}PLAY[
Konfiguroi CNI-verkko
]***************************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n1
]TASK[
sysctl
]******************************************************************
ok:[
k8s-n1
]=>{
"changed":false
}TASK[
sysctl
]******************************************************************
ok:[
k8s-n1
]=>{
"changed":false
}TASK[
Asenna Flannel-plugin
]***************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"export KUBECONFIG=/home/vagrant/admin.conf ; kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
"delta":"0:00:00.517346",
"end":"2019-01-05 07:08:17.731759",
"rc":0,
"start":"2019-01-05 07:08:17.214413",
"stderr":"",
"stderr_lines":[
],
"stdout":"clusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\ndaemonset.extensions/kube-flannel-ds-arm64 created\ndaemonset.extensions/kube-flannel-ds-arm created\ndaemonset.extensions/kube-flannel-ds-ppc64le created\ndaemonset.extensions/kube-flannel-ds-s390x created",
"stdout_lines":[
"clusterrole.rbac.authorization.k8s.io/flannel created",
"clusterrolebinding.rbac.authorization.k8s.io/flannel created",
"serviceaccount/flannel created",
"configmap/kube-flannel-cfg created",
"daemonset.extensions/kube-flannel-ds-amd64 created",
"daemonset.extensions/kube-flannel-ds-arm64 created",
"daemonset.extensions/kube-flannel-ds-arm created",
"daemonset.extensions/kube-flannel-ds-ppc64le created",
"daemonset.extensions/kube-flannel-ds-s390x created"
]
}TASK[
shell
]*******************************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"sleep 10",
"delta":"0:00:10.004446",
"end":"2019-01-05 07:08:29.833488",
"rc":0,
"start":"2019-01-05 07:08:19.829042",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}PLAY[
Alusta kubernetes workerit
]**********************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n3
]ok:[
k8s-n2
]TASK[
kubeadm reset
]***********************************************************
changed:[
k8s-n3
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.085388",
"end":"2019-01-05 07:08:34.547407",
"rc":0,
"start":"2019-01-05 07:08:34.462019",
"stderr":"",
"stderr_lines":[
],
...
}changed:[
k8s-n2
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.086224",
"end":"2019-01-05 07:08:34.600794",
"rc":0,
"start":"2019-01-05 07:08:34.514570",
"stderr":"",
"stderr_lines":[
],
"stdout":"[preflight] running pre-flight checks\n[reset] no etcd config found. Assuming external etcd\n[reset] please manually reset etcd to prevent further issues\n[reset] stopping the kubelet service\n[reset] unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]\n[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually.\nFor example: \niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.",
"stdout_lines":[
"[preflight] running pre-flight checks",
"[reset] no etcd config found. Assuming external etcd",
"[reset] please manually reset etcd to prevent further issues",
"[reset] stopping the kubelet service",
"[reset] unmounting mounted directories in \"/var/lib/kubelet\"",
"[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]",
"[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
"[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
"",
"The reset process does not reset or clean up iptables rules or IPVS tables.",
"If you wish to reset iptables, you must do so manually.",
"For example: ",
"iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X",
"",
"If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
"to reset your system's IPVS tables."
]
}TASK[
kubeadm join
]************************************************************
changed:[
k8s-n3
]=>{
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:01.988676",
"end":"2019-01-05 07:08:38.771956",
"rc":0,
"start":"2019-01-05 07:08:36.783280",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}changed:[
k8s-n2
]=>{
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:02.000874",
"end":"2019-01-05 07:08:38.979256",
"rc":0,
"start":"2019-01-05 07:08:36.978382",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}PLAY RECAP *********************************************************************
k8s-n1:ok=24 changed=16 unreachable=0 failed=0
k8s-n2:ok=16 changed=13 unreachable=0 failed=0
k8s-n3:ok=16 changed=13 unreachable=0 failed=0
</code></pre>
<pre><code>[vagrant@localhost ~]$ kubectl get events -a
Flag --show-all has been deprecated, will be removed in an upcoming release
LAST SEEN TYPE REASON KIND MESSAGE
3m15s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 72f6776d-c267-4e31-8e6d-a4d36da1d510
3m16s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 2d68a2c8-e27a-45ff-b7d7-5ce33c9e1cc4
4m2s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 0213bbdf-f4cd-4e19-968e-8162d95de9a6
</code></pre>
| Tuomas Toivonen | <p>By default the nodes (kubelet) identify themselves using their hostnames. It seems that your VMs' hostnames are not set - they all report <code>localhost.localdomain</code> - so each joining worker registers under the same node name and effectively replaces the previous one.</p>
<p>In the <code>Vagrantfile</code>, set the <code>hostname</code> value to a different name for each VM.
<a href="https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname" rel="nofollow noreferrer">https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname</a></p>
| Janos Lenart |
<p>I am running Docker Desktop 4.10 on Windows 10, with K8s enabled. I have the following k8s resources deployed through a helm chart, using Helm 3.12:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: fab-rabbitmq
labels:
type: mydata
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: hostpath
hostPath:
path: /c/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 2Gi
storageClassName: hostpath
selector:
matchLabels:
type: "mydata"
---
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/mydata"
name: vol1
volumes:
- name: vol1
persistentVolumeClaim:
claimName: pvc1
</code></pre>
<p>After deploying the chart, the pod runs correctly, the PVC is bound to the PV. I also set Windows security permissions to Everyone-full control on C:\data.</p>
<p>However, when I access the pod and try to list the files in the folder, I get the following error:</p>
<pre><code>kubectl exec -it pod1 bash
# ls -la mydata
ls: reading directory 'mydata': Operation not permitted
</code></pre>
<p>I can't seem to find a way to really have access to the mounted folder.</p>
<p>When I try to run a simple container using <code>docker run -v /c/data:/mydata nginx</code> and access /mydata, it works.</p>
<p>Any ideas what I am doing wrong?</p>
| Sagi Mann | <p>I found the explanation <a href="https://stackoverflow.com/questions/68948411/mounting-volume-resulting-empty-folder-in-kubernetes-minikube">here</a> and <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">here</a>. When running in Minikube, the "host path" is not actually the path on the physical host but rather a path inside the Minikube node. That's because Minikube IS actually the host of all pods (hence their node). So basically, the steps to do this correctly are:</p>
<ol>
<li>Ensure the physical path is mounted inside Minikube by running <code>minikube start --mount --mount-string=C:\myfolder:/folder/in/minikube</code></li>
<li>When setting up the PV, point <code>hostPath.path</code> at <code>/folder/in/minikube</code>; the path inside the pod is set separately via the container's <code>volumeMounts.mountPath</code> (see the sketch after this list)</li>
</ol>
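<p>As an illustrative sketch (the in-node path is an assumption carried over from the example above), the PV and pod from the question would then look roughly like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: fab-rabbitmq
  labels:
    type: mydata
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath
  hostPath:
    path: /folder/in/minikube   # path inside the node, not the Windows path
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: /mydata    # path inside the container
          name: vol1
  volumes:
    - name: vol1
      persistentVolumeClaim:
        claimName: pvc1         # PVC from the question, unchanged
</code></pre>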
| Sagi Mann |
<p>Headless VirtualBox runs successfully inside a Docker container:</p>
<pre><code>docker run --device=/dev/vboxdrv:/dev/vboxdrv my-vb
</code></pre>
<p>I need to run this image on Kubernetes and I get:</p>
<pre><code>VBoxHeadless: Error -1909 in suplibOsInit!
VBoxHeadless: Kernel driver not accessible
</code></pre>
<p>Kubernetes object:</p>
<pre><code>metadata:
name: vbox
labels:
app: vbox
spec:
selector:
matchLabels:
app: vbox
template:
metadata:
labels:
app: vbox
spec:
securityContext:
runAsUser: 0
containers:
- name: vbox-vm
image: my-vb
imagePullPolicy: 'Always'
ports:
- containerPort: 6666
volumeMounts:
- mountPath: /root/img.vdi
name: img-vdi
- mountPath: /dev/vboxdrv
name: vboxdrv
volumes:
- name: img-vdi
hostPath:
path: /root/img.vdi
type: File
- name: vboxdrv
hostPath:
path: /dev/vboxdrv
type: CharDevice
</code></pre>
<p>This image runs in Docker, so the problem must be in the Kubernetes configuration.</p>
| Jonas | <p>A slight modification to the configuration is required for this to work:</p>
<pre><code>metadata:
name: vbox
labels:
app: vbox
spec:
selector:
matchLabels:
app: vbox
template:
metadata:
labels:
app: vbox
spec:
securityContext:
runAsUser: 0
containers:
- name: vbox-vm
image: my-vb
imagePullPolicy: 'Always'
securityContext: # << added
privileged: true
ports:
- containerPort: 6666
volumeMounts:
- mountPath: /root/img.vdi
name: img-vdi
- mountPath: /dev/vboxdrv
name: vboxdrv
volumes:
- name: img-vdi
hostPath:
path: /root/img.vdi
type: File
- name: vboxdrv
hostPath:
path: /dev/vboxdrv
type: CharDevice
</code></pre>
<p>To be able to run privileged containers you'll need to have:</p>
<ul>
<li>kube-apiserver running with --allow-privileged</li>
<li>kubelet (all hosts that might have this container) running with --allow-privileged=true</li>
</ul>
<p>See more at <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers</a></p>
<p>Once it works, do it properly via a <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">PodSecurityPolicy</a>.</p>
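<p>For reference, a minimal sketch of such a policy (the name and allowed host paths are assumptions; PodSecurityPolicy is deprecated in newer Kubernetes versions) might look like this:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: vbox-privileged
spec:
  privileged: true            # allow privileged containers
  volumes:
    - hostPath
  allowedHostPaths:
    - pathPrefix: /dev/vboxdrv
    - pathPrefix: /root
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
</code></pre>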
| Janos Lenart |
<h1>What happened</h1>
<p>Resolving an external domain from within a pod fails with a <strong>SERVFAIL</strong> message. In the logs, an <strong>i/o timeout</strong> error is mentioned.</p>
<h1>What I expected to happen</h1>
<p>External domains should be successfully resolved from the pods.</p>
<h1>How to reproduce it</h1>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: dnsutils
namespace: default
spec:
containers:
- name: dnsutils
image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<ol>
<li><p>Create the pod above (from <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">Debugging DNS Resolution</a> help page).</p>
</li>
<li><p>Run <code>kubectl exec dnsutils -it -- nslookup google.com</code></p>
<pre class="lang-sh prettyprint-override"><code>pig@pig202:~$ kubectl exec dnsutils -it -- nslookup google.com
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find google.com.mshome.net: SERVFAIL
command terminated with exit code 1
</code></pre>
</li>
<li><p>Also run <code>kubectl exec dnsutils -it -- nslookup google.com.</code></p>
<pre class="lang-sh prettyprint-override"><code>pig@pig202:~$ kubectl exec dnsutils -it -- nslookup google.com.
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find google.com: SERVFAIL
command terminated with exit code 1
</code></pre>
</li>
</ol>
<h1>Additional information</h1>
<p>I am using <strong>microk8s</strong> environment in a <strong>Hyper-V virtual machine</strong>.</p>
<p>Resolving DNS from the virtual machine works, and Kubernetes is able to pull container images. It's only from within the pods that resolution is failing, meaning I cannot communicate with the Internet from within the pods.</p>
<p>This is OK:</p>
<pre class="lang-sh prettyprint-override"><code>pig@pig202:~$ kubectl exec dnsutils -it -- nslookup kubernetes.default
Server: 10.152.183.10
Address: 10.152.183.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.152.183.1
</code></pre>
<h1>Environment</h1>
<h3>The version of CoreDNS</h3>
<pre class="lang-yaml prettyprint-override"><code>image: 'coredns/coredns:1.6.6'
</code></pre>
<h3>Corefile (taken from ConfigMap)</h3>
<pre class="lang-yaml prettyprint-override"><code> Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
log . {
class error
}
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
</code></pre>
<h3>Logs</h3>
<pre class="lang-yaml prettyprint-override"><code>pig@pig202:~$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns -f
[INFO] 10.1.99.26:47204 - 29832 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002558s
[ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:52008->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:59350 - 50446 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002028s
[ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:60405->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:43050 - 13676 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002151s
[ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:45624->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:36997 - 30359 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002791s
[ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:37554->8.8.4.4:53: i/o timeout
[INFO] 10.1.99.32:57927 - 53858 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.0001987s
[ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:34079->8.8.4.4:53: i/o timeout
[INFO] 10.1.99.32:38403 - 36398 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.000224s
[ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:59835->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:57447 - 20295 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001892s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:51534->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:41052 - 56059 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001879s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. A: read udp 10.1.99.19:47378->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:56748 - 51804 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0003226s
[INFO] 10.1.99.26:45442 - 61916 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001922s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:35528->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 grafana.com.mshome.net. A: read udp 10.1.99.19:53568->8.8.8.8:53: i/o timeout
</code></pre>
<h3>OS</h3>
<pre class="lang-sh prettyprint-override"><code>pig@pig202:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
</code></pre>
<p>Tried on Ubuntu 18.04.3 LTS, same issue.</p>
<h3>Other</h3>
<p>The <code>mshome.net</code> search domain comes from the Hyper-V network, I assume. Perhaps this will be of help:</p>
<pre class="lang-sh prettyprint-override"><code>pig@pig202:~$ nmcli device show eth0
GENERAL.DEVICE: eth0
GENERAL.TYPE: ethernet
GENERAL.HWADDR: 00:15:5D:88:26:02
GENERAL.MTU: 1500
GENERAL.STATE: 100 (connected)
GENERAL.CONNECTION: Wired connection 1
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER: on
IP4.ADDRESS[1]: 172.19.120.188/28
IP4.GATEWAY: 172.19.120.177
IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 172.19.120.177, mt = 100
IP4.ROUTE[2]: dst = 172.19.120.176/28, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]: dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000
IP4.DNS[1]: 172.19.120.177
IP4.DOMAIN[1]: mshome.net
IP6.ADDRESS[1]: fe80::6b4a:57e2:5f1b:f739/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[2]: dst = ff00::/8, nh = ::, mt = 256, table=255
</code></pre>
| Nikola Malešević | <p>I finally found the solution, which was a combination of two changes. After applying both, my pods could finally resolve addresses properly.</p>
<h3>Kubelet configuration</h3>
<p>Based on the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="noreferrer">known issues</a>, change the <code>resolv.conf</code> path that the kubelet uses.</p>
<pre class="lang-sh prettyprint-override"><code># Add resolv-conf flag to Kubelet configuration
echo "--resolv-conf=/run/systemd/resolve/resolv.conf" >> /var/snap/microk8s/current/args/kubelet
# Restart Kubelet
sudo service snap.microk8s.daemon-kubelet restart
</code></pre>
<h3>CoreDNS forward</h3>
<p>Change the forward address in the CoreDNS ConfigMap from the default (<code>8.8.8.8 8.8.4.4</code>) to the DNS server on the <code>eth0</code> device.</p>
<pre class="lang-sh prettyprint-override"><code># Dump definition of CoreDNS
microk8s.kubectl get configmap -n kube-system coredns -o yaml > coredns.yaml
</code></pre>
<p>Partial content of coredns.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code> Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
log . {
class error
}
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
</code></pre>
<p>Fetch DNS:</p>
<pre class="lang-sh prettyprint-override"><code># Fetch eth0 DNS address (this will print 172.19.120.177 in my case)
nmcli dev show 2>/dev/null | grep DNS | sed 's/^.*:\s*//'
</code></pre>
<p>Change the following line and save:</p>
<pre class="lang-sh prettyprint-override"><code> forward . 8.8.8.8 8.8.4.4 # From this
forward . 172.19.120.177 # To this (your DNS will probably be different)
</code></pre>
<p>Finally apply to change CoreDNS forwarding:</p>
<pre class="lang-sh prettyprint-override"><code>microk8s.kubectl apply -f coredns.yaml
</code></pre>
| Nikola Malešević |
<p><strong>I'm setting up a multi-node Cassandra cluster in Kubernetes (Azure AKS). Since this is a headless service with StatefulSet pods, there is no external IP.
How can I connect my Spark application to Cassandra running inside the Kubernetes cluster?</strong></p>
<p><b>We have also tried with a ClusterIP and an ingress IP, but only a single pod comes up and the rest fail.</b></p>
<p>I have 3 manifests:</p>
<ol>
<li>Service</li>
<li>PersistentVolumeClaim </li>
<li>StatefulSet</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
clusterIP: None
ports:
- port: 9042
selector:
app: cassandra
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myvolume-disk-claim
spec:
storageClassName: default
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: gcr.io/google-samples/cassandra:v13
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
env:
- name: CASSANDRA_SEEDS
value: cassandra-0.cassandra.default.svc.cluster.local
- name: MAX_HEAP_SIZE
value: 256M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_CLUSTER_NAME
value: "Cassandra"
- name: CASSANDRA_DC
value: "DC1"
- name: CASSANDRA_RACK
value: "Rack1"
- name: CASSANDRA_ENDPOINT_SNITCH
value: GossipingPropertyFileSnitch
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
volumeMounts:
- mountPath: /var/lib/cassandra/data
name: myvolume-disk-claim
volumeClaimTemplates:
- metadata:
name: myvolume-disk-claim
spec:
storageClassName: default
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>Expected result (public IP as external IP):</p>
<pre><code>dspg@Digiteds28:$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1h
cassandra ClusterIP None 154.167.90.98 9042/TCP 1h
dspg@Digiteds28:$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 59m
cassandra-1 1/1 Running 0 58m
cassandra-2 1/1 Running 0 56m
</code></pre>
<p>Actual Output:</p>
<pre><code>dspg@Digiteds28:$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1h
cassandra ClusterIP None <none> 9042/TCP 1h
dspg@Digiteds28:$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 59m
cassandra-1 1/1 Running 0 58m
cassandra-2 1/1 Running 0 56m
</code></pre>
<p>This does not include an external IP to connect to the application.</p>
| suraj1287 | <p>It depends on what exactly you are trying to do. If you need an external IP then in general you'd need to create an additional Service object (probably <code>type: LoadBalancer</code>) like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra-ext
spec:
type: LoadBalancer
ports:
- port: 9042
selector:
app: cassandra
</code></pre>
<p>If you need to reach it from within the cluster, use the DNS name <code>cassandra-0.cassandra.default</code> from the other pod (assuming the StatefulSet was deployed in the <code>default</code> namespace).</p>
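<p>As a hedged sketch of the Spark side (the property names come from the Spark Cassandra Connector, which I'm assuming is what the Spark application uses; adjust the contact points to your namespace):</p>
<pre><code>spark-submit \
  --conf spark.cassandra.connection.host=cassandra-0.cassandra.default.svc.cluster.local \
  --conf spark.cassandra.connection.port=9042 \
  ... your application jar and arguments ...
</code></pre>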
| Janos Lenart |
<p>I would like to access the role's AWS key and secret within an EKS pod but have been unsuccessful so far.</p>
<p>The idea is to retrieve them from the instance metadata and inject them as ENV variables into the pod. I have already linked the service account to the role and specified it in the pod deployment, but it doesn't look like the key is automatically available as an ENV variable in the pod.</p>
<p>What's the best way to approach this?</p>
| Anton Kim | <p>Using an access key/secret key is not good practice. Instead, you can use <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IAM roles for service accounts</a> (IRSA) to associate a Kubernetes service account with an IAM role.</p>
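<p>For illustration, a hedged sketch of what that looks like once the cluster has an OIDC provider and the role's trust policy allows the service account (the role ARN, names and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role  # placeholder ARN
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app      # pods using this service account assume the role
  containers:
    - name: app
      image: my-app:latest        # placeholder image
</code></pre>
<p>With this in place the AWS SDKs pick up temporary credentials automatically via the injected web identity token, so no static key/secret needs to be exposed as environment variables.</p>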
| Kane |
<p>I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.</p>
<p>The application is an HTTP web backend.</p>
<p>How I'd usually handle this in a non-GCP/K8s context is to have a load balancer in front of the application, set up a new web backend, and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would essentially have zero downtime while also allowing for a seamless rollback if anything goes wrong.</p>
<p>I do not see why this should not work in this context as well; however, I'm not 100% sure. And if there is a more robust or alternative (GCP/GKE-friendly) way to do this, I'd like to investigate that.</p>
<p><strong>So to summarize my question,</strong> does GCP/GKE support this type of migration functionality? If not, are there any implications I need to be aware of with my usual load balancer approach mentioned above?</p>
<hr />
<p>The reason for migrating is that the current k8s cluster is running quite an old version (1.18), and if I did a GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well as risk.</p>
| Chris Stryczynski | <p>I see 2 approaches:</p>
<ol>
<li>In the new cluster, get a new IP address and update the DNS record to point to the new load balancer (see the sketch after this list)</li>
<li>See if you can switch to multi-cluster gateways; however, that would probably require you to use approach 1 for the switch as well: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways</a></li>
</ol>
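<p>For approach 1, a hedged sketch (names are placeholders; this assumes the new cluster exposes the app through a GCE HTTP(S) load balancer): reserve a static IP up front so the new load balancer gets a predictable address, then reference it from the Ingress via the <code>kubernetes.io/ingress.global-static-ip-name</code> annotation.</p>
<pre><code>gcloud compute addresses create my-new-lb-ip --global
</code></pre>
<p>Once the new backend is verified, lower the DNS TTL ahead of time and switch the record over; rolling back is then just pointing the record back at the old address.</p>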
| Sam Stoelinga |
<p>My workload needs network connectivity to start properly and I want to use a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">postStart lifecycle hook</a> that waits until it is ready and then does something. However, lifecycle hooks seem to block CNI; the following workload will never be assigned an IP:</p>
<pre><code>kubectl apply -f <(cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command:
- "/bin/sh"
- "-c"
- |
while true; do
sleep
done
EOF
)
kubectl get pods -o wide
</code></pre>
<p>This means my workload never starts (hanging when trying to connect out) and my lifecycle hook loops forever. Is there a way to work around this?</p>
<p>EDIT: I used a sidecar instead of a lifecycle hook to achieve the same thing - still unsure why the lifecycle hook doesn't work though; executing CNI is part of container creation IMO, so I'd expect lifecycle hooks to fire after networking has been configured</p>
| dippynark | <p>This is an interesting one :-) It's not much of an answer but I did some investigation and I thought I'd share it - perhaps it is of some use.</p>
<p>I started from the yaml posted in the question. Then I logged into the machine running this pod and located the container.</p>
<pre><code>$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-8f59d655b-ds7x2 0/1 ContainerCreating 0 3m <none> node-x
$ ssh node-x
node-x$ docker ps | grep nginx-8f59d655b-ds7x2
2064320d1562 881bd08c0b08 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx_nginx-8f59d655b-ds7x2_default_14d1e071-4cd4-11e9-8104-42010af00004_0
2f09063ed20b k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-8f59d655b-ds7x2_default_14d1e071-4cd4-11e9-8104-42010af00004_0
</code></pre>
<p>The second container running <code>/pause</code> is the infrastructure container. The other one is the Pod's nginx container. Note that normally this information would be available through <code>kubectl get pod</code> as well, but in this case it is not. Strange.</p>
<p>In the container I'd expect that the networking is set up and nginx is running. Let's verify that:</p>
<pre><code>node-x$ docker exec -it 2064320d1562 bash
root@nginx-8f59d655b-ds7x2:/# apt update && apt install -y iproute2 procps
...installs correctly...
root@nginx-8f59d655b-ds7x2:/# ip a s eth0
3: eth0@if2136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default
link/ether 0a:58:0a:f4:00:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.0.169/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::da:d3ff:feda:1cbe/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>So networking is set up, routes are in place and the IP address on eth0 is actually on the overlay network as it is supposed to be. Looking at the process list now:</p>
<pre><code>root@nginx-8f59d655b-ds7x2:/# ps auwx
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 32652 4900 ? Ss 18:56 0:00 nginx: master process nginx -g daemon off;
root 5 5.9 0.0 4276 1332 ? Ss 18:56 0:46 /bin/sh -c while true; do sleep done
nginx 94 0.0 0.0 33108 2520 ? S 18:56 0:00 nginx: worker process
root 13154 0.0 0.0 36632 2824 ? R+ 19:09 0:00 ps auwx
root 24399 0.0 0.0 18176 3212 ? Ss 19:02 0:00 bash
</code></pre>
<p>Hah, so nginx is running and so is the postStart command. Notice, however, the large PIDs. There is a typo in the deployment file and it is executing <code>sleep</code> with no parameters - which is an error.</p>
<pre><code>root@nginx-8f59d655b-ds7x2:/# sleep
sleep: missing operand
Try 'sleep --help' for more information.
</code></pre>
<p>This is running from a loop, hence the loads of forking leading to large PIDs.</p>
<p>As another test, from a node I also try to curl the server:</p>
<pre><code>node-x$ curl http://10.244.0.169
...
<p><em>Thank you for using nginx.</em></p>
...
</code></pre>
<p>Which is very much expected. So finally I'd like to force the postStart command to finish, so from inside the container I kill the containing shell:</p>
<pre><code>root@nginx-8f59d655b-ds7x2:/# kill -9 5
...container is terminated in a second, result of the postStart hook failure...
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-8f59d655b-ds7x2 0/1 PostStartHookError: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (53423560 vs. 16777216) 0 21m
</code></pre>
<p>Hm, so I imagine the 50MB (!) worth of messages were the failures from the missing parameter to sleep. Actually, what is even more spooky is that the Deployment is not recovering from this failure. This Pod keeps hanging around forever, instead of what you'd expect (spawn another Pod and retry).</p>
<p>At this point I deleted the deployment and recreated it with the sleep fixed in the postStart hook (<code>sleep 1</code>). The results are much the same, and the Deployment won't spawn another Pod in that case either (so it was not just that it choked on the logs).</p>
<p>Now I did say at the top that this is not really an answer. But perhaps some takeaway: the lifecycle hooks need some work before they can be considered useful and safe.</p>
| Janos Lenart |
<p>I created a CronJob resource with</p>
<pre><code>kubectl create -f cronjob.yaml
</code></pre>
<p>Now I have too much mess on my computer; the CronJob attempts to create one pod every 15 minutes:</p>
<pre><code>batch-job-every-fifteen-minutes-1528876800-h8dsj 0/1 Pending 0 39m
batch-job-every-fifteen-minutes-1528877700-d8g9d 0/1 Pending 0 24m
batch-job-every-fifteen-minutes-1528878600-kbdmb 0/1 Pending 0 9m
</code></pre>
<p>How to perform the opposite operation?
How to delete the resource? </p>
| Richard Rublev | <p>You can delete it with this command:</p>
<pre><code>kubectl delete -f cronjob.yaml
</code></pre>
<p>It is also possible to delete it directly by name:</p>
<pre><code>kubectl delete cronjob batch-job-every-fifteen-minutes
</code></pre>
<p>I am not sure whether the generated Pods and Jobs also get deleted with this command. You can delete them like this:</p>
<pre><code>kubectl delete job batch-job-every-fifteen-minutes-1528876800
kubectl delete job batch-job-every-fifteen-minutes-1528877700
kubectl delete job batch-job-every-fifteen-minutes-1528878600
kubectl delete pod batch-job-every-fifteen-minutes-1528876800-h8dsj
kubectl delete pod batch-job-every-fifteen-minutes-1528877700-d8g9d
kubectl delete pod batch-job-every-fifteen-minutes-1528878600-kbdmb
</code></pre>
<p>This solution assumes you are using the <code>default</code> namespace. If you do not, you have to add the <code>--namespace $NAMESPACE</code> argument to <code>kubectl</code>.</p>
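<p>As a quick check (a simple sketch, using the name from the listing above), you can also verify that no generated Jobs or Pods are left behind:</p>
<pre><code>kubectl get jobs,pods | grep batch-job-every-fifteen-minutes
</code></pre>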
| svenwltr |
<p>I have 5 microservices which I wish to allow external traffic to. These microservices will be hosted on different subdomains. I am using a K8s cluster on EKS and have the cluster and other services running.
There seems to be quite a lot of confusion when it comes to Ingress. I have configured the ALB ingress controller by following this tutorial on <a href="https://www.eksworkshop.com/beginner/130_exposing-service/ingress_controller_alb/" rel="nofollow noreferrer">eksworkshop</a>. This worked for me and I am able to deploy the 2048 game as the tutorial explains.</p>
<p>Now what I wish to develop is an Ingress resource as follows:</p>
<pre><code># apiVersion: networking.k8s.io/v1beta1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cluster-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /*
backend:
serviceName: app-cluster-ip-service
servicePort: 3000
- host: ms1.my-domain.com
http:
paths:
- path: /*
backend:
serviceName: ms1-cluster-ip-service
servicePort: 8000
- host: ms2.my-domain.com
http:
paths:
- path: /*
backend:
serviceName: ms2-cluster-ip-service
servicePort: 2000
- host: ms3.my-domain.com
http:
paths:
- path: /*
backend:
serviceName: ms3-cluster-ip-service
servicePort: 4000
- host: website.my-domain.com
http:
paths:
- path: /*
backend:
serviceName: website-cluster-ip-service
servicePort: 3333
</code></pre>
<h1><strong>So here are my doubts</strong></h1>
<ol>
<li>How do I configure the Ingress to route to different ports based on the domain? (When I was using Nginx, there was a provision to set an upstream, and Nginx then routed traffic accordingly.)</li>
<li>What is the procedure to link it to my registered domain? (TLS certificates with Cert manager Lets Encrypt)</li>
<li>What should I put in my DNS records for all the subdomains (A records/CNAMEs pointing at the ALB)? And do all 5 subdomains get the same record/config?</li>
</ol>
<p>I use Cloudflare for DNS management if that helps.</p>
| Divyansh Khandelwal | <ol>
<li>An Application Load Balancer uses rules to conditionally route requests to different hosts/paths. The <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/" rel="nofollow noreferrer">AWS Load Balancer Controller</a> supports that feature via annotations; see the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/#traffic-routing" rel="nofollow noreferrer">doc</a> for details.</li>
<li>You can use cert-manager to manage the certificates for your domain. The AWS load balancer also supports <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/#ssl" rel="nofollow noreferrer">specifying a certificate stored in ACM</a> if you use a wildcard cert.</li>
<li>Yes, you have to create DNS records for each of your subdomains, whatever ingress controller you're using. You can have a look at <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer">external-dns</a> to create them automatically (see the annotation sketch after this list).</li>
</ol>
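<p>For point 2, if you end up using an ACM certificate instead of cert-manager, a minimal sketch of the relevant annotations could look like this (the ARN is a placeholder, not a real certificate):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # placeholder ARN - replace with your wildcard *.my-domain.com cert in ACM
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:111122223333:certificate/example
</code></pre>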
| Kane |
<p>I have a deployment on Google GKE, and I can't see the pod logs in the console even though Cloud Logging is enabled on the cluster.
What could be the issue? Did I miss something?</p>
| Navir | <p>It sounds like Workload monitoring and logging may not have been enabled and currently it's only doing system monitoring and logging. Please see the docs here on how to change the logging settings: <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/installing#installing" rel="nofollow noreferrer">https://cloud.google.com/stackdriver/docs/solutions/gke/installing#installing</a></p>
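<p>Depending on your GKE version, enabling workload logging is typically a one-liner along these lines (I'm assuming the newer <code>--logging</code> flag here; check the linked docs for the exact flags your cluster version supports):</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
  --zone=ZONE \
  --logging=SYSTEM,WORKLOAD
</code></pre>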
| Sam Stoelinga |
<p>I allocated resources to only 1 pod, with 650MB/30% of memory (together with the other built-in pods, the memory limit is only 69%).</p>
<p>However, when the pod is processing work, the pod's usage stays within 650MB but the overall usage of the node is 94%. </p>
<p>Why does this happen when there is supposed to be an upper limit of 69%? Is it due to other built-in pods which did not set a limit? How can I prevent this, as sometimes my pod errors out if memory usage is > 100%?</p>
<p>My allocation setting (<code>kubectl describe nodes</code>):
<a href="https://i.stack.imgur.com/tDoZ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDoZ6.png" alt="enter image description here"></a></p>
<p>Memory usage of Kubernetes Node and Pod when idle:<br>
<code>kubectl top nodes</code><br>
<a href="https://i.stack.imgur.com/JtXgo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JtXgo.png" alt="enter image description here"></a><br>
<code>kubectl top pods</code><br>
<a href="https://i.stack.imgur.com/ijLHU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ijLHU.png" alt="enter image description here"></a></p>
<p>Memory usage of Kubernetes Node and Pod when running task:<br>
<code>kubectl top nodes</code><br>
<a href="https://i.stack.imgur.com/phCZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/phCZS.png" alt="enter image description here"></a><br>
<code>kubectl top pods</code><br>
<a href="https://i.stack.imgur.com/7Ja9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Ja9B.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Further Tested behaviour:</strong><br>
1. Prepare deployment, pods and service under namespace <em>test-ns</em><br>
2. Since only <em>kube-system</em> and <em>test-ns</em> have pods, assign 1000Mi to each of them (seen in <code>kubectl describe nodes</code>), aiming for less than 2GB in total<br>
3. Assuming the memory used by <em>kube-system</em> and <em>test-ns</em> is then less than 2GB, i.e. less than 100%, why can the memory usage still be 106%? </p>
<p>In <em>.yaml file:</em> </p>
<pre><code> apiVersion: v1
kind: LimitRange
metadata:
name: default-mem-limit
namespace: test-ns
spec:
limits:
- default:
memory: 1000Mi
type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
name: default-mem-limit
namespace: kube-system
spec:
limits:
- default:
memory: 1000Mi
type: Container
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: devops-deployment
namespace: test-ns
labels:
app: devops-pdf
spec:
selector:
matchLabels:
app: devops-pdf
replicas: 2
template:
metadata:
labels:
app: devops-pdf
spec:
containers:
- name: devops-pdf
image: dev.azurecr.io/devops-pdf:latest
imagePullPolicy: Always
ports:
- containerPort: 3000
resources:
requests:
cpu: 600m
memory: 500Mi
limits:
cpu: 600m
memory: 500Mi
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: devops-pdf
namespace: test-ns
spec:
type: LoadBalancer
ports:
- port: 8007
selector:
app: devops-pdf
</code></pre>
<p><a href="https://i.stack.imgur.com/mjIKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mjIKA.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/EGqCs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EGqCs.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/BXpEe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BXpEe.png" alt="enter image description here"></a></p>
| DaiKeung | <p>This effect is most likely caused by the 4 Pods that run on that node <strong>without</strong> a memory limit specified, shown as <code>0 (0%)</code>. Of course 0 doesn't mean they can't use even a single byte of memory, as no program can be started without using memory; instead it means that there is no limit and they can use as much as is available. Also, programs not running in a pod (ssh, cron, ...) are included in the total used figure, but are not limited by kubernetes (by cgroups).</p>
<p>Now kubernetes sets up the kernel oom adjustment values in a tricky way to favour containers that are under their memory <em>request</em>, making it more likely to kill processes in containers that are between their memory <em>request</em> and <em>limit</em>, and most likely to kill processes in containers with no memory <em>limits</em>. However, this is only guaranteed to work fairly in the long run, and sometimes the kernel can kill your favourite process in your favourite container that is behaving well (using less than its memory <em>request</em>). See <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior</a></p>
<p>The pods without memory limit in this particular case are coming from the aks system itself, so setting their memory limit in the pod templates is not an option as there is a reconciler that will restore it (eventually). To remedy the situation I suggest that you create a LimitRange object in the kube-system namespace that will assign a memory limit to all pods without a limit (as they are created):</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: default-mem-limit
namespace: kube-system
spec:
limits:
- default:
memory: 150Mi
type: Container
</code></pre>
<p>(You will need to delete the already existing <strong>Pods</strong> without a memory limit for this to take effect; they will get recreated)</p>
<p>This is not going to completely eliminate the problem as you might end up with an overcommitted node; however the memory usage will make sense and the oom events will be more predictable.</p>
| Janos Lenart |
<p>I am trying to create an S3 bucket using </p>
<p><code>aws s3api create-bucket --bucket kubernetes-aws-wthamira-io</code></p>
<p>It gives this error: </p>
<pre class="lang-sh prettyprint-override"><code>An error occurred (IllegalLocationConstraintException) when calling
the CreateBucket operation: The unspecified location constraint is
incompatible for the region specific endpoint this request was sent
to.
</code></pre>
<p>I set the region using <code>aws configure</code> to <code>eu-west-1</code> </p>
<pre><code>Default region name [eu-west-1]:
</code></pre>
<p>but it gives the same error. How do I solve this?</p>
<p>I use osx terminal to connect aws </p>
| wthamira | <p>try this:</p>
<pre><code>aws s3api create-bucket --bucket kubernetes-aws-wthamira-io --create-bucket-configuration LocationConstraint=eu-west-1
</code></pre>
<p>Regions outside of <code>us-east-1</code> require the appropriate <code>LocationConstraint</code> to be specified in order to create the bucket in the desired region.</p>
<p><a href="https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html" rel="noreferrer">https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html</a></p>
| Asdfg |
<p><strong>UPDATED</strong></p>
<p>Following the <a href="https://www.youtube.com/watch?v=PitS8RiyDv8" rel="nofollow noreferrer">AWS instance scheduler</a> I've been able to set up a scheduler that starts and stops instances at the beginning and end of the day.</p>
<p>However, the instances keep being terminated and reinstalled. </p>
<p>I have an Amazon Elastic Kubernetes Service (EKS) cluster, and I discovered the following log in CloudWatch:</p>
<pre><code>13:05:30
2019-11-21 - 13:05:30.251 - INFO : Handler SchedulerRequestHandler scheduling request for service(s) rds, account(s) 612681954602, region(s) eu-central-1 at 2019-11-21 13:05:30.251936
13:05:30
2019-11-21 - 13:05:30.433 - INFO : Running RDS scheduler for account 612681954602 in region(s) eu-central-1
13:05:31
2019-11-21 - 13:05:31.128 - INFO : Fetching rds Instances for account 612681954602 in region eu-central-1
13:05:31
2019-11-21 - 13:05:31.553 - INFO : Number of fetched rds Instances is 2, number of schedulable resources is 0
13:05:31
2019-11-21 - 13:05:31.553 - INFO : Scheduler result {'612681954602': {'started': {}, 'stopped': {}}}
</code></pre>
<p>I don't know if it is my EKS that keeps rebooting my instances, but I really would love to keep them stopped until the next day.</p>
<p>How can I prevent my EC2 instances from automatically rebooting every time one has stopped? Or, even better, how can I deactivate my EKS stack automatically?</p>
<p><strong>Update:</strong></p>
<p>I discovered that EKS has a Cluster Autoscaler. Maybe this could be where the problem lies?
<a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html</a></p>
| Cédric Bloem | <p>An EKS node group creates an Auto Scaling group to manage the worker nodes. You need to specify the minimum, maximum and desired size of the worker nodes. Once any instance is stopped, the Auto Scaling group creates a new instance to match the desired size.</p>
<p>Check the doc below for details:</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html</a></p>
| Kane |
<p>I have an application deployed to Kubernetes (AKS) with a mix of gRPC and HTTP services. I initially added the route for a new gRPC service to the existing ingress, which was previously serving only HTTP. That didn't work, and digging into it, I read that we need to add the <code>nginx.ingress.kubernetes.io/backend-protocol: GRPC</code> annotation, and that it applies to all routes, so we would need two separate ingresses. I'm currently getting an <code>io.grpc.internal.ManagedChannelImpl$NameResolverListener</code> exception trying to connect to the gRPC service, with the message <code>nodename nor servname provided, or not known</code>. I'm guessing that although precedence is given to the longest matching path when multiple paths within an Ingress match a request, that doesn't apply across the two ingresses. So would I need to either use different hosts, or change the <code>/*</code> path so that it doesn't also match <code>/results</code>? Or is there something else that I need to change in my configuration?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- {{ .Values.ingress.hosts.host }}
secretName: {{ .Values.ingress.tls.secretName }}
rules:
- host: {{ .Values.ingress.hosts.host }}
http:
paths:
- path: /graphql
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }}
port:
number: 80
- path: /graphql/*
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }}
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }}
port:
number: 80
- path: /*
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }}
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}-grpc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: GRPC
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- {{ .Values.ingress.hosts.host }}
secretName: {{ .Values.ingress.tls.secretName }}
rules:
- host: {{ .Values.ingress.hosts.host }}
http:
paths:
- path: /results
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.externalResults.host }}
port:
number: 9000
</code></pre>
| bhnat | <p>This wound up being resolved by creating a second host name that pointed to our k8s cluster. I changed the route for the grpc service to be the root path and pathType of ImplementationSpecific.</p>
<pre><code> - path: /
pathType: ImplementationSpecific
</code></pre>
<p>Both host names needed to be included in the tls section of both ingress. I was getting an SSL exception after changing the route but not updating the hosts in the tls section of each ingress.</p>
<pre><code>Channel Pipeline: [SslHandler#0, ProtocolNegotiators$ClientTlsHandler#0, WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
at io.grpc.Status.asRuntimeException(Status.java:533)
at akka.grpc.internal.UnaryCallAdapter.onClose(UnaryCallAdapter.scala:40)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413)
| => cat io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:742)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:721)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
stderr:
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLHandshakeException: General OpenSslEngine problem
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.handshakeException(ReferenceCountedOpenSslEngine.java:1771)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:776)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:1079)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.wrapNonAppData(SslHandler.java:970)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1443)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1275)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1322)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
stderr:
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
stderr:
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching grpc.aks.dev.app.cycleautomation.com found.
stderr:
at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:214)
at sun.security.util.HostnameChecker.match(HostnameChecker.java:96)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:462)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:428)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:261)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:144)
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSslTlsv13X509ExtendedTrustManager.checkServerTrusted(OpenSslTlsv13X509ExtendedTrustManager.java:223)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:261)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:700)
at io.grpc.netty.shaded.io.netty.internal.tcnative.SSL.readFromSSL(Native Method)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:595)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1202)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1324)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1367)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:206)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1380)
... 21 more
Suppressed: javax.net.ssl.SSLHandshakeException: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.sslReadErrorResult(ReferenceCountedOpenSslEngine.java:1287)
at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1248)
... 25 more
</code></pre>
<p>The final yaml looked like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- {{ .Values.ingress.hosts.host }}
- {{ .Values.ingress.grpc.host }}
secretName: {{ .Values.ingress.tls.secretName }}
rules:
- host: {{ .Values.ingress.hosts.host }}
http:
paths:
- path: /graphql
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }}
port:
number: 80
- path: /graphql/*
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }}
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }}
port:
number: 80
- path: /*
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }}
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}-grpc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: GRPC
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- {{ .Values.ingress.hosts.host }}
- {{ .Values.ingress.grpc.host }}
secretName: {{ .Values.ingress.tls.secretName }}
rules:
- host: {{ .Values.ingress.hosts.host }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ .Release.Name }}-{{ .Values.services.externalResults.host }}
port:
number: 9000
</code></pre>
<p>Then I was able to connect to the grpc service over port 443 w/ tls enabled and just using the host name with no path in my connection.</p>
| bhnat |
<p>An ingress controller is a Layer 7 construct. Does it bypass the Service (VIP) and Layer 4 kube proxy?</p>
| Compendius | <p>In a nutshell: Ingress deals with North-South traffic (bringing traffic from the outside world into the cluster), a service acts a load balancer, routing the traffic to one of its pods. So, if I understand your question correctly, the answer is no: Ingress and services work together to get traffic from a client outside of the cluster to a certain pod.</p>
<p>You can read more about the topic in an excellent blog post series by Mark Betz (linked from <a href="https://mhausenblas.info/cn-ref/" rel="noreferrer">here</a>, in the "3rd-party articles" section).</p>
| Michael Hausenblas |
<p>I created an EKS cluster but while deploying pods, I found out that the native AWS CNI only supports a set number of pods because of the IP restrictions on its instances. I don't want to use any third-party plugins because AWS doesn't support them and we won't be able to get their tech support. What happens right now is that as soon as the IP limit is hit for that instance, the scheduler is not able to schedule the pods and the pods go into pending state.</p>
<p>I see there is a cluster autoscaler which can do horizontal scaling.</p>
<pre><code>https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
</code></pre>
<p>Using a larger instance type with more available IPs is an option but that is not scalable since we will run out of IPs eventually.
Is it possible to set a pod limit for each node in cluster-autoscaler so that, if that limit is reached, a new instance is spawned? Since each pod uses one secondary IP of the node, that would solve our issue of not having to worry about scaling. Is this a viable option? Also, if anybody has faced this, please share how you overcame this limitation.</p>
| Anshul Tripathi | <p>An EKS node group uses an Auto Scaling group for node scaling.</p>
<p>You can follow <a href="https://eksworkshop.com/scaling/deploy_ca/" rel="nofollow noreferrer">this workshop</a> as a dedicated example.</p>
| Kane |
<p>I have installed a K8S cluster on laptop using Kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running to see the documentation as shown below.</p>
<pre><code>praveensripati@praveen-ubuntu:~$ kubectl explain pods
Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host
</code></pre>
<p>Any workaround for this?</p>
| Praveen Sripati | <p>So the rather sobering news is that AFAIK there's no out-of-the-box way to do it, though you could totally write a <code>kubectl</code> plugin (it has become rather trivial now in 1.12). But for now, the best I can offer is the following:</p>
<pre><code># figure out which endpoint kubectl uses to retrieve docs:
$ kubectl -v9 explain pods
# from above I learn that in my case it's apparently
# https://192.168.64.11:8443/openapi/v2 so let's curl that:
$ curl -k https://192.168.64.11:8443/openapi/v2 > resources-docs.json
</code></pre>
<p>From here you can, for example, <a href="http://andrew.gibiansky.com/blog/command-line/jq-primer/" rel="nofollow noreferrer">use jq</a> to query for the descriptions. It's not as nice as a proper explain, but it is a good enough workaround until someone writes an offline docs-query kubectl plugin.</p>
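<p>As a small sketch of what those jq queries could look like (the resource key below assumes the standard OpenAPI v2 definition names, e.g. <code>io.k8s.api.core.v1.Pod</code>):</p>
<pre><code># list the available definitions that mention "Pod"
jq -r '.definitions | keys[]' resources-docs.json | grep -i pod

# print the description of the Pod resource
jq -r '.definitions["io.k8s.api.core.v1.Pod"].description' resources-docs.json
</code></pre>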
| Michael Hausenblas |
<p>Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs?</p>
<p>The <a href="http://kubernetes.io/docs/user-guide/jobs/" rel="noreferrer">K8s Job Documentation</a> states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. Because I am running thousands of Jobs a day via CronJobs and I don't want to keep completed Jobs around.</p>
| Josh Newman | <p>You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See my answer <a href="https://stackoverflow.com/a/43115763/379037">here</a>. Documentation is <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">here</a>.</p>
<p>To set the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="noreferrer">history limits</a>:</p>
<blockquote>
<p>The <code>.spec.successfulJobsHistoryLimit</code> and <code>.spec.failedJobsHistoryLimit</code> fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to <code>0</code> corresponds to keeping none of the corresponding kind of jobs after they finish.</p>
</blockquote>
<p>The config with 0 limits would look like:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
| JJC |
<p>I want to know how a label can be set while a namespace is being created in e2e.
This line of code simply creates a namespace <a href="https://github.com/kubernetes/kubernetes/blob/v1.25.0/test/e2e/framework/framework.go#L239" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.25.0/test/e2e/framework/framework.go#L239</a> and it picks the default <code>pod-security.kubernetes.io/</code> setting, which is <code>restricted</code>. I want the test framework to create the namespace with <code>pod-security.kubernetes.io</code> set to <code>Privileged</code>.</p>
| ambikanair | <p>This label is set by an admission controller. Setting it manually will cause the controller to reject your pod.</p>
<p>To get the correct Pod Security Policy (PSP) assigned you need RBAC rules that allows <code>use</code> on that PSP: <a href="https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#via-rbac" rel="nofollow noreferrer">https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#via-rbac</a></p>
<p>Also, when several PSPs can be used with a particular Pod, they are applied in lexicographical order: <a href="https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#policy-order" rel="nofollow noreferrer">https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#policy-order</a></p>
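<p>For reference, a minimal sketch of the "use" RBAC rule mentioned above (the PSP name <code>privileged</code> is just an assumed example; bind this role to the ServiceAccount your test pods run under):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp-user
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]   # assumed PSP name
  verbs: ["use"]
</code></pre>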
| Janos Lenart |
<p>A word of warning, this is my first posting, and I am new to docker and Kubernetes with enough knowledge to get me into trouble.
I am confused about where docker container images are stored and how to list them.</p>
<p>To illustrate my confusion I start with the confirmation that "docker images" indicates no image for nginx is present.
Next I create a pod running nginx.</p>
<p><code>kubectl run nginx --image=nginx</code> is successful in pulling the image "nginx" from GitHub (or that's my assumption):</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/nginx to minikube
Normal Pulling 8s kubelet Pulling image "nginx"
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 833.30993ms
Normal Created 7s kubelet Created container nginx
Normal Started 7s kubelet Started container nginx
</code></pre>
<p>Even though the above output indicates the image is pulled, issuing "docker images" does not include <code>nginx</code> in the output.</p>
<p>If I understand correctly, when an image is pulled, it is being stored on my local disk. In my case (Linux) in <code>/var/lib/docker</code>.</p>
<p>So my first question is, why doesn't <code>docker images</code> list it in the output, or is the better question where does <code>docker images</code> look for images?</p>
<p>Next, if I issue a <code>docker pull</code> for <code>nginx</code> it is pulled from what I assume to be GitHub. <code>docker images</code> now includes it in its output.</p>
<p>Just for my clarification, nothing up to this point involves a private local registry, correct?</p>
<p>I purposefully create a basic local Docker registry using the docker registry container, thinking it would be clearer since it allows me to explicitly specify a registry, but this only results in another issue:</p>
<pre class="lang-shell prettyprint-override"><code>docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /registry:/var/lib/registry \
registry
</code></pre>
<p>I tag and push the <code>nginx</code> image to my newly created local registry:</p>
<pre class="lang-shell prettyprint-override"><code>docker tag nginx localhost:5000/nginx:latest
docker push localhost:5000/nginx:latest
The push refers to repository [localhost:5000/nginx]
2bed47a66c07: Pushed
82caad489ad7: Pushed
d3e1dca44e82: Pushed
c9fcd9c6ced8: Pushed
0664b7821b60: Pushed
9321ff862abb: Pushed
latest: digest: sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47 size: 1570
</code></pre>
<p>I now delete the original <code>nginx</code> image:</p>
<pre><code>docker rmi nginx
Untagged: nginx:latest
Untagged: nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
</code></pre>
<p>... and the newly tagged one:</p>
<pre><code>docker rmi localhost:5000/nginx
Untagged: localhost:5000/nginx:latest
Untagged: localhost:5000/nginx@sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47
Deleted: sha256:f652ca386ed135a4cbe356333e08ef0816f81b2ac8d0619af01e2b256837ed3e
</code></pre>
<p>... but from where are they being deleted?
Now the image <code>nginx</code> should only be present in localhost:5000/, right? But <code>docker images</code> doesn't show it in its output.</p>
<p>Moving on, I try to create the nginx pod once more using the image pushed to <code>localhost:5000/nginx:latest</code>.</p>
<pre><code>kubectl run nginx --image=localhost:5000/nginx:latest --image-pull-policy=IfNotPresent
</code></pre>
<p>This is the new issue. The connection to localhost:5000 is refused.</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 1s kubelet Pulling image "localhost:5000/nginx:latest"
Warning Failed 1s kubelet Failed to pull image "localhost:5000/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: Get "http://localhost:5000/v2/": dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 1s kubelet Error: ErrImagePull
Normal BackOff 0s kubelet Back-off pulling image "localhost:5000/nginx:latest"
</code></pre>
<p>Why is it that I can pull and push to localhost:5000, but pod creation fails with what appears to be an authorization issue? I try logging into the registry, but no matter what I use for the username and password, login is successful. This confuses me even more.
I would try creating/specifying an <code>imagePullSecret</code>, but based on the docker login outcome, that doesn't seem to make sense.</p>
<p>Clearly I am not getting it.
Someone please have pity on me and show where I have lost my way.</p>
| RoyalCoachman | <p>I will try to bring some clarity to you despite the fact your question already contains about 1000 questions (and you'll probably have 1000 more after my answer :D)</p>
<p>Before you can begin to understand any of this, you need to learn a few basic things:</p>
<ol>
<li><p>Docker produces images which are used by containers - it is similar to a virtual machine, but more lightweight (I'm oversimplifying, but the TL;DR is pretty much that).</p>
</li>
<li><p>Kubernetes is an orchestration tool - it is responsible for starting containers (by using already built images) and tracking their state (i.e. if this container has crashed it should be restarted, or if it's not started it should be started, etc)</p>
</li>
</ol>
<p>Docker can run on any machine. To be able to start a <code>container</code> you need to build an image first. The image is essentially a lightweight mini OS (i.e. alpine, ubuntu, windows, etc) which is configured with only those dependencies you need to run your application. This image is then pushed to a public repository/registry (hub.docker.com) or to a private one. And afterwards it's used for starting containers.</p>
<p>Kubernetes builds on top of this and adds the "automation" layer which is responsible for scheduling and monitoring the containers. For example, you have a group of 10 servers all running <code>nginx</code>. One of those servers restarts - the <code>nginx</code> container will be automatically started by k8s.</p>
<p>A kubernetes cluster is the group of physical machines that are dedicated to the mentioned logical cluster. These machines have <code>labels</code> or <code>tags</code> which define the purpose of physical node and work as a constraint for where a container will be scheduled.</p>
<p>Now that I have explained the minimum basics in an oversimplified way I can move with answering your questions.</p>
<ol>
<li><p>When you do <code>docker run nginx</code> - you are instructing docker to pull the <code>nginx</code> image from <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">https://hub.docker.com/_/nginx</a> and then start it on the machine you executed the command on (usually your local machine).</p>
</li>
<li><p>When you do <code>kubectl run nginx --image=nginx</code> - you are instructing Kubernetes to do something similar to <code>1.</code> but in a cluster. The container will be deployed to a <code>random</code> machine somewhere in the cluster unless you put a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a> or configure <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">affinity</a>. If you put a <code>nodeSelector</code> this container (called Pod in K8S) will be placed on that specific <code>node</code>. A minimal <code>nodeSelector</code> example is sketched after this list.</p>
</li>
<li><p>You have started a <code>private registry</code> server on your local machine. It is crucial to know that <code>localhost</code> inside a <code>container</code> will point to the <code>container</code> itself.</p>
</li>
<li><p>It is worth mentioning that some of the <code>kubernetes</code> commands will create their own <code>container</code> for the execution phase of the command. (remember this!)</p>
</li>
<li><p>When you run <code>kubectl run nginx --image=nginx</code> everything works fine, because it is downloading the image from <code>https://hub.docker.com/_/nginx</code>.</p>
</li>
<li><p>When you run <code>kubectl run nginx --image=localhost:5000/nginx</code> you are telling kubernetes to instruct docker to look for the image at <code>localhost</code> which is ambiguous because you have multiple layers of <code>containers</code> running (check 4.). This means the command that will do <code>docker pull localhost:5000/nginx</code> also runs in a docker container -- so there is no service running at port <code>:5000</code> (the registry is running in a completely different isolated container!) :D</p>
</li>
<li><p>And this is why you are getting <code>Error: ErrImagePull</code> - it can't resolve <code>localhost</code> as it points to itself.</p>
</li>
<li><p>As for the <code>docker rmi nginx</code> and <code>docker rmi localhost:5000/nginx</code> commands - by running them you removed your <code>local copy</code> of the <code>nginx</code> images.</p>
</li>
<li><p>If you run <code>docker run localhost:5000/nginx</code> on the machine where you started <code>docker run registry</code> you should get a running nginx container.</p>
</li>
</ol>
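<p>As promised in point 2, here is a minimal sketch of a Pod pinned to a labelled node (the <code>disktype: ssd</code> label is an assumed example; it has to exist on the target node, e.g. via <code>kubectl label nodes my-node disktype=ssd</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd   # assumed label on the target node
  containers:
  - name: nginx
    image: nginx
</code></pre>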
<p>You should definitely read the <a href="https://docs.docker.com/get-docker/" rel="nofollow noreferrer">Docker Guide</a> <strong>BEFORE</strong> you try to dig into Kubernetes or nothing will ever make sense.</p>
<p>Your head will stop hurting after that I promise... :D</p>
| tftd |
<p>Basically, when using Google Cloud Build, how do I read a value that was written in an earlier build step in subsequent steps? </p>
<p>Specifically, I'd like to make a custom image tag that's based on a combination of the timestamp and $SHORT_SHA. Something like the below. Though, it doesn't work, as docker complains about "export", and, even if that worked, it likely will be a different env:</p>
<pre><code> # Setting tag in a variable:
- name: 'ubuntu'
args: ['export', '_BUILD_TAG=`date', '-u', '+%Y%m%dT%H%M%S_$SHORT_SHA`']
</code></pre>
<p>Then, in a later step:</p>
<pre><code> # Using tag from the variable:
- name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BUILD_TAG', '.']
</code></pre>
<p>So, how do I use the output of one step in another? I could write the contents of <code>date</code> to a file, and then read it, but I'm back at not knowing how to set the variable from the file I read (or otherwise interpolate its results to form the argument to docker build). </p>
| JJC | <p>I never found a way to set an environment variable in one build step that can be read in other steps, but I ended up accomplishing the same effect by building on Konstantin's answer in the following way: </p>
<p>In an early step, I generate and write my date-based tag to a file. The filesystem (/workspace) is retained between steps, and serves as store of my environment variable. Then, in each step that I need to reference that value, I cat that file in place. The trick is to use sh or bash as the entrypoint in each container so that the sub-shell that reads from the file can execute. </p>
<p>Here's an example:</p>
<pre><code>## Set build tag and write to file _TAG
- name: 'ubuntu'
args: ['bash', '-c', 'date -u +%Y%m%dT%H%M_$SHORT_SHA > _TAG']
...
# Using the _TAG during Docker build:
- name: gcr.io/cloud-builders/docker
entrypoint: sh
args: ['-c', 'docker build -t gcr.io/$PROJECT_ID/image_name:$(cat _TAG) .']
</code></pre>
<p>A caveat to note is that if you are doing the bash interpolation in this way within, say, a JSON object or something that requires double quotes, you need the subshell call to never be surrounded by single quotes when executed in the container, only double, which may require escaping the internal double quotes to build the JSON object. Here's an example where I patch the kubernetes config using the _TAG file value to deploy the newly-build image: </p>
<pre><code>- name: gcr.io/cloud-builders/kubectl
entrypoint: bash
args: ['-c', 'gcloud container clusters get-credentials --zone $$CLOUDSDK_COMPUTE_ZONE $$CLOUDSDK_CONTAINER_CLUSTER ; kubectl patch deployment deployment_name -n mynamespace -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"image_name\",\"image\":\"gcr.io/$PROJECT_ID/image_name:$(cat _TAG)\"}]}}}}}"']
env:
- 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
- 'CLOUDSDK_CONTAINER_CLUSTER=my-google-proj-cluster-name'
</code></pre>
| JJC |
<p>Currently I run a curl container and connect directly to its terminal to verify connectivity to services and check that we can connect on some port to an external service or a service maintained by some other team.</p>
<pre><code>kubectl run curl -it --rm --image=curlimages/curl -- sh
</code></pre>
<p>Now the problem is that I have to run a curl container on a Node that has taints and tolerations enabled. Is there a way to run this container on by providing tolerations from the kubectl cli?</p>
<p>For reference, I am using the AKS service and we use helm for deployment. In order to schedule workloads on the tainted nodes we use tolerations and node affinity in combination. Configs are given below.</p>
<pre><code> spec:
tolerations:
- key: "mendix"
operator: "Equal"
value: "true"
effect: "NoSchedule"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: appType
operator: In
values:
- mendix
</code></pre>
| Faisal | <p>You can do something like this, if you need to run it on a specific node that is tainted (it will run despite any taints):</p>
<pre><code>kubectl run curl -it --rm --image=curlimages/curl --overrides \
'{"spec":{"tolerations":[{"operator":"Exists"}]},"nodeName":"mynode"}' \
-- sh
</code></pre>
| Janos Lenart |
<p>I have a container with a backend processing application that only connects to other services, but does not expose any ports it listens to. For example in my case it connects to a JMS broker and uses the Rest API of another service. </p>
<p>I want to deploy that container along with the JMS broker and the server with the Rest API to kubernetes. Therefore I'm currently having these kubernetes API objects for the backend processing application:</p>
<pre><code>---
kind: "Deployment"
apiVersion: "extensions/v1beta1"
metadata:
name: "foo-processing-module"
namespace: "foo-4"
labels:
foo.version: "0.0.1-SNAPSHOT"
k8s-app: "foo-processing-module"
annotations:
deployment.kubernetes.io/revision: "1"
description: "Processing Modules App for foo"
spec:
replicas: 1
selector:
matchLabels:
foo.version: "0.0.1-SNAPSHOT"
k8s-app: "foo-processing-module"
template:
metadata:
name: "foo-processing-module"
labels:
foo.version: "0.0.1-SNAPSHOT"
k8s-app: "foo-processing-module"
annotations:
description: "Processing Modules App for foo"
spec:
containers:
-
name: "foo-processing-module"
image: "foo/foo-processing-module-docker:0.0.1-SNAPSHOT"
resources: {}
terminationMessagePath: "/dev/termination-log"
terminationMessagePolicy: "File"
imagePullPolicy: "IfNotPresent"
securityContext:
privileged: false
restartPolicy: "Always"
terminationGracePeriodSeconds: 30
dnsPolicy: "ClusterFirst"
securityContext: {}
schedulerName: "default-scheduler"
strategy:
type: "RollingUpdate"
rollingUpdate:
maxUnavailable: "25%"
maxSurge: "25%"
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: "Service"
apiVersion: "v1"
metadata:
name: "foo-processing-module"
namespace: "foo-4"
labels:
foo.version: "0.0.1-SNAPSHOT"
k8s-app: "foo-processing-module"
annotations:
description: "Processing Modules App for foo"
spec:
selector:
foo.version: "0.0.1-SNAPSHOT"
k8s-app: "foo-processing-module"
type: "LoadBalancer"
sessionAffinity: "None"
externalTrafficPolicy: "Cluster"
</code></pre>
<p>However when I use <code>kubectl create</code> I get the following error message when the above API objects should be created:</p>
<pre><code>Error from server (Invalid): error when creating "foo.yml": Service "foo-processing-module" is invalid: spec.ports: Required value
error converting YAML to JSON: yaml: line 22: did not find expected <document start>
</code></pre>
<p>What do I have to do to resolve the error? Is a kubernetes <code>Service</code> even the correct API object to use in this case?</p>
| SpaceTrucker | <p>Simply remove the entire <code>Service</code> object. Since you have an app that doesn't need to communicate via the network, you don't need a service. Think of the service as a kind of specialized load-balancer in front of an (HTTP?) API your pods expose. Since you don't have that API, you don't need it. The <code>Deployment</code> does the actual supervision of the worker pods, that is, whatever goes on in <code>foo/foo-processing-module-docker:0.0.1-SNAPSHOT</code>.</p>
<p>Also, always use <code>kubectl apply</code> and not <code>create</code> and if you want to keep track of the revisions deployed, add the <code>--record</code> option so you can access the history.</p>
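<p>For example, something along these lines (using the file and deployment names from your question):</p>
<pre><code>kubectl apply -f foo.yml --record
# later, inspect the recorded revisions
kubectl rollout history deployment/foo-processing-module -n foo-4
</code></pre>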
| Michael Hausenblas |
<p>I plan to create a Kubernetes secret from appsettings.json, but how should the file be stored before it is used by the DevOps pipelines?</p>
<p>I want to store the appsettings.json somewhere the pipeline can fetch it from to create the secret.</p>
| agungardiyanta | <p>Is your k8s cluster hosted in a cloud? If so, you should be using AWS Secrets Manager or Azure Key Vault to store secret settings.</p>
<p>If that is not an option, create an encrypted configuration provider that would allow encrypting data inside the <code>appsettings.json</code>. Here are some examples: <a href="https://stackoverflow.com/questions/36062670/encrypted-configuration-in-asp-net-core">Encrypted configuration in ASP.NET Core</a></p>
<p>Kubernetes secrets are not really secrets and are accessible to anyone who has access to the infrastructure. <a href="https://auth0.com/blog/kubernetes-secrets-management/" rel="nofollow noreferrer">https://auth0.com/blog/kubernetes-secrets-management/</a></p>
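<p>If you do stick with a plain Kubernetes secret, the pipeline step that turns the fetched file into a secret could be a simple sketch like this (the secret name is a placeholder):</p>
<pre><code>kubectl create secret generic appsettings \
  --from-file=appsettings.json \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre>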
| Dmitry S. |
<p>Right now, I can add my ip using</p>
<pre><code>gcloud container clusters update core-cluster --zone=asia-southeast1-a --enable-master-authorized-networks --master-authorized-networks w.x.y.z/32
</code></pre>
<p>but it overrides all the existing authorized networks that was already there.</p>
<p>Is there any way to append the new ip to the existing list of authorized networks?</p>
| Krishna | <p>You could automate what @Gari Singh said using gcloud, jq and tr. See below for doing it with CLI:</p>
<pre class="lang-bash prettyprint-override"><code>NEW_CIDR=8.8.4.4/32
export CLUSTER=test-psp
OLD_CIDR=$(gcloud container clusters describe $CLUSTER --format json | jq -r '.masterAuthorizedNetworksConfig.cidrBlocks[] | .cidrBlock' | tr '\n' ',')
echo "The existing master authorized networks were $OLD_CIDR"
gcloud container clusters update $CLUSTER --master-authorized-networks "$OLD_CIDR$NEW_CIDR" --enable-master-authorized-networks
</code></pre>
| Sam Stoelinga |
<p>I am trying to run a Spark job on a separate master Spark server hosted on kubernetes but port forwarding reports the following error:</p>
<pre class="lang-none prettyprint-override"><code>E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused
</code></pre>
<p>My setup is:</p>
<ul>
<li>I am using VS Code with a dev container to manage a setup where I can run Spark applications. I can run local spark jobs when I build my context like so : <code>sc = pyspark.SparkContext(appName="Pi")</code></li>
<li>My host computer is running Docker Desktop where I have kubernetes running and used Helm to run the Spark release from Bitnami <a href="https://artifacthub.io/packages/helm/bitnami/spark" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/spark</a></li>
<li>The VS Code dev container <strong>can</strong> access the host correctly since I can do <code>curl host.docker.internal:80</code> and I get the Spark web UI status page. The port 80 is forwarded from the host using <code>kubectl port-forward --namespace default --address 0.0.0.0 svc/my-release-spark-master-svc 80:80</code></li>
<li>I am also forwarding the port <code>7077</code> using a similar command <code>kubectl port-forward --address 0.0.0.0 svc/my-release-spark-master-svc 7077:7077</code>.</li>
</ul>
<p>When I create a Spark context like this <code>sc = pyspark.SparkContext(appName="Pi", master="spark://host.docker.internal:7077")</code> I am expecting Spark to submit jobs to that master. I don't know much about Spark but I have seen a few examples creating a context like this.</p>
<p>When I run the code, I see connection attempts failing at port 7077 of the Kubernetes port forwarding, so the requests are going through but they are being refused somehow.</p>
<pre class="lang-none prettyprint-override"><code>Handling connection for 7077
E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused
</code></pre>
<p>Now, I have no idea why the connections are being refused. I know the Spark server is accepting requests because I can see the Web UI from within the docker dev container. I know that the Spark service is exposing port 7077 because I can do:</p>
<pre class="lang-none prettyprint-override"><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
my-release-spark-headless ClusterIP None <none> <none> 7h40m
my-release-spark-master-svc ClusterIP 10.108.109.228 <none> 7077/TCP,80/TCP 7h40m
</code></pre>
<p>Can anyone tell why the connections are refused and how I can successfully configure the Spark master to accept jobs from external callers ?</p>
<p>Example code I am using is:</p>
<pre class="lang-py prettyprint-override"><code>import findspark
findspark.init()
import pyspark
import random
#sc = pyspark.SparkContext(appName="Pi", master="spark://host.docker.internal:7077")
sc = pyspark.SparkContext(appName="Pi")
num_samples = 100000000
def inside(p):
x, y = random.random(), random.random()
return x*x + y*y < 1
count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)
sc.stop()
</code></pre>
| Tristan Dubé | <p>After tinkering with it a bit more, I noticed this output when launching the helm chart for Apache Spark <code>** IMPORTANT: When submit an application from outside the cluster service type should be set to the NodePort or LoadBalancer. **</code>.</p>
<p>This led me to research a bit more into Kubernetes networking. To submit a job, it is not sufficient to forward port 7077. Instead, the cluster itself needs to have an IP assigned. This requires the helm chart to be launched with the following commands to set Spark config values <code>helm install my-release --set service.type=LoadBalancer --set service.loadBalancerIP=192.168.2.50 bitnami/spark</code>. My host IP address is above and will be reachable by the Docker container.</p>
<p>With the LoadBalancer IP assigned, Spark will run using the example code provided.</p>
<p>Recap: Don't use port forwarding to submit jobs, a Cluster IP needs to be assigned.</p>
| Tristan Dubé |
<p>I am learning kubernetes by playing with minikube.</p>
<p>This is my pod deployment file which is fine.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 2
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: myapp
image: myid/myimage
</code></pre>
<p>I am exposing the above pods using NodePort. I am able to access them using the minikube IP at port 30002.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-ip-service
spec:
type: NodePort
externalIPs:
- 192.168.99.100
selector:
component: web
ports:
- port: 3000
nodePort: 30002
targetPort: 8080
</code></pre>
<p>Now I would like to use ingress to access the application at port 80, which will forward the request to the ip-service at port 3000. It does NOT work</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: my-ip-service
servicePort: 3000
</code></pre>
<p>If I try to access the ingress, the address is blank. </p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 41m
</code></pre>
<p>How do I use ingress with minikube? Or how do I bind the minikube IP to the ingress service, so that the app can be exposed externally without using NodePort?</p>
| KitKarson | <p>You can get your minikube node's IP address with:</p>
<pre><code>minikube ip
</code></pre>
<p>The ingress' IP address will not populate in minikube because minikube lacks a load balancer. If you'd like something that behaves like a load balancer for your minikube cluster, <a href="https://github.com/knative/serving/blob/master/docs/creating-a-kubernetes-cluster.md#loadbalancer-support-in-minikube" rel="nofollow noreferrer">https://github.com/knative/serving/blob/master/docs/creating-a-kubernetes-cluster.md#loadbalancer-support-in-minikube</a> suggests running the following commands to patch your cluster:</p>
<pre><code>sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
</code></pre>
| Seth Difley |
<p>Can I set the default namespace? That is:</p>
<pre><code>$ kubectl get pods -n NAMESPACE
</code></pre>
<p>It saves me having to type it in each time especially when I'm on the one namespace for most of the day.</p>
| mac | <p>Yes, you can set the namespace <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference" rel="noreferrer">as per the docs</a> like so:</p>
<pre><code>$ kubectl config set-context --current --namespace=NAMESPACE
</code></pre>
<p>Alternatively, you can use <a href="https://github.com/ahmetb/kubectx" rel="noreferrer">kubectx</a> for this.</p>
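<p>The kubectx project ships with a companion <code>kubens</code> helper for exactly this, so switching (and switching back) is a one-liner:</p>
<pre><code>kubens NAMESPACE   # set the default namespace for the current context
kubens -           # switch back to the previous namespace
</code></pre>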
| Michael Hausenblas |
<p>I want to drain a Node from the Node itself. I therefore created a Service Account and added the token to the .kube/config file on the Node. I also creaded the Role Binding.</p>
<p>But I can't figure out the right permissions.
I tried this so far but it didn't work.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: admin-clusterrole
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["drain"]
</code></pre>
<p>What would be the correct permissions for that?<br />
Thanks :)</p>
<p>Edit 1:</p>
<p>RoleBinding:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: node-drainer-clusterrole-bind
namespace: default
subjects:
- kind: ServiceAccount
name: node-drainer-sa
namespace: default
roleRef:
kind: ClusterRole
name: system:node-drainer
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>ServiceAccount:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: node-drainer-sa
</code></pre>
| erkum | <p>First off, you should not use the name <code>admin-clusterrole</code> for this ClusterRole, because you risk locking yourself out of your own cluster by overwriting default bindings.</p>
<p>Here's a ClusterRole which should be able to drain a Node. Let me know if it doesn't work for you.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:node-drainer
rules:
# Needed to evict pods
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
# Needed to list pods by Node
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
# Needed to cordon Nodes
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "patch"]
# Needed to determine Pod owners
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list"]
# Needed to determine Pod owners
- apiGroups: ["extensions"]
resources: ["daemonsets", "replicasets"]
verbs: ["get", "list"]
</code></pre>
<p>You can determine which APIs are used by a kubectl command by using the verbosity levels.</p>
<p>For example:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl drain node my-node -v=10
</code></pre>
<p>From here you can inspect the HTTP requests made by kubectl.</p>
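<p>One more note on the binding (based on the RoleBinding shown in your question): Nodes are not namespaced and the evictions touch Pods in every namespace, so you will most likely want a ClusterRoleBinding rather than a namespaced RoleBinding. A sketch reusing your names:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-drainer-binding
subjects:
- kind: ServiceAccount
  name: node-drainer-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: system:node-drainer
  apiGroup: rbac.authorization.k8s.io
</code></pre>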
| OregonTrail |
<p>Is it possible to specify the Pod creation time as part of the k8s Pod Name? </p>
<p><strong>Scenario:</strong><br>
I have many pods with the same name prefix (and uniquely generated tail-end of the name) and these are all names of log groups. </p>
<p>I wish to distinguish between log groups by creation time. </p>
<p>Unfortunately AWS CloudWatch Logs console does not sort by log group creation time. </p>
| cryanbhu | <p>No, not with a deployment at least, a stateful set would work but you should really be using labels here.</p>
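<p>If the goal is just to tell the groups apart by age, a rough client-side workaround is to sort on the creation timestamp:</p>
<pre><code>$ kubectl get pods --sort-by=.metadata.creationTimestamp \
    -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp
</code></pre>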
| Michael Hausenblas |
<p>As far as I know: </p>
<ul>
<li>deploymentconfig → replicationcontroller → pod</li>
</ul>
<p>vs.</p>
<ul>
<li>deployment → replicaset → pod</li>
</ul>
<p>Otherwise, do these two resources have additional differences?</p>
<p>The more detail the better.</p>
| Weiwei Jiang | <p>A <a href="https://docs.openshift.com/container-platform/3.9/dev_guide/deployments/how_deployments_work.html" rel="noreferrer">DeploymentConfig</a> (DC) in OpenShift is more or less equivalent to a Kubernetes <code>Deployment</code>, nowadays. Main difference (besides that one is using <code>ReplicationController</code> and the other using <code>ReplicaSet</code> as you rightly pointed out) is that </p>
<ol>
<li><p>there are a few things you can do with a <code>DeploymentConfig</code> (around triggers) that you can't do with a <code>Deployment</code>. </p></li>
<li><p><code>DeploymentConfig</code>'s are first-class citizens in the Web console.</p></li>
</ol>
<p>The reason <code>DeploymentConfig</code>'s exist is because we (Red Hat) are innovating. In other words: <code>DeploymentConfig</code>'s predate <code>Deployment</code>'s and while we're always trying to propose these innovations upstream, they are not always accepted by the community as is. For example, in the case of RBAC, the stuff we had in OpenShift was accepted upstream and that's why you have the same RBAC resources etc. now in OpenShift and Kubernetes. With <code>DeploymentConfig</code>'s that was not the case. Over time one would expect that <code>DeploymentConfig</code>'s are phased out in favor of <code>Deployment</code>'s but I can't give you a timeline. If portability is your primary concern, I'd say, use <code>Deployment</code>'s.</p>
| Michael Hausenblas |
<p>I ran into two types of strange situations when deploying Vault in Kubernetes and using the <code>Kubernetes Auth</code> method.</p>
<blockquote>
<p>Kubernetes version: v1.25.6<br />
Vault version: v1.12.1</p>
</blockquote>
<h3><strong>1. It kept getting 403 <code>permission denied</code> from <code>/v1/auth/kubernetes/login</code> for about 30 minutes before suddenly getting the desired secrets successfully at the <code>vault-agent-init</code> stage. Sometimes it never succeeded, even after several hours.</strong></h3>
<p><strong>Error:</strong></p>
<pre class="lang-bash prettyprint-override"><code>==> Vault agent started! Log data will stream in below:
==> Vault agent configuration:
Cgo: disabled
Log Level: info
Version: Vault v1.12.1, built 2022-10-27T12:32:05Z
Version Sha: e34f8a14fb7a88af4640b09f3ddbb5646b946d9c
2023-04-03T15:42:38.374Z [INFO] sink.file: creating file sink
2023-04-03T15:42:38.374Z [INFO] sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
2023-04-03T15:42:38.374Z [INFO] template.server: starting template server
2023-04-03T15:42:38.375Z [INFO] (runner) creating new runner (dry: false, once: false)
2023-04-03T15:42:38.374Z [INFO] sink.server: starting sink server
2023-04-03T15:42:38.375Z [INFO] auth.handler: starting auth handler
2023-04-03T15:42:38.375Z [INFO] auth.handler: authenticating
2023-04-03T15:42:38.375Z [INFO] (runner) creating watcher
2023-04-03T15:42:38.381Z [ERROR] auth.handler: error authenticating:
error=
| Error making API request.
|
| URL: PUT http://vault.vault.svc:8200/v1/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
backoff=1s
2023-04-03T15:42:39.381Z [INFO] auth.handler: authenticating
2023-04-03T15:42:39.383Z [ERROR] auth.handler: error authenticating:
error=
| Error making API request.
|
| URL: PUT http://vault.vault.svc:8200/v1/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
backoff=1.62s
</code></pre>
<h3><strong>2. Sometimes it got authenticated at <code>/v1/auth/kubernetes/login</code> quickly, but then threw an error like:</strong></h3>
<pre class="lang-bash prettyprint-override"><code># (Can't get the output now, but something like this:)
vault.read(myapp/data/postgres/config), vault.read(myapp/data/postgres/config)
URL: GET http://vault.vault.svc:8200/v1/myapp/data/postgres/config
Code: 403. Errors:
* permission denied
</code></pre>
<hr />
<h2><strong>How I installed Vault in namespace vault:</strong></h2>
<p><strong>helm vaules.yaml:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>ui:
enabled: true
server:
logLevel: trace
ha:
enabled: true
replicas: 3
raft:
enabled: true
dataStorage:
storageClass: cstor-csi
auditStorage:
storageClass: cstor-csi
authDelegator:
enabled: true
injector:
enabled: true
logLevel: trace
</code></pre>
<pre class="lang-bash prettyprint-override"><code>helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
# Install a spceified version vault in namespace `vault`.
helm upgrade --install vault hashicorp/vault --namespace vault -f vault-values.yaml --version 0.23.0 --create-namespace
# Unseal
kubectl exec -ti vault-0 -n vault -- vault operator init > keys.txt
kubectl exec -ti vault-1 -n vault -- vault operator init >> keys.txt
kubectl exec -ti vault-2 -n vault -- vault operator init >> keys.txt
kubectl exec -ti vault-0 -n vault -- vault operator unseal
kubectl exec -ti vault-1 -n vault -- vault operator unseal
kubectl exec -ti vault-2 -n vault -- vault operator unseal
kubectl exec -it vault-0 -n vault -- /bin/sh
vault login
vault auth enable kubernetes
# Do this as document says:
vault write auth/kubernetes/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
vault secrets enable -path=myapp kv-v2
vault kv put myapp/postgres/config POSTGRES_DB="myapp" POSTGRES_USER="myapp" POSTGRES_PASSWORD="myapp"
vault kv get myapp/postgres/config
vault policy write myapp - <<EOF
path "myapp/data/postgres/config" {
capabilities = ["read"]
}
EOF
vault write auth/kubernetes/role/myapp \
bound_service_account_names=myapp-sa \
bound_service_account_namespaces=myapp \
policies=myapp \
ttl=3d
# create sa myapp-sa in namespace myapp, not the namespace with vault.
kubectl create sa myapp-sa -n myapp
</code></pre>
<hr />
<h2><strong>Deployment of myapp in namespace myapp</strong></h2>
<pre class="lang-yaml prettyprint-override"><code># I don't know if this ClusterRoleBinding needed?
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: myapp-sa-rbac
namespace: myapp
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: myapp-sa
namespace: myapp
---
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: myapp
spec:
type: ClusterIP
ports:
- name: myapp
port: 8080
targetPort: 8080
selector:
app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-status: 'update'
vault.hashicorp.com/role: "myapp"
vault.hashicorp.com/agent-inject-secret-database-config: "myapp/data/postgres/config"
# Environment variable export template
vault.hashicorp.com/agent-inject-template-database-config: |
{{ with secret "myapp/data/postgres/config" -}}
export POSTGRES_DB="{{ .Data.data.POSTGRES_DB }}"
export POSTGRES_USER="{{ .Data.data.POSTGRES_USER }}"
export POSTGRES_PASSWORD="{{ .Data.data.POSTGRES_PASSWORD }}"
{{- end }}
spec:
serviceAccountName: myapp-sa
containers:
- name: myapp
image: nginx:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
command: ["sh", "-c"]
args:
- . /vault/secrets/database-config
env:
- name: POSTGRES_HOST
value: postgres
- name: POSTGRES_PORT
value: "5432"
</code></pre>
| Suge | <p>Fixed:</p>
<p>We should do a <code>raft join</code> rather than running <code>vault operator init</code> three times:</p>
<p><a href="https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-minikube-raft" rel="nofollow noreferrer">https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-minikube-raft</a></p>
<pre class="lang-bash prettyprint-override"><code># Join the vault-1 pod to the Raft cluster.
kubectl exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
# Join the vault-2 pod to the Raft cluster.
kubectl exec -ti vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
</code></pre>
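<p>After the join, each pod still needs to be unsealed with the unseal keys from the <code>vault operator init</code> run on <code>vault-0</code> (do not initialize the other pods again), roughly:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -ti vault-1 -n vault -- vault operator unseal
kubectl exec -ti vault-2 -n vault -- vault operator unseal
</code></pre>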
| Suge |
<p>Any help is much appreciated. I have a couple of Spring Boot applications running in AKS with the default profile, and I am trying to change the profile from my deployment.yaml using Helm.</p>
<pre><code>
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm-chart.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
helm.sh/chart: {{ include "helm-chart.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
protocol: TCP
env:
- name: SPRING_PROFILES_ACTIVE
value: "dev"
</code></pre>
<p>What I end up with is my pod being put into CrashLoopBackOff state, saying:</p>
<p>Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.</p>
<p>2022-01-12 12:42:49.054 ERROR 1 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :</p>
<hr />
<p>APPLICATION FAILED TO START</p>
<hr />
<p>Description:</p>
<p>The Tomcat connector configured to listen on port 8207 failed to start. The port may already be in use or the connector may be misconfigured.</p>
<p>I tried to delete the existing pod and service for the application and did a fresh deploy; I still get the same error.</p>
<p>Methods tried (in all methods the Docker image is built, the pod is created, and the application in the pod is set to the dev profile, but it is not able to start the application, failing with the above error; when I remove the profile setting, everything works perfectly fine except that the application is set to the default profile):</p>
<ol>
<li>in docker file :</li>
</ol>
<p>Option a:</p>
<pre><code>CMD ["java","-jar","/app.jar", "--spring.profiles.active=dev"]
</code></pre>
<p>Option b:</p>
<pre><code>CMD ["java","-jar","-Dspring.profiles.active=dev","/app.jar"]
</code></pre>
<ol start="2">
<li>changed in deployment.yml as mentioned above</li>
</ol>
<p>PS: I don't have a properties file in my application under src/main/resources; I have only application-(env).yml files there.</p>
<p>The idea is to set the profile first and, based on the profile, the application_(env).yml has to be selected.</p>
<p>output from helm</p>
<pre><code>
Release "app" has been upgraded. Happy Helming!
NAME: email-service
LAST DEPLOYED: Thu Jan 13 16:09:46 2022
NAMESPACE: default
STATUS: deployed
REVISION: 19
TEST SUITE: None
USER-SUPPLIED VALUES:
image:
repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app
service:
targetPort: 8207
COMPUTED VALUES:
image:
pullPolicy: Always
repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app-service
tag: latest
replicaCount: 1
service:
port: 80
targetPort: 8207
type: ClusterIP
</code></pre>
<p>Any help is appreciated, thanks.</p>
| Dilu | <p>First of all, please check what profile the application is using; search for a line like this in the log:</p>
<pre class="lang-none prettyprint-override"><code>The following profiles are active: test
</code></pre>
<p>When I tested with Spring Boot v2.2.2.RELEASE, the <code>application_test.yml</code> file is not used; it has to be renamed to <code>application-test.yml</code>. To better highlight the difference:</p>
<pre><code>application_test.yml # NOT working
application-test.yml # working as expected
</code></pre>
<p>What I like even more (but it is Spring Boot specific): you can use application.yml like this:</p>
<pre class="lang-yaml prettyprint-override"><code>foo: 'foo default'
bar: 'bar default'
---
spring:
profiles:
- test
bar: 'bar test2'
</code></pre>
<p>Why do I prefer this? Because you can then use multiple profiles, e.g. <code>profile1,profile2</code>, and it behaves as last-wins, meaning it will override the values from <code>profile1</code> with the values from <code>profile2</code>, as they were defined in this order... The same does not work with the application-profileName.yml approach.</p>
| Betlista |
<p>I have a cluster that scales based on the CPU usage of my pods. The documentation states that I should prevent <em>thrashing</em> caused by scaling too fast. I want to play around with the autoscaling speed but I can't seem to find where to apply the following flags: </p>
<ul>
<li>--horizontal-pod-autoscaler-downscale-delay</li>
<li>--horizontal-pod-autoscaler-upscale-delay</li>
</ul>
<p>My goal is to set the cooldown timer lower than <em>5m</em> or <em>3m</em>. Does anyone know how this is done or where I can find documentation on how to configure this? Also, if this has to be configured in the HPA autoscaling YAML file, does anyone know what definition should be used for this or where I can find documentation on how to configure the YAML?
<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="noreferrer">This is a link to the Kubernetes documentation about scaling cooldowns I used.</a></p>
| Dimitrih | <p>The HPA controller is part of the controller manager and you'll need to pass the flags to it, see also the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="noreferrer">docs</a>. It is not something you'd do via kubectl. It's part of the control plane (master) so depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible, in Minikube you'd ssh into the node, etc.</p>
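<p>As a rough sketch for a kubeadm-style setup (the manifest path is an assumption; adjust it for your environment), you would add the flags to the controller manager's static pod manifest:</p>
<pre><code># on the control plane node
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# then append under spec.containers[0].command, for example:
#   - --horizontal-pod-autoscaler-downscale-delay=2m0s
#   - --horizontal-pod-autoscaler-upscale-delay=1m0s
# the kubelet notices the change and restarts the static pod automatically
</code></pre>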
| Michael Hausenblas |
<p>I have set up Kubernetes secrets.</p>
<pre><code>kubectl create secret generic mysecret --from-file=mysecret=/home/ubuntu/secret.txt
</code></pre>
<p>And this secret can be converted to plaintext using the same <code>kubectl</code> command:</p>
<pre><code>kubectl get secret mysecret -o yaml
# and base64 decode
</code></pre>
<p>How do I limit access to this secret? I only want certain pods, and only me as an operator, to have access to this secret.</p>
| enerudfwqenq | <p>OK, so you need to define a (cluster) role and then bind it to you (== human user is the target entity) and/or to a service account (== app is the target entity) which you then <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">use in the pod</a> instead of the <code>default</code> one.</p>
<p>The respective <code>secretadmin</code> role (or choose whatever name you prefer) would look something like this (vary verbs as required):</p>
<pre><code>$ kubectl create clusterrole secretadmin \
--verb=get --verb=list --verb=create --verb=update \
--resource=secret \
--namespace=mysuperproject
</code></pre>
<p>Once you've defined the role, you can attach (or: bind) it to a certain entity. Let's go through the case of the service account (similar then for a human user, just simpler). So first we need to create the service account, here called <code>thepowerfulapp</code> which you will then use in your deployment/pod/whatever:</p>
<pre><code>$ kubectl -n mysuperproject create sa thepowerfulapp
</code></pre>
<p>And now it's time to tie everything together with the following binding called <code>canadminsecret</code></p>
<pre><code>$ kubectl create clusterrolebinding canadminsecret \
  --clusterrole=secretadmin \
--serviceaccount=mysuperproject:thepowerfulapp \
--namespace=mysuperproject
</code></pre>
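<p>To sanity-check the setup, you can impersonate the service account, for example:</p>
<pre><code>$ kubectl auth can-i get secrets \
    --as=system:serviceaccount:mysuperproject:thepowerfulapp \
    --namespace=mysuperproject
</code></pre>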
| Michael Hausenblas |
<p>I am struggling with a simple one-replica deployment of the <a href="https://hub.docker.com/r/eventstore/eventstore/" rel="nofollow noreferrer">official Event Store image</a> on a Kubernetes cluster. I am using a persistent volume for the data storage. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-eventstore
spec:
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: my-eventstore
spec:
imagePullSecrets:
- name: runner-gitlab-account
containers:
- name: eventstore
image: eventstore/eventstore
env:
- name: EVENTSTORE_DB
value: "/usr/data/eventstore/data"
- name: EVENTSTORE_LOG
value: "/usr/data/eventstore/log"
ports:
- containerPort: 2113
- containerPort: 2114
- containerPort: 1111
- containerPort: 1112
volumeMounts:
- name: eventstore-storage
mountPath: /usr/data/eventstore
volumes:
- name: eventstore-storage
persistentVolumeClaim:
claimName: eventstore-pv-claim
</code></pre>
<p>And this is the yaml for my persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: eventstore-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>The deployments work fine. It's when I tested for durability that I started to encounter a problem. I delete a pod to force actual state from desired state and see how Kubernetes reacts.</p>
<p>It immediately launched a new pod to replace the deleted one. And the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message that said "record too large" that indicated corrupted data according to this discussion. <a href="https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw</a></p>
<p>I tried again for a couple of times. Same result every time. After deleting the pod for the second time the data is corrupted. This has me worried that an actual failure will cause similar result.</p>
<p>However, when deploying new versions of the image or scaling the pods in the deployment to zero and back to one no data corruption occurs. After several tries everything is fine. Which is odd since that also completely replaces pods (I checked the pod id's and they changed).</p>
<p>This has me wondering if deleting a pod using kubectl delete is somehow more forceful in the way that a pod is terminated. Do any of you have similar experience? Or insights on if/how delete is different? Thanks in advance for your input.</p>
<p>Regards,</p>
<p>Oskar</p>
| Oskar | <p>I was referred to this pull request on GitHub, which stated that the process was not killed properly: <a href="https://github.com/EventStore/eventstore-docker/pull/52" rel="nofollow noreferrer">https://github.com/EventStore/eventstore-docker/pull/52</a></p>
<p>After building a new image with the Dockerfile from the pull request, I put this image in the deployment. I am killing pods left and right; no data corruption issues anymore.</p>
<p>Hope this helps someone facing the same issue.</p>
| Oskar |
<p>I have a container that runs some data fetching from a MySQL database and simply displays the result in console.log(), and want to run this as a cron job in GKE. So far I have the container working on my local machine, and have successfully deployed this to GKE (in terms of there being no errors thrown so far as I can see). </p>
<p>However, the pods that were created were just left as Running instead of stopping after completion of the task. Are the pods supposed to stop automatically after executing all the code, or do they require an explicit instruction to stop, and if so, what is the command to terminate a pod after creation (by the CronJob)?</p>
<p>I'm reading that there is supposedly some kind of termination grace period of ~30s by default, but after running a minutely-executed cronjob for ~20 minutes, all the pods were still running. Not sure if there's a way to terminate the pods from inside the code; otherwise it would be a little silly to have a cronjob generating lots of pods left running idly. My cronjob.yaml is below:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: test
spec:
schedule: "5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: gcr.io/project/test:v1
# env:
# - name: "DELAY"
# value: 15
restartPolicy: OnFailure
</code></pre>
| jlyh | <p>A <code>CronJob</code> is essentially a cookie cutter for jobs. That is, it knows how to create jobs and execute them at a certain time. Now, that being said, when looking at garbage collection and clean up behaviour of a <code>CronJob</code>, we can simply look at what the Kubernetes docs have to say about this topic <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">in the context of jobs</a>:</p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. <code>kubectl delete jobs/pi</code> or <code>kubectl delete -f ./job.yaml</code>). </p>
</blockquote>
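<p>Independent of that, you can cap how many finished Jobs (and hence their Pods) the CronJob keeps around via <code>successfulJobsHistoryLimit</code> and <code>failedJobsHistoryLimit</code>. As a sketch, assuming the CronJob is named <code>test</code> as in your manifest:</p>
<pre><code>$ kubectl patch cronjob test --type merge \
    -p '{"spec":{"successfulJobsHistoryLimit":1,"failedJobsHistoryLimit":1}}'
</code></pre>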
| Michael Hausenblas |
<p>How can I reset the RESTARTS count to zero in the below Kubernetes command output? I know stats are saved in etcd, but how do I reset/erase the data?</p>
<pre><code>/home/mytest>kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default load-generator 1/1 Running 2 3d21h 192.168.252.102 testnode <none> <none>
kube-system calico-kube-controllers-65f8bc95db-ph48b 1/1 Running 7 51d 192.168.252.80 testnode <none> <none>
kube-system calico-node-tr8dr 1/1 Running 6 51d 10.61.62.152 testnode <none> <none>
kube-system coredns-66bff467f8-lcn5z 1/1 Running 18 51d 192.168.252.116 testnode <none> <none>
kube-system coredns-66bff467f8-tmgml 1/1 Running 17 51d 192.168.252.123 testnode <none> <none>
kube-system etcd-testnode 1/1 Running 23 51d 10.61.62.152 testnode <none> <none>
kube-system kube-apiserver-testnode 1/1 Running 50 51d 10.61.62.152 testnode <none> <none>
kube-system kube-controller-manager-testnode 1/1 Running 238 51d 10.61.62.152 testnode <none> <none>
kube-system kube-proxy-wn28b 1/1 Running 6 51d 10.61.62.152 testnode <none> <none>
kube-system kube-scheduler-testnode 1/1 Running 233 51d 10.61.62.152 testnode <none> <none>
kube-system metrics-server-585bd46ccb-55q59 1/1 Running 1 37h 192.168.252.84 testnode <none> <none>
kube-system tiller-deploy-56b574c76d-kj45f 1/1 Running 4 45d 192.168.252.65 testnode <none> <none>
</code></pre>
| myquest4 sh | <p>The only way to accomplish this is by restarting the pod.</p>
<p>Also, a feature to artificially reset the counter has been rejected.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/50375" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/50375</a></p>
| KevinO |
<p>Is there a variant of <code>kubectl delete all --all</code> command or some other command to delete all resources except the <em>kubernetes service</em>?</p>
| Nimsa | <p>I don't think there's a built-in command for it, which means you'll have to script your way out of it, something like this (add an <code>if</code> for the namespace you want to spare):</p>
<pre><code>$ for ns in $(kubectl get ns --output=jsonpath={.items[*].metadata.name}); do kubectl delete ns/$ns; done;
</code></pre>
<p>Note: deleting a namespace deletes all its resources.</p>
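<p>As a sketch of that <code>if</code>, sparing for example <code>default</code>, <code>kube-system</code> and <code>kube-public</code> (adjust to taste):</p>
<pre><code>$ for ns in $(kubectl get ns --output=jsonpath={.items[*].metadata.name}); do
    case "$ns" in
      default|kube-system|kube-public) echo "keeping $ns";;
      *) kubectl delete ns "$ns";;
    esac
  done
</code></pre>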
| Michael Hausenblas |
<p>Looking into Kubernetes documentation:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">Pod Security Policy</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">Pod Security Context</a></li>
</ul>
<p>Mmmm... aren't they eventually doing the same thing? What is the difference?</p>
| Illidan | <p>I have no idea why folks are down-voting this question, it's spot on and actually we've got our docs to blame and not the OP. OK, here goes:</p>
<p>The pod security context (which is preceded by and largely based on OpenShift <a href="https://docs.openshift.com/container-platform/3.9/admin_guide/manage_scc.html" rel="noreferrer">Security Context Constraints</a>) allows you (as a developer?) to define runtime restrictions and/or settings on a per-pod basis.</p>
<p>But how do you enforce this? How do you make sure that folks are actually defining the constraints? That's where pod security policies (PSP) come into play: as a cluster or namespace admin you can define and enforce those security context-related policies using PSPs. See also the <a href="https://kubernetes-security.info/" rel="noreferrer">Kubernetes Security</a> book for more details. </p>
| Michael Hausenblas |
<p>I think just a quick sanity check, maybe my eyes are getting confused. I'm breaking a monolithic terraform file into modules. </p>
<p>My <code>main.tf</code> calls just two modules, <code>gke</code> for the Google Kubernetes Engine and <code>storage</code>, which creates a persistent volume on the cluster created previously.</p>
<p>Module <code>gke</code> has an <code>outputs.tf</code> which outputs the following:</p>
<pre><code>output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
</code></pre>
<p>Then in the <code>main.tf</code> for the storage module, I have:</p>
<pre><code>client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
</code></pre>
<p>Then in the root <code>main.tf</code> I have the following:</p>
<pre><code>client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
</code></pre>
<p>From what I see, it looks right. The values for the certs, key and host variables should be outputted from the <code>gke</code> module by <code>outputs.tf</code>, picked up by <code>main.tf</code> of root, and then delivered to <code>storage</code> as a regular variable.</p>
<p>Have I got it the wrong way around? Or am I just going crazy, something doesn't seem right.</p>
<p>I get prompted for the variables not being filled in when I run a plan.</p>
<p>EDIT:</p>
<p>Adding some additional information including my code.</p>
<p>If I manually add dummy entries for the variables it's asking for, I get the following error:</p>
<pre><code>Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
</code></pre>
<p>It looks like it's complaining that the data.google_container_cluster resource needs the project attribute. But it doesn't: it's not a valid attribute for the resource. It is for the provider, but it's already filled out for the provider.</p>
<p>Code below:</p>
<p>Folder structure:</p>
<pre><code>root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
</code></pre>
<p>root-folder/gke/main.tf:</p>
<pre><code>provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
</code></pre>
<p>root-folder/gke/outputs.tf:</p>
<pre><code>output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
</code></pre>
<p>root-folder/gke/variables.tf:</p>
<pre><code>variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
</code></pre>
<p>/root-folder/storage/main.tf:</p>
<pre><code>provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
</code></pre>
<p>/root-folder/storage/variables.tf:</p>
<pre><code>variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
</code></pre>
<p>/root-folder/main.tf:</p>
<pre><code>module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
</code></pre>
<p>/root-folder/variables.tf:</p>
<pre><code>variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
</code></pre>
<p>I won't paste the contents of my <code>staging.json</code> and <code>terraform.tfvars</code> for obvious reasons :)</p>
| jonnybinthemix | <p>In your <code>/root-folder/variables.tf</code>, delete the following entries:</p>
<pre><code>variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
</code></pre>
<p>Those are not variables per se that the Terraform code at the root level needs. Instead, they are being passed as one module's output --> input to the second module.</p>
| KJH |
<p>I am new to Kubernetes and started reading through the documentation.
The term 'endpoint' is often used there, but the documentation lacks an explicit definition.</p>
<p>What is an 'endpoint' in terms of Kubernetes? Where is it located?</p>
<p>I could imagine the 'endpoint' is some kind of access point for an individual 'node', but that's just a guess. </p>
| Chris | <p>While you're correct that in the <a href="https://kubernetes.io/docs/reference/glossary" rel="noreferrer">glossary</a> there's indeed no entry for endpoint, it is a well defined Kubernetes network concept or abstraction. Since it's of secondary nature, you'd usually not directly manipulate it. There's a core resource <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#endpoints-v1-core" rel="noreferrer">Endpoint</a> defined and it's also supported on the command line:</p>
<pre><code>$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.64.13:8443 10d
</code></pre>
<p>And there you see what it effectively is: an IP address and a port. Usually, you'd let a service manage endpoints (one EP per pod the service routes traffic to) but you can also <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">manually manage</a> them if you have a use case that requires it.</p>
| Michael Hausenblas |
<p>I am following the steps in the getting started guide for <a href="https://github.com/kubeflow/website/blob/master/content/docs/started/getting-started-minikube.md" rel="nofollow noreferrer">kubeflow</a> and i got stuck at verify the setup works.</p>
<p>I managed to get this:-</p>
<pre><code>$ kubectl get ns
NAME STATUS AGE
default Active 2m
kube-public Active 2m
kube-system Active 2m
kubeflow-admin Active 14s
</code></pre>
<p>But when I do </p>
<pre><code>$ kubectl -n kubeflow get svc
No resources found.
</code></pre>
<p>I also got </p>
<pre><code>$ kubectl -n kubeflow get pods
No resources found.
</code></pre>
<p>I repeated these both on my Mac and my Ubuntu VM, and both returned the same problem. Am I missing something here? </p>
<p>Thanks.</p>
| Chirrut Imwe | <p>Yes you're missing something here and that is to use the correct namespace. Use:</p>
<pre><code>$ kubectl -n kubeflow-admin get all
</code></pre>
| Michael Hausenblas |
<p>I am using Docker + AKS to manage my containers. When I run my containers locally or on a VM using docker-compose, my services (which are containerized) can communicate with my databases, which are also in containers. The bridge between these containers is created using networks. After I converted the docker-compose file for all of my applications to the respective YAML counterparts and deployed my containers to AKS (single node), my containerized services are not able to reach the database.</p>
<p>All my containers have 3 yaml files </p>
<ol>
<li>Pvc</li>
<li>deployment(for pods) </li>
<li>svc. </li>
</ol>
<p>I've gone through many of the getting-started-with-AKS examples and for some reason am not able to figure it out. All application services are exposed publicly using load balancers. My question is more like: how do I define which DB the application services should connect to, now that the concept of networks doesn't exist anymore? </p>
<p>In the examples provided for AKS, all the front-end services do is create an env variable and specify the name of the backend service. I tried that as well and my application still doesn't work. The sample that I referred to in order to validate my setup is <a href="https://learn.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough#run-the-application" rel="nofollow noreferrer">https://learn.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough#run-the-application</a>.</p>
<p>Any help would be great.</p>
| Ashish Chettri | <p>If you need these services internally only, you should not expose them publicly using load balancers.</p>
<p>Kubernetes has two possibilities for service discovery: DNS and environment variables. While DNS is an optional component, I have not seen any cluster without it. Also, I assume that AKS uses it.</p>
<p>So, for example you have a Postgres database and want to use it somewhere else:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: db
image: postgres:11
ports:
- name: postgres
containerPort: 5432
</code></pre>
<p>This creates a deployment which exposes port 5432. The label <code>app: postgres</code> is also important here, since we need it later to identify the created Pods.</p>
<p>Now we need to create a service for it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
type: ClusterIP # default value
selector:
app: postgres
ports:
- port: 5432
</code></pre>
<p>This creates a virtual IP address and registers all ready pods with the label <code>app: postgres</code> to it. Since the name of the service is <code>postgres</code> and it is in the default namespace, postgres is now accessible via <code>postgres.default.svc.cluster.local:5432</code>. You can use this address and port in your other application (e.g. Python) to connect to the database.</p>
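<p>A quick way to sanity-check the DNS part from inside the cluster (the busybox image here is just an example):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup postgres.default.svc.cluster.local
</code></pre>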
| svenwltr |
<p>I have created a local ubuntu Kubernetes cluster, having 1 master and 2 slave nodes.</p>
<p>I deployed 2 applications in 2 pods and created a service for both of the pods; it's working fine.
I entered the pod by typing this command:</p>
<pre><code>$ kubectl exec -it firstpod /bin/bash
# apt-get update
</code></pre>
<p>I'm unable to run the update and I'm getting an error:</p>
<pre><code>Err http://security.debian.org jessie/updates InRelease
Err http://deb.debian.org jessie InRelease
Err http://deb.debian.org jessie-updates InRelease
Err http://security.debian.org jessie/updates Release.gpg Temporary failure resolving 'security.debian.org' Err http://deb.debian.org jessie-backports InRelease
Err http://deb.debian.org jessie Release.gpg Temporary failure resolving 'deb.debian.org' Err http://deb.debian.org jessie-updates Release.gpg Temporary failure resolving 'deb.debian.org' Err http://deb.debian.org jessie-backports Release.gpg Temporary failure resolving 'deb.debian.org' Reading package lists... Done W: Failed to fetch http://deb.debian.org/debian/dists/jessie/InRelease
W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/InRelease
W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease
W: Failed to fetch http://deb.debian.org/debian/dists/jessie-backports/InRelease
W: Failed to fetch http://security.debian.org/dists/jessie/updates/Release.gpg Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/jessie/Release.gpg Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/Release.gpg Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/jessie-backports/Release.gpg Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
</code></pre>
<p>I'm trying to ping my second pod service:</p>
<pre><code># ping secondservice (This is the service name of secondpod)
PING secondservice.default.svc.cluster.local (10.100.190.196): 56 data bytes
unable to ping.
</code></pre>
<p>How can I ping/call the second service from the first node?</p>
| Chintamani | <p>There are two (unrelated) questions I see there. I'm going to focus on the second one since the first is unclear to me (what is the ask?).</p>
<p>So, you wonder why the following doesn't work:</p>
<pre><code># ping secondservice
</code></pre>
<p>This is not a bug or unexpected (actually, I wrote about it <a href="https://blog.openshift.com/kubernetes-services-by-example/" rel="nofollow noreferrer">here</a>). In short: the FQDN <code>secondservice.default.svc.cluster.local</code> gets resolved via the DNS plugin to a virtual IP (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">VIP</a>). The very essence of this VIP is that it is virtual, that is, it's not attached to a network interface; it's just a bunch of iptables rules. Hence, the ICMP-based ping has nothing to work against, since it's not a 'real' IP. You can <code>curl</code> the service, though. Assuming the service runs on port 9876, the following should work:</p>
<pre><code># curl secondservice:9876
</code></pre>
| Michael Hausenblas |
<p>I have asked myself this question and invested time researching it. Running out of time. Can someone point me in the right direction?
I have created a kubernetes cluster on minikube, with its Ingress, Services and Deployments. There is a whole configuration of services in there.
Can I now point this kubectl command to another provider like VMware Fusion, AWS, Azure, not to forget Google Cloud?
I know about kops. My understanding is that although this is the design goal of kops, it presently only supports AWS. </p>
| Tauqir Chaudhry | <p>Yes, you can use different clusters via the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">context</a>. List them using <code>kubectl config get-contexts</code> and switch between them using <code>kubectl config use-context</code>.</p>
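<p>For example (the context name below is just a placeholder):</p>
<pre><code>$ kubectl config get-contexts
$ kubectl config use-context my-other-cluster
</code></pre>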
| Michael Hausenblas |
<p>I am creating a ReplicationController with one init container. However, the init container fails to start and the status of the pod is:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
testcontainer 0/1 CrashLoopBackOff 12 37m
</code></pre>
<p>I am not sure what part is failing exactly, and the logs do not help.
My kubectl server version is 1.4 (different from client version) so I am using:</p>
<pre><code>annotations:
pod.beta.kubernetes.io/init-containers:
</code></pre>
<p>Here is the replication controller yaml file I am using.
I am using the "hello-world" image (instead of the nginx to make it faster)</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: testcontainer
spec:
replicas: 1
selector:
app: nginx
template:
metadata:
labels:
app: nginx
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "install",
"image": "hello-world"
}
]'
spec:
containers:
- name: nginx
image: hello-world
dnsPolicy: Default
nodeName: x.x.x.x
</code></pre>
<p>logs from kubectl describe pod:</p>
<pre><code>Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=nginx pod=testcontainer()"
32m 16s 145 {kubelet x.x.x.x} spec.containers{nginx} Warning BackOff Back-off restarting failed docker container
</code></pre>
<p>When I check the logs of both containers (nginx and testcontainer), it shows the output of running the hello-world image, so I guess the image is downloaded and started successfully. I'm not sure what fails after that (I even tried creating a single pod, using the example provided at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container</a>, but it still fails)</p>
| greg | <p>I think the problem here isn't the init container. The <code>hello-world</code> image prints some text and exits immediately. Since <a href="https://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec" rel="nofollow noreferrer"><code>.spec.restartPolicy</code></a> of the pod defaults to <code>Always</code>, it just restarts the pod every time.</p>
<p>The error message might be a bit confusing, but since the pod is intended to run forever, it makes sense to display an error, even if the exit code is <code>0</code>.</p>
<p>If you want to run a pod only a single time, you should use the <a href="https://kubernetes.io/docs/user-guide/jobs/" rel="nofollow noreferrer">job API</a>.</p>
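<p>For example, a rough one-off equivalent (newer kubectl versions can create a Job directly):</p>
<pre><code>$ kubectl create job hello-once --image=hello-world
$ kubectl logs job/hello-once
</code></pre>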
<hr>
<p>Since you are interested in an example for the init-container, I fixed your example:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: testcontainer
spec:
replicas: 1
selector:
app: nginx
template:
metadata:
labels:
app: nginx
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "install",
"image": "hello-world"
}
]'
spec:
containers:
- name: nginx
image: nginx # <--- this image shouldn't be a single shot application
dnsPolicy: Default
nodeName: x.x.x.x
</code></pre>
| svenwltr |
<p>Is there an easy command-line option to export my entire etcd database to a JSON file and also decode the keys and values from base64 automatically?</p>
<p>What I have succeeded with so far is this (the example shows one key/value):</p>
<pre><code> ./etcdctl get "" --prefix -w json | jq -r ".[] | .[] "
{
"key": "YnktZGV2L21ldGEvc25hcHNob3RzL3Jvb3QtY29vcmQvcGFydGl0aW9ucy80NDAwNDc0MjQ2MTgzNjUxNzAvNDQwMDQ3NDI0NjE4MzY1MTcxX3RzNDQwMDQ5NDg5ODkxODE5NTI0",
"create_revision": 44536,
"mod_revision": 44536,
"version": 1,
"value": "CPOB0OXRmdeNBhIIX2RlZmF1bHQYhIDgxN/V140GIPKB0OXRmdeNBg=="
}
</code></pre>
<p>But I need to decode the entire database's keys and values to a human-readable format.</p>
<p>Thanks</p>
<p>P.S.
Final solution after @Jeff Mercado's help:</p>
<pre><code>1. /etcdctl get "" --prefix -w json | jq '.[]' > etcd_filter.txt
2. Clear output to form array of objects [{},{} ...{}]
3. cat etcd_filter.txt | jq '.[] | (.key, .value) |= @base64d'
</code></pre>
<p><a href="https://jqplay.org/s/rglpDglWHNB" rel="nofollow noreferrer">jq playground</a></p>
| R2D2 | <p>If the encoded data is a string and not binary data, you can decode it to a UTF-8 string using the <code>@base64d</code> filter. This should be available in jq 1.6.</p>
<pre><code>$ ./etcdctl ... | jq '.[][] | (.key, .value) |= @base64d'
{
"key": "by-dev/meta/snapshots/root-coord/partitions/440047424618365170/440047424618365171_ts440049489891819524",
"create_revision": 44536,
"mod_revision": 44536,
"version": 1,
"value": "\b���љ\u0006\u0012\b_default\u0018������\u0006 ���љ\u0006"
}
</code></pre>
<p>It appears the value is not a UTF-8 string in your example so beware. Unfortunately, it doesn't return a byte array so it may not be very useful for these cases.</p>
<p><a href="https://jqplay.org/s/HV4PtqYSnPi" rel="nofollow noreferrer">jqplay</a></p>
| Jeff Mercado |
<p>Hi,</p>
<p>I'm looking for the documentation for Kubernetes's configuration files. The ones used by kubectl (e.g. <code>kubectl create -f whatever.yaml</code>).</p>
<p>Basically, the Kubernetes equivalent of this <a href="https://docs.docker.com/compose/compose-file/" rel="nofollow noreferrer">Docker Compose</a> document.</p>
<p>I did search a lot but I didn't find much, or 404 links from old stackoverflow questions.</p>
| e741af0d41bc74bf854041f1fbdbf | <p>You could use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/" rel="nofollow noreferrer">official API docs</a> but a much more user-friendly way on the command line is the <code>explain</code> command, for example, I never remember what exactly goes into the spec of a pod, so I do:</p>
<pre><code>$ kubectl explain Deployment.spec.template.spec
</code></pre>
| Michael Hausenblas |
<p>I know that PVCs can be used as volumes in k8s. I know how to create them and how to use them, but I couldn't understand why there are two of them, PV and PVC. </p>
<p>Can someone give me an architectural reason behind PV/PVC distinction? What kind of problem it try to solve (or what historical is behind this)?</p>
| George Shuklin | <p>Despite their names, they serve two different purposes: an abstraction for storage (PV) and a request for such storage (PVC). Together, they enable a clean separation of concerns (using a figure from our <a href="http://shop.oreilly.com/product/0636920064947.do" rel="noreferrer">Kubernetes Cookbook</a> here to illustrate this):</p>
<p><a href="https://i.stack.imgur.com/tfniF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tfniF.png" alt="enter image description here"></a></p>
<p>The storage admin focuses on provisioning PVs (ideally <a href="http://shop.oreilly.com/product/0636920064947.do" rel="noreferrer">dynamically</a> through defining storage classes) and the developer uses a PVC to acquire a PV and use it in a pod.</p>
| Michael Hausenblas |
<p>I'm new to k8s, so some of my terminology might be off. But basically, I'm trying to deploy a simple web api: one load balancer in front of n pods (where right now, n=1). </p>
<p>However, when I try to visit the load balancer's IP address it doesn't show my web application. When I run kubectl get deployments, I get this:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tl-api 1 1 1 0 4m
</code></pre>
<p>Here's my YAML file. Let me know if anything looks off--I'm very new to this!</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: tl-api
spec:
replicas: 1
template:
metadata:
labels:
app: tl-api
spec:
containers:
- name: tl-api
image: tlk8s.azurecr.io/devicecloudwebapi:v1
ports:
- containerPort: 80
imagePullSecrets:
- name: acr-auth
nodeSelector:
beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
name: tl-api
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: tl-api
</code></pre>
<p>Edit 2: When I try using ACS (which supports Windows), I get this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned tl-api-3466491809-vd5kg to dc9ebacs9000
Normal SuccessfulMountVolume 11m kubelet, dc9ebacs9000 MountVolume.SetUp succeeded for volume "default-token-v3wz9"
Normal Pulling 4m (x6 over 10m) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 1s (x50 over 10m) kubelet, dc9ebacs9000 Error syncing pod
Normal BackOff 1s (x44 over 10m) kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
</code></pre>
<p>I then try examining the failed pod:</p>
<pre><code>PS C:\users\<me>\source\repos\DeviceCloud\DeviceCloud\1- Presentation\DeviceCloud.Web.API> kubectl logs tl-api-3466491809-vd5kg
Error from server (BadRequest): container "tl-api" in pod "tl-api-3466491809-vd5kg" is waiting to start: trying and failing to pull image
</code></pre>
<p>When I run <code>docker images</code> I see the following:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 24 hours ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 24 hours ago 7.85GB
devicecloudwebapi dev bb33ab221910 25 hours ago 7.76GB
</code></pre>
| Slothario | <p>Your problem is that the container image <code>tlk8s.azurecr.io/devicecloudwebapi:v1</code> is in a private container registry. See the events at the bottom of the following command:</p>
<pre><code>$ kubectl describe po -l=app=tl-api
</code></pre>
<p>The official Kubernetes docs describe how to resolve this issue, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">Pull an Image from a Private Registry</a>, essentially:</p>
<ul>
<li>Create a secret <code>kubectl create secret docker-registry</code></li>
<li>Use it in your deployment, under the <code>spec.imagePullSecrets</code> key</li>
</ul>
| Michael Hausenblas |
<p>What happened:
I have been following these guidelines: <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">https://kubernetes.io/docs/setup/minikube/</a> and I have the "connection refused" issue when trying to curl the application. Here are the steps I did:</p>
<pre><code>~~> minikube status
minikube: Stopped
cluster:
kubectl:
~~> minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
~~> kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=9500
deployment.apps/hello-minikube created
~~> kubectl expose deployment hello-minikube --type=NodePort
service/hello-minikube exposed
~~> kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube-79577c5997-24gt8 1/1 Running 0 39s
~~> curl $(minikube service hello-minikube --url)
curl: (7) Failed to connect to 192.168.99.100 port 31779: Connection refused
</code></pre>
<p><strong>What I expect to happen:</strong>
When I curl the pod, it should give a proper reply (like in the quickstart: <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">https://kubernetes.io/docs/setup/minikube/</a>)</p>
<p>minikube logs: <a href="https://docs.google.com/document/d/1o2-ebiZTsoCzQNSn_rQSkcuVzOJABmwT2KKzGoUQNiQ/edit" rel="noreferrer">https://docs.google.com/document/d/1o2-ebiZTsoCzQNSn_rQSkcuVzOJABmwT2KKzGoUQNiQ/edit</a></p>
| Developer | <p>Not sure where you got the port <code>9500</code> from but that's the reason it doesn't work. NGINX serves on port <code>8080</code>. This should work (it does for me, at least):</p>
<pre><code>$ kubectl expose deployment hello-minikube \
--type=NodePort \
--port=8080 --target-port=8080
$ curl $(minikube service hello-minikube --url)
Hostname: hello-minikube-79577c5997-tf49z
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.17.0.1
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://192.168.64.11:8080/
Request Headers:
accept=*/*
host=192.168.64.11:32141
user-agent=curl/7.54.0
Request Body:
-no body in request-
</code></pre>
| Michael Hausenblas |
<p>Could someone explain the benefits/issues with hosting a database in Kubernetes via a persistent volume claim combined with a storage volume over using an actual cloud database resource? </p>
| Barry Jacobs | <p>It's essentially a trade-off: convenience vs control. Take a concrete example: let's say you pay Amazon money to use <a href="https://aws.amazon.com/athena/" rel="nofollow noreferrer">Athena</a>, which is really just a nicely packaged version of <a href="https://prestodb.io/" rel="nofollow noreferrer">Facebook Presto</a> which AWS kindly operates for you in exchange for $$$. You could run Presto on EKS yourself, but why would you. </p>
<p>Now, let's say you want to or need to use Apache Drill or Apache Impala. Amazon doesn't offer it. Nor does any of the other big public cloud providers at time of writing, as far as I know.</p>
<p>Another thought: what if you want to migrate off of AWS? Your data has gravity as well.</p>
| Michael Hausenblas |
<p>I have set up an EKS cluster using eksctl using all the default settings and now need to communicate with an external service which uses IP whitelisting. Obviously requests made to the service from my cluster come from whichever node the request was made from, but the list of nodes (and their ips) can and will change frequently so I cannot supply a single IP address for them to whitelist. After looking into this I found that I need to use a NAT Gateway.</p>
<p>I am having some trouble getting this to work. I have tried setting AWS_VPC_K8S_CNI_EXTERNALSNAT to true; however, doing so prevents all outgoing traffic on my cluster, I assume because the return packets do not know where to go, so I never get the response. I've tried playing around with the route tables to no avail.</p>
<p>Any assistance is much appreciated.</p>
| JazzyP | <p>You can follow <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html" rel="noreferrer">this guide</a> to create public subnets and private subnets in your VPC.</p>
<p>Then create NAT gateways in public subnets. Also run all EKS nodes in private subnets. The pods in K8S will use NAT gateway to access the internet services.</p>
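<p>Since the cluster was created with eksctl, a minimal ClusterConfig sketch with private node subnets and a single NAT gateway could look like this (cluster name, region and instance type are placeholders):</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
vpc:
  nat:
    gateway: Single       # one shared NAT gateway for the private subnets
nodeGroups:
- name: ng-1
  instanceType: m5.large  # placeholder
  desiredCapacity: 2
  privateNetworking: true # nodes (and hence pod egress) go out via the NAT gateway
</code></pre>
<p>With all nodes on private subnets, outgoing traffic leaves through the NAT gateway's Elastic IP, which is the single address you can hand to the external service for whitelisting.</p>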
| Kane |
<p>I am really new to this kind of stuff, new to Kubernetes and Docker, but I already have some experience with Java.</p>
<p>I tried using Docker Hub by connecting it to GitHub.</p>
<p>On my Git-hub there are only 2 codes:</p>
<ol>
<li>Dockerfile</li>
<li>Simple hello world Java code.</li>
</ol>
<p>Every time I run it on Kubernetes and check it with kubectl get pods, I always get CrashLoopBackOff.</p>
<p>I don't understand what the problem is. I already checked the code and tried to run it on Docker, and it works: it prints out hello world. But not on Kubernetes.</p>
<p>This is the code on Dockerfile</p>
<pre><code>FROM openjdk:8
COPY helloworld.java .
RUN javac helloworld.java
ENTRYPOINT ["java", "helloworld"]
</code></pre>
<p>This is the code on simple helloworld java</p>
<pre><code>public class helloworld {
public static void main(String[] args) {
System.out.println("Hello World!");
}
}
</code></pre>
<p>What I expected is: as I run this on Kubernetes I hope it says that it's ready and I can deploy it to an IP and show simple hello world.</p>
| newbielearner | <p>Since you didn't specify how you executed it, I will assume you've been using <code>kubectl run</code> (which by default creates a deployment) or a manifest defining a deployment. If so, then the <code>CrashLoopBackOff</code> is expected because <a href="http://kubernetesbyexample.com/deployments/" rel="nofollow noreferrer">deployments</a> are for long-running processes. Your Java code is not long-running: it prints something and then exits, that is, it doesn't have an endless loop in there.</p>
<p>So either do the <code>System.out.println</code> in a loop (with a sleep in between?) or use a run command or a workload type (such as <a href="http://kubernetesbyexample.com/jobs/" rel="nofollow noreferrer">jobs</a>) that is meant for one-off execution.</p>
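<p>For the one-off route, a minimal Job sketch could look like this (the image reference is a placeholder for whatever you built from your Dockerfile and pushed):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: helloworld
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never                    # a Job pod must not restart Always
      containers:
      - name: helloworld
        image: yourrepo/helloworld:latest     # placeholder: your built image
</code></pre>
<p>Once it has run, <code>kubectl logs job/helloworld</code> should show the Hello World output.</p>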
<p>BTW, even with deployments, you should still be able to use <a href="http://kubernetesbyexample.com/logging/" rel="nofollow noreferrer">kubectl logs</a> to see the output from the first execution.</p>
| Michael Hausenblas |
<p>I have some questions about the golang API for kubernetes.</p>
<ol>
<li><p>which one should I use? k8s.io/client-go or k8s.io/kubernetes/pkg/client? What's the difference?</p></li>
<li><p>I want to get list of all pods and then listen to add/update/delete events, what's the difference between using the api.Pods("").Watch method and using an informer?</p></li>
<li><p>I'm using the API from inside the cluster, how can I fetch the name of the node I'm currently in? is it just the hostname of the machine?</p></li>
</ol>
| areller | <blockquote>
<p>which one should I use? k8s.io/client-go or k8s.io/kubernetes/pkg/client?</p>
</blockquote>
<p>Use <code>k8s.io/client-go</code>.</p>
<blockquote>
<p>what's the difference between using the api.Pods("").Watch method and using an informer?</p>
</blockquote>
<p>The informer is essentially a shared cache, reducing the load on the API server. Unless you're doing something trivial, this is the preferred way.</p>
<blockquote>
<p>how can I fetch the name of the node I'm currently in? </p>
</blockquote>
<p>Use <a href="https://godoc.org/k8s.io/api/core/v1#Node" rel="noreferrer">k8s.io/api/core/v1.Node</a>, see for example <a href="https://github.com/openshift-talks/k8s-go/blob/master/client-go-basic/main.go" rel="noreferrer">this code</a>.</p>
<p>BTW, a colleague of mine and myself gave a workshop on this topic (using the Kube API with Go) last week at GopherCon UK—maybe the <a href="https://301.sh/2018-gopherconuk-slides" rel="noreferrer">slide deck</a> and the <a href="https://github.com/openshift-talks/k8s-go" rel="noreferrer">repo</a> are useful for you; also, there is an accompanying online <a href="https://www.katacoda.com/mhausenblas/scenarios/k8s-go" rel="noreferrer">Katacoda scenario</a> you can use to play around.</p>
| Michael Hausenblas |
<p>I am trying to add a new user to the EKS cluster and give them access. So far I was able to add the user just by editing <code>configmap/aws-auth</code> (<code>kubectl edit -n kube-system configmap/aws-auth</code>) and adding the new user to </p>
<pre><code>mapUsers: |
- userarn: arn:aws:iam::123456789:user/user01
username: user01
groups:
- system:masters
</code></pre>
<p>How can I add a user to the EKS cluster and give them full access to a specific namespace, but nothing outside of it?</p>
<p>I tried to create Roles & RoleBinding as </p>
<pre><code>---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: namespace1
name: namespace1-user
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
# This role binding allows "user01" to read pods in the "namespace1" namespace.
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: namespace1-user-role-binding
namespace: namespace1
subjects:
- kind: User
name: user01
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: namespace1-user
</code></pre>
<p><code>user01</code> can still see all the pods from other users with <code>kubectl get pods --all-namespaces</code>; is there any way to restrict this?</p>
| roy | <p>Essentially what you want is to define a cluster role and use a role binding to apply it to a specific namespace. Using a cluster role (rather than a role) allows you to re-use it across namespaces. Using a role binding allows you to target a specific namespace rather than giving cluster-wide permissions.</p>
| Michael Hausenblas |
<p>I had a t2.micro server running where I had deployed Minikube; however, due to memory issues I scaled up the server size, for which I had to stop and start the instance.</p>
<p>But now after restarting, when I try kubectl commands I get the below error.</p>
<pre><code>root@ip-172-31-23-231:~# kubectl get nodes
The connection to the server 172.31.23.231:6443 was refused - did you
specify the right host or port?
</code></pre>
<p>So, how can I bring my earlier kube cluster up once I restart my AWS instance?</p>
| Shruthi Bhaskar | <p>I had the same error. In my case minikube was not running. I started it with</p>
<pre><code>minikube start
</code></pre>
| mkumar118 |
<p>Consider the following shell script, where <code>POD</code> is set to the name of a K8 pod.</p>
<pre><code>kubectl exec -it $POD -c messenger -- bash -c "echo '$@'"
</code></pre>
<p>When I run this script with one argument, it works fine.</p>
<pre><code>hq6:bot hqin$ ./Test.sh x
x
</code></pre>
<p>When I run it with two arguments, it blows up.</p>
<pre><code>hq6:bot hqin$ ./Test.sh x y
y': -c: line 0: unexpected EOF while looking for matching `''
y': -c: line 1: syntax error: unexpected end of file
</code></pre>
<p>I suspect that something is wrong with how the arguments are passed.</p>
<p><strong>How might I fix this so that arguments are expanded literally by my shell and then passed in as literals to the <code>bash</code> running in <code>kubectl exec</code>?</strong></p>
<p>Note that removing the single quotes results in an output of <code>x</code> only.
Note also that I need the <code>bash -c</code> so I can eventually pass in file redirection: <a href="https://stackoverflow.com/a/49189635/391161">https://stackoverflow.com/a/49189635/391161</a>.</p>
| merlin2011 | <p>I managed to work around this with the following solution:</p>
<pre><code>kubectl exec -it $POD -c messenger -- bash -c "echo $*"
</code></pre>
<p>This appears to have the additional benefit that I can do internal redirects.</p>
<pre><code>./Test.sh x y '> /tmp/X'
</code></pre>
| merlin2011 |
<p>In the Amazon EKS User Guide, there is <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">a page</a> dedicated to creating ALB ingress controllers by using an eponymous third-party tool, <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller" rel="nofollow noreferrer">AWS ALB Ingress Controller for Kubernetes</a>.</p>
<p>Both the EKS user guide and the documentation for the controller have their own walkthroughs for how to set up the controller. </p>
<p>The <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/walkthrough/echoserver/" rel="nofollow noreferrer">walkthrough provided by the controller</a> has you either hard-code your AWS secret key into a <code>Deployment</code> manifest, or else install yet another third-party tool called <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">Kube2iam</a>. </p>
<p>The <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">walkthrough in the AWS EKS user guide</a> has you post exactly the same <code>Deployment</code> manifest, but you don't have to modify it at all. Instead, you create both an IAM role (step 5) and a Kubernetes service account (step 4) for the controller, and then you link them together by annotating the service account with the ARN for the IAM role. Prima facie, this seems to be what Kube2iam is for.</p>
<p>This leads me to one of three conclusions, which I rank in rough order of plausibility:</p>
<ol>
<li>EKS contains the functionality of Kube2iam as one of its features (possibly by incorporating Kube2iam into its codebase), and so installing Kube2iam is superfluous.</li>
<li><code>eksctl</code> installs Kube2iam behind the scenes as part of <code>associate-iam-oidc-provider</code>.</li>
<li>The documentation for the controller was written for an earlier version of Kubernetes, and this functionality is now built into the stock control plane.</li>
</ol>
<p>Does anyone happen to know which it is? Why doesn't the AWS walkthrough need me to install Kube2iam?</p>
| David Bruce Borenstein | <blockquote>
<p>Does anyone happen to know which it is? Why doesn't the AWS walkthrough need me to install Kube2iam?</p>
</blockquote>
<p>Yes, I can authoritatively answer this. In 09/2019 <a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="noreferrer">we launched</a> a feature in EKS called <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="noreferrer">IAM Roles for Service Accounts</a>. This makes <code>kube2iam</code> and other solutions obsolete since we support least-privileges access control on the pod level now natively.</p>
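<p>Concretely, that boils down to the controller's service account carrying an annotation that points at an IAM role, roughly like this (account ID and role name are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  annotations:
    # assumption: the IAM role already exists and trusts the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/alb-ingress-controller
</code></pre>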
<p>Also, yes, the ALB IC walkthrough should be updated.</p>
| Michael Hausenblas |
<p>When I try to run <code>kubectl get namespaces</code> or <code>kubectl get nodes</code> commands etc. I am getting this error (I am using Azure Kubernetes Service). I would appreciate any help with this issue.</p>
<pre><code>Error from server (Forbidden): namespaces is forbidden: User "XXXXXXXXXXXXXX" cannot list namespaces at the cluster scope
</code></pre>
| krishna m | <p>This is an authorization module error message: as explained in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes docs</a> and, for example, <a href="https://speakerdeck.com/mhausenblas/kubernetes-security-from-image-hygiene-to-network-policies?slide=29" rel="nofollow noreferrer">shown here</a> you need to have the permissions to carry out a certain action (in this case: list namespaces and nodes).</p>
<p>Since you didn't share more background on how this cluster is set up or who is responsible for it, I can only suggest to either reach out to the cluster admin to give you the rights, or, if you've set up the cluster yourself, have a look at the <a href="https://learn.microsoft.com/en-us/azure/aks/aad-integration" rel="nofollow noreferrer">AD integration</a>, which may be of use here.</p>
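<p>For reference, a minimal sketch of such a grant if you do have admin rights yourself (role and binding names are arbitrary; the subject must match the user exactly as it appears in the error message):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups: [""]
  resources: ["namespaces", "nodes"]   # the cluster-scoped resources from the error
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-reader-binding
subjects:
- kind: User
  name: "XXXXXXXXXXXXXX"               # the user name from the error message
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>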
| Michael Hausenblas |
<p>I have several docker images that I want to use with <code>minikube</code>. I don't want to first have to upload and then download the same image instead of just using the local image directly. How do I do this?</p>
<p>Stuff I tried:
<br>1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)</p>
<pre><code>kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 imagePullPolicy=Never
</code></pre>
<p>Output:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hdfs-2425930030-q0sdl 0/1 ContainerCreating 0 10m
</code></pre>
<p>It just gets stuck on some status but never reaches the ready state.</p>
<p><br>2. I tried creating a registry and then putting images into it but that didn't work either. I might've done that incorrectly but I can't find proper instructions to do this task.</p>
<p>Please provide instructions to use local docker images in local kubernetes instance.
<br>OS: ubuntu 16.04
<br>Docker : Docker version 1.13.1, build 092cba3
<br>Kubernetes :</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>If someone could help me get a solution that uses docker-compose to do this, that'd be awesome.</p>
<p><strong>Edit:</strong></p>
<p>Images loaded in <code>eval $(minikube docker-env)</code>:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
fluxcapacitor/jupyterhub latest e5175fb26522 4 weeks ago 9.59 GB
fluxcapacitor/zeppelin latest fe4bc823e57d 4 weeks ago 4.12 GB
fluxcapacitor/prediction-pmml latest cae5b2d9835b 4 weeks ago 973 MB
fluxcapacitor/scheduler-airflow latest 95adfd56f656 4 weeks ago 8.89 GB
fluxcapacitor/loadtest latest 6a777ab6167c 5 weeks ago 899 MB
fluxcapacitor/hdfs latest 00fa0ed0064b 6 weeks ago 1.16 GB
fluxcapacitor/sql-mysql latest 804137671a8c 7 weeks ago 679 MB
fluxcapacitor/metastore-1.2.1 latest ea7ce8c5048f 7 weeks ago 1.35 GB
fluxcapacitor/cassandra latest 3cb5ff117283 7 weeks ago 953 MB
fluxcapacitor/apachespark-worker-2.0.1 latest 14ee3e4e337c 7 weeks ago 3.74 GB
fluxcapacitor/apachespark-master-2.0.1 latest fe60b42d54e5 7 weeks ago 3.72 GB
fluxcapacitor/package-java-openjdk-1.8 latest 1db08965289d 7 weeks ago 841 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 7 weeks ago 104 MB
fluxcapacitor/stream-kafka-0.10 latest f67750239f4d 2 months ago 1.14 GB
fluxcapacitor/pipeline latest f6afd6c5745b 2 months ago 11.2 GB
gcr.io/google-containers/kube-addon-manager v6.1 59e1315aa5ff 3 months ago 59.4 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB
gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB
gcr.io/google_containers/pause-amd64
</code></pre>
| Kapil Gupta | <p>As the <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="noreferrer">handbook</a> describes, you can reuse the Docker daemon from Minikube with <code>eval $(minikube docker-env)</code>.</p>
<p>So to use an image without uploading it, you can follow these steps:</p>
<ol>
<li>Set the environment variables with <code>eval $(minikube docker-env)</code></li>
<li>Build the image with the Docker daemon of Minikube (eg <code>docker build -t my-image .</code>)</li>
<li>Set the image in the pod spec like the build tag (eg <code>my-image</code>)</li>
<li>Set the <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="noreferrer"><code>imagePullPolicy</code></a> to <code>Never</code>, otherwise Kubernetes will try to download the image (see the sketch right after this list).</li>
</ol>
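<p>Putting steps 3 and 4 together, a minimal pod spec sketch (names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: my-image          # the tag you built against Minikube's Docker daemon
    imagePullPolicy: Never   # never pull, use the locally built image
</code></pre>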
<p><strong>Important note:</strong> You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use, since it only sets the environment variables for the current shell session.</p>
| svenwltr |
<p><strong>Updated</strong></p>
<p>So, I followed the <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html" rel="nofollow noreferrer">AWS docs</a> on how to setup an EKS cluster with Fargate using the <a href="https://eksctl.io/usage/fargate/" rel="nofollow noreferrer">eksctl</a> tool. That all went smoothly but when I get to the part where I deploy my actual app, I get no endpoints and the ingress controller has no address associated with it. As seen here:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
testapp-ingress * 80 129m
</code></pre>
<p>So, I can't hit it externally. But the test app (2048 game) had an address from the elb associated with the ingress. I thought it might be the subnet-tags as suggested <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/771#issuecomment-444965502" rel="nofollow noreferrer">here</a> and my subnets weren't tagged the right way so I tagged them the way suggested in that article. Still no luck. </p>
<p>This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the alb: <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps</a></p>
<p>This is the alb article I've followed: <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html</a></p>
<p>I followed the steps to deploy the sample app 2048 and that works just fine. I've made my configs very similar and it should work. I've followed all of the steps. Here are my <strong>old</strong> configs, new config below:</p>
<pre><code>deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp-deployment"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
name: "testapp-service"
namespace: "testapp-qa"
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: testapp-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp-service"
servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
</code></pre>
<p>Here are some of the logs from the ingress controller>></p>
<pre><code>E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
</code></pre>
<hr>
<p>Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my service for the alb ingress. </p>
<p>Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's <code>/healthcheck</code>. The logs:</p>
<pre><code>E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
</code></pre>
<hr>
<p><strong>Update</strong></p>
<p>I've updated my config. I don't have any more errors, but I'm still unable to hit my endpoints to test if my app is accepting traffic. It might have something to do with Fargate or something on the AWS side that I'm not seeing. Here's my updated config:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
name: "testapp"
namespace: "testapp-qa"
annotations:
alb.ingress.kubernetes.io/target-type: ip
spec:
ports:
- port: 80
targetPort: 9898
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
labels:
app: testapp
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp"
servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
</code></pre>
| Kryten | <p>In your service, try adding the following annotation:</p>
<pre><code> annotations:
alb.ingress.kubernetes.io/target-type: ip
</code></pre>
<p>And also you'd need to explicitly tell the Ingress resource via the <code>alb.ingress.kubernetes.io/healthcheck-path</code> annotation where/how to perform the health checks for the target group. See the ALB Ingress controller <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/" rel="nofollow noreferrer">docs</a> for the annotation semantics.</p>
| Michael Hausenblas |
<p>I have a config file named "pod.yaml" for making a pod like below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: comet-app
    image: gcr.io/my-project/my-app:v2
    ports:
    - containerPort: 5000
</code></pre>
<p>and a config file named "service.yaml" for running a service in that "myapp" pod.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  selector:
    run: myapp
</code></pre>
<p>When I run </p>
<pre><code> kubectl apply -f pod.yaml
kubectl apply -f service.yaml
</code></pre>
<p>The 'myapp' service is created but I couldn't access my website by the internal ip and it returned ERR_CONNECTION_TIMED_OUT.</p>
<pre><code>NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.xx.xxx.1     <none>          443/TCP        11d
myapp        LoadBalancer   10.xx.xxx.133   35.xxx.xx.172   80:30273/TCP   3s
</code></pre>
<p>But when I deleted that service and re-created it by exposing a service with the below command, everything worked well and I could access my website via the external IP.</p>
<pre><code> kubectl expose pod myapp --type=LoadBalancer --port=80 --target-port=5000
</code></pre>
<p>Could anyone explain it for me and tell me what is wrong in my service.yaml?</p>
| Quoc Lap | <p>The problem with <code>service.yaml</code> is that the selector is wrong. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">How it works</a> is that a service by default routes traffic to pods with a certain label. Your pod has the label <code>app: myapp</code> whereas in the service your selector is <code>run: myapp</code>. So, changing <code>service.yaml</code> to the following should solve the issue:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 5000
selector:
app: myapp
</code></pre>
| Michael Hausenblas |
<p>I have a Kubernetes cluster with 2 containers running in a single workload. </p>
<p>One container is running a Flask server application and the other is running an angular application. I need to have this pod set up in a way where both applications can communicate with each other within the localhost. I need the angular container which is exposed in port 4200 to communicate with the unexposed flask server which is on port 5000. I am stuck when it comes to having these containers communicate within the pod. </p>
| Jithin Thomas | <p>Rather than localhost (<code>127.0.0.1</code>), make sure your flask server is reachable via <em>any</em> local IP, that is, <code>app.run(host='0.0.0.0')</code>.</p>
| Michael Hausenblas |
<p>I have a Microk8s cluster running gitea, harbor and droneci. Everything is hosted under *.dev.mydomain.com and there is a wildcard certificate for that. The certificate is signed using a private CA.</p>
<p>I'm trying to push the CA certificate to the Pods running the Drone CI builds such that they can push/pull from Gitea and Harbor while also being able to connect to external sources to (to fetch other docker images from dockerhub for example).</p>
<p>DroneCI and the drone runner are installed using Helm. I have tried the following in the <code>values.yaml</code> file for the runner:</p>
<pre><code>DRONE_RUNNER_VOLUMES: "/sslcerts:/etc/ssl/certs"
</code></pre>
<p>This overwrites the <code>/etc/ssl/certs/</code> folder in the runner pod. Any requests made from the pod to harbor or gitea work, any requests to anything else fail with error <code>x509 certificate signed by unknown authority</code></p>
<p>I also tried</p>
<pre><code>DRONE_RUNNER_VOLUMES: "/sslcerts/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt"
</code></pre>
<p>This returned the error <code>mounting "/sslcerts/ca-certificates.crt" to rootfs at "/etc/ssl/certs/ca-certificates.crt" caused: mount through procfd: not a directory: unknown"</code></p>
<p>Any ideas on how to go about what I'm trying to do? Thanks!</p>
| Constantine Loukas | <p>As the runners are Alpine Linux based all you should have to do is to mount your certificates in the <code>/usr/local/share/ca-certificates/</code> folder (<em>not</em> in a subfolder but right into that folder).
Alpine should then add all certificates from there to <code>/etc/ssl/certs</code> for you.</p>
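<p>In terms of the <code>values.yaml</code> from your question, that would be something along these lines (a sketch, assuming your CA files still live under <code>/sslcerts</code> on the host):</p>
<pre><code>DRONE_RUNNER_VOLUMES: "/sslcerts:/usr/local/share/ca-certificates"
</code></pre>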
| omni |
<p>I need to create a shell script which examines the cluster
status.</p>
<p>I saw that <code>kubectl describe nodes</code> provides lots of data.
I could output it to JSON and then parse it, but maybe that's just overkill.
Is there a simple way with a <code>kubectl</code> command to get the status of the cluster? Just whether it's up or down.</p>
| Jenny M | <p>The least expensive way to check if you can reach the API server is <code>kubectl version</code>. In addition <code>kubectl cluster-info</code> gives you some more info.</p>
| Michael Hausenblas |
<p>There are some methods natively supported such as basic auth , X509 certificates and webhook tokens.</p>
<p>Is there some workaround/project to use LDAP for user authentication with Kubernetes. I need users to be grouped in LDAP , and then use role binding to bind the group to a specific namespace with a role.</p>
| Ijaz Ahmad | <p>Yes you can integrate with LDAP, for example:</p>
<ul>
<li>Using <a href="https://github.com/dexidp/dex#connectors" rel="noreferrer">dex</a></li>
<li>With Torchbox's <a href="https://github.com/torchbox/kube-ldap-authn" rel="noreferrer">kube-ldap-authn</a> (hint: read this <a href="https://icicimov.github.io/blog/virtualization/Kubernetes-LDAP-Authentication/" rel="noreferrer">post</a>)</li>
<li>Via <a href="https://github.com/keycloak/keycloak-gatekeeper" rel="noreferrer">keycloak</a></li>
</ul>
<p>Also, there's a nice intro-level <a href="https://medium.com/@pmvk/step-by-step-guide-to-integrate-ldap-with-kubernetes-1f3fe1ec644e" rel="noreferrer">blog post</a> to get you started.</p>
| Michael Hausenblas |
<p>I am trying to setup Fluent Bit for Kuberentes on EKS + Fargate. I was able to get logs all going to one general log group on Cloudwatch but now when I add fluent-bit.conf: | to the data: field and try to apply the update to my cluster, I get this error:</p>
<blockquote>
<p>for: "fluentbit-config.yaml": admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: fluent-bit.conf is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap</p>
</blockquote>
<p>What sticks out the most to me is that the error message is asking me to only provide output, filter or parser configurations.</p>
<p>It matches up with other examples I found online, but it seems like I do not have the fluent-bit.conf file on the cluster that I am updating or something. The tutorials I have followed do not mention installing a file so I am lost as to why I am getting this error.</p>
<p>My fluentbit-config.yaml file looks like this</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: aws-observability
labels:
aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
labels:
k8s-app: fluent-bit
data:
fluent-bit.conf: |
@INCLUDE input-kubernetes.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Parser docker
Tag logger
Path /var/log/containers/*logger-server*.log
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match logger
region us-east-1
log_group_name fluent-bit-cloudwatch
log_stream_prefix from-fluent-bit-
auto_create_group On
</code></pre>
| Frederick Haug | <p>As per <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="nofollow noreferrer">docs</a> (at the very bottom of that page and yeah, we're in the process of improving them, not happy with the current state) you have a couple of sections in there that are not allowed in the context of EKS on Fargate logging, more specifically what can go into the <code>ConfigMap</code>. What you want is something along the lines of the following (note: this is from an actual deployment I'm using, slightly adapted):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
data:
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match *
region eu-west-1
log_group_name something-fluentbit
log_stream_prefix fargate-
auto_create_group On
[OUTPUT]
Name es
Match *
Host blahblahblah.eu-west-1.es.amazonaws.com
Port 443
Index something
Type something_type
AWS_Auth On
AWS_Region eu-west-1
tls On
</code></pre>
<p>With this config, you're streaming logs to both CW and AES, so feel free to drop the second OUTPUT section if not needed. However, you notice that there can not be the other sections that you had there such as <code>input-kubernetes.conf</code> for example.</p>
| Michael Hausenblas |
<p>I have tried setting max nodes per pod using the following upon install:</p>
<pre class="lang-sh prettyprint-override"><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--max-pods 250" sh -s -
</code></pre>
<p>However, the K3s server will then fail to load. It appears that the <code>--max-pods</code> flag has been deprecated per the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubernetes docs</a>:</p>
<blockquote>
<p>--max-pods int32 Default: 110</p>
<p>(DEPRECATED: This parameter should be set via the config file
specified by the Kubelet's <code>--config</code> flag. See
<a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/</a>
for more information.)</p>
</blockquote>
<p>So with K3s, where is that kubelet config file and can/should it be set using something like the above method?</p>
| Kyle | <p>To update your existing installation with an increased max-pods, add a kubelet config file into a k3s associated location such as <code>/etc/rancher/k3s/kubelet.config</code>:</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
</code></pre>
<p>edit <code>/etc/systemd/system/k3s.service</code> to change the k3s server args:</p>
<pre><code>ExecStart=/usr/local/bin/k3s \
server \
'--disable' \
'servicelb' \
'--disable' \
'traefik' \
'--kubelet-arg=config=/etc/rancher/k3s/kubelet.config'
</code></pre>
<p>reload systemctl to pick up the service change:</p>
<p><code>sudo systemctl daemon-reload</code></p>
<p>restart k3s:</p>
<p><code>sudo systemctl restart k3s</code></p>
<p>Check the output of describe nodes with <code>kubectl describe <node></code> and look for allocatable resources:</p>
<pre><code>Allocatable:
cpu: 32
ephemeral-storage: 199789251223
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 131811756Ki
pods: 250
</code></pre>
<p>and a message noting that allocatable node limit has been updated in Events:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 20m kube-proxy Starting kube-proxy.
Normal Starting 20m kubelet Starting kubelet.
...
Normal NodeNotReady 7m52s kubelet Node <node> status is now: NodeNotReady
Normal NodeAllocatableEnforced 7m50s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 7m50s kubelet Node <node> status is now: NodeReady
</code></pre>
| dols |
<p>Is it possible to visualize kubernetes topology and see it update on-the-fly as objects are added/deleted/linked?</p>
<p>I saw a video at <a href="https://www.youtube.com/watch?v=38SNQPhsGBk" rel="nofollow noreferrer">https://www.youtube.com/watch?v=38SNQPhsGBk</a> where service/pods show up as icons on a graph. For example, see</p>
<p><a href="https://i.stack.imgur.com/Ol4SQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ol4SQ.png" alt="enter image description here"></a></p>
<p>I am new to kubernetes and have installed minikube. How do I visualize my cluster's topology? Thank you.</p>
| user674669 | <p>There are many options but the one I like most is <a href="https://www.weave.works/oss/scope/" rel="nofollow noreferrer">Weave Scope</a> where you get visualizations such as:</p>
<p><a href="https://i.stack.imgur.com/lTFKn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lTFKn.jpg" alt="Weave scope screen shot" /></a><br />
<sub>(source: <a href="https://images.contentstack.io/v3/assets/blt300387d93dabf50e/blt94bf2945b508e588/5b8fb098fffba0957b6ea7e6/download" rel="nofollow noreferrer">contentstack.io</a>)</sub></p>
| Michael Hausenblas |
<p>For our use-case, we need to access a lot of services via NodePort. By default, the NodePort range is 30000-32767. With <strong>kubeadm</strong>, I can set the port range via <em>--service-node-port-range</em> flag.</p>
<p>We are using Google Kubernetes Engine (GKE) cluster. How can I set the port range for a GKE cluster?</p>
| Huy Hoang Pham | <p>In GKE, the control plane is managed by Google. This means you don't get to set things on the API Server yourself. That being sad, I <em>believe</em> you can use the <code>kubemci</code> CLI tool to achieve it, see <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress" rel="nofollow noreferrer">Setting up a multi-cluster Ingress</a>.</p>
| Michael Hausenblas |
<p>I have to set up a monitoring environment for my EKS cluster.
Prometheus is running on an external node and I am trying to use the node-exporter DaemonSet to get metrics.
But in the Prometheus targets view I am not able to see any targets other than localhost.</p>
<p><strong>Kubernetes_sd_config block</strong></p>
<pre><code>global:
scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 15s
static_configs:
- targets: ['localhost:9100']
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
action: keep
regex: default;kubernetes;https
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- job_name: 'kubernetes-kube-state'
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: pod
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- source_labels: [__meta_kubernetes_pod_label_grafanak8sapp]
regex: .*true.*
action: keep
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: ['__meta_kubernetes_pod_label_daemon', '__meta_kubernetes_pod_node_name']
regex: 'node-exporter;(.*)'
action: replace
target_label: nodename
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name]
regex: (.+);(.+)
target_label: __metrics_path__
replacement: /api/v1/namespaces/${1}/pods/${2}/proxy/metrics
###################################################################################
# Scrape config for nodes (kubelet). #
# #
# Rather than connecting directly to the node, the scrape is proxied though the #
# Kubernetes apiserver. This means it will work if Prometheus is running out of #
# cluster, or can't connect to nodes for some other reason (e.g. because of #
# firewalling). #
###################################################################################
- job_name: 'kubernetes-kubelet'
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: node
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: node
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
###################################################################################
# Example scrape config for service endpoints. #
# #
# The relabeling allows the actual service scrape endpoint to be configured #
# for all or only some endpoints. #
###################################################################################
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
#########################################################################################
# Example scrape config for probing services via the Blackbox Exporter. #
# #
# The relabeling allows the actual service scrape endpoint to be configured #
# for all or only some services. #
#########################################################################################
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
regex: (.+);(.+)
target_label: __metrics_path__
replacement: /api/v1/namespaces/$1/services/$2/proxy/metrics
##################################################################################
# Example scrape config for pods #
# #
# The relabeling allows the actual pod scrape to be configured #
# for all the declared ports (or port-free target if none is declared) #
# or only some ports. #
##################################################################################
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__address__, __meta_kubernetes_pod_annotation_example_io_scrape_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pods
- job_name: 'kubernetes-service-endpoints-e'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (\d+)
target_label: __meta_kubernetes_pod_container_port_number
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
regex: ()
target_label: __meta_kubernetes_service_annotation_prometheus_io_path
replacement: /metrics
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_pod_container_port_number, __meta_kubernetes_service_annotation_prometheus_io_path]
target_label: __metrics_path__
regex: (.+);(.+);(.+);(.+)
replacement: /api/v1/namespaces/$1/services/$2:$3/proxy$4
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: instance
</code></pre>
<p>This is the Prometheus.yml file that I have on my prometheus instance.</p>
<p><strong>Prometheus instance logs /var/log/messages</strong></p>
<pre><code>Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.655Z caller=log.go:124 component=k8s_client_runtime level=debug func=Verbose.Infof msg="Listing and watching *v1.Endpoints from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167"
Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.676Z caller=log.go:124 component=k8s_client_runtime level=debug func=Infof msg="GET https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/endpoints?limit=500&resourceVersion=0 in 20 milliseconds"
Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.676Z caller=log.go:124 component=k8s_client_runtime level=error func=ErrorDepth msg="pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \"https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/endpoints?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=debug func=Verbose.Infof msg="Listing and watching *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=debug func=Infof msg="GET https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/pods?limit=500&resourceVersion=0 in 0 milliseconds"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=error func=ErrorDepth msg="pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/pods?limit=500&resourceVersion=0\": unable to read authorization credentials file /etc/prometheus/token: open /etc/prometheus/token: no such file or directory"
</code></pre>
| meghashukla | <p>The logs you shared point to the problem:</p>
<pre><code>... unable to read authorization credentials file /etc/prometheus/token: open /etc/prometheus/token: no such file or directory"
</code></pre>
<p>The token file for in-cluster workloads is by default mounted at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>, but since you mentioned Prometheus is running "on external node" (it's not clear what you mean by this), that default may or may not be useful or possible to change in your setup.</p>
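<p>If keeping Prometheus outside the cluster is what you want, one sketch (assuming a service account named <code>prometheus</code> with the necessary RBAC already exists) is to create a long-lived token secret for it and copy the resulting token value to <code>/etc/prometheus/token</code> on the Prometheus host:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: prometheus-token
  namespace: default                               # placeholder namespace
  annotations:
    kubernetes.io/service-account.name: prometheus # assumption: this SA exists
type: kubernetes.io/service-account-token          # the token controller fills in the token
</code></pre>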
| Michael Hausenblas |
<p>We have an openshift container platform url that contains multiple projects like</p>
<ul>
<li>project1</li>
<li>project2</li>
<li>project3</li>
</ul>
<p>Each project contains several pods that we are currently monitoring with NewRelic like </p>
<ul>
<li>pod1</li>
<li>pod2</li>
<li>pod3</li>
</ul>
<p>We are trying to implement Prometheus + Grafana for all these projects separately. </p>
<p>It's too confusing with online articles, as none of them describe a configuration like the one we have now.</p>
<p>Where do we start?</p>
<p>What do we add to docker images? </p>
<p>Is there any procedure to monitor the containers using cAdvisor on openshift?</p>
<p>Some say we need to add a maven dependency to the project. Some say we need to modify the code. Some say we need to add prometheus annotations for the docker containers. Some say add node-exporter. What is node-exporter in the first place? Is it another container that collects container metrics? Can I install it as part of my docker images? Can anyone point me to an article or something with a similar configuration?</p>
| Tywin Lannister | <p>Your question is pretty broad, so the answer will be the same :)
Just to clarify - in your question:</p>
<blockquote>
<p>implement Prometheus + Grafana for all these projects <strong>separately</strong></p>
</blockquote>
<p>Are you going to have a dedicated installation of Kubernetes and Prometheus + Grafana for each project? Or are you going to have one cluster for all of them?</p>
<p>In general, I think, the answer should be:</p>
<ol>
<li>Use Prometheus Operator as recommended (<a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a>)</li>
<li>Once the operator is installed, you'll be able to get most of your data just by config changes - for example, you will get Grafana and the Node Exporters in the cluster with simple config changes</li>
<li>In our case (we are not running OpenShift, but a vanilla k8s cluster) we are running multiple namespaces (like your projects), which have their own representation in Prometheus</li>
<li>To be able to monitor your pods' "applicational metrics", you need to use a <a href="https://prometheus.io/docs/instrumenting/clientlibs/" rel="nofollow noreferrer">Prometheus client</a> for your language, and to tell Prometheus to scrape the metrics (usually this is done via <a href="https://github.com/coreos/prometheus-operator#customresourcedefinitions" rel="nofollow noreferrer">ServiceMonitors</a>; see the sketch right after this list).</li>
</ol>
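<p>A minimal ServiceMonitor sketch (namespace, labels and port name are placeholders and depend on how your Prometheus instance is configured to select monitors):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: project1          # placeholder: one of your projects/namespaces
  labels:
    release: prometheus        # assumption: the label your Prometheus selects on
spec:
  selector:
    matchLabels:
      app: my-app              # matches the Service exposing the metrics port
  endpoints:
  - port: metrics              # the named port on that Service
    interval: 30s
</code></pre>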
<p>Hope this will shed some light.</p>
| evgenyl |
<p>I am using Kubernetes Service of type Cluster IP, which will expose a deployment.
In my container I want to use the Service IP (cluster IP). Is there any way I can get the IP Address inside the Pod/container? </p>
<p>Is it possible to get the cluster IP from Service name?</p>
| Karthik | <p>Yes, via the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">environment-level service discovery</a> mechanism. Note, however that any service that you want to access like this must be created <strong>before</strong> the pod itself has been launched, otherwise the environment variables are not populated.</p>
| Michael Hausenblas |
<p>What is the simplest way to find out the Availability of a K8s service over a period of time, lets say 24h. Should I target a pod or find a way to calculate service reachability</p>
| Nesim Pllana | <p>I'd recommend to not approach it from a binary (is it up or down) but from a "how long does it take to serve requests" perspective. In other words, phrase your availability in terms of SLOs. You can get a very nice automatically generated SLO-based alter rules from <a href="https://promtools.dev/alerts/latency" rel="nofollow noreferrer">PromTools</a>. One concrete example rule from there, showing the PromQL part:</p>
<pre class="lang-yaml prettyprint-override"><code>1 - (
sum(rate(http_request_duration_seconds_bucket{job="prometheus",le="0.10000000000000001",code!~"5.."}[30m]))
/
sum(rate(http_request_duration_seconds_count{job="prometheus"}[30m]))
)
</code></pre>
<p>Above captures the ratio of how long it took the service to serve non-500 (non-server errors, that is, assumed good responses) in less than 100ms to overall responses over the last 30 min with <code>http_request_duration_seconds</code> being the histogram, capturing the distribution of the requests of your service.</p>
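<p>Wrapped into an alerting rule it could look roughly like this (the 1% threshold and the 5m duration are placeholders you would derive from your SLO):</p>
<pre><code>groups:
- name: slo-latency
  rules:
  - alert: LatencySLOBurn
    expr: |
      1 - (
        sum(rate(http_request_duration_seconds_bucket{job="prometheus",le="0.10000000000000001",code!~"5.."}[30m]))
        /
        sum(rate(http_request_duration_seconds_count{job="prometheus"}[30m]))
      ) > 0.01
    for: 5m
    labels:
      severity: page
</code></pre>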
| Michael Hausenblas |
<p>I need to start kubernetes pods in a sequence like pod2 should start only when pod1 is up and running.</p>
<p>we can do this in <code>docker-compose.yml</code> using <code>depends_on</code></p>
| sam | <p>No, there is no built-in dependency management equivalent to <code>depends_on</code> available. In general, we assume loosely coupled services and as a good practice there should be no hard dependency in terms of start-up order, but retries and timeouts should be used. If you have to hardcode dependencies, you can use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a>. In your case, an init container in <code>pod2</code> could simply check in a loop whether <code>pod1</code> (or better: the service in front of it) is ready. The main container in <code>pod2</code> is guaranteed only to be launched if and when the init container exits successfully. </p>
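<p>A minimal sketch of that pattern (service name, port and path are placeholders for whatever fronts <code>pod1</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  initContainers:
  - name: wait-for-pod1
    image: busybox:1.36
    # loop until the service in front of pod1 answers
    command: ['sh', '-c', 'until wget -qO- http://pod1-service:8080/; do echo waiting for pod1; sleep 2; done']
  containers:
  - name: main
    image: my-app:latest       # placeholder for the pod2 workload image
</code></pre>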
| Michael Hausenblas |
<p>I'm moving an Apache Mesos application, where my configurations are based in JSON, to Kubernetes, where my configurations are based in YAML. Would the JSON configuration files work as a YAML file since YAML is a superset of JSON, or would I need to write a new YAML file?</p>
| emp440 | <p>Yes, JSON works as well; it's just more of a pain than YAML to write manually. Also, you may be able to use <a href="https://github.com/micahhausler/container-transform" rel="nofollow noreferrer">micahhausler/container-transform</a> to convert your Marathon specs to Kubernetes specs.</p>
| Michael Hausenblas |
<p>I have a Kubernetes Service that selects by doing:</p>
<pre><code>spec:
selector:
backend: nlp-server
</code></pre>
<p>If there are multiple <code>Pods</code> which match the selector, which <code>Pod</code> does the <code>Service</code> route a request to? </p>
<p>I am using the default <code>ClusterIP</code> setup. Search for "ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType." in the <a href="https://kubernetes.io/docs/concepts/services-networking/" rel="nofollow noreferrer">docs</a></p>
<p>If I want the Service to route to a Pod that makes sense (having lesser load),<br>
is the <a href="https://kubernetes.io/docs/concepts/services-networking/#internal-load-balancer" rel="nofollow noreferrer">internal load-balancer</a> what I need?</p>
| cryanbhu | <p>In a nutshell, no you don't need the internal load-balancer you linked to. The <code>Service</code> resource <em>is</em> indeed a load-balancer. Depending on the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">proxy mode</a> it could be round-robin or random. If you're going with the default (iptables-based proxy) it would be a <a href="http://kubernetesbyexample.com/services/" rel="noreferrer">random pod selected</a> every time you hit the virtual IP of the service.</p>
<p>Note: you <em>could</em> use the internal load-balancer type, typically in a cloud environment <a href="https://medium.com/google-cloud/internal-load-balancing-for-kubernetes-services-on-google-cloud-f8aef11fb1c4" rel="noreferrer">such as GKE</a>, for example to cut down on costs when all you need is cluster-internal connectivity, however they are (as far as I know) usually L4 load-balancers. </p>
| Michael Hausenblas |
<p>If a node loses communication with the master, will it continue to run its workload in a self-healing way?</p>
<p>For instance, if the master is unavailable and a pod exceeds its cpu limit and is killed, will the node independently restart the pod because that pod has already been scheduled on the node?</p>
| Dan Bowling | <p>Yes. The local (node) supervisor looking after your pods is the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> and while you can't change things while the connection to the API server is not available, the pods already scheduled on the node will continue to be supervised by the kubelet. In this context, an interesting but not super useful thing to know (for end-users) is that there are also so called <a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="nofollow noreferrer">static pods</a> which you can launch manually, on the node.</p>
| Michael Hausenblas |
<p>currently i used docker-compose to arrange my application that consists of 3 dockerimages - a postgresql database and 2 wildfly application servers (Frontend-ui, backend).</p>
<p>My <strong>docker-compose.yml</strong> looks like this:</p>
<pre><code>version: '3.0'
services:
my-webgui-service:
image: test/mywebgui
ports:
- "18081:8080"
links:
- my-app-service
my-app-service:
image: test/myapp
ports:
- "18080:8080"
- "29990:9990"
links:
- db-service
db-service:
image: test/postgres
ports:
- "15432:5432
</code></pre>
<p>Now, i would like to implement the same thing via kubernetes. </p>
<p>Is it possible to arrange this in a single YAML file that contains the configuration for the services, deployments and pods?
I thought that it is easier to manage automated deployments when not having separate yml files.</p>
<p>Is this a best practise? </p>
<p>Best regards, Shane</p>
| Shannon | <p>Yes it's possible, simply separate the different resources such as deployments, services, etc. with <code>---</code>. Concerning if it's a good practice or not: a matter of taste, rather. If you have all in one file it's more self-contained but for <code>kubectl apply -f</code> it doesn't really matter since it operates on directories as well.</p>
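<p>As a rough sketch of what such a single file could look like for one of your services (image name and ports taken from your compose file, everything else is boilerplate you would adapt):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-service
  template:
    metadata:
      labels:
        app: my-app-service
    spec:
      containers:
      - name: my-app-service
        image: test/myapp
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app-service
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>
<p>You would append the other deployments and services to the same file, each separated by <code>---</code>, and apply everything with a single <code>kubectl apply -f</code>.</p>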
| Michael Hausenblas |
<p>I basically want to find the hard eviction strategy that kubelet is currently using.<br>
I checked the settings in the /etc/systemd/system/kubelet.service file on my K8s node. In that file, the strategy is specified as follows:<br>
<code>--eviction-hard=nodefs.available<3Gi</code> </p>
<p>However, my pods seem to be evicted when nodefs.available is <10% (the default Kubernetes setting).
I have been unable to find a way to know the current parameters that are being used by Kubernetes.</p>
| Amanjeet Singh | <p>It is possible to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file" rel="noreferrer">dump the current kubelet configuration</a> using <code>kubectl proxy</code> along with the <code>/api/v1/nodes/${TARGET_NODE_FOR_KUBELET}/proxy/configz</code> path, details see linked Kubernetes docs.</p>
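<p>A minimal sketch of that (the node name is a placeholder you need to substitute with the output of <code>kubectl get nodes</code>):</p>
<pre><code>$ kubectl proxy --port=8001 &
$ NODE_NAME="name-of-your-node"
$ curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz"
</code></pre>
<p>The returned JSON contains the kubelet's effective configuration, including the <code>evictionHard</code> thresholds actually in use.</p>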
| Michael Hausenblas |
<p>I have a kubernetes setup with the configuration like below:</p>
<pre class="lang-yaml prettyprint-override"><code>#---
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
selector:
app: my-service
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8080
# Port to forward to inside the pod
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-service
spec:
replicas: 1
template:
metadata:
labels:
app: my-service
spec:
containers:
- name: my-service
image: my-custom-docker-regisry/my-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
</code></pre>
<p>and my ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /myservice
backend:
serviceName: myservice
servicePort: 80
</code></pre>
<p>What I am trying to do is pull the image from my Docker registry and run it in Kubernetes. I have configured one deployment and one service, and I expose the service to the outside with the ingress.</p>
<p>My minikube is running under IP 192.168.99.100, and when I try to access my application with <code>curl 192.168.99.100:80/myservice</code>, I get 502 Bad Gateway.</p>
<p>Does anyone have an idea why this happens, or did I do something wrong with the configuration? Thank you in advance!</p>
| Ock | <p>Your ingress targets this service:</p>
<pre><code> serviceName: myservice
servicePort: 80
</code></pre>
<p>but the service named <code>myservice</code> exposes port <code>8080</code> rather than <code>80</code>:</p>
<pre><code> ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8080
# Port to forward to inside the pod
targetPort: 80
</code></pre>
<p>Your ingress should point to one of the ports exposed by the service.</p>
<p>Also, the service itself targets port 80, but the pods in your deployment seem to expose port 8080, rather than 80:</p>
<pre><code> containers:
- name: my-service
image: my-custom-docker-regisry/my-service:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<p>So long story short, looks like you could swap <code>port</code> with <code>targetPort</code> in your <code>service</code> so that:</p>
<ul>
<li>the pods expose port <code>8080</code></li>
<li>the service exposes port <code>8080</code> of all the pods under service name <code>myservice</code> port <code>80</code>,</li>
<li>the ingress configures nginx to proxy your traffic to service <code>myservice</code> port <code>80</code>.</li>
</ul>
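<p>With that swap, the ports section of the <code>Service</code> would look roughly like this (a sketch keeping your names; the ingress can keep pointing at <code>servicePort: 80</code>):</p>
<pre><code> ports:
 - protocol: "TCP"
   # Port exposed by the service (referenced by the ingress)
   port: 80
   # Port the container actually listens on
   targetPort: 8080
</code></pre>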
| Kos |
<p>What happens if ReplicaSet_B and ReplicaSet_A update the same DB? I had hoped the pods in ReplicaSet_A would be stopped after taking a snapshot, but there is no explanation like this in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a>. I think it is assumed that the containers in the pods are running online applications. What if they are batch applications? The old pods belonging to the old ReplicaSet will keep updating the DBs in the old manner, which also raises a data migration issue.</p>
| Tolga Golelcin | <p>Yes. <code>ReplicaSets</code> (managed by <code>Deployments</code>) make two assumptions: 1. your workload is stateless, and 2. all pods are identical clones (other than their IP addresses). Now, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> address some aspects, for example, you can assign pods a certain identity (for example: leader or follower), but really only work for specific workloads. Also, the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> abstractions in Kubernetes won't really help you a lot concerning stateful workloads. What you likely are looking at is a custom controller or operator. We're collecting good practices and tooling via <a href="https://stateful.kubernetes.sh/" rel="nofollow noreferrer">stateful.kubernetes.sh</a>, maybe there's something there that can be of help?</p>
| Michael Hausenblas |
<p>I have a problem: my pods in the minikube cluster are not able to reach the service through its domain name.</p>
<p>To run my minikube I use the following commands (running on Windows 10):<br>
<code>minikube start --vm-driver hyperv;</code><br>
<code>minikube addons enable kube-dns;</code><br>
<code>minikube addons enable ingress;</code> </p>
<p>This is my <code>deployment.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: hello-world
name: hello-world
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-world
spec:
containers:
- image: karthequian/helloworld:latest
imagePullPolicy: Always
name: hello-world
ports:
- containerPort: 80
protocol: TCP
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
</code></pre>
<p>this is the <code>service.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
run: hello-world
name: hello-world
namespace: default
selfLink: /api/v1/namespaces/default/services/hello-world
spec:
ports:
- nodePort: 31595
port: 80
protocol: TCP
targetPort: 80
selector:
run: hello-world
sessionAffinity: None
type: ExternalName
externalName: minikube.local.com
status:
loadBalancer: {}
</code></pre>
<p>this is my <code>ingress.yaml</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: minikube-local-ingress
spec:
rules:
- host: minikube.local.com
http:
paths:
- path: /
backend:
serviceName: hello-world
servicePort: 80
</code></pre>
<p>So, if I go inside the <code>hello-world</code> pod and from <code>/bin/bash</code> run <code>curl minikube.local.com</code> or <code>nslookup minikube.local.com</code>, the name does not resolve.</p>
<p>How can I make sure that the pods can resolve the DNS name of the service?
I know I can specify <code>hostAlias</code> in the deployment definition, but is there an automatic way that will update the DNS of Kubernetes?</p>
| GrimSmiler | <p>So, you want to expose your app on Minikube? I've just tried it using the default <code>ClusterIP</code> service type (essentially, removing the <code>ExternalName</code> stuff you had) and with <a href="https://gist.github.com/mhausenblas/37e43f1755f2895a2f87719bb4144daa" rel="nofollow noreferrer">this YAML file</a> I can see your service on <code>https://192.168.99.100</code> where the Ingress controller lives:</p>
<p><a href="https://i.stack.imgur.com/9FhbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9FhbL.png" alt="screen shot of hello-world app in browser"></a></p>
<p>The service now looks like so:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
run: hello-world
name: hello-world
spec:
ports:
- port: 80
targetPort: 80
selector:
run: hello-world
</code></pre>
<p>And the ingress is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: minikube-local-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: hello-world
servicePort: 80
</code></pre>
<p>Note: Within the cluster your service is now available via <code>hello-world.default</code> (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say <code>hello-world.local</code> to 192.168.99.100 in your <code>/etc/hosts</code> file on your host machine. </p>
<p>Alternatively, if you change the <code>Ingress</code> resource to <code>- host: hello-world.local</code> then you can (from the host) reach your service using this FQDN like so: <code>curl -H "Host: hello-world.local" 192.168.99.100</code>.</p>
| Michael Hausenblas |
<p>I have Windows 11 Home (which does not allow Hyper-V, only Pro edition does).
Installed WSL2 and Docker Desktop.</p>
<p>Installed Minikube using Chocolatey but it refused to start.
Searching on SO, I found this advice in several posts, but it failed to work.</p>
<pre><code>PS C:\WINDOWS\system32> docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/prune": open //./pipe/docker_engine: The system cannot find the file specified.
PS C:\WINDOWS\system32> minikube delete
* Removed all traces of the "minikube" cluster.
PS C:\WINDOWS\system32> minikube start --driver=docker
* minikube v1.25.2 on Microsoft Windows 11 Home 10.0.22000 Build 22000
* Using the docker driver based on user configuration
X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/version": open //./pipe/docker_engine: The system cannot find the file specified.
* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
</code></pre>
| likejudo | <p>I thought of trying to have <em>Docker Desktop</em> already running <strong>before</strong> I start minikube.</p>
<p>From the Windows Start menu, I ran <em>Docker Desktop in Administrator mode</em>.</p>
<p>Now I ran the command again
to remove old stuff,</p>
<pre><code>PS C:\WINDOWS\system32> minikube delete
* Removed all traces of the "minikube" cluster.
</code></pre>
<p>and now specify the docker driver</p>
<pre><code>PS C:\WINDOWS\system32> minikube start --driver=docker
* minikube v1.25.2 on Microsoft Windows 11 Home 10.0.22000 Build 22000
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
> gcr.io/k8s-minikube/kicbase: 379.06 MiB / 379.06 MiB 100.00% 10.23 MiB p
* Creating docker container (CPUs=2, Memory=3000MB) ...
* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
- kubelet.housekeeping-interval=5m
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
</code></pre>
<p>Now verify minikube status</p>
<pre class="lang-bash prettyprint-override"><code>PS C:\WINDOWS\system32> minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
</code></pre>
<p>I don't know kubernetes as I am learning it, but it appears to have worked. I hope this will be useful to someone so they do not have to go off and spend $99 to upgrade to Windows Pro - as I was going to do if this did not work.</p>
<p><strong>Update</strong>: Here is a link with more details <a href="https://juwo.blogspot.com/" rel="nofollow noreferrer">How to run Kubernetes on Windows 11</a></p>
| likejudo |
<p>Is <a href="https://jfrog.com/container-registry/" rel="noreferrer">the registry</a> a pivot for the JFrog product portfolio or is it some set of additional capabilities? The functionality is very interesting either way, but it would be nice to understand the details.</p>
| rhatr | <p>In a nutshell, <a href="https://jfrog.com/container-registry/" rel="noreferrer">JFrog Container Registry</a> <strong>is</strong> Artifactory. It is the same codebase, the same architecture and mostly the same features.
You get:</p>
<ul>
<li>Unlimited Docker and Helm registries*
<ul>
<li>local registries for your images</li>
<li>remote proxies of remote registries</li>
<li>virtual registries (a single URL to access any combination of other registries)</li>
</ul></li>
<li>Free and immediate promotion (you can move your images between registries with an API call, without pulling/pushing)</li>
<li>Build metadata with Artifactory Query Language
and other stuff you might know from Artifactory, like the flexible and intuitive RBAC. We will also introduce security scanning with JFrog Xray soon.</li>
</ul>
<p>Best difference? The JFrog Container Registry is free, both on-prem and in the cloud!</p>
<p><sup>*We call them “repositories” in Artifactory</sup></p>
<hr>
<p><sup>I am with <a href="http://jfrog.com" rel="noreferrer">JFrog</a>, the company behind <a href="/questions/tagged/artifactory" class="post-tag" title="show questions tagged 'artifactory'" rel="tag">artifactory</a> and <a href="/questions/tagged/jfrog-container-registry" class="post-tag" title="show questions tagged 'jfrog-container-registry'" rel="tag">jfrog-container-registry</a>, see <a href="https://stackoverflow.com/users/402053/jbaruch">my profile</a> for details and links.</sup></p>
| JBaruch |
<p>We are planning to set up a highly available Jenkins installation on a container platform using Kubernetes. We are looking at running one active master and another master in standby mode.
The Jenkins data volume is going to be stored on global storage that is shared between the two master containers.</p>
<p>If the active master becomes unavailable, requests should fail over to the other master, and the agents should communicate only with the active master.</p>
<p>How do we accomplish a Jenkins HA setup in active/passive mode in Kubernetes? Please provide your suggestions.</p>
<p>We would like to achieve as shown in the diagram from below link</p>
<p><a href="https://endocode.com/img/blog/jenkins-ha-setup_concept.png" rel="nofollow noreferrer">https://endocode.com/img/blog/jenkins-ha-setup_concept.png</a></p>
| P Ekambaram | <p>This contradicts with how one should IMHO run applications in Kubernetes. Active/passive is a concept for the past century.</p>
<p>Instead, configure a health check for the Jenkins Deployment. If that fails, Kubernetes will automatically kill the pod and start a replacement (which will be available only a few seconds after detecting the active one being unhealthy).</p>
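<p>A minimal sketch of such a check as a liveness probe on the Jenkins container (assuming Jenkins serves HTTP on port 8080 and the <code>/login</code> page is reachable without authentication; adjust to your setup):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /login
    port: 8080
  initialDelaySeconds: 120
  periodSeconds: 10
  failureThreshold: 5
</code></pre>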
| StephenKing |
<p>So I'm setting up a NATS cluster at work in OpenShift. I can easily get things to work by having each NATS server instance broadcast its Pod IP to the cluster. The guy I talked to at work strongly advised against using the Pod IP and suggested using the Pod name. In his email he said something about what happens if a pod restarts. But when I tried deleting the pod, the new Pod IP showed up in the list of connect URLs for NATS and it worked fine. I know Kubernetes has DNS and you can use a headless service, but that seems somewhat flaky to me. The Pod IP works.</p>
| Fred Ma | <p>I believe "the guy at work" has a point, to a certain extent, but it's hard to tell to which extent it's cargo-culting and what is half knowledge. The point being: the pod IPs are not stable, that is, every time a pod gets re-launched (on the same node or somewhere else, doesn't matter) it will get a new IP from the pod CIDR-range assigned.</p>
<p>Now, services provide stability by introducing a virtual IP (VIP): this acts as a cluster-internal mini-load balancer sitting in front of pods and yes, the recommended way to talk to pods, in the general case, is via services. Otherwise, you'd need to keep track of the pod IPs out-of-band, no bueno.</p>
<p>Bottom-line: if NATS manages that for you, keeps track and maps pod IPs then fine, use it, no harm done.</p>
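<p>For completeness, a sketch of what such a service could look like for the NATS client port (the name and label are assumptions, adjust them to your deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nats
spec:
  selector:
    app: nats
  ports:
  - name: client
    port: 4222
    targetPort: 4222
</code></pre>
<p>Clients in the same namespace can then simply connect to <code>nats:4222</code> instead of tracking individual pod IPs.</p>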
| Michael Hausenblas |
<p>In CI, with the gcp auth plugin, I was using <code>gcloud auth activate-service-account ***@developer.gserviceaccount.com --key-file ***.json</code> prior to executing kubectl commands.
Now with gke-gcloud-auth-plugin I can’t find any equivalent way to use a GCP service account key file.
I've installed <code>gke-gcloud-auth-plugin</code> and <code>gke-gcloud-auth-plugin --version</code> is giving me <code>Kubernetes v1.25.2-alpha+ae91c1fc0c443c464a4c878ffa2a4544483c6d1f</code>
Would you know if there’s a way?</p>
<p>I tried to add this command:
<code>kubectl config set-credentials my-user --auth-provider=gcp</code>
But I still get:</p>
<pre><code>error: The gcp auth plugin has been removed. Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
</code></pre>
| bonomo | <p>You will need to set the env variable to use the new plugin before doing the <code>get-credentials</code>:</p>
<pre class="lang-bash prettyprint-override"><code>export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
</code></pre>
<p>I would not have expected the env variable to still be required (now that the gcp auth plugin is completely deprecated) - but it seems it still is.</p>
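<p>In a CI job you can still authenticate with the service account key file first, exactly as before, and then fetch the credentials (a sketch; the key file path is a placeholder):</p>
<pre class="lang-bash prettyprint-override"><code>gcloud auth activate-service-account --key-file=key.json
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER --region $REGION --project $PROJECT
kubectl get nodes
</code></pre>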
<p>Your kubeconfig will end up looking like this if the new auth provider is in use.</p>
<pre class="lang-yaml prettyprint-override"><code>...
- name: $NAME
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: gke-gcloud-auth-plugin
installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
provideClusterInfo: true
</code></pre>
| Ben Walding |
<p>My namespaces have some custom metadata labels. Some have the labels, some don't. Is there any way to get the namespaces which have a particular label using kubectl?</p>
| codec | <p>Yes. Like so:</p>
<pre><code>$ kubectl create ns nswithlabels
$ kubectl label namespace nswithlabels this=thing
$ kubectl describe ns/nswithlabels
Name: nswithlabels
Labels: this=thing
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
$ kubectl get ns -l=this
NAME STATUS AGE
nswithlabels Active 6m
</code></pre>
<p>Note: I could have also used <code>-l=this=thing</code> in the last command to specify both key and value required to match.</p>
| Michael Hausenblas |
<p>I'm doing some tutorials using k3d (k3s in docker) and my yml looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type: NodePort
selector:
app: nginx
ports:
- name: http
port: 80
targetPort: 80
</code></pre>
<p>With the resulting node port being 31747:</p>
<pre><code>:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 18m
nginx NodePort 10.43.254.138 <none> 80:31747/TCP 17m
:~$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.18.0.2:6443 22m
nginx 10.42.0.8:80 21m
</code></pre>
<p>However wget does not work:</p>
<pre><code>:~$ wget localhost:31747
Connecting to localhost:31747 ([::1]:31747)
wget: can't connect to remote host: Connection refused
:~$
</code></pre>
<p>What have I missed? I've ensured that my labels all say <code>app: nginx</code> and my <code>containerPort</code>, <code>port</code> and <code>targetPort</code> are all 80</p>
| A G | <p>The question is, is the NodePort range mapped from the host to the docker container acting as the node. The command <code>docker ps</code> will show you, for more details you can <code>docker inspect $container_id</code> and look at the <code>Ports</code> attribute under <code>NetworkSettings</code>. I don't have k3d around, but here is an example from kind.</p>
<pre><code>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2225b83a73 kindest/node:v1.17.0 "/usr/local/bin/entr…" 18 hours ago Up 18 hours 127.0.0.1:32769->6443/tcp kind-control-plane
</code></pre>
<pre><code>$ docker inspect kind-control-plane
[
{
# [...]
"NetworkSettings": {
# [...]
"Ports": {
"6443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
]
},
# [...]
}
]
</code></pre>
<p>If it is not, working with <code>kubectl port-forward</code> as suggested in the comment is probably the easiest approach. Alternatively, start looking into Ingress. Ingress is the preferred method to expose workloads outside of a cluster, and in the case of kind, <a href="https://kind.sigs.k8s.io/docs/user/ingress/" rel="noreferrer">they have support</a> for Ingress. It seems k3d also has a way to <a href="https://k3d.io/usage/guides/exposing_services/" rel="noreferrer">map the ingress port to the host</a>.</p>
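<p>A quick sketch of the port-forward approach for your service (the local port 8080 is an arbitrary choice):</p>
<pre><code>$ kubectl port-forward service/nginx 8080:80
# in another terminal
$ wget -qO- localhost:8080
</code></pre>
<p>Alternatively, you can recreate the k3d cluster with a host port mapped to its loadbalancer (something along the lines of <code>k3d cluster create mycluster -p "8081:80@loadbalancer"</code>, see the k3d guide linked above) and expose the workload via an Ingress rather than a NodePort.</p>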
| pst |
<p>I am trying to delete <a href="https://github.com/tektoncd/pipeline" rel="nofollow noreferrer">tekton</a> kubernetes resources in the context of a service account with an on-cluster kubernetes config, and am experiencing errors specific to accessing <code>deletecollection</code> with all tekton resources. An example error is as follows:</p>
<blockquote>
<p>pipelines.tekton.dev is forbidden: User "system:serviceaccount:my-account:default" cannot deletecollection resource "pipelines" in API group "tekton.dev" in the namespace "my-namespace"</p>
</blockquote>
<p>I have tried to apply <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> to help here, but continue to experience the same errors. My RBAC attempt is as follows:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: my-role
namespace: my-namespace
rules:
- apiGroups: ["tekton.dev"]
resources: ["pipelines", "pipelineruns", "tasks", "taskruns"]
verbs: ["get", "watch", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-role-binding
namespace: my-namespace
subjects:
- kind: User
name: system:serviceaccount:my-account:default
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: my-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>These RBAC configurations continue to result in the same error. Is this, or similar necessary? Are there any examples of RBAC when interfacing with, specifically deleting, tekton resources?</p>
| scniro | <p>Given two namespaces <code>my-namespace</code> and <code>my-account</code> the default service account in the <code>my-account</code> namespace is correctly granted permissions to the <code>deletecollection</code> verb on <code>pipelines</code> in <code>my-namespace</code>.</p>
<p>You can verify this using <code>kubectl auth can-i</code> like this after applying:</p>
<pre><code>$ kubectl -n my-namespace --as="system:serviceaccount:my-account:default" auth can-i deletecollection pipelines.tekton.dev
yes
</code></pre>
<p>Verify that you have actually applied your RBAC manifests.</p>
| pst |
<p>I have a few different containerized web apps running on Azure Container Instance (ACI). I recently noticed that some of these containers just restart with no apparent reason once in a month or so. Since the restarts are on different apps/containers each time, I have no reason to suspect that the apps are crashing.</p>
<p>The restart policy on all of them are set to "Always".</p>
<p>Is it normal or expected for the containers to restart even when there is no app crash? Perhaps when Azure does maintenance on the host machines or maybe a noisy neighbor on the same host causing a pod movement to another host?</p>
<p>(I am in the process of adding a log analytics workspace so that I can view the logs before the restart. Since the restarts are so infrequent, I wouldn't have any logs to look at for quite some time.)</p>
| rahulmohan | <p>Same here</p>
<p>I've contacted MS support and got the response that, <strong>by design</strong>, ACI maintenance can restart the hosts, so <strong>ACI can't be expected to run for weeks uninterrupted</strong>.</p>
<p>Their recommendation is to</p>
<ul>
<li>adapt your app to be resilient (so you don't care about restarts)</li>
<li>use AKS to gain full control over lifecycle</li>
<li>use VM as host for your app with appropriate policies (no updates / restarts...)</li>
</ul>
<p>For me this was a deal-breaker since I couldn't find this info anywhere. I ended up with a VM.</p>
| Tomas |
<p>I have 3 applications in separate directories, each handled with Kubernetes manifest files.</p>
<p>I want to use Terraform to deploy all the applications from their different directories.</p>
<p>The structure of the applications show below.</p>
<pre><code>app-1
L kubernetes_manifest
L deployment.yaml. ## use Dockerfile to run the app
L app
L app.py
Dockerfile. ## wrap the app
app-2
L kubernetes_manifest
L deployment.yaml. ## use Dockerfile to run the app
L app
L app.py
Dockerfile. ## wrap the app
app-3
L kubernetes_manifest
L deployment.yaml. ## use Dockerfile to run the app
L app
L app.py
Dockerfile. ## wrap the app
app_terraform
L main.tf
</code></pre>
<p>main.tf</p>
<pre><code>resource "google_container_cluster" "test_cluster" {
project = "test"
name = "test-cluster"
initial_node_count = 1
}
</code></pre>
<p>Is there a way to deploy all the applications in <code>main.tf</code>?</p>
| Eric Lee | <p>I maintain the <a href="https://registry.terraform.io/providers/kbst/kustomization/latest" rel="nofollow noreferrer">Kustomization Provider</a> which allows you to use native Kubernetes YAML in Terraform.</p>
<p>You can either use the provider directly or use a convenience module I additionally provide. I maintain both as part of my <a href="https://www.kubestack.com/" rel="nofollow noreferrer">Terraform framework for AKS, EKS and GKE</a>.</p>
<p>The module approach may be more convenient, because then you can simply call the module once per app and have a good balance between maintainable code and no unnecessary coupling between the apps. The module also handles the <a href="https://registry.terraform.io/providers/kbst/kustomization/latest/docs/resources/resource#terraform-limitation" rel="nofollow noreferrer">ids_prio approach</a>, which is more robust for larger numbers of K8s resources, out of the box.</p>
<h1>Option 1: Provider</h1>
<p><code>app-1.tf</code>:</p>
<pre><code>data "kustomization_overlay" "app_1" {
resources = [
"${path.root}/app-1/kubernetes_manifest/deployment.yaml"
]
}
resource "kustomization_resource" "test" {
for_each = data.kustomization_build.app_1.ids
manifest = data.kustomization_build.app_1.manifests[each.value]
}
</code></pre>
<h1>Option 2: Module</h1>
<p><code>app-1.tf</code>:</p>
<pre><code>module "app_1" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
"${path.root}/app-1/kubernetes_manifest/deployment.yaml"
]
}
}
}
</code></pre>
| pst |
<p>I am trying to create a dynamic storage volume on Kubernetes in Alibaba Cloud.
First I created a storage class.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: alicloud-pv-class
provisioner: alicloud/disk
parameters:
type: cloud_ssd
regionid: cn-beijing
zoneid: cn-beijing-b
</code></pre>
<p>Then I tried creating a persistent volume claim as below.</p>
<pre><code>apiVersion: v1
kind: List
items:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: node-pv
spec:
accessModes:
- ReadWriteOnce
storageClassName: alicloud-pv-class
resources:
requests:
storage: 64Mi
</code></pre>
<p>Creation of persistence volume fails with the following error.</p>
<blockquote>
<p>Warning ProvisioningFailed 0s alicloud/disk alicloud-disk-controller-68dd8f98cc-z6ql5 5ef317c7-f110-11e8-96de-0a58ac100006 Failed to provision volume with StorageClass "alicloud-pv-class": Aliyun API Error: RequestId: 7B2CA409-3FDE-4BA1-85B9-80F15109824B Status Code: 400 Code: InvalidParameter Message: The specified parameter "Size" is not valid.</p>
</blockquote>
<p>I am not sure where this Size parameter is specified. Did anyone come across a similar problem?</p>
| ygnr | <p>As pointed out in <a href="https://www.alibabacloud.com/help/doc-detail/86612.htm" rel="noreferrer">the docs</a>, the minimum size for SSD is <code>20Gi</code>, so I'd suggest to change <code>storage: 64Mi</code> to <code>storage: 20Gi</code> to fix it.</p>
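<p>The adjusted claim would then look like this (only the storage request changes):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: alicloud-pv-class
  resources:
    requests:
      storage: 20Gi
</code></pre>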
| Michael Hausenblas |
<p>I need to forward a port of one Kubernetes pod. One possible way is to execute a kubectl command like below:</p>
<p><code>kubectl port-forward podm-resource-manager-56b9ccd59c-8pmdn 8080</code></p>
<p>Is there a way to achieve the same using Python (for example the Python kubernetes-client)?</p>
| japiasec | <p>The method <a href="https://github.com/kubernetes-client/python/blob/release-9.0/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_portforward" rel="noreferrer">connect_get_namespaced_pod_portforward</a> is available in the Python kubernetes-client to do a port forward.</p>
| Jaime M. |
<p>Is it possible to configure which storageclasses can be used by namespace?</p>
<p>So for example I have a single cluster for production and development.</p>
<p>I want to configure a set of storageclasses for development and a different set of storageclasses for production.</p>
<p>I want to strictly configure that in development no one could use the storageclasses of production.</p>
<p>Is this possible?</p>
<p>I have only seen the option to use the resource quotas at namespace level, but it is not the same, with quotas I can configure the amount of disk that can be used in each storageclass, so if I create a new storageclass I will have to modify all the quotas in all the namespaces to add the constraints about the new storageclass.</p>
| Jxadro | <p>A <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storage class</a> in Kubernetes is a cluster-wide resource, so you can't restrict its usage per namespace out-of-the-box. What you can do, however, is to write a custom controller akin to what Banzai did with their <a href="https://banzaicloud.com/blog/pvc-operator/" rel="noreferrer">PVC Operator</a> or Raffaele Spazzoli's <a href="https://github.com/raffaelespazzoli/namespace-configuration-controller" rel="noreferrer">Namespace Configuration Controller</a>.</p>
| Michael Hausenblas |