Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>How do I use minikube's (cluster's) DNS? I want to receive all IP addresses associated with the pods backing a selected headless service. I don’t want to expose it outside the cluster. I am currently building the back-end layer.</p>
<p>As stated in the following answer:
<a href="https://stackoverflow.com/questions/52707840/what-exactly-is-a-headless-service-what-does-it-do-accomplish-and-what-are-som/52713482">What exactly is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?</a></p>
<p><em>„Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment.”</em></p>
<p>Thus the pods in the back-end layer can communicate with each other.</p>
<p>I can’t use the dig command; it is not installed in minikube. Alternatively, how do I install it? There is no apt available.</p>
<p>I hope this explains more accurately what I want to achieve.</p>
| hal | <p>You mentioned that you want to receive the IP addresses of the pods behind a selected service name in order to test how a headless service works.</p>
<p>For testing purposes only, you can use port-forwarding. You can forward traffic from your local machine to the DNS pod in your cluster. To do this, run:</p>
<pre><code>kubectl port-forward svc/kube-dns -n kube-system 5353:53
</code></pre>
<p>and it will expose the kube-dns service on your host. Then all you need to do is use the <code>dig</code> command (or an alternative) to query the DNS server:</p>
<pre><code>dig @127.0.0.1 -p 5353 +tcp +short <service>.<namespace>.svc.cluster.local
</code></pre>
<p>You can also test the DNS from inside the cluster, e.g. by running a pod with an interactive shell:</p>
<pre><code>kubectl run --image tutum/dnsutils dns -it --rm -- bash
root@dns:/# dig +search <service>
</code></pre>
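<p>For illustration, assuming a headless service named <code>web</code> in the <code>default</code> namespace backed by three pods, the query should return one A record per pod (the IPs below are hypothetical):</p>
<pre><code>root@dns:/# dig +search +short web.default.svc.cluster.local
10.244.1.12
10.244.1.13
10.244.2.7
</code></pre>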
<p>Let me know if it helped.</p>
| Matt |
<p>The problem I'm running into is very similar to other existing posts, except they all have the same solution, therefore I'm creating a new thread. </p>
<p><strong>The Problem:</strong>
The Master node is still in "NotReady" status after installing Flannel.</p>
<p><strong>Expected result:</strong>
Master Node becomes "Ready" after installing Flannel.</p>
<p><strong>Background:</strong>
I am following <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="noreferrer">this</a> guide when installing Flannel</p>
<p>My concern is that I am using Kubelet <a href="https://kubernetes.io/docs/setup/release/notes/#downloads-for-v1-17-0" rel="noreferrer">v1.17.2</a> by default, which just came out last month. (Can anyone confirm whether v1.17.2 works with Flannel?)</p>
<p>Here is the output after running the command on master node: <em>kubectl describe node machias</em></p>
<pre><code>Name: machias
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=machias
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:78:65:7f:ae:6d"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.122.172
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 15 Feb 2020 01:00:01 -0500
Taints: node.kubernetes.io/not-ready:NoExecute
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: machias
AcquireTime: <unset>
RenewTime: Sat, 15 Feb 2020 13:54:56 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.168.122.172
Hostname: machias
Capacity:
cpu: 2
ephemeral-storage: 38583284Ki
hugepages-2Mi: 0
memory: 4030364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 35558354476
hugepages-2Mi: 0
memory: 3927964Ki
pods: 110
System Info:
Machine ID: 20cbe0d737dd43588f4a2bccd70681a2
System UUID: ee9bc138-edee-471a-8ecc-f1c567c5f796
Boot ID: 0ba49907-ec32-4e80-bc4c-182fccb0b025
Kernel Version: 5.3.5-200.fc30.x86_64
OS Image: Fedora 30 (Workstation Edition)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-machias 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-apiserver-machias 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-controller-manager-machias 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-flannel-ds-amd64-rrfht 100m (5%) 100m (5%) 50Mi (1%) 50Mi (1%) 12h
kube-system kube-proxy-z2q7d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-scheduler-machias 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 100m (5%)
memory 50Mi (1%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
</code></pre>
<p>And the following command: <em>kubectl get pods --all-namespaces</em></p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-7nz46 0/1 Pending 0 12h
kube-system coredns-6955765f44-xk5r2 0/1 Pending 0 13h
kube-system etcd-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-apiserver-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-controller-manager-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-flannel-ds-amd64-rrfht 1/1 Running 0 12h
kube-system kube-flannel-ds-amd64-t7p2p 1/1 Running 0 12h
kube-system kube-proxy-fnn78 1/1 Running 0 12h
kube-system kube-proxy-z2q7d 1/1 Running 0 13h
kube-system kube-scheduler-machias.cs.unh.edu 1/1 Running 0 13h
</code></pre>
<p>Thank you for your help!</p>
| Hexalogy | <p>I've reproduced your scenario using the same versions you are using, to make sure these versions work with Flannel.</p>
<p>After testing it I can confirm that there is no problem with the version you are using.</p>
<p>I created the cluster following these steps:</p>
<p><strong>Ensure iptables tooling does not use the nftables backend</strong> <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#ensure-iptables-tooling-does-not-use-the-nftables-backend" rel="nofollow noreferrer">Source</a></p>
<pre><code>update-alternatives --set iptables /usr/sbin/iptables-legacy
</code></pre>
<p><strong>Installing runtime</strong></p>
<pre><code>sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce-19.03.5-3.el7
sudo systemctl start docker
</code></pre>
<p><strong>Installing kubeadm, kubelet and kubectl</strong></p>
<pre><code>sudo su -c "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF"
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet-1.17.2-0 kubeadm-1.17.2-0 kubectl-1.17.2-0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
</code></pre>
<p><strong>Note:</strong></p>
<ul>
<li>Setting SELinux in permissive mode by running <code>setenforce 0</code> and <code>sed ...</code> effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.</li>
<li><p>Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure <code>net.bridge.bridge-nf-call-iptables</code> is set to 1 in your <code>sysctl</code> config, e.g.</p>
<pre class="lang-sh prettyprint-override"><code>cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
</code></pre></li>
<li><p>Make sure that the <code>br_netfilter</code> module is loaded before this step. This can be done by running <code>lsmod | grep br_netfilter</code>. To load it explicitly call <code>modprobe br_netfilter</code>.</p></li>
</ul>
<p><strong>Initialize cluster with Flannel CIDR</strong></p>
<pre><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p><strong>Add Flannel CNI</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
</code></pre>
<p>By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>As can be seen, my master node is Ready. Please follow this how-to and let me know if you can achieve your desired state.</p>
<pre><code>$ kubectl describe nodes
Name: kubeadm-fedora
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kubeadm-fedora
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"8e:7e:bf:d9:21:1e"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.128.15.200
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Feb 2020 11:31:59 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kubeadm-fedora
AcquireTime: <unset>
RenewTime: Mon, 17 Feb 2020 11:47:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:32:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.128.15.200
Hostname: kubeadm-fedora
Capacity:
cpu: 2
ephemeral-storage: 104844988Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7493036Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 96625140781
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7390636Ki
pods: 110
System Info:
Machine ID: 41689852cca44b659f007bb418a6fa9f
System UUID: 390D88CD-3D28-5657-8D0C-83AB1974C88A
Boot ID: bff1c808-788e-48b8-a789-4fee4e800554
Kernel Version: 3.10.0-1062.9.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6955765f44-d9fb4 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system coredns-6955765f44-l7xrk 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system etcd-kubeadm-fedora 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-apiserver-kubeadm-fedora 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-controller-manager-kubeadm-fedora 200m (10%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-flannel-ds-amd64-v6m2w 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 15m
kube-system kube-proxy-d65kl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-scheduler-kubeadm-fedora 100m (5%) 0 (0%) 0 (0%) 0 (0%) 15m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 190Mi (2%) 390Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 16m (x6 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 16m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kubelet, kubeadm-fedora Starting kubelet.
Normal NodeHasSufficientMemory 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kube-proxy, kubeadm-fedora Starting kube-proxy.
Normal NodeReady 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeReady
</code></pre>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubeadm-fedora Ready master 17m v1.17.2
</code></pre>
<pre><code>$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-d9fb4 1/1 Running 0 17m
kube-system coredns-6955765f44-l7xrk 1/1 Running 0 17m
kube-system etcd-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-apiserver-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-controller-manager-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-flannel-ds-amd64-v6m2w 1/1 Running 0 17m
kube-system kube-proxy-d65kl 1/1 Running 0 17m
kube-system kube-scheduler-kubeadm-fedora 1/1 Running 0 17m
</code></pre>
| Mark Watney |
<p>I am looking for a "<em>simple</em>" deployment system that can manage a cluster of computers.</p>
<p>I have one Docker image (hosted on <a href="https://hub.docker.com" rel="nofollow noreferrer">DockerHub</a>) that will run with different environment parameters in this cluster. For this image I have a docker_compose file that I can start on a machine directly (this works right now).</p>
<p>What I am looking for is a cluster management system to which I can add physical computers (nodes) and then I can issue commands like:</p>
<pre><code>$ docker-compose up
</code></pre>
<p>or</p>
<pre><code>$ docker run --device /dev/sda -e ENV1 -e ENV2 image_id
</code></pre>
<p>And ideally the cluster (manager) schedules it on one available node. All the nodes that I will join in the cluster have the necessary resources to run the container, so I am not interested in a cluster management system that can schedule containers depending on their hardware needs. Also, it doesn't necessarily need to have support for Docker, just to be able to issue the commands remotely on the cluster's nodes. Ideally, this would also have an API other than the command line that I could talk to.</p>
<h2>What I've tried / looked at</h2>
<ol>
<li><a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">Docker swarm mode</a> - seemed like the perfect choice, but I hit a dead-end because I use the "--device" parameter, which is <a href="https://github.com/docker/swarmkit/issues/2682" rel="nofollow noreferrer">not supported</a> yet (and it might as well never be).</li>
<li><a href="https://docs.docker.com/machine/overview/" rel="nofollow noreferrer">Docker machine</a></li>
</ol>
<p><a href="https://i.stack.imgur.com/3I6yM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3I6yM.png" alt="Docker run image" /></a></p>
<ul>
<li>this seems exactly what I want, but it's not supported anymore, so I don't think it's a good choice.</li>
</ul>
<ol start="3">
<li><a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> sounds good but seems at first sight like an overkill. Also not sure it can do "--device" so I hesitate in going through learning it and hitting another dead-end.</li>
</ol>
<p>Any suggestions are welcome!</p>
| Copil tembel | <p>When it comes to <strong>kubernetes</strong> and support for something like <code>--device</code> in <code>docker</code>, <a href="https://stackoverflow.com/a/59291859/11714114">this</a> answer should dispel your doubts.</p>
<p>It was widely discussed in <a href="https://github.com/kubernetes/kubernetes/issues/5607" rel="nofollow noreferrer">this</a> thread on github. Although there is no exact <code>--device</code> equivalent in kubernetes, it's worth repeating that it's possible to use host devices in your kubernetes <code>Pods</code> by enabling <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers" rel="nofollow noreferrer">privileged mode</a> as suggested in <a href="https://github.com/kubernetes/kubernetes/issues/5607#issuecomment-258195005" rel="nofollow noreferrer">this</a> comment:</p>
<blockquote>
<pre><code> containers:
- name: foo
...
volumeMounts:
- mountPath: /dev/snd
name: dev-snd
securityContext:
privileged: true
volumes:
- name: dev-snd
hostPath:
path: /dev/snd
</code></pre>
</blockquote>
<p>It enables you to mount into your <code>Pod</code> any device available on a specific node by using <code>hostPath</code> and providing the device path, such as <code>/dev/snd</code> in the above example, which lets you use the sound card available on the host.</p>
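<p>Adapted to the <code>docker run --device /dev/sda -e ENV1 -e ENV2 image_id</code> example from the question, a minimal <code>Pod</code> sketch could look like the snippet below (the pod name, volume name and env values are illustrative, and the image is just the placeholder from the question; privileged mode is assumed, as discussed above):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: device-example
spec:
  containers:
  - name: app
    image: image_id          # your image hosted on DockerHub
    env:
    - name: ENV1
      value: "value1"
    - name: ENV2
      value: "value2"
    securityContext:
      privileged: true       # required for raw access to the host device
    volumeMounts:
    - mountPath: /dev/sda
      name: dev-sda
  volumes:
  - name: dev-sda
    hostPath:
      path: /dev/sda
</code></pre>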
<p>You need to decide, however, whether running privileged containers is acceptable from a security perspective in your particular case.</p>
<p>If you are looking for a more secure way of mounting particular host devices that gives you a more granular level of control, take a look at <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/" rel="nofollow noreferrer">device plugins</a>, e.g. specific ones like the one mentioned <a href="https://stackoverflow.com/questions/59231393/how-to-mount-dev-kvm-in-a-non-privileged-pod/59254950#59254950">here</a> for exposing <code>/dev/kvm</code>, or a <a href="https://github.com/honkiko/k8s-hostdev-plugin#hostdev-device-plugin-for-kubernetes" rel="nofollow noreferrer">more general one</a>, allowing you to configure practically any device under the host's <code>/dev</code> into your <strong>kubernetes</strong> <code>Pods</code> through the device cgroup.</p>
<p>When you're planning to run and manage your <strong>docker containers</strong> on a multi-node cluster, <strong>Kubernetes</strong> doesn't have to be overkill, especially if you decide to use a managed solution, as already suggested by @DannyB in the comments. It's worth mentioning that it's currently available in the offerings of all major cloud providers: <strong>GKE</strong> on <strong>GCP</strong>, <strong>EKS</strong> on <strong>AWS</strong> or <strong>AKS</strong> on <strong>Azure</strong>, which also says a lot about its growing popularity.</p>
<p><a href="http://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> is also a very scalable and dynamically developing solution, gaining popularity quite fast in recent years, so it's definitely worth having a closer look at it.</p>
| mario |
<pre><code>eksctl get nodegroups --cluster=cluster-name --profile=dev
aws eks list-nodegroups --cluster=cluster-name --profile=dev
</code></pre>
<p>The first result is correct. <br />
The second result is empty, as follows:</p>
<pre><code>{
"nodegroups": []
}
</code></pre>
<p>I used these two commands to get the nodegroups of the cluster, but found that the results were not consistent. <br />
The configuration file I used was the same ~/.aws/config. <br />
The cluster_name was verified: both commands can correctly detect the cluster, but the second one cannot detect the nodegroups. <br />
Thanks in advance</p>
| gophergfer | <p>According to eksctl <a href="https://eksctl.io/usage/managing-nodegroups/#listing-nodegroups" rel="noreferrer">documentation</a>:</p>
<blockquote>
<h3>Listing nodegroups</h3>
<p>To list the details about a nodegroup or all of the nodegroups, use:</p>
<p><code>eksctl get nodegroup --cluster=<clusterName> [--name=<nodegroupName>]</code></p>
<h3>Nodegroup immutability</h3>
<p>By design, nodegroups are immutable. This means that if you need to
change something (other than scaling) like the AMI or the instance
type of a nodegroup, you would need to create a new nodegroup with the
desired changes, move the load and delete the old one. Check
<a href="https://eksctl.io/usage/managing-nodegroups/#deleting-and-draining" rel="noreferrer">Deleting and
draining</a>.</p>
</blockquote>
<p>And for <code>list-nodegroups</code>, from the AWS <a href="https://docs.aws.amazon.com/cli/latest/reference/eks/list-nodegroups.html#list-nodegroups" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>Lists the Amazon EKS managed node groups associated with the specified cluster in your AWS account in the specified Region. Self-managed node groups are not listed.</p>
</blockquote>
<p>As you can see, there are differences between these commands; in particular, self-managed node groups are not listed by the second command.</p>
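<p>As a quick sketch of how the two views differ (the cluster name and profile are placeholders): if the nodegroup was created by <code>eksctl</code> as an unmanaged (self-managed) group, it will show up only in the first command, while the EKS API only returns managed node groups:</p>
<pre><code># eksctl lists the nodegroups it knows about, including self-managed ones
eksctl get nodegroup --cluster=cluster-name --profile=dev

# the EKS API only returns managed node groups
aws eks list-nodegroups --cluster-name cluster-name --profile dev
</code></pre>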
| Piotr Malec |
<p>I have set up statsd-exporter to scrape metrics from a gunicorn web server. My goal is to filter the request duration metric only for successful requests (non 5xx), however in statsd-exporter there is no way to tag the status code in the duration metric. Can anyone suggest a way to add the status code to the request duration metric, or a way to filter only successful request durations in prometheus?</p>
<p>In particular I want to extract a successful request duration histogram from statsd-exporter to prometheus. </p>
| Santosh Kumar | <p>To export successful request duration histogram metrics from the gunicorn web server to prometheus, you would need to add this functionality in the gunicorn source code.</p>
<p>First take a look at the code that exports statsd metrics <a href="https://github.com/benoitc/gunicorn/blob/aa8b258f937867a8a453b426e5c26db84a8ab879/gunicorn/instrument/statsd.py#L99" rel="nofollow noreferrer">here</a>.
You should see this piece of code:</p>
<pre><code>status = resp.status
...
self.histogram("gunicorn.request.duration", duration_in_ms)
</code></pre>
<p>By changing the code to something like this:</p>
<pre><code>self.histogram("gunicorn.request.duration.%d" % status, duration_in_ms)
</code></pre>
<p>from this moment on you will have metric names exported with the status code included, like <code>gunicorn_request_duration_200</code> or <code>gunicorn_request_duration_404</code>, etc.</p>
<p>You can also modify it a little bit and move status codes to label by adding a configuration like below to your <code>statsd_exporter</code>:</p>
<pre><code>mappings:
- match: gunicorn.request.duration.*
name: "gunicorn_http_request_duration"
labels:
status: "$1"
job: "gunicorn_request_duration"
</code></pre>
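<p>Assuming the mapping above is saved as <code>mapping.yml</code> (the file name is just an example), statsd_exporter can be pointed at it via its mapping config flag, roughly like this:</p>
<pre><code>statsd_exporter --statsd.mapping-config=mapping.yml
</code></pre>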
<p>So your metrics will now look like this:</p>
<pre><code># HELP gunicorn_http_request_duration Metric autogenerated by statsd_exporter.
# TYPE gunicorn_http_request_duration summary
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.5"} 2.4610000000000002e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.9"} 2.4610000000000002e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.99"} 2.4610000000000002e-06
gunicorn_http_request_duration_sum{job="gunicorn_request_duration",status="200"} 2.4610000000000002e-06
gunicorn_http_request_duration_count{job="gunicorn_request_duration",status="200"} 1
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.5"} 3.056e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.9"} 3.056e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.99"} 3.056e-06
gunicorn_http_request_duration_sum{job="gunicorn_request_duration",status="404"} 3.056e-06
gunicorn_http_request_duration_count{job="gunicorn_request_duration",status="404"} 1
</code></pre>
<p>And now to query all metrics except these with 5xx status in prometheus you can run:</p>
<pre><code>gunicorn_http_request_duration{status=~"[^5].*"}
</code></pre>
<p>Let me know if it was helpful.</p>
| Matt |
<p>I deployed my cluster with the --pod-network-cidr added, and have created a new IP pool using calicoctl to move the pods to this range. The problem I am having is figuring out exactly what I need to change on the Kubernetes side to make the pod CIDR range change. Do I make changes in the API server, controller manager, and scheduler, or are there only specific parts I need to change? I have attempted changing only the controller manager, and those control plane pods go into a crash loop after changing the --cluster-cidr in the yaml.</p>
<p>The output in the controller-manager logs is below:</p>
<p>controllermanager.go:235] error starting controllers: failed to mark cidr[192.168.0.0/24] at idx [0] as occupied for node: : cidr 192.168.0.0/24 is out the range of cluster cidr 10.0.0.0/16</p>
| mmiara | <p>Changing a cluster CIDR isn't a simple task. I reproduced your scenario and managed to change it using the following steps.</p>
<p><strong>Changing an IP pool</strong></p>
<p>The process is as follows :</p>
<ol>
<li>Install calicoctl as a Kubernetes pod (<a href="https://docs.projectcalico.org/getting-started/calicoctl/install#installing-calicoctl-as-a-kubernetes-pod" rel="noreferrer">Source</a>)</li>
<li>Add a new IP pool (<a href="https://docs.projectcalico.org/v3.6/networking/changing-ip-pools" rel="noreferrer">Source</a>).</li>
<li>Disable the old IP pool. This prevents new IPAM allocations from the old IP pool without affecting the networking of existing workloads.</li>
<li>Change nodes <code>podCIDR</code> parameter (<a href="https://serverfault.com/questions/976513/is-it-possible-to-change-cidr-network-flannel-and-kubernetes">Source</a>) </li>
<li>Change <code>--cluster-cidr</code> on <code>kube-controller-manager.yaml</code> on master node. (Credits to <a href="https://stackoverflow.com/users/6759406/mm-wvu18">OP</a> on that)</li>
<li>Recreate all existing workloads that were assigned an address from the old IP pool.</li>
<li>Remove the old IP pool.</li>
</ol>
<p>Let’s get started.</p>
<p>In this example, we are going to replace <code>192.168.0.0/16</code> with <code>10.0.0.0/8</code>.</p>
<ol>
<li><strong>Installing calicoctl as a Kubernetes pod</strong>
<pre><code>$ kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</code></pre>
Setting an alias:
<pre><code>$ alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl "
</code></pre></li>
<li><p>Add a new IP pool:</p>
<pre><code>calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: new-pool
spec:
cidr: 10.0.0.0/8
ipipMode: Always
natOutgoing: true
EOF
</code></pre>
<p>We should now have two enabled IP pools, which we can see when running <code>calicoctl get ippool -o wide</code>:</p>
<pre><code>NAME CIDR NAT IPIPMODE DISABLED
default-ipv4-ippool 192.168.0.0/16 true Always false
new-pool 10.0.0.0/8 true Always false
</code></pre></li>
<li><p>Disable the old IP pool.</p>
<p>First save the IP pool definition to disk:</p>
<pre><code>calicoctl get ippool -o yaml > pool.yaml
</code></pre>
<p><code>pool.yaml</code> should look like this:</p>
<pre><code>apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: default-ipv4-ippool
spec:
cidr: 192.168.0.0/16
ipipMode: Always
natOutgoing: true
- apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: new-pool
spec:
cidr: 10.0.0.0/8
ipipMode: Always
natOutgoing: true
</code></pre>
<blockquote>
<p>Note: Some extra cluster-specific information has been redacted to improve readability.</p>
</blockquote>
<p>Edit the file, adding <code>disabled: true</code> to the <code>default-ipv4-ippool</code> IP pool:</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: default-ipv4-ippool
spec:
cidr: 192.168.0.0/16
ipipMode: Always
natOutgoing: true
disabled: true
</code></pre>
<p>Apply the changes:</p>
<pre><code>calicoctl apply -f pool.yaml
</code></pre>
<p>We should see the change reflected in the output of <code>calicoctl get ippool -o wide</code>:</p>
<pre><code>NAME CIDR NAT IPIPMODE DISABLED
default-ipv4-ippool 192.168.0.0/16 true Always true
new-pool 10.0.0.0/8 true Always false
</code></pre></li>
<li><p>Change the nodes' <code>podCIDR</code> parameter:</p>
<p>Override the <code>podCIDR</code> parameter on each k8s Node resource with a new IP range, preferably with the following commands:</p>
<pre><code>$ kubectl get no kubeadm-0 -o yaml > file.yaml; sed -i "s~192.168.0.0/24~10.0.0.0/16~" file.yaml; kubectl delete no kubeadm-0 && kubectl create -f file.yaml
$ kubectl get no kubeadm-1 -o yaml > file.yaml; sed -i "s~192.168.1.0/24~10.1.0.0/16~" file.yaml; kubectl delete no kubeadm-1 && kubectl create -f file.yaml
$ kubectl get no kubeadm-2 -o yaml > file.yaml; sed -i "s~192.168.2.0/24~10.2.0.0/16~" file.yaml; kubectl delete no kubeadm-2 && kubectl create -f file.yaml
</code></pre>
<p>We had to perform this action for every node we have. Pay attention to the IP ranges; they are different from one node to the other.</p></li>
<li><p>Change CIDR on kubeadm-config ConfigMap and kube-controller-manager.yaml</p></li>
</ol>
<p>Edit kubeadm-config ConfigMap and change podSubnet to the new IP Range:</p>
<pre><code>kubectl -n kube-system edit cm kubeadm-config
</code></pre>
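<p>Inside that ConfigMap, the field to update is <code>networking.podSubnet</code> in the <code>ClusterConfiguration</code> block; a trimmed-down view of the relevant part (other fields omitted) looks roughly like this:</p>
<pre><code>data:
  ClusterConfiguration: |
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.0.0.0/8
      serviceSubnet: 10.96.0.0/12
</code></pre>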
<p>Also, change the <code>--cluster-cidr</code> flag in /etc/kubernetes/manifests/kube-controller-manager.yaml located on the master node.</p>
<pre><code>$ sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.0.0.0/8
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
</code></pre>
<ol start="7">
<li><p>Recreate all existing workloads using IPs from the disabled pool. In this example, kube-dns is the only workload networked by Calico:</p>
<pre><code>kubectl delete pod -n kube-system kube-dns-6f4fd4bdf-8q7zp
</code></pre>
<p>Check that the new workload now has an address in the new IP pool by running <code>calicoctl get wep --all-namespaces</code>:</p>
<pre><code>NAMESPACE WORKLOAD NODE NETWORKS INTERFACE
kube-system kube-dns-6f4fd4bdf-8q7zp vagrant 10.0.24.8/32 cali800a63073ed
</code></pre></li>
<li><p>Delete the old IP pool:</p>
<pre><code>calicoctl delete pool default-ipv4-ippool
</code></pre></li>
</ol>
<p><strong>Creating it correctly from scratch</strong></p>
<p>To deploy a cluster under a specific IP range using Kubeadm and Calico you need to init the cluster with <code>--pod-network-cidr=192.168.0.0/24</code> (where <code>192.168.0.0/24</code> is your desired range) and then you need to tune the Calico manifest before applying it in your fresh cluster.</p>
<p>To tune Calico before applying it, you have to download its yaml file and change the network range.</p>
<ol>
<li>Download the Calico networking manifest for the Kubernetes.
<pre><code>$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
</code></pre></li>
<li>If you are using pod CIDR <code>192.168.0.0/16</code>, skip to the next step. If you are using a different pod CIDR, use the following commands to set an environment variable called <code>POD_CIDR</code> containing your pod CIDR and replace <code>192.168.0.0/16</code> in the manifest with your pod CIDR.
<pre><code>$ POD_CIDR="<your-pod-cidr>" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
</code></pre></li>
<li>Apply the manifest using the following command.
<pre><code>$ kubectl apply -f calico.yaml
</code></pre></li>
</ol>
| Mark Watney |
<p>I have a helm chart that installs/creates an instance of our app. Our app consist of multiple micro-services and one of them is nginx. The nginx service is of type loadbalancer. </p>
<p>So when user first tries to hit the loadbalancer IP from browser, I want to open a web page which will ask him to bind some domains (e.g. a.yourdomain.com and b.yourdomain.com) with the loadbalancer IP and once he does that, he will click on "verify" button and at that time I want to check on the server side if the domains are correctly pointing to the loadbalancer IP or not.</p>
<p>Now the problem is <em>how can I get the loadbalancer external IP inside the nginx pod so that I can ping the domains and check if they are poining to the loadbalancer IP or not.</em></p>
<p><strong>Edit</strong></p>
<p>Note: I would like to avoid using kubectl because I do not want to install this extra utility for one time job.</p>
| Yogeshwar Singh | <p>I have found a solution, tested it, and it's working.</p>
<p>To find the ExternalIP associated with an nginx service of type LoadBalancer, you first want to create a service account:</p>
<pre><code>kubectl create serviceaccount hello
</code></pre>
<p>and also create a ClusterRole and RoleBinding like the following:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: read-services
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-services
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: read-services
subjects:
- kind: ServiceAccount
name: hello
namespace: default
</code></pre>
<p>Then you create your pod with <code>serviceAccount: hello</code></p>
<p>and now you can make a curl request to api-server <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-using-a-proxy" rel="nofollow noreferrer">like shown in k8s documentation</a>:</p>
<pre><code>APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/$NAMESPACE/services/nginx/
</code></pre>
<p>Under <code>.status.loadBalancer.ingress[0].ip</code> you should find the IP you are looking for.</p>
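<p>If <code>jq</code> happens to be available in your image (an assumption; it is not present in most base images), the IP can be extracted directly, for example:</p>
<pre><code>EXTERNAL_IP=$(curl -s --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" \
  -X GET ${APISERVER}/api/v1/namespaces/$NAMESPACE/services/nginx/ \
  | jq -r '.status.loadBalancer.ingress[0].ip')
echo $EXTERNAL_IP
</code></pre>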
<p>Let me know if it was helpful.</p>
| Matt |
<p>Can I replicate data between Kubernetes PV into two separate clusters situated in different data centers?</p>
<p>I have a cluster with associated PV running in Primary site. I have a separate cluster running in DR site.</p>
<p>How do I continuously replicate data from the primary site to the DR site, so that when the application is running from DR, the data written to the primary PVs is available in DR?</p>
<p>Application writes files to the PV like xls, csv etc.</p>
<p>I can use any OSS storage orchestrator like openebs, rook, storageos etc.</p>
<p>Database is outside of kubernetes.</p>
| Vij P | <p><a href="https://stackoverflow.com/users/1328672/narain">Narain</a> is right. <strong>Kubernetes doesn't contain any functionality that would allow you to synchronize two PVs used by two different clusters</strong>. So you would need to find your own solution to synchronize those two filesystems. It can be an existing solution like <code>lsyncd</code>, proposed in <a href="https://unix.stackexchange.com/a/307049">this thread</a> or any custom solution like the above mentioned <code>rsync</code> which can be wrapped into a simple <code>bash</code> script and run periodically in <code>cron</code>.</p>
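<p>As a minimal sketch of such a custom approach (assuming both PV-backed filesystems are reachable from one host, and that the paths and the <code>dr-host</code> name below are placeholders), an <code>rsync</code> run scheduled via <code>cron</code> could look like this:</p>
<pre><code># /etc/cron.d/pv-sync -- run every 5 minutes, mirror primary PV data to the DR site
*/5 * * * * root rsync -az --delete /mnt/primary-pv/ dr-host:/mnt/dr-pv/
</code></pre>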
| mario |
<p>I have an <strong>Istio 1.3.3</strong> gateway and a <a href="https://hub.docker.com/r/karthequian/helloworld" rel="nofollow noreferrer">helloworld</a> gateway pointing toward my application service.</p>
<p><strong>Istio Gateway</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
chart: gateways-1.0.0
heritage: Tiller
istio: ingressgateway
release: RELEASE-NAME
name: istio-ingressgateway
namespace: istio-system
spec:
externalTrafficPolicy: Cluster
ports:
- name: http2
nodePort: 31380
port: 80
protocol: TCP
targetPort: 80
- name: https
nodePort: 31390
port: 443
protocol: TCP
targetPort: 443
- name: tcp
nodePort: 31400
port: 31400
protocol: TCP
targetPort: 31400
- name: tcp-pilot-grpc-tls
nodePort: 32565
port: 15011
protocol: TCP
targetPort: 15011
- name: tcp-citadel-grpc-tls
nodePort: 32352
port: 8060
protocol: TCP
targetPort: 8060
- name: http2-helloworld
nodePort: 31750
port: 15033
protocol: TCP
targetPort: 15033
selector:
app: istio-ingressgateway
istio: ingressgateway
type: LoadBalancer
</code></pre>
<p><strong>HelloWorld Gateway</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 15033
name: http2-helloworld
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- port: 15033
route:
- destination:
host: helloworld
port:
number: 5000
</code></pre>
<p><strong>HelloWorld.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
spec:
ports:
- port: 5000
name: http
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-v1
labels:
version: v1
spec:
replicas: 2
selector:
matchLabels:
app: helloworld
version: v1
template:
metadata:
labels:
app: helloworld
version: v1
spec:
containers:
- name: helloworld
image: karthequian/helloworld:latest
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
</code></pre>
<p>When I try to access the application through the Istio gateway using <code>localhost:15033</code>: setups with different ports and docker images work fine, but <a href="https://hub.docker.com/r/karthequian/helloworld" rel="nofollow noreferrer">this docker image</a>, which uses nginx, doesn't work well.</p>
<p>I get the following error when accessing <code>localhost:15033</code>:</p>
<blockquote>
<p>upstream connect error or disconnect/reset before headers. reset reason: connection termination</p>
</blockquote>
<hr>
<p><strong>Informations</strong></p>
<p>Kubernetes started and installed from Mac Desktop Docker Application. Context was desktop-docker.</p>
<ol>
<li>kubectl version</li>
</ol>
<pre><code>Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<ol start="2">
<li>kubectl cluster-info</li>
</ol>
<pre><code>Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<ol start="3">
<li>kubectl cluster-info dump > clusterInfoDump.txt</li>
</ol>
<p><a href="https://justpaste.it/5n1op" rel="nofollow noreferrer">https://justpaste.it/5n1op</a></p>
<ol start="4">
<li>istioctl version</li>
</ol>
<pre><code>client version: 1.3.3
control plane version: 1.3.3
</code></pre>
| Abel | <p>In your HelloWorld.yaml you are missing <code>targetPort</code>, and this is why nginx is unreachable.</p>
<p>This is how it should look:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
spec:
ports:
- port: 5000
name: http
targetPort: 80
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-v1
labels:
version: v1
spec:
replicas: 2
selector:
matchLabels:
app: helloworld
version: v1
template:
metadata:
labels:
app: helloworld
version: v1
spec:
containers:
- name: helloworld
image: karthequian/helloworld:latest
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
</code></pre>
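<p>After applying the corrected manifests you can quickly check that the service now targets the container port and that the route responds (a sketch; the gateway address assumes the docker-desktop setup from the question):</p>
<pre><code>kubectl get endpoints helloworld
curl -v http://localhost:15033/
</code></pre>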
| Piotr Malec |
<p>I created a persistent volume using the following YAML</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: dq-tools-volume
labels:
name: dq-tools-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: volume-class
nfs:
server: 192.168.215.83
path: "/var/nfsshare"
</code></pre>
<p>After creating this I created two persistentvolumeclaims using following YAMLS</p>
<p><strong>PVC1:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-volume-1
labels:
name: jenkins-volume-1
spec:
  accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: volume-class
selector:
matchLabels:
name: dq-tools-volume
</code></pre>
<p><strong>PVC2:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-volume-2
labels:
name: jenkins-volume-2
spec:
  accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: volume-class
selector:
matchLabels:
name: dq-tools-volume
</code></pre>
<p>But I noticed that both of these persistent volume claims are writing to the same backend volume.</p>
<p>How can I isolate the data of one PersistentVolumeClaim from another? I am using this for multiple installations of Jenkins and I want the workspace of each Jenkins to be isolated.</p>
| thinkingmonster | <p>As @D.T. explained, a persistent volume claim is exclusively bound to a persistent volume.<br>
<strong>You cannot bind 2 PVCs to the same PV</strong>.</p>
<p><a href="https://stackoverflow.com/questions/44204223/kubernetes-nfs-persistent-volumes-multiple-claims-on-same-volume-claim-stuck">Here</a> you can find another case where it was discussed. </p>
<p>There is a better solution for your scenario, and it involves using <a href="https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner" rel="nofollow noreferrer">nfs-client-provisioner</a>. To achieve that, first you have to install helm in your cluster and then follow these steps that I created for a previous <a href="https://serverfault.com/questions/990520/nfs-server-and-client-are-working-but-data-is-not-on-the-server/990621#990621">answer</a> on ServerFault.</p>
<p>I've tested it and using this solution you can isolate one PVC from the other. </p>
<p><strong>1 - Install and configure the NFS Server on my Master Node (Debian Linux, this might change depending on your Linux distribution):</strong></p>
<p>Before installing the NFS Kernel server, we need to update our system’s repository index:</p>
<pre><code>$ sudo apt-get update
</code></pre>
<p>Now, run the following command in order to install the NFS Kernel Server on your system:</p>
<pre><code>$ sudo apt install nfs-kernel-server
</code></pre>
<p>Create the Export Directory</p>
<pre><code>$ sudo mkdir -p /mnt/nfs_server_files
</code></pre>
<p>As we want all clients to access the directory, we will remove restrictive permissions of the export folder through the following commands (this may vary on your set-up according to your security policy): </p>
<pre><code>$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
</code></pre>
<p>Assign server access to client(s) through NFS export file</p>
<pre><code>$ sudo nano /etc/exports
</code></pre>
<p>Inside this file, add a new line to allow access from other servers to your share.</p>
<pre><code>/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
</code></pre>
<p>You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.</p>
<p>Export the shared directory and restart the service to make sure all configuration files are correct. </p>
<pre><code>$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
</code></pre>
<p>Check all active shares: </p>
<pre><code>$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
</code></pre>
<p><strong>2 - Install NFS Client on all my Worker Nodes:</strong></p>
<pre><code>$ sudo apt-get update
$ sudo apt-get install nfs-common
</code></pre>
<p>At this point you can make a test to check if you have access to your share from your worker nodes: </p>
<pre><code>$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
</code></pre>
<p>Notice that at this point you can use the name of your master node. K8s is taking care of the DNS here.
Check that the volume mounted as expected and create some folders and files to make sure everything is working fine.</p>
<pre><code>$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
</code></pre>
<p>Go back to your master node and check if these files are in the /mnt/nfs_server_files folder.</p>
<p><strong>3 - Install NFS Client Provisioner</strong>.</p>
<p>Install the provisioner using helm:</p>
<pre><code>$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
</code></pre>
<p>Notice that I've specified a namespace for it.
Check if it is running:</p>
<pre><code>$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
</code></pre>
<p>At this point we have a storageclass called nfs-client: </p>
<pre><code>$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
</code></pre>
<p>We need to create a PersistentVolumeClaim: </p>
<pre><code>$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: nfs
name: test-claim
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
</code></pre>
<pre><code>$ kubectl apply -f nfs-client-pvc.yaml
</code></pre>
<p>Check the status (Bound is expected):</p>
<pre><code>$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
</code></pre>
<p><strong>4 - Create a simple pod to test if we can read/write out NFS Share:</strong></p>
<p>Create a pod using this yaml: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod0
labels:
env: test
namespace: nfs
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
</code></pre>
<pre><code>$ kubectl apply -f pod.yaml
</code></pre>
<p>Let's list all mounted volumes on our pod:</p>
<pre><code>$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem Size Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1 99G 11G 84G 11% /mnt
</code></pre>
<p>As we can see, we have a NFS volume mounted on /mnt. (Important to notice the path <code>kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1</code>) </p>
<p>Let's check it: </p>
<pre><code>root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
</code></pre>
<p>It's empty. Let's create some files: </p>
<pre><code>$ for i in 1 2; do touch file$i; done;
$ ls -l
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
</code></pre>
<p>Now let's see where these files are on our NFS Server (Master Node):</p>
<pre><code>$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
</code></pre>
<p>And here are the files we just created inside our pod! </p>
| Mark Watney |
<p>I'm trying to deploy mongodb on my k8s cluster, as mongodb is my db of choice. To do that I've got config files (very similar to what I did with postgres a few weeks ago).</p>
<p>Here's mongo's deployment k8s object:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: panel-admin-mongo-deployment
spec:
replicas: 1
selector:
matchLabels:
component: panel-admin-mongo
template:
metadata:
labels:
component: panel-admin-mongo
spec:
volumes:
- name: panel-admin-mongo-storage
persistentVolumeClaim:
claimName: database-persistent-volume-claim
containers:
- name: panel-admin-mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: panel-admin-mongo-storage
mountPath: /data/db
</code></pre>
<p>In order to get into the pod I made a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: panel-admin-mongo-cluster-ip-service
spec:
type: ClusterIP
selector:
component: panel-admin-mongo
ports:
- port: 27017
targetPort: 27017
</code></pre>
<p>And of cource I need a PVC as well:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: database-persistent-volume-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
</code></pre>
<p>In order to get to the db from my server I used server deployment object:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: panel-admin-api-deployment
spec:
replicas: 1
selector:
matchLabels:
component: panel-admin-api
template:
metadata:
labels:
component: panel-admin-api
spec:
containers:
- name: panel-admin-api
image: my-image
ports:
- containerPort: 3001
env:
- name: MONGO_URL
value: panel-admin-mongo-cluster-ip-service // This is important
imagePullSecrets:
- name: gcr-json-key
</code></pre>
<p>But for some reason, when I boot up all containers with the kubectl apply command, my server says:
<code>MongoDB :: connection error: MongoParseError: Invalid connection string</code></p>
<p>Can I deploy it like that (as it was possible with postgres)? Or what am I missing here?</p>
| Murakami | <p>Use <em>mongodb://</em> in front of your <em>panel-admin-mongo-cluster-ip-service</em></p>
<p>So it should look like this:
<code>mongodb://panel-admin-mongo-cluster-ip-service</code></p>
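<p>Applied to the deployment from the question, the env section would then look like this (a sketch; the port is only needed if you want to be explicit about MongoDB's default 27017):</p>
<pre><code>env:
  - name: MONGO_URL
    value: mongodb://panel-admin-mongo-cluster-ip-service:27017
</code></pre>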
| Matt |
<p>I'm struggling to setup a kubernetes secret using GoDaddy certs in order to use it with the Ingress Nginx controller in a Kubernetes cluster.</p>
<p>I know that GoDaddy isn't the go-to place for that but that's not on my hands...</p>
<p>Here what I tried (mainly based on this <a href="https://rammusxu.github.io/2019/11/06/k8s-Use-crt-and-key-in-Ingress/" rel="nofollow noreferrer">github post</a>):</p>
<p>I have a mail from GoDaddy with two files: <code>generated-csr.txt</code> and <code>generated-private-key.txt</code>.</p>
<p>Then I downloaded the cert package on GoDaddy's website (I took the "Apache" server type, as it's the recommended on for Nginx). The archive contains three files (with generated names): <code>xxxx.crt</code> and <code>xxxx.pem</code> (same content for both files, they represent the domain cert) and <code>gd_bundle-g2-g1.crt</code> (which is the intermediate cert).</p>
<p>Then I proceeded to concatenate the domain cert and the intermediate cert (let's name the result chain.crt) and tried to create a secret using these files with the following command:</p>
<pre><code>kubectl create secret tls organization-tls --key generated-private-key.txt --cert chain.crt
</code></pre>
<p>And my struggle starts here, as it throws this error:</p>
<pre><code>error: tls: failed to find any PEM data in key input
</code></pre>
<p>How can I fix this, or what I'm missing?</p>
<p>Sorry to bother with something trivial like this, but it's been two days and I'm really struggling to find proper documentation or example that works for the Ingress Nginx use case...</p>
<p>Any help or hint is really welcome, thanks a lot to you!</p>
| RPT102020 | <p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p>
<p>As OP mentioned in comments, <strong>the issue was solved by adding a new line in the beginning of the file</strong>.</p>
<blockquote>
<p>"The key wasn't format correctly as it was lacking a newline in the
beginning of the file. So this particular problem is now resolved."</p>
</blockquote>
<p>Similar issue was also addressed in <a href="https://stackoverflow.com/a/57614637/11714114">this answer</a>.</p>
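<p>As a side note, a quick way to sanity-check that the key and certificate files are valid PEM before creating the secret is to run them through openssl (a sketch, assuming an RSA key as GoDaddy typically issues):</p>
<pre><code>openssl rsa -in generated-private-key.txt -check -noout
openssl x509 -in chain.crt -noout -subject -dates
</code></pre>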
| mario |
<p>I'm trying to install Kritis using :</p>
<pre><code>azureuser@Azure:~/kritis/docs/standalone$ helm install kritis https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz --set certificates.ca="$(cat ca.crt)" --set certificates.cert="$(cat kritis.crt)" --set certificates.key="$(cat kritis.key)" --debug
</code></pre>
<p>But I'm getting the next error:</p>
<pre><code>install.go:148: [debug] Original chart version: ""
install.go:165: [debug] CHART PATH: /home/azureuser/.cache/helm/repository/kritis-charts-0.2.0.tgz
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
helm.go:76: [debug] error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
helm.sh/helm/v3/pkg/kube.scrubValidationError
/home/circleci/helm.sh/helm/pkg/kube/client.go:520
helm.sh/helm/v3/pkg/kube.(*Client).Build
/home/circleci/helm.sh/helm/pkg/kube/client.go:135
</code></pre>
<p>Is there a way to know exactly in which file the error is being triggered, and what exactly that error means?
The original chart files are available here: <a href="https://github.com/grafeas/kritis/blob/master/kritis-charts/templates/preinstall/clusterrolebinding.yaml" rel="nofollow noreferrer">https://github.com/grafeas/kritis/blob/master/kritis-charts/templates/preinstall/clusterrolebinding.yaml</a></p>
| Judavi | <p>You can't tell exactly which file this is coming from directly, but the output gives you some clues.
Your error message contains some useful information:</p>
<pre><code>helm.go:76: [debug] error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
</code></pre>
<ul>
<li><code>error validating ""</code></li>
<li><code>ClusterRole</code></li>
<li><code>kritis.grafeas</code></li>
</ul>
<p>You can download the chart and dig into it for these terms as follows:</p>
<pre><code>$ wget https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz
$ tar xzvf kritis-charts-0.2.0.tgz
$ cd kritis-charts/
</code></pre>
<p>If you grep for <code>kritis.grafeas.io/install</code>, you can see a "variable" being set:</p>
<pre><code>$ grep -R "kritis.grafeas.io/install" *
values.yaml:kritisInstallLabel: "kritis.grafeas.io/install"
</code></pre>
<p>Now we can grep this variable and check what we can find: </p>
<pre><code>$ grep -R "kritisInstallLabel" *
templates/rbac.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/rbac.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/kritis-server-deployment.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/pod.yaml: - {{ .Values.kritisInstallLabel }}
templates/preinstall/serviceaccount.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/clusterrolebinding.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/postinstall/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/postinstall/pod.yaml: - {{ .Values.kritisInstallLabel }}
templates/secrets.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/predelete/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/kritis-server-service.yaml: {{ .Values.kritisInstallLabel }}: ""
values.yaml:kritisInstallLabel: "kritis.grafeas.io/install"
</code></pre>
<p>In this output we can see a <code>rbac.yaml</code> file. That matches with one of the terms we are looking for (<code>ClusterRole</code>):</p>
<p>If we read this file, we can see the <code>ClusterRole</code> and a line referring to <code>kritisInstallLabel</code>:</p>
<pre><code>- apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.clusterRoleBindingName }}
labels:
{{ .Values.kritisInstallLabel }}: ""
</code></pre>
<p>Helm renders <code>{{ .Values.kritisInstallLabel }}: ""</code> as <code>kritis.grafeas.io/install: ""</code>, and the validation error tells you that this key ends up as an unknown field directly under <code>metadata</code> (i.e. in <code>ObjectMeta</code>) instead of under <code>labels</code>; that's where your error is coming from. </p>
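<p>If you want to see exactly what Helm generates, and which manifest the invalid label ends up in, you can render the chart locally instead of installing it. A sketch, reusing the extracted <code>kritis-charts/</code> directory and the <code>--set</code> flags from your install command:</p>
<pre><code>$ helm template kritis ./kritis-charts \
    --set certificates.ca="$(cat ca.crt)" \
    --set certificates.cert="$(cat kritis.crt)" \
    --set certificates.key="$(cat kritis.key)" > rendered.yaml
$ grep -n "kritis.grafeas.io/install" rendered.yaml
</code></pre>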
| Mark Watney |
<p>I am running a Laravel application on Kubernetes and currently have a requirement to mount a storage/log folder from multiple pods where all pods will write to the same <code>laravel.log</code> file. To achieve this, I used AWS EFS with the EFS-Provisioner for Kubernetes to mount the storage.</p>
<p>While troubleshooting a logging issue from my application, I noticed that when I log an entry to the <code>laravel.log</code> file from Pod A/B, it shows up in both Pod A/B when I tail the log however it does not show up in Pod C. If I log from Pod C, it will only show up in Pod C. I am working inside the container using <code>php artisan tinker</code> and <code>Log::error('php-fpm');</code> for example as well as <code>tail -f /var/www/api/storage/logs/laravel.log</code>. The same behaviour happens if I <code>echo "php-fpm" >> /var/www/api/storage/logs/laravel.log</code>.</p>
<p>At first, I thought that I was possibly mounting the wrong storage however if I <code>touch test</code> in the folder, I can see it across Pod A, B, C. I can also see the other log files that are created with identical timestamps.</p>
<p>Any ideas on how to fix this?</p>
<p><strong>Edit:</strong> I noticed that pod A, B which are both seeing each others log entries are in the same AZ. Pod C (and another Pod D running Nginx) are running in a different AZ. I just want to outline this, but I feel like this really shouldn't matter. It should be the same storage no matter where you are connecting from.</p>
| leeman24 | <p>AWS EFS is accessed using NFS protocol and according to <a href="https://unix.stackexchange.com/a/299653">this stack exchange answer</a> simultaneous writes from multiple NFS clients to the same file will be corrupted.</p>
<p>I'm not sure there is a way of "fixing" NFS itself but you can always just log to separate files.</p>
| Matt |
<p>In Nginx, we can set up basic auth by adding annotation in the ingress.</p>
<pre><code>nginx.ingress.kubernetes.io/auth-realm: Authentication Required
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
</code></pre>
<p>But if we are using Azure Application Gateway instead of Nginx, how can we set up basic auth?
<a href="https://i.stack.imgur.com/Qiynp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qiynp.png" alt="enter image description here" /></a></p>
| Jerin Joy | <p>Unfortunately <strong>Azure Application Gateway</strong> doesn't support <strong>basic auth</strong> and I would say using an <strong>ingress controller</strong> like <strong>nginx-ingress</strong> is the proper choice in a scenario where you need this feature. The lack of support for authentication in <strong>Azure Application Gateway</strong> was already reported in <a href="https://feedback.azure.com/forums/217313-networking/suggestions/19473664-authentication-support-for-application-gateway" rel="nofollow noreferrer">this thread</a>.</p>
| mario |
<p>I need to choose the best-suited CNI for a .NET Core with SQL Server (same network but different IP) Kubernetes deployment.</p>
| Ramesh Hariharan | <p>CNIs are made in a way that they abstract network logic from apps. This allows us to use CNIs without worrying too much about the details.</p>
<p>If you are getting started with CNI, you could use pretty much any CNI you like, as your apps will use the network as usual. </p>
<p>It does not matter which app / language / framework / database you use, the CNI only takes care of networking which your app will never know anything about. </p>
<p>CNI is mainly there to allow network policies and manage node networking. </p>
<p>If you want to see a great post about CNI outside Kubernetes, <a href="https://medium.com/@vikram.fugro/project-calico-the-cni-way-659d057566ce" rel="nofollow noreferrer">this one</a> will give you an idea of what CNI actually is and does (not written by me, but a great post for understanding CNI).</p>
<p>So whichever one you choose should be judged purely on the CNI's own merits, not on how it works with your app. </p>
<p>For a start, Calico is a good choice as it's simple to deploy and use, and supports network policies. </p>
| Christiaan Vermeulen |
<p>For monitoring purposes, I want to rely on a pod's restartCount. However, I cannot seem to do that for certain apps, as restartCount is not reset even after rebooting the whole node the pod is scheduled to run on.</p>
<p>Usually, restarting a pod resets this, <strong>unless the pod name of the restarted pod is the same</strong> (e.g. true for etcd, kube-controller-manager, kube-scheduler and kube-apiserver).</p>
<p>For those cases, there is a <a href="https://github.com/kubernetes/kubernetes/issues/50375" rel="nofollow noreferrer">longrunning minor issue</a> as well as the idea to use <code>kubectl patch</code>.</p>
<p>To sum up the info there, <code>kubectl edit</code> will not allow to change anything in status. Unfortunately, neither does e.g.</p>
<pre><code>kubectl -n kube-system patch pod kube-controller-manager-some.node.name --type='json' -p='[{"op": "replace", "path": "/status/containerStatuses/0/restartCount", "value": 14}]'
The Pod "kube-controller-manager-some.node.name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
<p>So, I am checking if anyone has found a workaround?</p>
<p>Thanks!</p>
<p>Robert</p>
| valuecoach | <p>This seems to be quite an old issue (2017). Take a look <a href="https://github.com/kubernetes/kubernetes/issues/54476" rel="nofollow noreferrer">here</a>.</p>
<p>I believe the solution to it was supposed to be <a href="https://github.com/kubernetes/kubernetes/pull/43420" rel="nofollow noreferrer">implementing unique UIDs for static pods</a>. This issue was reopened a few days ago as <a href="https://github.com/kubernetes/kubernetes/pull/87461" rel="nofollow noreferrer">another github issue</a> and hasn't been implemented to this day.</p>
<p>I have found a workaround for it: you need to change the static pod manifest file, e.g. by adding some random annotation to the pod, which makes kubelet recreate it.</p>
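<p>For example, on a kubeadm-style cluster the static pod manifests live under <code>/etc/kubernetes/manifests</code> (the path and the annotation name below are only an illustration; any dummy key/value will do):</p>
<pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
  annotations:
    restart-counter-reset: "1"   # bump this value to force kubelet to recreate the mirror pod
spec:
  # ... rest of the original manifest stays unchanged
</code></pre>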
<p>Let me know if it was helpful.</p>
| Matt |
<p>We are using NFS volume (GCP filestore with 1TB size) to set RWX Many access PVC in GCP, the problem here is:
for example I allot a PVC of 5Gi and mount it to a nginx pod under /etc/nginx/test-pvc, instead of just allotting 5Gi it allots the whole NFS volume size.</p>
<p>I logged into the nginx pod and did a df -kh:</p>
<pre><code>df -kh
Filesystem Size Used Avail Use% Mounted on
overlay 95G 16G 79G 17% /
tmpfs 64M 0 64M 0% /dev
tmpfs 63G 0 63G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 95G 16G 79G 17% /etc/hosts
10.x.10.x:/vol 1007G 5.0M 956G 1% /etc/nginx/test-pvc
tmpfs 63G 12K 63G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 63G 0 63G 0% /proc/acpi
tmpfs 63G 0 63G 0% /proc/scsi
tmpfs 63G 0 63G 0% /sys/firmware
</code></pre>
<p>size of /etc/nginx/test-pvc is 1007G, which is my whole volume size in NFS(1 TB), it should have been 5G instead, even the used space 5MB isn't actually used in /etc/nginx/test-pvc. Why is the behaviour so ?</p>
<p>PV and PVC yaml used:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-test
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
path: /vol
server: 10.x.10.x
persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-claim1
spec:
accessModes:
- ReadWriteOnce
storageClassName: ""
resources:
requests:
storage: 5Gi
volumeName: pv-nfs-test
</code></pre>
<p>Nginx deployment yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-pv-demo-depl
spec:
replicas: 1
selector:
matchLabels:
app: nfs-pv-demo
template:
metadata:
name: nfs-pv-pod
labels:
app: nfs-pv-demo
spec:
containers:
- image: nginx
name: nfs-pv-multi
imagePullPolicy: Always
name: ng
volumeMounts:
- name: nfs-volume-1
mountPath: "/etc/nginx/test-pvc"
volumes:
- name: nfs-volume-1
persistentVolumeClaim:
claimName: nfs-claim1
</code></pre>
<p>Is there anything I'm missing ? Or is this the behaviour of NFS ? If so what is the best way to handle it in production, as we will have multiple other PVCs and could cause some confusions and volume denial issues.</p>
| Sanjay M. P. | <blockquote>
<p>Is there anything I'm missing ? Or is this the behaviour of NFS ?</p>
</blockquote>
<p>No, nothing at all. This is simply the way it works. And it's nothing specific to NFS either.</p>
<p><code>5Gi</code> of storage capacity defined in your <code>PV</code> can be treated more like a <strong>declaration</strong> that you have a <code>PersistentVolume</code> object which has 5 gigabytes of underlying storage. <strong>But it's nothing more than just a declaration. You cannot put any constraint on your available disk capacity this way.</strong> So if you have a disk that actually has 100 gigabytes of capacity, it is good practice to declare <code>100Gi</code> in this field of your <code>PV</code> definition for the sake of consistency.</p>
<p>The storage capacity you set in your <code>PVC</code> is a bit different story. It can be understood as a <strong>minimum storage capacity that would satisfy your request for storage</strong>. So if you have let's say 3 different <code>PVs</code> which have following capacities (declared in <code>PV</code> definition, no matter what their real capacity is): <code>3Gi</code>, <code>10Gi</code> and <code>100Gi</code> and you claim for <code>5Gi</code> in your <code>PersistentVolumeClaim</code>, only 2 of them i.e. <code>10Gi</code> and <code>100Gi</code> can satisfy such request. And as I said above, it doesn't matter that the smallest one which has <code>3Gi</code> declared is in fact backed with quite a large disk which has <code>1000Gi</code>. If you defined a <code>PV</code> object which represents such disk in kubernetes environment (and makes it available to be consumed by some <code>PVC</code> and in the end by some <code>Pod</code> which uses it) and you declared that this particular <code>PV</code> has only <code>3Gi</code> of capacity, <code>PVC</code> in which you request for <code>5Gi</code> has no means to verify the actual capacity of the disk and "sees" such volume as the one with not enough capacity to satisfy the request made for <code>5Gi</code>.</p>
<p>To illustrate that it isn't specific to NFS, you can create a new GCE persistent disk of 100 gigabytes (e.g. via cloud console as it seems the easiest way) and then you can use such disk in a <code>PV</code> and <code>PVC</code> which in the end will be used by simple nginx pod. This is described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#create_pv_pvc" rel="nofollow noreferrer">here</a>.</p>
<p>So you may declare in your <code>PV</code> 10Gi (in <code>PVC</code> at most 10Gi then) although your GCE persistent disk has in fact the capacity of 100 gigs. And if you connect to such pod, you won't see the declared capacity of 10Gi but the real capacity of the disk. And it's completely normal and works exactly as it was designed.</p>
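<p>Such a <code>PV</code> could look, for example, like the sketch below, where <code>my-100gb-disk</code> is a hypothetical pre-created GCE persistent disk of 100 gigabytes while the declared capacity is only <code>10Gi</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce-demo
spec:
  capacity:
    storage: 10Gi        # only a declaration, the disk itself has 100 GB
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-100gb-disk
    fsType: ext4
</code></pre>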
<p>You may have thought that it works similar to <code>LVM</code> where you create a volume group consisting of one or more disks and than you can create as many logical volumes as your underlying capacity allows you. <code>PVs</code> in kubernetes don't allow you to do anything like this. <strong>Capacity that you "set" in a <code>PV</code> definition is only a declaration, not a constraint of any kind.</strong> If you need to mount separate chunks of a huge disks into different pods, you would need to divide it into partitions first and create separate <code>PV</code> objects, each one out of a single partition.</p>
| mario |
<p>As the title says, I can't figure out how you're supposed to do this. The pricing calculator allows for it, so I'm assuming it's possible.</p>
<p>I've tried:</p>
<p>1) Creating a new cluster </p>
<p>2) Creating a vm and adding it to an existing cluster, then deleting the initial node (Tried with and without the scaleset option)</p>
<p>For #1, I see no options to add a reserved instance during cluster initialization. For #2, I see no options to add an existing vm to an existing aks cluster.</p>
<p>Anyone know how to do this?</p>
| Teslavolt | <blockquote>
<p>After you buy an Azure Reserved Virtual Machine Instance, the reservation discount is automatically applied to virtual machines that match the attributes and quantity of the reservation. A reservation covers the compute costs of your virtual machines. <a href="https://learn.microsoft.com/en-us/azure/billing/billing-understand-vm-reservation-charges" rel="nofollow noreferrer">Source</a></p>
</blockquote>
<p>In the <a href="https://learn.microsoft.com/en-us/azure/billing/billing-understand-vm-reservation-charges#instance-size-flexibility-setting" rel="nofollow noreferrer">documentation</a> you can see that this also applies to AKS. </p>
<p>In other words, you buy a reserved instance and after you create your AKS cluster selecting instances with the same size, the discount will be automatically applied. </p>
<blockquote>
<p>A reservation discount applies to the base VMs that you purchase from the Azure Marketplace. </p>
</blockquote>
<p>By Marketplace you can also read AKS.</p>
| Mark Watney |
<p>Previously my kubernetes pod was running as root and I was running <code>ulimit -l <MLOCK_AMOUNT></code> in its startup script before starting its main program in foreground. However, I can no more run my pod as root, would you please know how can I set this limit now?</p>
| Fabrice Jammes | <p>To be able to set it per specific <code>Pod</code>, the way you did it before, you unfortunately need privilege escalation, i.e. to run your container as root.</p>
<p>As far as I understand, you're interested in setting it only for a specific <code>Pod</code>, not globally, right? Globally (per node) it can be done by changing the docker configuration on a specific kubernetes node.</p>
<p>This topic has already been raised in <a href="https://stackoverflow.com/questions/33649192/how-do-i-set-ulimit-for-containers-in-kubernetes">another thread</a> and as you may read in <a href="https://stackoverflow.com/a/33666728/11714114">James Brown's answer</a>:</p>
<blockquote>
<p>It appears that you can't currently set a ulimit but it is an open
issue: <a href="https://github.com/kubernetes/kubernetes/issues/3595" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/3595</a></p>
</blockquote>
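<p>For completeness, this is roughly what the per-node docker approach mentioned above looks like: a sketch of <code>/etc/docker/daemon.json</code> that raises the default <code>memlock</code> limit for <strong>all</strong> containers started on that node (the exact limits are just an example):</p>
<pre><code>{
  "default-ulimits": {
    "memlock": {
      "Name": "memlock",
      "Hard": -1,
      "Soft": -1
    }
  }
}
</code></pre>
<p>After editing the file you would restart docker on that node, e.g. with <code>sudo systemctl restart docker</code>.</p>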
| mario |
<p>When trying to install Jupyterhub on Kubernetes (EKS) I am getting below error in the Hub pod. This is output of describe pod. There was similar issue reported and i tried the solution but it didn't work.</p>
<pre><code>Warning FailedScheduling 52s (x2 over 52s) default-scheduler 0/3 nodes are available: 1 Insufficient cpu, 2 node(s) had no available volume zone.
</code></pre>
<h2>This is my pvc.yaml</h2>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
annotations:
volume.alpha.kubernetes.io/storage-class: default
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- us-east-1a
- us-east-1b
- us-east-1c
</code></pre>
<hr>
<h1>Source: jupyterhub/templates/hub/pvc.yaml</h1>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: hub-db-dir
annotations:
volume.alpha.kubernetes.io/storage-class: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
</code></pre>
<p>Please let me know if I am missing something here.</p>
| Mohan | <p>According to AWS documentation, an EBS volume and the instance to which it attaches must be in the same Availability Zone. (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html" rel="nofollow noreferrer">Source</a>)</p>
<p>In that case, the solution is to use only one AZ. </p>
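<p>For example, a sketch of the StorageClass from the question restricted to a single zone, so that provisioned volumes and the nodes using them always end up in the same AZ:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
</code></pre>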
<blockquote>
<p>Kubernetes itself supports many other storage backends that could be
used zone independently, but of course with different properties (like
performance, pricing, cloud provider support, ...). For example there
is <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">AWS EFS</a> that can be used in any AZ within an AWS region but with its own tradeoffs (e.g. <a href="https://github.com/kubernetes-incubator/external-storage/issues/1030" rel="nofollow noreferrer">kubernetes-incubator/external-storage#1030</a>).</p>
</blockquote>
<p>This is a known issue reported <a href="https://github.com/kubernetes/kops/issues/6267" rel="nofollow noreferrer">here</a>.</p>
| Mark Watney |
<p>I'm running Jenkins on my K8s cluster, and it's currently accessible externally by node_name:port. Some of my users are bothered by accessing the service using a port name, is there a way I could just assign the service a name? for instance: jenkins.mydomain </p>
<p>Thank you.</p>
| Gaby | <p>Have a look at Kubernetes Ingress.
You can define rules that point internally to the Kubernetes Service in front of Jenkins.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
| Christiaan Vermeulen |
<p>When I create a LoadBalancer like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: webhook-event-source-service
namespace: argo-events
annotations:
networking.gke.io/load-balancer-type: "Internal"
spec:
type: LoadBalancer
loadBalancerIP: 10.196.xxx.xxx
selector:
controller: eventsource-controller
ports:
- port: 1212
targetPort: 1212
protocol: TCP
</code></pre>
<p>Why does the GKE Console list it as an "External Load Balancer"?</p>
<p><a href="https://i.stack.imgur.com/FqobF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FqobF.png" alt="enter image description here" /></a></p>
| Raffael | <p>In fact, this problem has already been reported some time ago on Google's <a href="https://issuetracker.google.com/issues/179694570" rel="nofollow noreferrer">public issue tracker</a> and it's currently under investigation:</p>
<blockquote>
<p>Problem you have encountered:</p>
<p>I created a Deployment and a LoadBalancer Service as described in the
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">official
docs</a></p>
<p>Notice the LoadBalancer service is annotated with
<code>networking.gke.io/load-balancer-type: "Internal"</code></p>
<p>What you expected to happen:</p>
<p>I expected to see this service listed as <code>Internal Load Balancer</code> in
the <code>Services & Ingress</code> view of the GCP console.</p>
<p>Instead it is listed as an <code>External Load Balancer</code>. (See attachment)</p>
<p>Going to the specific load balancer in the <code>Load Balancing</code> view
shows it as Internal.</p>
<p>Steps to reproduce:</p>
<p>Just follow the docs and head to the <code>Services & Ingress</code> view in
the console.</p>
</blockquote>
<p>And the answer from <strong>GCP support</strong>, confirming that they were also able to reproduce the issue and are analyzing it at the moment:</p>
<blockquote>
<p>Hello,</p>
<p>Thank you for reaching out.</p>
<p>I've managed to reproduce the same scenario that you've included in
your message.</p>
<p>I forwarded this information to the Engineering team.</p>
<p>Please follow this issue in case of any further updates.</p>
<p>Best regards</p>
</blockquote>
<p>So if you are interested in progressing on this issue, feel free to follow <a href="https://issuetracker.google.com/issues/179694570" rel="nofollow noreferrer">this thread</a> for further updates.</p>
| mario |
<p>I am using the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics server</a> to get the resource usage of my Kubernetes cluster. But in order to use it from outside the host, I need to use "kubectl proxy", and I don't want to do that as it is not intended to run in the background. I want it to run continuously as a service.</p>
<p>How can I achieve this?</p>
<p><strong>expected output</strong>
curl clusterip:8001/apis/metrics.k8s.io/v1beta1/nodes</p>
<pre><code>{
"kind": "NodeMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
},
"items": [
{
"metadata": {
"name": "manhattan-master",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/manhattan-master",
"creationTimestamp": "2019-11-15T04:26:47Z"
},
"timestamp": "2019-11-15T04:26:33Z",
"window": "30s",
"usage": {
"cpu": "222998424n",
"memory": "3580660Ki"
}
}
]
</code></pre>
<p>I tried by using a <strong>LoadBalancer service</strong>
<strong>metrics-server-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: main-port
externalTrafficPolicy: Local
type: LoadBalancer
</code></pre>
<p><strong>kubectl describe service metrics-master -n kube-system</strong></p>
<pre><code>[root@manhattan-master 1.8+]# kubectl describe service metrics-server -n kube-system
Name: metrics-server
Namespace: kube-system
Labels: kubernetes.io/cluster-service=true
kubernetes.io/name=Metrics-server
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Me...
Selector: k8s-app=metrics-server
Type: LoadBalancer
IP: 10.110.223.216
Port: <unset> 443/TCP
TargetPort: main-port/TCP
NodePort: <unset> 31043/TCP
Endpoints: 10.32.0.7:4443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32208
Events: <none>
</code></pre>
| UDIT JOSHI | <p>This is possible by creating a new service to expose the Metrics Server. Your Metrics Server Service should look like this: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/name: Metrics-server-ext
name: metrics-server-ext
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/metrics-server
spec:
ports:
- port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
sessionAffinity: None
type: LoadBalancer
</code></pre>
<p>If you try to access this service you will face authorization problems, so you first need to grant the necessary permissions.</p>
<p>After creating the service you will need to create a Cluster Role Binding so our service can have access to the data:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl create clusterrolebinding node-admin-default-svc --clusterrole=cluster-admin --serviceaccount=default:default
</code></pre>
<p>Before running the <code>curl</code> command, we need to get the token so we can pass it in the request: </p>
<pre class="lang-sh prettyprint-override"><code>$ TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
</code></pre>
<p>Get your service external IP:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get svc/metrics-server-ext -n kube-system -o jsonpath='{..ip}'
</code></pre>
<p>Your <code>curl</code> command should pass the Token key to get Authorization: </p>
<pre class="lang-sh prettyprint-override"><code>curl -k https://34.89.228.98/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $TOKEN" --insecure
</code></pre>
<p>Sample output: </p>
<pre><code>{
"kind": "NodeMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
},
"items": [
{
"metadata": {
"name": "gke-lab-default-pool-993de7d7-ntmc",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-ntmc",
"creationTimestamp": "2019-11-19T10:26:52Z"
},
"timestamp": "2019-11-19T10:26:17Z",
"window": "30s",
"usage": {
"cpu": "52046272n",
"memory": "686768Ki"
}
},
{
"metadata": {
"name": "gke-lab-default-pool-993de7d7-tkj9",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-tkj9",
"creationTimestamp": "2019-11-19T10:26:52Z"
},
"timestamp": "2019-11-19T10:26:21Z",
"window": "30s",
"usage": {
"cpu": "52320505n",
"memory": "687252Ki"
}
},
{
"metadata": {
"name": "gke-lab-default-pool-993de7d7-v7m3",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-v7m3",
"creationTimestamp": "2019-11-19T10:26:52Z"
},
"timestamp": "2019-11-19T10:26:17Z",
"window": "30s",
"usage": {
"cpu": "45602403n",
"memory": "609968Ki"
}
}
]
}
</code></pre>
<p>EDIT:</p>
<p>You can also optionally access it from your pods, since you created a ClusterRoleBinding that grants the cluster-admin role to your default Service Account. </p>
<p>As an example, create a pod from an image that includes the curl command:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl run bb-$RANDOM --rm -i --image=ellerbrock/alpine-bash-curl-ssl --restart=Never --tty -- /bin/bash
</code></pre>
<p>Then you need to exec into your pod and run:</p>
<pre class="lang-sh prettyprint-override"><code>$ curl -k -X GET https://kubernetes.default/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --insecure
</code></pre>
<p>Here we are passing the same TOKEN mentioned before in a completely different way. </p>
| Mark Watney |
<p>I run <code>kubectl get events</code> to get the events details, now I'd like to do a fuzzy search to get the particular pods with prefix <code>nginx-*</code></p>
<p>Suppose, I have this output as below</p>
<pre><code>$ kubectl get events -o json
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-03-12T06:18:58Z",
"involvedObject": {
"apiVersion": "v1",
"kind": "Pod",
"name": "nginx-6db489d4b7-99xmd",
"namespace": "default",
"resourceVersion": "9683",
"uid": "64f6eeb1-c267-4ee1-b34d-14e65573d63f"
},
"kind": "Event",
"lastTimestamp": "2020-03-12T06:18:58Z",
"message": "Successfully assigned default/nginx-6db489d4b7-99xmd to kind-worker3",
"metadata": {
"creationTimestamp": "2020-03-12T06:18:58Z",
"name": "nginx-6db489d4b7-99xmd.15fb7a182197a184",
"namespace": "default",
"resourceVersion": "9703",
"selfLink": "/api/v1/namespaces/default/events/nginx-6db489d4b7-99xmd.15fb7a182197a184",
"uid": "de0ff737-e4d6-4218-b441-26c68a1ee8bd"
},
"reason": "Scheduled",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "default-scheduler"
},
"type": "Normal"
},
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-03-12T06:18:59Z",
"involvedObject": {
"apiVersion": "v1",
"fieldPath": "spec.containers{nginx}",
"kind": "Pod",
"name": "nginx-6db489d4b7-99xmd",
"namespace": "default",
"resourceVersion": "9693",
"uid": "64f6eeb1-c267-4ee1-b34d-14e65573d63f"
},
"kind": "Event",
"lastTimestamp": "2020-03-12T06:18:59Z",
"message": "Pulling image \"nginx\"",
"metadata": {
"creationTimestamp": "2020-03-12T06:18:59Z",
"name": "nginx-6db489d4b7-99xmd.15fb7a18754d0bfc",
"namespace": "default",
"resourceVersion": "9709",
"selfLink": "/api/v1/namespaces/default/events/nginx-6db489d4b7-99xmd.15fb7a18754d0bfc",
"uid": "d541f134-5e9c-4b7f-b035-ae4d49a3745f"
},
"reason": "Pulling",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "kubelet",
"host": "kind-worker3"
},
"type": "Normal"
},
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-03-12T06:18:26Z",
"involvedObject": {
"apiVersion": "v1",
"fieldPath": "spec.containers{nginx}",
"kind": "Pod",
"name": "nginx",
"namespace": "default",
"resourceVersion": "9555",
"uid": "f9d0ae86-4d7d-4553-91c2-efc0c3f8144f"
},
"kind": "Event",
"lastTimestamp": "2020-03-12T06:18:26Z",
"message": "Pulling image \"nginx\"",
"metadata": {
"creationTimestamp": "2020-03-12T06:18:26Z",
"name": "nginx.15fb7a10b4975ae0",
"namespace": "default",
"resourceVersion": "9565",
"selfLink": "/api/v1/namespaces/default/events/nginx.15fb7a10b4975ae0",
"uid": "f66cf712-1284-4f65-895a-5fbfa974e317"
},
"reason": "Pulling",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "kubelet",
"host": "kind-worker"
},
"type": "Normal"
},
{
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-03-12T06:18:38Z",
"involvedObject": {
"apiVersion": "v1",
"fieldPath": "spec.containers{nginx}",
"kind": "Pod",
"name": "nginx",
"namespace": "default",
"resourceVersion": "9555",
"uid": "f9d0ae86-4d7d-4553-91c2-efc0c3f8144f"
},
"kind": "Event",
"lastTimestamp": "2020-03-12T06:18:38Z",
"message": "Successfully pulled image \"nginx\"",
"metadata": {
"creationTimestamp": "2020-03-12T06:18:38Z",
"name": "nginx.15fb7a13a4aed9fc",
"namespace": "default",
"resourceVersion": "9613",
"selfLink": "/api/v1/namespaces/default/events/nginx.15fb7a13a4aed9fc",
"uid": "55a4a512-d5c0-41da-ae9c-c1654b6bbdfe"
},
"reason": "Pulled",
"reportingComponent": "",
"reportingInstance": "",
"source": {
"component": "kubelet",
"host": "kind-worker"
},
"type": "Normal"
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
</code></pre>
<p>I'd like to get the messages from pod <code>nginx-*</code> only. </p>
<pre><code>$ kubectl get events -o=jsonpath='{.items[*].involvedObject}'
</code></pre>
<p>But I am not sure how to check with name if it is <code>nginx-*</code> and then export its messages</p>
<pre><code> "involvedObject": {
"apiVersion": "v1",
"kind": "Pod",
"name": "nginx-6db489d4b7-99xmd",
"namespace": "default",
"resourceVersion": "9683",
"uid": "64f6eeb1-c267-4ee1-b34d-14e65573d63f"
},
"message": "Successfully assigned default/nginx-6db489d4b7-99xmd to kind-worker3",
</code></pre>
| Bill | <p>Kubectl's jsonpath implementation doesn't support regexp matching, so it's not possible to achieve this using that tool alone (take a look at <a href="https://github.com/kubernetes/kubernetes/issues/72220" rel="nofollow noreferrer">this github issue</a>). Fortunately you can always use <code>jq</code> to filter events, for example like this:</p>
<pre><code>kubectl get events -ojson | jq '.items[] | select(.involvedObject.name | test("^nginx-")) | .message'
</code></pre>
| Matt |
<p>I would like to change my deployed(GKE) Helm Chart values file with the ones that are inside my local file, basically to do this:</p>
<pre><code>helm upgrade -f new-values.yml {release name} {package name or path}
</code></pre>
<p>So I've made all the changes inside my local file, but the deployment is inside the GKE cluster.
I've connected to my cluster via ssh, but how can I run the above command in order to perform the update if the file with the new values is on my local machine and the deployment is inside GKE cluster?
Maybe somehow via the <code>scp</code> command?</p>
| Anna | <h3>Solution by setting up required tools locally (you need a while or two for that)</h3>
<p>You just need to reconfigure your <code>kubectl</code> client, which is pretty straightforward. When you log in to <a href="http://console.cloud.google.com/" rel="nofollow noreferrer">GCP Console</a> -> go to <strong>Kubernetes Engine</strong> -> <strong>Clusters</strong> -> click on <strong>Actions</strong> (3 vertical dots to the right of the cluster name) -> select <strong>Connect</strong> -> copy the command, which may resemble the following one:</p>
<pre><code>gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
</code></pre>
<p>It assumes you have your <strong>Cloud SDK</strong> and <code>kubectl</code> already installed on your local machine. If you have not, here is a step-by-step description of how to do that:</p>
<ul>
<li><a href="https://cloud.google.com/sdk/docs/install#deb" rel="nofollow noreferrer">Installing Google Cloud SDK</a> [Debian/Ubuntu] (if you use a different OS, simply choose another tab)</li>
<li><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management" rel="nofollow noreferrer">Installing kubectl tool</a> [Debian/Ubuntu] (choose your OS if it is something different)</li>
</ul>
<p>Once you run the above command on your local machine, your <code>kubectl</code> context will be automatically set to your <strong>GKE Cluster</strong> even if it was set before e.g. to your local <strong>Minikube</strong> instance. You can check it by running:</p>
<pre><code>kubectl config current-context
</code></pre>
<p>OK, almost done. Did I also mention <code>helm</code> ? Well, you will also need it. So if you have not installed it on your local machine previously, please do it now:</p>
<ul>
<li><a href="https://helm.sh/docs/intro/install/#from-apt-debianubuntu" rel="nofollow noreferrer">Install helm</a> [Debian/Ubuntu]</li>
</ul>
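<p>Once your <code>kubectl</code> context points at the GKE cluster and <code>helm</code> is installed, you can simply run your original command from the local machine where <code>new-values.yml</code> lives:</p>
<pre><code>helm upgrade -f new-values.yml {release name} {package name or path}
</code></pre>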
<h3>Alternative solution using Cloud Shell (much quicker)</h3>
<p>If installing and configuring it locally seems to you too much hassle, you can simply use a <strong>Cloud Shell</strong> (I bet you've used it before). In case you didn't, once logged in to your <strong>GCP Console</strong> click on the following icon:</p>
<p><a href="https://i.stack.imgur.com/RX1am.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RX1am.png" alt="enter image description here" /></a></p>
<p>Once logged into <strong>Cloud Shell</strong>, you can choose to upload your local files there:</p>
<p>simply click on <strong>More</strong> (3 dots again):</p>
<p><a href="https://i.stack.imgur.com/uUZ7M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uUZ7M.png" alt="enter image description here" /></a></p>
<p>and choose <strong>Upload a file</strong>:</p>
<p><a href="https://i.stack.imgur.com/IKvav.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKvav.png" alt="enter image description here" /></a></p>
| mario |
<p>As <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">the documentation</a> shows, you should be setting the env vars when doing a <code>docker run</code> like the following:</p>
<pre><code>docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres
</code></pre>
<p>This sets the superuser and password to access the database instead of the defaults of <code>POSTGRES_PASSWORD=''</code> and <code>POSTGRES_USER='postgres'</code>.</p>
<p>However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?</p>
| cjones | <p>@P Ekambaram is correct but I would like to go further into this topic and explain the "whys and hows".</p>
<p>When passing passwords on Kubernetes, it's highly recommended not to keep them in plain text in your manifests, and you can do this by using Secrets. </p>
<p><strong>Creating your own Secrets</strong> (<a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-your-own-secrets" rel="nofollow noreferrer">Doc</a>)</p>
<p>To be able to use the secrets as described by @P Ekambaram, you need to have a secret in your kubernetes cluster. </p>
<p>To easily create a secret, you can also create a Secret from generators and then apply it to create the object on the Apiserver. The generators should be specified in a <code>kustomization.yaml</code> inside a directory.</p>
<p>For example, to generate a Secret from literals <code>username=admin</code> and <code>password=secret</code>, you can specify the secret generator in <code>kustomization.yaml</code> as</p>
<pre class="lang-sh prettyprint-override"><code># Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
literals:
- username=admin
- password=secret
EOF
</code></pre>
<p>Apply the kustomization directory to create the Secret object.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
</code></pre>
<p><strong>Using Secrets as Environment Variables</strong> (<a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">Doc</a>)</p>
<p>This is an example of a pod that uses secrets from environment variables:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
</code></pre>
<p>Source: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">here</a>.</p>
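<p>As for the <strong>Skaffold</strong> part: a Secret is just another Kubernetes manifest, so you can let Skaffold apply it together with your Postgres resources by listing it in <code>skaffold.yaml</code>. A rough sketch (the file names are assumptions):</p>
<pre><code>deploy:
  kubectl:
    manifests:
    - k8s/postgres-secret.yaml
    - k8s/postgres-deployment.yaml
</code></pre>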
| Mark Watney |
<p>I have two services hosted on different ports and I have created a ingress resource which looks like this</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /svc1/
backend:
serviceName: app1-svc
servicePort: 3000
- path: /svc2/
backend:
serviceName: app2-svc
servicePort: 8080
</code></pre>
<p>on top of this I have created a <strong><em>NodePort</em></strong> type ingress controller.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
name: controller-nginx-ingress-controller
spec:
clusterIP: 10.88.18.191
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 30080
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 31442
port: 443
protocol: TCP
targetPort: https
selector:
app: nginx-ingress
component: controller
release: controller
</code></pre>
<p>And finally, setup a cloud load balancer to access application running on my K8S cluster.</p>
<p><strong>Problem:</strong>
I'm not able to access any of my applications using URL routing</p>
<ul>
<li><a href="http://load-balancer-ip/svc1/" rel="nofollow noreferrer">http://load-balancer-ip/svc1/</a></li>
<li><a href="http://load-balancer-ip/svc2/" rel="nofollow noreferrer">http://load-balancer-ip/svc2/</a></li>
</ul>
<p>Can anyone please let me know what I'm doing incorrectly? And how to resolve this issue?</p>
| Geeky Ninja | <p>From what you mentioned in comments I am pretty sure the problem can be solved with path rewrites.</p>
<p>Right now, when you send a request to <code>/svc1/</code> with <code>path: /svc1/</code>, the request is forwarded to <code>app1-svc</code> with the path still set to <code>/svc1/</code>, and you receive 404 errors because there is no such path in app1. You can most probably solve the issue with a rewrite. You can achieve it using the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation, so your ingress would look like the following:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: mynamespace
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /svc1(/|$)(.*)
backend:
serviceName: app1-svc
servicePort: 3000
- path: /svc2(/|$)(.*)
backend:
serviceName: app2-svc
servicePort: 8080
</code></pre>
<p>In this case when sending request with path set to <code>/svc1/something</code> the request will be forwarded to app1 with path rewritten to <code>/something</code>.</p>
<p>Also take a look in <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">ingress docs for more explanation</a>.</p>
<p>Let me know if it solved you issue.</p>
| Matt |
<p>This is my DockerFile</p>
<pre><code># set base image (host OS)
FROM python:3.8
# set the working directory in the container
WORKDIR /code
# command to run on container start
RUN mkdir -p /tmp/xyz-agent
</code></pre>
<p>And when I execute the following command -
<code>docker -v build .</code> the docker builds successfully and I don't get any error. This is the output -</p>
<pre><code>Step 1/3 : FROM python:3.8
3.8: Pulling from library/python
b9a857cbf04d: Already exists
d557ee20540b: Already exists
3b9ca4f00c2e: Already exists
667fd949ed93: Already exists
4ad46e8a18e5: Already exists
381aea9d4031: Pull complete
8a9e78e1993b: Pull complete
9eff4cbaa677: Pull complete
1addfed3cc19: Pull complete
Digest: sha256:fe08f4b7948acd9dae63f6de0871f79afa017dfad32d148770ff3a05d3c64363
Status: Downloaded newer image for python:3.8
---> b0358f6298cd
Step 2/3 : WORKDIR /code
---> Running in 486aaa8f33ad
Removing intermediate container 486aaa8f33ad
---> b798192954bd
Step 3/3 : CMD ls
---> Running in 831ef6e6996b
Removing intermediate container 831ef6e6996b
---> 36298963bfa5
Successfully built 36298963bfa5
</code></pre>
<p>But when I login inside the container using terminal. I don't see the directory created.
Same goes for other commands as well. Doesn't throw error, doesn't create anything.
NOTE: I'm using Docker for Desktop with Kubernetes running.</p>
| bosari | <p>Are you sure you create your container from the newly built image ?</p>
<p>Let's take a look again at the whole process:</p>
<p>We have the following <code>Dockerfile</code>:</p>
<pre><code># set base image (host OS)
FROM python:3.8
# set the working directory in the container
WORKDIR /code
# command to run on container start
RUN mkdir -p /tmp/xyz-agent
</code></pre>
<p>In the directory where <code>Dockerfile</code> is placed, we run:</p>
<pre><code>docker build .
</code></pre>
<p>Then we run:</p>
<pre><code>docker images
</code></pre>
<p>which shows us our newly built image:</p>
<pre><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 775e0c698e81 29 seconds ago 882MB
python 3.8 b0358f6298cd 6 days ago 882MB
</code></pre>
<p>Now we need to tag our newly created image:</p>
<pre><code>docker tag 775e0c698e81 my-python:v1
</code></pre>
<p>When we run <code>docker images</code> again it shows us:</p>
<pre><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my-python v1 775e0c698e81 About a minute ago 882MB
python 3.8 b0358f6298cd 6 days ago 882MB
</code></pre>
<p>Let's run a new container from this image and check whether our directory has been successfully created:</p>
<pre><code>$ docker run -ti my-python:v1 /bin/bash
root@6e93ed4a6e94:/code# pwd
/code
root@6e93ed4a6e94:/code# ls -ld /tmp/xyz-agent
drwxr-xr-x 2 root root 4096 Jan 19 14:58 /tmp/xyz-agent
root@6e93ed4a6e94:/code#
</code></pre>
<p>As you can see above, <code>/tmp/xyz-agent</code> directory is in it's place as expected. I hope this helps you figure out where you are making a mistake.</p>
| mario |
<p>From time to time, in my 8-nodes Kubernetes 1.17.2 cluster managed by Rancher 2.3.5 single-node install, I happen to stumble upon this weird error, which actually says a folder isn't there but... it is!
The first occurrence of this error, which actually prevents the afflicted container from being able to start at all, was related to my attempts at getting a GlusterFS volume across 3 of my worker nodes recognized as a standard storage provider! So I've tried torchbox/k8s-hostpath-provisioner, but also rancher.io/local-path; from time to time, though, this weird error about the provisioned directory being misconfigured happened on new, specific services, so I just decided to ditch them and move forward.
Now I'm really stopped, because there aren't so many alternatives to a K8s-deployed mailserver such as tomav/docker-mailserver, but it is still refusing to start throwing this CreateContainerConfigError, even after I've completely deleted it and patched its YAML manifest as the use K8s standard "hostPath" volumeMount instead of my testing provisioners:</p>
<p>FROM THIS:</p>
<pre><code>volumes:
- name: data
persistentVolumeClaim:
claimName: mail-storage
</code></pre>
<p>TO THIS:</p>
<pre><code>volumes:
- name: data
hostPath:
path: /mnt/gvol2/docker-mailserver
type: ""
</code></pre>
<p>I've adapted this last code from another sample service I'm using with Rancher "Bind Mount" as a Volume in the GUI; docker-mailserver looks precisely the same in the Rancher GUI, no errors can be seen, so Volume definition seems OK.
So why do I get this "CreateContainerConfigError"?
Any help will be much appreciated!</p>
| adalberto caccia | <pre><code>type: "DirectoryOrCreate"
</code></pre>
<p>solved the mystery!
Apparently, K8s is not going to actually stat() the directory when it is not directly created by itself!</p>
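<p>For reference, the complete volume definition from the manifest above then becomes:</p>
<pre><code>volumes:
  - name: data
    hostPath:
      path: /mnt/gvol2/docker-mailserver
      type: "DirectoryOrCreate"
</code></pre>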
| adalberto caccia |
<p>We are using <a href="https://github.com/poseidon/typhoon" rel="nofollow noreferrer">https://github.com/poseidon/typhoon</a> for our kubernetes cluster setup.
I want to set up a dashboard for kubernetes similar to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</a></p>
<p>I followed <a href="https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html</a> and I am able to get the dashboard on my localhost </p>
<p>The issue with this is that "EVERY USER HAS TO FOLLOW THE SAME TO ACCESS THE DASHBOARD"</p>
<p>I was wondering if there was some way wherein we can access the dashboard via DomainName and everyone should be able to access it without much pre-set up required.</p>
| codeaprendiz | <p>In <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md" rel="nofollow noreferrer">dashboard documentation</a> you can read:</p>
<blockquote>
<p>Using Skip option will make Dashboard use privileges of Service
Account used by Dashboard. Skip button is disabled by default since
1.10.1. Use --enable-skip-login dashboard flag to display it.</p>
</blockquote>
<p>So you can add <code>--enable-skip-login</code> to the dashboard to display skip button.
If your users don't want to log in, they can click the Skip button on the login screen and use the privileges of the Dashboard service account.</p>
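<p>As an illustration, the flag goes into the dashboard container's <code>args</code>; this is only a sketch, since the deployment is usually called <code>kubernetes-dashboard</code> but the name and namespace may differ depending on how you installed it:</p>
<pre><code># kubectl -n kubernetes-dashboard edit deployment kubernetes-dashboard
spec:
  template:
    spec:
      containers:
      - name: kubernetes-dashboard
        args:
        - --auto-generate-certificates   # keep whatever args are already there
        - --enable-skip-login            # add this one
</code></pre>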
| Matt |
<p>In a docker container I want to run k8s. </p>
<p>When I run <code>kubeadm join ...</code> or <code>kubeadm init</code> commands I see sometimes errors like </p>
<blockquote>
<p>\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:<br>
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1</p>
</blockquote>
<p>because (I think) my container does not have the expected kernel header files.</p>
<p>I realise that the container reports its kernel <strong>based on the host that is running the container</strong>; and looking at <a href="https://gitlab.cncf.ci/kubernetes/kubernetes/blob/06cdb02fcacc06bbc36d290ce99720856191ff27/test/e2e_node/system/kernel_validator.go" rel="nofollow noreferrer">k8s code</a> I see </p>
<pre><code>// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
possibePaths := []string{
"/proc/config.gz",
"/boot/config-" + k.kernelRelease,
"/usr/src/linux-" + k.kernelRelease + "/.config",
"/usr/src/linux/.config",
}
</code></pre>
<p>so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel-info check. </p>
<p>I note that running <code>docker run -it solita/centos-systemd:7 /bin/bash</code> on a macOS host I see :</p>
<pre><code># uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
</code></pre>
<p>but running exact same on a Ubuntu VM I see :</p>
<pre><code># uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
</code></pre>
<p>[Weirdly I don't see this <code>FATAL: Module configs not found in directory</code> error every time, but I guess that is a separate question!]</p>
<p><strong>UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.</strong> </p>
| k1eran | <p>I do not believe that is possible given the nature of containers.</p>
<p>You should instead test your app in a docker container then deploy that image to k8s either in the cloud or locally using minikube.</p>
<p>Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox.</p>
<p><a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a></p>
| ThePilot |
<p>I have set up the <code>Nginx-Ingress</code> controller as per the documentation (<a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Installation guide</a>) and followed the steps using the example provided. When I try to access the service using the <code>curl</code> command, I am getting a <code>400</code> Bad Request. When I look at the logs of the <code>nginx-ingress</code> pod, I am not seeing any error. I have attached the logs for reference. I am finding it difficult to troubleshoot the issue.</p>
<pre><code>fetch the pods from the nginx-ingress namespace
$ kubectl get po -n nginx-ingress
NAME READY STATUS RESTARTS AGE
coffee-7c45f487fd-965dq 1/1 Running 0 46m
coffee-7c45f487fd-bncz5 1/1 Running 0 46m
nginx-ingress-7f4b784f79-7k4q6 1/1 Running 0 48m
tea-7769bdf646-g559m 1/1 Running 0 46m
tea-7769bdf646-hlr5j 1/1 Running 0 46m
tea-7769bdf646-p5hp8 1/1 Running 0 46m
making the request. I have set up the DNS record in the /etc/hosts file
$ curl -vv http://cafe.example.com/coffee
GET /coffee HTTP/1.1
> Host: cafe.example.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.17.10
< Date: Mon, 11 May 2020 17:36:31 GMT
< Content-Type: text/html
< Content-Length: 158
< Connection: close
checking the logs after the curl request
$ kubectl logs -n nginx-ingress nginx-ingress-7f 4b784f79-7k4q6
100.96.1.1 - - [11/May/2020:17:31:48 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8340 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:31:51 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40392 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:31:58 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8348 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:01 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40408 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:08 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8360 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:11 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40414 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:18 +0000] "PROXY TCP4 3.6.94.242 172.20.81.142 35790 80" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:18 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8366 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:21 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40422 32579" 400 158 "-" "-" "-"
</code></pre>
| zilcuanu | <p>I was able to resolve the issue after adding the following annotations.</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
</code></pre>
| Vipul Sharda |
<p>I am having this error, after running <strong>skaffold dev</strong>.</p>
<pre><code>Step 1/6 : FROM node:current-alpine3.11
exiting dev mode because first build failed: unable to stream build output: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35889->192.168.49.1:53: i/o timeout. Please fix the Dockerfile and try again..
</code></pre>
<p>Here is skaffold.yml</p>
<pre><code>apiVersion: skaffold/v2beta11
kind: Config
metadata:
name: *****
build:
artifacts:
- image: 127.0.0.1:32000/auth
context: auth
docker:
dockerfile: Dockerfile
deploy:
kubectl:
manifests:
- infra/k8s/auth-depl.yaml
local:
push: false
artifacts:
- image: 127.0.0.1:32000/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: "src/**/*.ts"
dest: .
</code></pre>
<p>I have tried all possible solutions I saw online, including adding 8.8.8.8 as the DNS, but the error still persists. I am using Linux and running ubuntu, I am also using Minikube locally. Please assist.</p>
| Fehmy | <p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p>
<p>In this case:</p>
<pre><code>minikube delete && minikube start
</code></pre>
<p>solved the problem but you can start from restarting <strong>docker daemon</strong>. Since this is <strong>Minikube</strong> cluster and <strong>Skaffold</strong> uses for its builds <strong>Minikube's Docker daemon</strong>, as suggested by <a href="https://stackoverflow.com/users/600339/brian-de-alwis">Brian de Alwis</a> in his comment, you may start from:</p>
<pre><code>minikube stop && minikube start
</code></pre>
<p>or</p>
<pre><code>minikube ssh
su
systemctl restart docker
</code></pre>
<p>I searched for similar errors and in many cases e.g. <a href="https://stackoverflow.com/a/61114031">here</a> or <a href="https://github.com/docker/for-mac/issues/1317" rel="nofollow noreferrer">in this thread</a>, setting up your DNS to something reliable like <code>8.8.8.8</code> may also help:</p>
<pre><code>echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
</code></pre>
<p>in case you use <strong>Minikube</strong> you should first:</p>
<pre><code>minikube ssh
su ### to become root
</code></pre>
<p>and then run:</p>
<pre><code>echo "nameserver 8.8.8.8" >> /etc/resolv.conf
</code></pre>
<p>The following error message:</p>
<pre><code>Please fix the Dockerfile and try again
</code></pre>
<p>may be somewhat misleading in similar cases as <code>Dockerfile</code> is probably totally fine, but as we can read in other part:</p>
<pre><code>lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35889->192.168.49.1:53: i/o timeout.
</code></pre>
<p>it's definitely related with failing DNS lookup. This is well described <a href="http://docs.docker.oeynet.com/docker-cloud/docker-errors-faq/" rel="nofollow noreferrer">here</a> as well known issue.</p>
<blockquote>
<h2>Get i/o timeout</h2>
<p><em>Get <a href="https://index.docker.io/v1/repositories/" rel="nofollow noreferrer">https://index.docker.io/v1/repositories/</a>/images: dial tcp: lookup on :53: read udp :53: i/o timeout</em></p>
<h3>Description</h3>
<p>The DNS resolver configured on the host cannot resolve the registry’s
hostname.</p>
<h3>GitHub link</h3>
<p>N/A</p>
<h3>Workaround</h3>
<p>Retry the operation, or if the error persists, use another DNS
resolver. You can do this by updating your <code>/etc/resolv.conf</code> file
with these or other DNS servers:</p>
<p><code>nameserver 8.8.8.8 nameserver 8.8.4.4</code></p>
</blockquote>
| mario |
<p>We have deployed Customized Confluent Kafka Connector as statefulset in Kubernetes, which mounts secrets from Azure KeyVault. These secrets contain db username and password & are meant to be used while creating connectors via rest endpoint <code>https://kafka.mydomain.com/connectors</code> using Postman.</p>
<p>The secrets are being loaded as environment variables in container. And <code>kubernetes-ingress-controller</code> - <em>path based routing</em> is used for exposing rest endpoint.</p>
<p>So far, our team is unable to use the environment variables while creating connector through Postman.</p>
<p>Connector config:</p>
<pre><code>{
"name": "TEST.CONNECTOR.SINK",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"errors.log.include.messages": "true",
"table.name.format": "AuditTransaction",
"connection.password": "iampassword", <------------ (1)
"flush.size": "3",
"tasks.max": "1",
"topics": "TEST.CONNECTOR.SOURCE-AuditTransaction",
"key.converter.schemas.enable": "false",
"connection.user": "iamuser", <------------ (2)
"value.converter.schemas.enable": "true",
"name": "TEST.CONNECTOR.SINK",
"errors.tolerance": "all",
"connection.url": "jdbc:sqlserver://testdb.database.windows.net:1433;databaseName=mytestdb01",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"insert.mode": "insert",
"errors.log.enable": "true",
"key.converter": "org.apache.kafka.connect.json.JsonConverter"
}
}
</code></pre>
<p><em><strong>(1) and (2)</strong></em> - Here we want to use system environment variables with Values - <code>$my_db_username=iamuser</code>, <code>$my_db_password=iampassword</code>. We have tried using <code>"$my_db_username"</code> and <code>"$my_db_password"</code> there but in logs of Connector Pod, it doesn't resolve to the respective values.</p>
<p>Logs:</p>
<pre><code>[2020-07-28 12:31:22,838] INFO Starting JDBC Sink task (io.confluent.connect.jdbc.sink.JdbcSinkTask:44)
[2020-07-28 12:31:22,839] INFO JdbcSinkConfig values:
auto.create = false
auto.evolve = false
batch.size = 3000
connection.password = [hidden]
connection.url = jdbc:sqlserver://testdb.database.windows.net:1433;databaseName=mytestdb01
connection.user = $my_db_username
db.timezone = UTC
delete.enabled = false
dialect.name =
fields.whitelist = []
insert.mode = insert
max.retries = 10
pk.fields = []
pk.mode = none
quote.sql.identifiers = ALWAYS
retry.backoff.ms = 3000
table.name.format = AuditTransaction
</code></pre>
<p>Is there any way to use system/container environment variables in this config, while creating connectors with Postman or something else?</p>
| Aditya Jalkhare | <p>Finally did it!! Using <code>FileConfigProvider</code>. All the needed information was <a href="https://docs.confluent.io/current/connect/security.html#externalizing-secrets" rel="nofollow noreferrer">here</a>.</p>
<p>We just had to parametrize <code>connect-secrets.properties</code> according to our requirement and substitute env vars value on startup.</p>
<p>This doesn't allow using Env Vars via Postman. But parametrized <code>connect-secrets.properties</code> specifically tuned according to our need did the job and <code>FileConfigProvider</code> did the rest by picking values from <code>connect-secrets.properties</code></p>
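<p>For anyone looking for the shape of that setup, here is a rough, hedged sketch (the file path and property keys are made up for illustration): the worker config enables the provider, the properties file is rendered from env vars at container startup, and the connector JSON references keys from that file instead of literal values.</p>
<pre><code># worker config (e.g. connect-distributed.properties)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# connect-secrets.properties (hypothetical path /opt/connect-secrets.properties,
# rendered from env vars on startup)
dbUsername=iamuser
dbPassword=iampassword
</code></pre>
<p>and in the connector config sent via Postman:</p>
<pre><code>"connection.user": "${file:/opt/connect-secrets.properties:dbUsername}",
"connection.password": "${file:/opt/connect-secrets.properties:dbPassword}"
</code></pre>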
<h2>Update</h2>
<p>Found a way to implement this using env vars <a href="https://github.com/giogt/kafka-env-config-provider" rel="nofollow noreferrer">here</a>.</p>
| Aditya Jalkhare |
<p>I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.</p>
<p>Currently, the service running has multiple pods that need to communicate with each other. I'm looking for a general approach to go about debugging the issue, rather than getting into the specifies of the service as the question will become much too specific.</p>
<p>The pods within the cluster are throwing an error:
<code>err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"</code>
Both pods are in the same cluster.</p>
<p>What are the best steps to take to debug this?</p>
<p>I have tried running:
<code>kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf</code>
And this returns:
<code>search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal</code>
Which I found here: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
| fuzzi | <p>First of all, the following pattern:</p>
<pre><code>my-svc.my-namespace.svc.cluster-domain.example
</code></pre>
<p>is applicable only to <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">FQDNs of Services</a>, not <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="noreferrer">Pods</a> which have the following form:</p>
<pre><code>pod-ip-address.my-namespace.pod.cluster-domain.example
</code></pre>
<p>e.g.:</p>
<pre><code>172-17-0-3.default.pod.cluster.local
</code></pre>
<p>So in fact you're querying cluster dns about FQDN of the <code>Service</code> named <code>testpod</code> and not about FQDN of the <code>Pod</code>. Judging by the fact that it's being resolved successfully, such <code>Service</code> already exists in your cluster but most probably is misconfigured. The fact that you're getting the error message <code>connection refused</code> can mean the following:</p>
<ol>
<li>your <code>Service</code> FQDN <code>testpod.mynamespace.svc.cluster.local</code> has been successfully resolved
(otherwise you would receive something like <code>curl: (6) Could not resolve host: testpod.default.svc.cluster.local</code>)</li>
<li>you've reached successfully your <code>testpod</code> <code>Service</code>
(otherwise, i.e. if it existed but wasn't listening on <code>8080</code> port, you're trying to connect to, you would receive <code>timeout</code> e.g. <code>curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out</code>)</li>
<li>you've reached the <code>Pod</code>, exposed by the <code>testpod</code> <code>Service</code> (you've been successfully redirected to it by the <code>testpod</code> <code>Service</code>)</li>
<li>but once you've reached the <code>Pod</code>, you're trying to connect to an incorrect port and that's why the connection is being refused by the server</li>
</ol>
<p>My best guess is that your <code>Pod</code> in fact listens on different port, like <code>80</code> but you exposed it via the <code>ClusterIP</code> <code>Service</code> by specifying only <code>--port</code> value e.g. by:</p>
<pre><code>kubectl expose pod testpod --port=8080
</code></pre>
<p>In such case both <code>--port</code> (port of the <code>Service</code>) and <code>--targetPort</code> (port of the <code>Pod</code>) will have the same value. In other words you've created a <code>Service</code> like the one below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: testpod
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>And you probably should've exposed it either this way:</p>
<pre><code>kubectl expose pod testpod --port=8080 --targetPort=80
</code></pre>
<p>or with the following yaml manifest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: testpod
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 80
</code></pre>
<p>Of course your <code>targetPort</code> may be different than <code>80</code>, but <code>connection refused</code> in such case can mean only one thing: target http server (running in a <code>Pod</code>) refuses connection to <code>8080</code> port (most probably because it isn't listening on it). You didn't specify what image you're using, whether it's a standard <code>nginx</code> webserver or something based on your custom image. But if it's <code>nginx</code> and wasn't configured differently it listens on port <code>80</code>.</p>
<p>For further debug, you can attach to your <code>Pod</code>:</p>
<pre><code>kubectl exec -it testpod --namespace mynamespace -- /bin/sh
</code></pre>
<p>and if <code>netstat</code> command is not present (the most likely scenario) run:</p>
<pre><code>apt update && apt install net-tools
</code></pre>
<p>and then check with <code>netstat -ntlp</code> which port your container actually listens on.</p>
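<p>For example, with a standard <code>nginx</code> image the output would look roughly like this (illustrative only), and the port in the <code>Local Address</code> column is what your <code>Service</code>'s <code>targetPort</code> has to point at:</p>
<pre><code>root@testpod:/# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN   1/nginx: master pr
</code></pre>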
<p>I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.</p>
| mario |
<p>Tried to install rook-ceph on kubernetes as this guide:</p>
<p><a href="https://rook.io/docs/rook/v1.3/ceph-quickstart.html" rel="nofollow noreferrer">https://rook.io/docs/rook/v1.3/ceph-quickstart.html</a></p>
<pre><code>git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml
</code></pre>
<p>When I check all the pods</p>
<pre><code>$ kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-9c2z9 3/3 Running 0 23m
csi-cephfsplugin-provisioner-7678bcfc46-s67hq 5/5 Running 0 23m
csi-cephfsplugin-provisioner-7678bcfc46-sfljd 5/5 Running 0 23m
csi-cephfsplugin-smmlf 3/3 Running 0 23m
csi-rbdplugin-provisioner-fbd45b7c8-dnwsq 6/6 Running 0 23m
csi-rbdplugin-provisioner-fbd45b7c8-rp85z 6/6 Running 0 23m
csi-rbdplugin-s67lw 3/3 Running 0 23m
csi-rbdplugin-zq4k5 3/3 Running 0 23m
rook-ceph-mon-a-canary-954dc5cd9-5q8tk 1/1 Running 0 2m9s
rook-ceph-mon-b-canary-b9d6f5594-mcqwc 1/1 Running 0 2m9s
rook-ceph-mon-c-canary-78b48dbfb7-z2t7d 0/1 Pending 0 2m8s
rook-ceph-operator-757d6db48d-x27lm 1/1 Running 0 25m
rook-ceph-tools-75f575489-znbbz 1/1 Running 0 7m45s
rook-discover-gq489 1/1 Running 0 24m
rook-discover-p9zlg 1/1 Running 0 24m
</code></pre>
<pre><code>$ kubectl -n rook-ceph get pod -l app=rook-ceph-osd-prepare
No resources found in rook-ceph namespace.
</code></pre>
<p>Do some other operation</p>
<pre><code>$ kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
$ kubectl -n rook-ceph-system delete pods rook-ceph-operator-757d6db48d-x27lm
</code></pre>
<p>Create file system</p>
<pre><code>$ kubectl create -f filesystem.yaml
</code></pre>
<p>Check again</p>
<pre><code>$ kubectl get pods -n rook-ceph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-9c2z9 3/3 Running 0 135m 192.168.0.53 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-s67hq 5/5 Running 0 135m 10.1.2.6 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-sfljd 5/5 Running 0 135m 10.1.2.5 kube3 <none> <none>
csi-cephfsplugin-smmlf 3/3 Running 0 135m 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-dnwsq 6/6 Running 0 135m 10.1.1.6 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-rp85z 6/6 Running 0 135m 10.1.1.5 kube2 <none> <none>
csi-rbdplugin-s67lw 3/3 Running 0 135m 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-zq4k5 3/3 Running 0 135m 192.168.0.53 kube3 <none> <none>
rook-ceph-crashcollector-kube2-6d95bb9c-r5w7p 0/1 Init:0/2 0 110m <none> kube2 <none> <none>
rook-ceph-crashcollector-kube3-644c849bdb-9hcvg 0/1 Init:0/2 0 110m <none> kube3 <none> <none>
rook-ceph-mon-a-canary-954dc5cd9-6ccbh 1/1 Running 0 75s 10.1.2.130 kube3 <none> <none>
rook-ceph-mon-b-canary-b9d6f5594-k85w5 1/1 Running 0 74s 10.1.1.74 kube2 <none> <none>
rook-ceph-mon-c-canary-78b48dbfb7-kfzzx 0/1 Pending 0 73s <none> <none> <none> <none>
rook-ceph-operator-757d6db48d-nlh84 1/1 Running 0 110m 10.1.2.28 kube3 <none> <none>
rook-ceph-tools-75f575489-znbbz 1/1 Running 0 119m 10.1.1.14 kube2 <none> <none>
rook-discover-gq489 1/1 Running 0 135m 10.1.1.3 kube2 <none> <none>
rook-discover-p9zlg 1/1 Running 0 135m 10.1.2.4 kube3 <none> <none>
</code></pre>
<p>Can't see pod as <code>rook-ceph-osd-</code>.</p>
<p>And <code>rook-ceph-mon-c-canary-78b48dbfb7-kfzzx</code> pod is always <code>Pending</code>.</p>
<p>If install toolbox as</p>
<p><a href="https://rook.io/docs/rook/v1.3/ceph-toolbox.html" rel="nofollow noreferrer">https://rook.io/docs/rook/v1.3/ceph-toolbox.html</a></p>
<pre><code>$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
</code></pre>
<p>Inside the container, check the ceph status</p>
<pre><code>[root@rook-ceph-tools-75f575489-znbbz /]# ceph -s
unable to get monitor info from DNS SRV with service name: ceph-mon
[errno 2] error connecting to the cluster
</code></pre>
<p>It's running on Ubuntu 16.04.6.</p>
<hr>
<p>Deploy again</p>
<pre><code>$ kubectl -n rook-ceph get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-4tww8 3/3 Running 0 3m38s 192.168.0.52 kube2 <none> <none>
csi-cephfsplugin-dbbfb 3/3 Running 0 3m38s 192.168.0.53 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-8kt96 5/5 Running 0 3m37s 10.1.2.6 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-kq6vv 5/5 Running 0 3m38s 10.1.1.6 kube2 <none> <none>
csi-rbdplugin-4qrqn 3/3 Running 0 3m39s 192.168.0.53 kube3 <none> <none>
csi-rbdplugin-dqx9z 3/3 Running 0 3m39s 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-7f57t 6/6 Running 0 3m39s 10.1.2.5 kube3 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-9zwhb 6/6 Running 0 3m39s 10.1.1.5 kube2 <none> <none>
rook-ceph-mon-a-canary-954dc5cd9-rgqpg 1/1 Running 0 2m40s 10.1.1.7 kube2 <none> <none>
rook-ceph-mon-b-canary-b9d6f5594-n2pwc 1/1 Running 0 2m35s 10.1.2.8 kube3 <none> <none>
rook-ceph-mon-c-canary-78b48dbfb7-fv46f 0/1 Pending 0 2m30s <none> <none> <none> <none>
rook-ceph-operator-757d6db48d-2m25g 1/1 Running 0 6m27s 10.1.2.3 kube3 <none> <none>
rook-discover-lpsht 1/1 Running 0 5m15s 10.1.1.3 kube2 <none> <none>
rook-discover-v4l77 1/1 Running 0 5m15s 10.1.2.4 kube3 <none> <none>
</code></pre>
<p>Describe pending pod</p>
<pre><code>$ kubectl describe pod rook-ceph-mon-c-canary-78b48dbfb7-fv46f -n rook-ceph
Name: rook-ceph-mon-c-canary-78b48dbfb7-fv46f
Namespace: rook-ceph
Priority: 0
Node: <none>
Labels: app=rook-ceph-mon
ceph_daemon_id=c
mon=c
mon_canary=true
mon_cluster=rook-ceph
pod-template-hash=78b48dbfb7
rook_cluster=rook-ceph
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/rook-ceph-mon-c-canary-78b48dbfb7
Containers:
mon:
Image: rook/ceph:v1.3.4
Port: 6789/TCP
Host Port: 0/TCP
Command:
/tini
Args:
--
sleep
3600
Environment:
CONTAINER_IMAGE: ceph/ceph:v14.2.9
POD_NAME: rook-ceph-mon-c-canary-78b48dbfb7-fv46f (v1:metadata.name)
POD_NAMESPACE: rook-ceph (v1:metadata.namespace)
NODE_NAME: (v1:spec.nodeName)
POD_MEMORY_LIMIT: node allocatable (limits.memory)
POD_MEMORY_REQUEST: 0 (requests.memory)
POD_CPU_LIMIT: node allocatable (limits.cpu)
POD_CPU_REQUEST: 0 (requests.cpu)
ROOK_CEPH_MON_HOST: <set to the key 'mon_host' in secret 'rook-ceph-config'> Optional: false
ROOK_CEPH_MON_INITIAL_MEMBERS: <set to the key 'mon_initial_members' in secret 'rook-ceph-config'> Optional: false
ROOK_POD_IP: (v1:status.podIP)
Mounts:
/etc/ceph from rook-config-override (ro)
/etc/ceph/keyring-store/ from rook-ceph-mons-keyring (ro)
/var/lib/ceph/crash from rook-ceph-crash (rw)
/var/lib/ceph/mon/ceph-c from ceph-daemon-data (rw)
/var/log/ceph from rook-ceph-log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65xtn (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
rook-config-override:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rook-config-override
Optional: false
rook-ceph-mons-keyring:
Type: Secret (a volume populated by a Secret)
SecretName: rook-ceph-mons-keyring
Optional: false
rook-ceph-log:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/rook-ceph/log
HostPathType:
rook-ceph-crash:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/rook-ceph/crash
HostPathType:
ceph-daemon-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/mon-c/data
HostPathType:
default-token-65xtn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-65xtn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s (x3 over 84s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
</code></pre>
<hr>
<p>Test mount</p>
<p>Create a nginx.yaml file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
flexVolume:
driver: ceph.rook.io/rook
fsType: ceph
options:
fsName: myfs
clusterNamespace: rook-ceph
</code></pre>
<p>Deploy it and describe the pod detail</p>
<pre><code>...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m28s default-scheduler Successfully assigned default/nginx to kube2
Warning FailedMount 9m28s kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www default-token-fnb28], unattached volumes=[www default-token-fnb28]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
Warning FailedMount 6m14s (x2 over 6m38s) kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www], unattached volumes=[default-token-fnb28 www]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
Warning FailedMount 4m6s (x23 over 9m13s) kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www], unattached volumes=[www default-token-fnb28]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
</code></pre>
| Jingqiang Zhang | <p>rook-ceph-mon-x pods have the following anti-affinity:</p>
<pre><code>spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: rook-ceph-mon
topologyKey: kubernetes.io/hostname
</code></pre>
<p>which doesn't allow two rook-ceph-mon pods to run on the same node.
Since you seem to have 3 nodes (1 master and 2 workers), 2 pods get created, one on kube2 and one on kube3. kube1 is the master node, tainted as unschedulable, so rook-ceph-mon-c cannot be scheduled there.</p>
<p>To solve it you can:</p>
<ul>
<li>add one more worker node</li>
<li>remove the master's NoSchedule taint with <code>kubectl taint nodes kube1 node-role.kubernetes.io/master:NoSchedule-</code></li>
<li>change <a href="https://github.com/rook/rook/blob/d8949ec945b036172588f587698e45633d5c392a/cluster/examples/kubernetes/ceph/cluster.yaml#L43" rel="nofollow noreferrer">mon count</a> to a lower value (see the snippet right after this list)</li>
</ul>
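<p>If you go with the last option, the relevant part of <code>cluster.yaml</code> looks roughly like this (a sketch of just the mon section, not the full spec):</p>
<pre><code>apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 1                     # must be an odd number
    allowMultiplePerNode: false
</code></pre>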
| Matt |
<p>When trying to bind a pod to a NFS persistentVolume hosted on another pod, it fails to mount when using docker-desktop. It works perfectly fine elsewhere even with the exact same YAML.</p>
<p>The error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m59s default-scheduler Successfully assigned test-project/test-digit-5576c79688-zfg8z to docker-desktop
Warning FailedMount 2m56s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[lagg-connection kube-api-access-h68w7]: timed out waiting for the condition
Warning FailedMount 37s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[kube-api-access-h68w7 lagg-connection]: timed out waiting for the condition
</code></pre>
<p>The minified project which you can apply to test yourself:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: test-project
labels:
name: test-project
---
apiVersion: v1
kind: Service
metadata:
labels:
environment: test
name: test-lagg
namespace: test-project
spec:
clusterIP: 10.96.13.37
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
app: nfs-server
environment: test
scope: backend
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
environment: test
name: test-lagg-volume
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 2Gi
nfs:
path: /
server: 10.96.13.37
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
environment: test
name: test-lagg-claim
namespace: test-project
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: static
environment: test
scope: backend
name: test-digit
namespace: test-project
spec:
selector:
matchLabels:
app: static
environment: test
scope: backend
template:
metadata:
labels:
app: static
environment: test
scope: backend
spec:
containers:
- image: busybox
name: digit
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
volumeMounts:
- mountPath: /cache
name: lagg-connection
volumes:
- name: lagg-connection
persistentVolumeClaim:
claimName: test-lagg-claim
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
environment: test
name: test-lagg
namespace: test-project
spec:
selector:
matchLabels:
app: nfs-server
environment: test
scope: backend
template:
metadata:
labels:
app: nfs-server
environment: test
scope: backend
spec:
containers:
- image: gcr.io/google_containers/volume-nfs:0.8
name: lagg
ports:
- containerPort: 2049
name: lagg
- containerPort: 20048
name: mountd
- containerPort: 111
name: rpcbind
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: lagg-claim
volumes:
- emptyDir: {}
name: lagg-claim
</code></pre>
<p>As well as <code>emptyDir</code> I have also tried <code>hostPath</code>. This setup has worked before, and I'm not sure what I've changed if anything since it has stopped.</p>
| Ral | <p>Updating my Docker for Windows installation from 4.0.1 to 4.1.1 has fixed this problem.</p>
| Ral |
<p>I need to list the error pods below 5 days with all the columns. I have tried the below command, but no luck. I am getting some random pods and age. Could someone help to get only error pods which are below 5 days.</p>
<pre><code>kubectl get pod --all-namespaces --sort-by=.metadata.creationTimestamp | awk 'match($5,/[1-5]+d/) {print $0}' | grep "Error"
NAME READY STATUS RESTARTS AGE
pod1 0/1 Error 0 63d
pod2 0/1 Error 0 24d
pod3 0/1 Error 0 11d
pod4 0/1 Error 0 4d16h
pod5 0/1 Error 0 15h
</code></pre>
| Vivek | <p>Answering your question I would like to point out <strong>4 different things</strong>.</p>
<p><strong>First</strong>, the use of the regular expression in your example is not correct. The following regular expression:</p>
<pre><code>[1-5]+d
</code></pre>
<p>will match not only <code>1d</code>,<code>2d</code>,<code>3d</code>,<code>4d</code> and <code>5d</code> as you intended but it will also match <code>63d</code> or <code>24d</code>. Why ? Firstly, because you used <code>+</code> <a href="https://www.regular-expressions.info/refrepeat.html" rel="nofollow noreferrer">quantifier</a> which <em>repeats the previous item once or more.</em> so it will match even <code>345d</code> if you have such <code>Pods</code> running in your k8s cluster. Secondly, you didn't specify that it should start from a number between 1 and 5, it only says that it should occur somewhere (e.g. in the end) in the matched string, so <code>[1-5]d</code> will also match <code>63d</code> because it will be able to find <code>3d</code> in it. The correct form of this regular expression looks like this:</p>
<pre><code>^[1-5]d
</code></pre>
<p>Here we specify that it should match only strings which start with exactly one digit in the range from 1 to 5, followed by the <code>d</code> character.</p>
<p>So <code>awk</code> part of your command may look as follows:</p>
<pre><code>awk 'match($5,/^[1-5]d/) {print $0}'
</code></pre>
<p><strong>Second</strong>, note that when you run:</p>
<p><code>kubectl get pod</code> with the <code>--all-namespaces</code> flag, the output contains 6 columns, not 5, as already mentioned in the comments by <a href="https://stackoverflow.com/users/5147259/krishna-chaurasia">Krishna Chaurasia</a>. An additional column with the <code>NAMESPACE</code> header is added. Compare the following 2 results:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-release-mysql-0 1/1 Running 0 12d
</code></pre>
<p>and</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-release-mysql-0 1/1 Running 0 12d
</code></pre>
<p><code>AGE</code> becomes 6th column so your command (or specifically its <code>awk</code> part) should rather perform the match on column <code>$6</code>:</p>
<pre><code>awk 'match($6,/^[1-5]d/) {print $0}'
</code></pre>
<p><strong>Third</strong>, as to the last part of your command i.e.:</p>
<pre><code>grep "Error"
</code></pre>
<p>such filtering can be done by <code>kubectl</code> itself using <code>--field-selector</code> (Pods shown with <code>Error</code> status are in the <code>Failed</code> phase), e.g.:</p>
<pre><code>kubectl get pods --field-selector=status.phase=Failed --sort-by=.metadata.creationTimestamp | awk 'match($5,/^[1-5]d/) {print $0}'
</code></pre>
<p><strong>Fourth</strong>, it should be noted that in your example you're filtering out all the <code>Pods</code> whose <code>AGE</code> is counted in other units like <code>hours</code>, <code>minutes</code> or <code>seconds</code>. If that's intended, just ignore the next paragraph.</p>
<p>However, if you don't want to filter out the newer <code>Pods</code>, whose <code>AGE</code> cannot yet be expressed in <code>days</code>, you may modify your command so it shows <code>Pods</code> that were created <code>1-5 days</code> ago or <code>any number of seconds/minutes/hours</code> ago (note the character classes include <code>0</code> so values like <code>10h</code> are matched too):</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | awk 'match($5,/^([1-5]d|[0-9]+h|[0-9]+m|[0-9]+s)/) {print $0}'
</code></pre>
| mario |
<p>In k8s, we can use the <code>memory</code> medium(<code>tmpfs</code> instance) to define the <code>emptyDir</code> volume and mount it to pod's container. In the container, we can read and write data according to the <code>file</code> interface.</p>
<p>I want to know how does k8s achieve the association of <code>file</code> and <code>memory</code>? What is the principle of reading and writing <code>memory</code> data as <code>file</code>? <code>mmap</code>?</p>
| Long.zhao | <p>According to <a href="https://en.wikipedia.org/wiki/Tmpfs" rel="nofollow noreferrer">wikipdia</a>:</p>
<blockquote>
<p>tmpfs is a temporary file storage paradigm <strong>implemented in many Unix-like operating systems</strong>. It is intended to appear as a mounted file system, but data is stored in volatile memory instead of a persistent storage device. A similar construction is a RAM disk, which appears as a virtual disk drive and hosts a disk file system.</p>
</blockquote>
<p>So it's not a k8s feature. It is a Linux kernel feature that k8s simply makes use of.</p>
<p>You can read more about it in the <a href="https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt" rel="nofollow noreferrer">Linux kernel documentation</a>.</p>
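<p>For context, this is what such a volume looks like in a Pod spec (a minimal sketch): Kubernetes simply asks the kernel to mount a <code>tmpfs</code> instance at the given path, so reads and writes go through the normal file interface backed by RAM rather than through any k8s-specific mechanism.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  containers:
  - name: app
    image: busybox
    # shows the tmpfs mount, then keeps the pod alive
    command: ['sh', '-c', 'mount | grep /cache; sleep 3600']
    volumeMounts:
    - mountPath: /cache
      name: mem-volume
  volumes:
  - name: mem-volume
    emptyDir:
      medium: Memory
</code></pre>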
| Matt |
<p>for some legacy systems, I need to activate TLSv1.1 on my NGINX ingress controller until they are switched to TLSv1.2.
It should be fairly easy according to the documentation, but I am getting a handshake error. Looks like Nginx is not serving any certificate at all.</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
data:
log-format: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
log-format-escape-json: "true"
log-format-upstream: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
ssl-protocols: TLSv1.1 TLSv1.2
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: nginx
</code></pre>
<p>curl:</p>
<pre><code>$ curl https://example.com/healthcheck -I --tlsv1.2
HTTP/2 200
....
$ curl https://example.com/healthcheck -I --tlsv1.1 -k -vvv
* Trying 10.170.111.150...
* TCP_NODELAY set
* Connected to example.com (10.170.111.150) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.1 (OUT), TLS handshake, Client hello (1):
* TLSv1.1 (IN), TLS alert, Server hello (2):
* error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure
</code></pre>
<p>openssh:</p>
<pre><code>$ openssl s_client -servername example.com -connect example.com:443 -tls1_2
CONNECTED(00000007)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA
verify return:1
depth=0 C = US, L = NY, O = Example, CN = example.com
verify return:1
---
Certificate chain
...
---
Server certificate
...
issuer=/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
---
No client certificate CA names sent
Server Temp Key: ECDH, X25519, 253 bits
---
SSL handshake has read 3584 bytes and written 345 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
....
Verify return code: 0 (ok)
---
$ openssl s_client -servername example.com -connect example.com:443 -tls1_1
CONNECTED(00000007)
4541097580:error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.260.1/libressl-2.6/ssl/ssl_pkt.c:1205:SSL alert number 40
4541097580:error:140040E5:SSL routines:CONNECT_CR_SRVR_HELLO:ssl handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.260.1/libressl-2.6/ssl/ssl_pkt.c:585:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1576574691
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
</code></pre>
<p>A sum-up of questions:</p>
<p>1) How to enable TLSv1.1 on Nginx ingress?</p>
<p>2) Can I see in the logs (where) which tls version was used to connect? I cannot find anything with kubectl logs -n Nginx pod?</p>
| Antman | <p>For anyone else having this problem. -> But please consider deactivating TLSv1 and TLSv1.1 as soon as possible!!!</p>
<pre><code>apiVersion: v1
data:
log-format: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
log-format-escape-json: "true"
log-format-upstream: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-early-data: "true"
ssl-protocols: TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: nginx
</code></pre>
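<p>Once the ConfigMap is applied and the ingress controller has reloaded it, the same openssl check as in the question should now complete the handshake (the host name is just the example one):</p>
<pre><code>$ openssl s_client -servername example.com -connect example.com:443 -tls1_1
...
SSL-Session:
    Protocol  : TLSv1.1
</code></pre>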
| Antman |
<p>I have a Java <a href="https://github.com/kahootali/SpringJavaMonitoringApp" rel="nofollow noreferrer">Spring Boot Application</a> and I have configured the server to run on SSL and it is mandatory.</p>
<pre><code>server:
port: 8443
ssl:
enabled: true
key-store-type: pkcs12
key-store: ${KEYSTORE}
key-password: ${KEYSTORE_PASSWORD}
key-store-password: ${KEYSTORE_PASSWORD}
client-auth: need
</code></pre>
<p>I have created a cert for my domain <code>*.kahootali.com</code> from LetsEncrypt certificate and created a p12 file for the keystore by running</p>
<pre><code>openssl pkcs12 -export -CAfile ca.crt -in cert.pem -inkey key.pem -certfile cert.pem -out kstore.p12
</code></pre>
<p>I want to expose it on Kubernetes using Ingress Nginx Controller, so I have created secret by</p>
<pre><code>kubectl create secret generic store --from-file=kstore.p12
</code></pre>
<p>I have deployed application, can see the <a href="https://github.com/kahootali/SpringJavaMonitoringApp/tree/main/deploy" rel="nofollow noreferrer">deployment files,</a> and when I port-forward local 8443 to its service's 8443 and run</p>
<pre><code>curl -iv --cacert ca.crt --cert mediator_cert.pem --key mediator_key.pem --resolve 'spring-app.kahootali.com:8443:127.0.0.1' https://spring-app.kahootali.com:8443/
</code></pre>
<p>It works fine and returns</p>
<pre><code>* Added spring-app.kahootali.com:8444:127.0.0.1 to DNS cache
* Hostname spring-app.kahootali.com was found in DNS cache
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to spring-app.kahootali.com (127.0.0.1) port 8444 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: ca.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.kahootali.com
* start date: Feb 11 10:27:47 2021 GMT
* expire date: May 12 10:27:47 2021 GMT
* subjectAltName: host "spring-app.kahootali.com" matched cert's "*.kahootali.com"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: spring-app.kahootali.com:8444
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404
HTTP/1.1 404
< X-Application-Context: application:8443
X-Application-Context: application:8443
< Content-Type: application/json;charset=UTF-8
Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Date: Sun, 14 Feb 2021 14:29:47 GMT
Date: Sun, 14 Feb 2021 14:29:47 GMT
<
* Connection #0 to host spring-app.kahootali.com left intact
{"timestamp":1613312987350,"status":404,"error":"Not Found","message":"No message available","path":"/"}
</code></pre>
<p>But when I create an Ingress for it and ssl-passthrough it</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-monitoring-app
labels:
app: spring-monitoring-app
annotations:
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: spring-app.kahootali.com
http:
paths:
- path: /
backend:
serviceName: spring-monitoring-app
servicePort: http
tls:
- hosts:
- spring-app.kahootali.com
secretName: tls-cert
</code></pre>
<p>It gives <code>ERR_BAD_SSL_CLIENT_AUTH_CERT</code> on browser and in app Debug level logs, it gives</p>
<pre><code>Error during SSL handshake
java.io.IOException: EOF during handshake.
The SNI host name extracted for this connection was [spring-app.kahootali.com]
Handshake failed during wrap
javax.net.ssl.SSLHandshakeException: Empty server certificate chain
</code></pre>
| Ali Kahoot | <p><em>I'm posting my comment as an answer for better visibility:</em></p>
<p>As per <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough" rel="nofollow noreferrer">the docs</a> <strong>SSL Passthrough</strong> feature is disabled by default. In order to enable it you need to start your <strong>nginx-ingress controller</strong> with <code>--enable-ssl-passthrough</code> flag. Make sure you didn't forget about it.</p>
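<p>For illustration, with a plain Deployment of the controller that flag typically ends up in the container args, roughly like this (the container name, image tag and the other args depend on how you installed the controller):</p>
<pre><code>containers:
  - name: controller
    image: k8s.gcr.io/ingress-nginx/controller:v0.44.0   # example tag only
    args:
      - /nginx-ingress-controller
      - --enable-ssl-passthrough
</code></pre>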
<p>You may also take a look at these <a href="https://kubernetes.github.io/ingress-nginx/troubleshooting/#troubleshooting" rel="nofollow noreferrer">troubleshooting steps</a> to verify your <strong>nginx-ingress controller</strong> configuration.</p>
<p>Please let me know if it helps.</p>
| mario |
<p>I have a Kubernetes environment and using Grafana to visualise metrics streamed from Prometheus.
I have application counters which are not fed to Prometheus. However I am able to view them as a JSON object by using a curl command.</p>
<p><a href="http://10.0.0.1:8081/api/events/" rel="nofollow noreferrer">http://10.0.0.1:8081/api/events/</a></p>
<p>Response has the following format:</p>
<pre><code>{
{
"ID": "001",
"source": "pageloads",
"summary": "high failure counts",
"severity": "major"
},
{
"ID": "003",
"source": "profile_counts",
"summary": "profile count doesn't match number of groups",
"severity": "minor"
},
{
"ID": "002",
"source": "number of subscribers",
"summary": "profiles higher than subscribers",
"severity": "critical"
}
}
</code></pre>
<p>Is there a plugin to query this data (<a href="http://10.0.0.1:8081/api/events/" rel="nofollow noreferrer">http://10.0.0.1:8081/api/events/</a>) in Grafana?</p>
<p>Thank you</p>
| Peter Smith | <p>You should be able to visualize this using the <a href="https://grafana.com/grafana/plugins/ryantxu-ajax-panel" rel="nofollow noreferrer">AJAX Panel plugin</a>.</p>
| tomgalpin |
<p>This question is similar to the <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">question</a> but this is more around the path in the rule that can be configured.</p>
<p>The ingress should be able to handle both the internal services and an external service. The Url for the external service should be something like <a href="http://host_name:80/es" rel="nofollow noreferrer">http://host_name:80/es</a>. When the user hits this url, this should be redirected to the external service.</p>
<p>The service definition and the ingress rule are configured as below but it leads to 404.
Where am i going wrong?</p>
<p><strong>Ingress rules</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: external-service
annotations:
kubernetes.io/ingress.class: “nginx”
nginx.ingress.kubernetes.io/ingress.class: “nginx”
nginx.ingress.kubernetes.io/ssl-redirect: “false”
spec:
rules:
- host:
http:
paths:
- backend:
serviceName: external-ip
servicePort: 80
path: /es
</code></pre>
<p><strong>Service and End Point definitions</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: external-ip
spec:
ports:
- name: app
port: 80
protocol: TCP
targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
name: external-ip
subsets:
- addresses:
- ip: <ip to external service>
ports:
- name: app
port: 80
protocol: TCP
</code></pre>
<p>It works when i try with the URL <a href="http://host_name:80" rel="nofollow noreferrer">http://host_name:80</a> and the following ingress rule. Please note the difference in the path in the ingress rule.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: external-service
annotations:
kubernetes.io/ingress.class: “nginx”
nginx.ingress.kubernetes.io/ingress.class: “nginx”
nginx.ingress.kubernetes.io/ssl-redirect: “false”
spec:
rules:
- host:
http:
paths:
- backend:
serviceName: external-ip
servicePort: 80
path: /
</code></pre>
| Manikandan Kannan | <p>There is a service that can echo my request back to me: <a href="https://postman-echo.com/" rel="nofollow noreferrer">https://postman-echo.com/</a>, it will come useful later.
Here is its ip and it will simulate your external service:</p>
<pre><code>$ dig postman-echo.com +short
107.23.20.188
</code></pre>
<p>It works as following:</p>
<pre><code>$ curl 107.23.20.188/get | jq
{
"args": {},
"headers": {
"x-forwarded-proto": "http",
"x-forwarded-port": "80",
"host": "107.23.20.188",
"x-amzn-trace-id": "Root=1-5ebced9c-941e363cc28bf3529b8e7246",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"url": "http://107.23.20.188/get"
}
</code></pre>
<p>So as you can see it sends me back a JSON with all the headers I sent to it and, most importantly, the url with the path it receives.</p>
<p>Here is the ingress yaml I used:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: external-service
annotations:
#kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host:
http:
paths:
- backend:
serviceName: external-ip
servicePort: 80
path: /es/(.*)
</code></pre>
<p>Service and Endpoint definition stays the same as yours with exception for endpoint IP. Here I used 107.23.20.188 (the postman-echo IP).</p>
<p>Now lets try to send some requests through nginx but first lets check whats ingress ip:</p>
<pre><code>$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
external-service * 192.168.39.96 80 20h
</code></pre>
<p>The IP is <code>192.168.39.96</code> and it's a private IP because I am running it on minikube, but that should not matter.</p>
<pre><code>$ curl -s 192.168.39.96/es/get
{
"args": {},
"headers": {
"x-forwarded-proto": "http",
"x-forwarded-port": "80",
"host": "192.168.39.96",
"x-amzn-trace-id": "Root=1-5ebcf259-6331e8c709656623f1a94ed4",
"x-request-id": "d1545d1e8764da3cf57abb143faac4fb",
"x-forwarded-host": "192.168.39.96",
"x-original-uri": "/es/get",
"x-scheme": "http",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"url": "http://192.168.39.96/get"
}
</code></pre>
<p>So as you see, I am sending a request for the path <code>/es/get</code> and the echo server is receiving <code>/get</code>.</p>
<hr>
<p>One thing I have noticed while writing this answer is that (maybe it's just a copy-paste error, but) the quotes in your annotations <code>”</code> are different from <code>"</code>, and this may be causing nginx not to process the annotations as it should. In my case the copy-pasted yaml was working, but it also worked without your annotations, which is why I didn't notice it earlier.</p>
| Matt |
<p>I am currently setting up a dex instance on our Kubernetes custer to manage the LDAP authentication. Gangway is in front of it to give us the Kube config file. It worked fine the first time.</p>
<p>Then I was trying to test to disable my account to login and deleted the refresh token. Since then Dex shows the below error:</p>
<pre><code>time="2019-09-24T08:05:19Z" level=info msg="performing ldap search ou=people,dc=comp,dc=us,dc=it,dc=com sub (&(objectClass=person)(uid=swedas))"
time="2019-09-24T08:05:19Z" level=info msg="username \"swedas\" mapped to entry uid=swedas,ou=people,dc=comp,dc=us,dc=it,dc=com sub"
time="2019-09-24T08:05:19Z" level=info msg="login successful: connector \"ldap\", username=\"swedas\", email=\"[email protected]\", groups=[]"
time="2019-09-24T08:05:19Z" level=error msg="failed to delete refresh token: not found"
</code></pre>
<p>This is expected but how do I get over this ? How to restore my account?</p>
| swetad90 | <p>It seems like you have to delete the saved offline session as well. What is your storage option for Dex? To find out, you need to use your admin config for kubectl and check the dex ConfigMap by issuing
<code>kubectl -n <DEX NAMESPACE> get configmap <DEX CONFIGMAP> -o yaml</code></p>
<p>In case you are using <code>kubernetes</code> as storage, there is an offline session (a custom resource) associated with your account that you have to delete as well. After that you may generate and use a new kubectl configuration using gangway.</p>
<p>List offline sessions:
<code>kubectl -n <DEX NAMESPACE> get offlinesessionses -o yaml</code></p>
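<p>Deleting the stale session is then just a regular resource deletion, taking the name from the listing above (placeholders as before):</p>
<pre><code>kubectl -n <DEX NAMESPACE> delete offlinesessionses <SESSION NAME>
</code></pre>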
| Marios |
<p>I created an AWS AMI with docker and my image preloaded (docker image is 7 gigs and it takes too long to download from private registry). I am using Rancher and have set my node template to use this ami. When I run my</p>
<pre><code>kubectl create -f command
</code></pre>
<p>I get the error that the image is not present and I don't have pull permissions. When I then ssh into the EC2 node and run</p>
<pre><code>docker images
</code></pre>
<p>Only Rancher images show up. I know that the docker file is present with the AMI as I have launched separate instances outside of Kubernetes and have proved to myself that the docker image is present. I notice when spinning up the cluster through the Rancher UI that it appears that Docker is being reinstalled on the nodes which I believe is removing the docker image.</p>
| magladde | <p>When you say <code>ssh into the EC2 node</code>, make sure you checked on the node where the pod is actually scheduled and not on the master.</p>
<p>Also kindly check if the image pull policy is set to <code>Never</code> or <code>IfNotPresent</code> in your yaml file; with the default policy for <code>:latest</code> tags (<code>Always</code>) Kubernetes tries to pull from the registry even when the image is preloaded.</p>
<pre><code>imagePullPolicy: Never
</code></pre>
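<p>For example, in the pod template of your deployment it could look like this (the image name is just a placeholder and must match the tag preloaded in the AMI exactly); with <code>IfNotPresent</code> the preloaded image is used and no pull from the private registry is attempted:</p>
<pre><code>containers:
  - name: app
    image: my-private-registry/my-7gb-image:1.0   # hypothetical, must match the preloaded tag
    imagePullPolicy: IfNotPresent
</code></pre>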
| Prakash Krishna |
<p>I was trying to deploy a openebs cstore-pool based dynamically provisioned storage class so that I could have 3 seperate disks on 3 different machines.</p>
<p>While doing this I realized that I do not have an external drive and for capacity management I have to use a separate disk for pooling.</p>
<p>I created a disk image with dd with the size of 4GB for trying the feature.</p>
<pre><code>$ dd if=/dev/zero of=diskImage4 bs=1M count=4096
</code></pre>
<p>When I mounted it I saw that it is mounted as a loop device to loop0, as shown in the <code>lsblk</code> command output</p>
<pre><code>loop0 8:0 0 8K 1 loop mountPoint
</code></pre>
<p>What I was trying to achieve was,</p>
<pre><code>sda 8:16 0 23.5G 0 disk
└─sda1 8:18 0 23.5G 0 part /
sdb 8:0 0 4.0G 0 disk
└─sdb1 8:1 0 4.0G 0 part
</code></pre>
<p>How can I mount the new created file "diskImage4" as a disk partition.</p>
<p>I saw some mount parameters and the <code>losetup</code> command but they were all finally used for mounting the image as a loop device.</p>
<p>Or if there is a way to use files as disks in cstore-pools I would love to learn that.</p>
<p>If there is no common or understandable way to achieve this, thanks anyways.</p>
| Catastrophe | <p>You haven't created a partition table on the virtual disk.</p>
<p>Do the <code>dd</code> as above, then run the resulting image file through gparted or fdisk and create a partition table.</p>
<p>Then set up the loop device: <code>losetup -f diskImage4</code></p>
<p>Then read the partitions: <code>partx -a /dev/loop0</code> (or whatever the loop device is created as).</p>
<p>Then do an <code>lsblk</code>.</p>
<p>loop0 and loop0p1 should be visible.</p>
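<p>Putting it together, a rough sequence could look like this (the partition tool, the single-partition layout and the <code>loop0</code> name are assumptions; check which loop device <code>losetup</code> actually allocates):</p>
<pre><code>$ dd if=/dev/zero of=diskImage4 bs=1M count=4096
$ parted --script diskImage4 mklabel gpt mkpart primary 1MiB 100%
$ sudo losetup -f diskImage4
$ sudo partx -a /dev/loop0
$ lsblk
loop0       7:0    0    4G  0 loop
└─loop0p1 259:0    0    4G  0 part
</code></pre>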
| tomgalpin |
<p>I have a <code>k3s</code> cluster with a Prometheus and Alertmanager deployment inside, and set up an <code>Ingress</code> resource for each of them. My initial setup was <code>prometheus.domain.com</code> and <code>alermanager.domain.com</code> respectively, and these worked as expected.</p>
<p>However, I would like to switch it to <code>domain.com/prometheus</code> and <code>domain.com/alertmanager</code> respectively, and the options I have researched aren't working.</p>
<p><strong>GOAL</strong>
Have my Prometheus service work having <code>domain.com/prometheus</code> being the 'root' of my Prometheus pathing, hence redirecting automatically to <code>domain.com/prometheus/graph</code>, as per its default behavior, and all subpaths under Prometheus (alerts, config, admin API) follow the same behavior.</p>
<p><strong>Attempt #1</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
app: Prometheus
annotations:
traefik.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- http:
paths:
- path: /prometheus($|/)(.*)
backend:
serviceName: prometheus
servicePort: 9090
</code></pre>
<p>SSH into the Vagrant box hosting my k3s cluster:</p>
<pre><code>$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS
prometheus <none> * 192.168.0.200 80
$ curl 192.168.0.200/prometheus
404 page not found
$ curl 192.168.0.200/prometheus/graph
404 page not found
</code></pre>
<p><strong>Attempt #2</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
app: Prometheus
annotations:
traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
rules:
- http:
paths:
- path: /prometheus
backend:
serviceName: prometheus
servicePort: 9090
</code></pre>
<p>Same result as above.</p>
| Xanagandr | <p>You need to start prometheus with:</p>
<pre><code>--web.external-url="http://example.com/prometheus"
</code></pre>
<p>From <a href="https://github.com/prometheus/prometheus/blob/4e6a94a27d64f529932ef5f552d9d776d672ec22/cmd/prometheus/main.go#L160-L162" rel="nofollow noreferrer">source code</a>:</p>
<blockquote>
<p>web.external-url",
"The URL under which Prometheus is externally reachable (for example, if Prometheus is served via a reverse proxy). Used for generating relative and absolute links back to Prometheus itself. If the URL has a path portion, it will be used to prefix all HTTP endpoints served by Prometheus. If omitted, relevant URL components will be derived automatically.</p>
</blockquote>
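<p>In a Kubernetes Deployment this usually ends up as an extra container argument, roughly like this (the config file path is just whatever your Deployment already uses):</p>
<pre><code>args:
  - --config.file=/etc/prometheus/prometheus.yml
  - --web.external-url=http://example.com/prometheus
</code></pre>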
<hr />
<p>And for Alertmanager you need to set its external URL in the same way:</p>
<pre><code>--web.external-url=http://example.com/alertmanager
</code></pre>
<hr />
<p><strong>NOTE:</strong> Don't use rewrites in ingress in this case.</p>
| Matt |
<p>I am trying to create a server-snippet, that will return a 503 for mobile user. I am doing that, by checking the user agent.
The problem:
Server-Snippet is not returning 503 in case of mobile user agent.
All over, the 503 is returned when the user agent is NOT a mobile devide. Mobile itself, sends a 200.
I cannot understand what is done by the Ingress. It seems, as the server-snipper code is somehow parsing it "in a hard-coded way".
Maybe someone got similar issues. If someone might give a hint on such server-snippers, it would be nice.
Thanks </p>
<p>I tried several server-snippets. Please check code below.</p>
<ol>
<li>Try This is a official code snipper from github.</li>
</ol>
<blockquote>
<p>nginx.ingress.kubernetes.io/server-snippet: set $agentflag 0;</p>
<p>if ($http_user_agent ~* "(Mobile|ios|android)" ){
set $agentflag 1; }</p>
<p>if ( $agentflag = 1 ) {
return 503; }</p>
</blockquote>
<p>As json:</p>
<pre><code>"nginx.ingress.kubernetes.io/server-snippet": "| set $agentflag 0; if ($http_user_agent ~* \"(Mobile|ios|android)\" ) { set $agentflag 1; } if ( $agentflag = 1 ) { return 503;}"
</code></pre>
<hr>
<ol start="2">
<li>Try</li>
</ol>
<blockquote>
<p>nginx.ingress.kubernetes.io/server-snippet:
if ($http_user_agent ~* "(Mobile|ios|android)" ){
return 503;
}</p>
</blockquote>
<p>As json:</p>
<pre><code> "nginx.ingress.kubernetes.io/server-snippet": " if ($http_user_agent ~* (Mobile|ios|android) ) {\n return 503; }"
</code></pre>
| Olli | <p>A short working solution is</p>
<pre><code> annotations:
kubernetes.io/ingress.class: private-nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/server-snippet: |
if ($http_user_agent ~* "(Mobile)" ) {
return 503;
}
</code></pre>
| Olli |
<p>I'm under the impression that the equivalent of the following command can't be put into a <code>Dockerfile</code> or <code>Dockerfile.dev</code>:</p>
<pre><code>docker run -p 5432:5432 -v /home/app/database/db-files:/var/lib/postgresql/data sockpuppet/database
</code></pre>
<p>The <code>-p 5432:5432</code> I was using to bind to the local port so I could connect to Postgres with pgAdmin. This is not an absolute requirement, but a nice to have. <strong>Perhaps there is a better way of doing this?</strong></p>
<p>The <code>-v /home/app/database/db-files:/var/lib/postgresql/data</code> so I can persist data on the local volume.</p>
<p>The problem is <code>EXPOSE</code> in a <code>Dockerfile</code>, as far as I know, just opens ports between containers. The problem with <code>VOLUME</code> in a <code>Dockerfile</code> is that it just refers to the image's file system.</p>
<p>The bigger issue I'm having a hard time understanding is the Skaffold <code>skaffold.yaml</code> refers to these <code>Dockerfile</code>`Dockerfile.dev` when running the containers:</p>
<pre><code>apiVersion: skaffold/v1beta2
kind: Config
build:
local:
push: false
artifacts:
- image: sockpuppet/client
context: client
docker:
dockerfile: Dockerfile.dev
sync:
'**/*.js': .
'**/*.css': .
'**/*.html': .
- image: sockpuppet/server
context: server
docker:
dockerfile: Dockerfile.dev
sync:
'**/*.js': .
deploy:
kubectl:
manifests:
- k8s/client-deployment.yaml
- k8s/server-deployment.yaml
- k8s/server-cluster-ip-service.yaml
- k8s/client-cluster-ip-service.yaml
</code></pre>
<p>So how am I supposed to bind ports and map volumes if they can't be specified in <code>Dockerfile</code>? Do I just need to run <code>docker run -p 5432:5432 -v /home/app/database/db-files:/var/lib/postgresql/data ishraqiyun77/database</code> manually every time I want to start up the DB?</p>
<p>Repo I'm using as a reference if that is helpful: <a href="https://github.com/StephenGrider/DockerCasts/tree/master/complex" rel="nofollow noreferrer">https://github.com/StephenGrider/DockerCasts/tree/master/complex</a></p>
| cjones | <p>The <code>skaffold.yaml</code> is there to help with build and deployment of <code>k8s</code>. If you want to do port-exposing and volume mapping, you should do that in the various <code>.yaml</code> files in the <code>manifests</code> section. The <code>EXPOSE</code> keyword in your <code>Dockerfile</code>s simply tells the newly-created image which ports to allow for exposing and forwarding; it is only in your <code>k8s</code> containers that you actually do the mapping of ports and volumes to the host machine.</p>
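<p>For illustration, a rough sketch of what such manifests could look like for the Postgres example from the question — the file names, labels, <code>hostPath</code> and <code>nodePort</code> below are hypothetical, not taken from the referenced repo, and the new files would be added to the <code>manifests</code> list in <code>skaffold.yaml</code>:</p>
<pre><code># k8s/database-deployment.yaml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: database
  template:
    metadata:
      labels:
        component: database
    spec:
      containers:
        - name: database
          image: sockpuppet/database
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: db-files
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: db-files
          hostPath:
            path: /home/app/database/db-files
---
# k8s/database-node-port.yaml (hypothetical) - exposes 5432 on the node so pgAdmin can connect
apiVersion: v1
kind: Service
metadata:
  name: database-node-port
spec:
  type: NodePort
  selector:
    component: database
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 31432
</code></pre>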
<p>Disclosure: I am an <a href="https://enterprisedb.com" rel="nofollow noreferrer">EnterpriseDB (EDB)</a> employee</p>
| richyen |
<p>There seems be be mixed information and I couldn't find any official
source to confirm this.</p>
<p>From the kubernetes
<a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.12.md" rel="nofollow noreferrer">changelog</a>,
it seems that cAdvisor web UI which has been available via kubelet has been deprecated:</p>
<pre><code>The formerly publicly-available cAdvisor web UI that the kubelet started using --cadvisor-port has been entirely removed in 1.12. The recommended way to run cAdvisor if you still need it, is via a DaemonSet.
</code></pre>
<p>But this <a href="https://stackoverflow.com/a/60190769/1651941">Stackoverflow answer</a> indicates that UI itself has been deprecated:</p>
<pre><code>Despite its UI has been deprecated it is still possible to monitor your containers via Prometheus.
</code></pre>
<p>From looking at the official <a href="https://github.com/google/cadvisor/blob/master/docs/web.md" rel="nofollow noreferrer">documentation</a>, I find no such information.</p>
<p>So my questions is:</p>
<ul>
<li>Has the cAdvisor Web UI itself has been deprecated ? (I'm aware that the interface via kubelet option --cadvisor-port is deprecated. But the option being deprecated is different than if the Web UI itself is deprecated)</li>
<li>If it is deprecated, is there any offical source on this ?</li>
</ul>
| Sibi | <p>It doesn't look like web ui itself is deprecated.</p>
<p>It's only been removed from the kubelet, that's all. It means that the web UI won't be a part of the kubelet anymore.</p>
<p>You can still use it if you want by deploying cAdvisor as a separate application, as sketched below.</p>
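<p>For example, a rough sketch of running cAdvisor as a DaemonSet — the image tag and the host mounts are assumptions and may need adjusting for your cluster and container runtime:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      containers:
      - name: cadvisor
        image: gcr.io/cadvisor/cadvisor:v0.37.5   # pick a current tag
        ports:
        - containerPort: 8080   # web UI / metrics port
          name: http
        volumeMounts:           # read-only host paths cAdvisor inspects
        - {name: rootfs, mountPath: /rootfs, readOnly: true}
        - {name: var-run, mountPath: /var/run, readOnly: true}
        - {name: sys, mountPath: /sys, readOnly: true}
        - {name: docker, mountPath: /var/lib/docker, readOnly: true}
      volumes:
      - {name: rootfs, hostPath: {path: /}}
      - {name: var-run, hostPath: {path: /var/run}}
      - {name: sys, hostPath: {path: /sys}}
      - {name: docker, hostPath: {path: /var/lib/docker}}
</code></pre>
<p>You could then reach the UI on port 8080 of each pod, e.g. via <code>kubectl port-forward</code>.</p>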
| Matt |
<p>I have a setup of EKS Cluster and I am trying to access some of the secrets available in my AWS Secrets manager, Currently i have given permissions to one ROLE (AWS-IAM) to access all the required secrets and i am using below k8s manifest and i am getting below error.</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: spark-pi
namespace: spark-pi
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXX:role/spark-secret-role
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: spark-pi-role
namespace: spark-pi
rules:
- apiGroups: [""]
resources: ["pods", "services", "configmaps"]
verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: spark-pi-role-binding
namespace: spark-pi
subjects:
- kind: ServiceAccount
name: spark-pi
namespace: spark-pi
roleRef:
kind: Role
name: spark-pi-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>I am able to submit the Spark job successfully, but I am getting the below error while checking the POD logs.
<strong>User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/eks-node-role/i-0XXXXXXXXXXX is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-west-2:XXXXXXXXXX:secret:dev/somesecret (Service: AWSSecretsManager; Status Code: 400; Error Code: AccessDeniedException;</strong></p>
<p>Not sure why it is assuming the EKS node role when I have attached the required role and permissions to the service account; I have also already created a managed policy (attached to the role) to access AWS Secrets.</p>
| Zester07 | <p>RBAC itself is just for Kubernetes access management.
You have defined a ServiceAccount with an AWS role attached, that's good.</p>
<p>Could you share the AWS policy which is attached to the role <code>role/spark-secret-role</code>? And please share the Pod manifest with us; you need to attach the ServiceAccount to the Pod itself, for example as shown below. Otherwise the Pod is not using the ServiceAccount with the attached AWS role.</p>
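<p>A minimal sketch of that reference — the image is a placeholder, and if you submit via the Spark operator or <code>spark-submit</code> the ServiceAccount is set through the corresponding driver/executor configuration instead:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: spark-pi-driver
  namespace: spark-pi
spec:
  serviceAccountName: spark-pi   # without this, the default SA (and hence the node role) is used
  containers:
    - name: spark
      image: <your-spark-image>
</code></pre>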
<p>You also need to create an OIDC ID provider (IdP) in AWS.</p>
<p>The whole thing is called IRSA (IAM Roles for Service Accounts)
You can find all necessary information in this AWS blog article: <a href="https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="nofollow noreferrer">https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/</a></p>
| Julian Kleinhans |
<p>I have a GKE cluster (gke v1.13.6) and using istio (v1.1.7) with several services deployed and working successfully except one of them which always responds with HTTP 503 when calling through the gateway : upstream connect error or disconnect/reset before headers. reset reason: connection failure.</p>
<p>I've tried calling the pod directly from another pod with curl enabled and it ends up in 503 as well :</p>
<pre><code>$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
</code></pre>
<p>Setting the log level to TRACE at the istio-proxy level :</p>
<pre><code>$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
</code></pre>
<p>I looked into the logs of the injected sidecar istio-proxy and found this :</p>
<pre><code>[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
</code></pre>
<p>Has anyone encountered such an issue ? If you need more info about the configuration, I can provide.</p>
| Jean-Baptiste Piraux | <p>Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. It was a misconfiguration of our Tomcat port that was not matching the service/virtualService config.</p>
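<p>For anyone hitting the same error: the envoy trace above shows the sidecar connecting to <code>127.0.0.1:8080</code> and being refused, so the container must actually listen on the port the Service's <code>targetPort</code> points to. A sketch of the kind of alignment that was missing (port values are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: d-vine-machine-dev
spec:
  selector:
    app: d-vine-machine
  ports:
    - name: http
      port: 8080
      targetPort: 8080   # must equal the port Tomcat listens on (server.xml) and the containerPort
</code></pre>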
| Jean-Baptiste Piraux |
<p>I understand how to set up a readiness probe in kubernetes, but are there any best practices about what the microservice should actually check when the readiness probe is called? Two specific examples:</p>
<ol>
<li>A microservice that fronts a db where without a functioning db connection, practically all functionality will not work. Here I think pinging the db would be reasonable, failing the readiness check if ping fails. Is this recommended?</li>
<li>A microservice that uses N other microservices, but where failure to connect to any one would still allow for a majority of the functionality to work. Here I think checking for connectivity to the backing services is ill advised. In this case, assuming there is no extensive "boot" or "warm up" processing, liveness and readiness are equivalent. Correct?</li>
</ol>
<p>Thank you</p>
| user2133814 | <p>No, I don't think there are best practices for readiness probes.</p>
<p>It all depends on the application and what you expect to happen.</p>
<blockquote>
<p>Here I think pinging the db would be reasonable, failing the readiness check if ping fails</p>
</blockquote>
<p>I will try to comment on this. Let's imagine you have some backend microservice (a deployment with several replicas) and it's communicating with a db. When the db fails (assuming no replication or some serious db downtime), your pod replicas' readiness probes start to fail and the pods' endpoints are deleted from the Service. Now when a client tries to access the service, it will result in a connection timeout because no service is there to handle the request.</p>
<p>You have to ask yourself if this is the behaviour you want/expect, or whether it would be much more convenient for the readiness probe not to fail when the db fails. The microservice would still handle traffic in this case, and would be able to return an error message to the client informing them about the problem.</p>
<p>Even simple 503 would be much better in this case. Getting an actual error message tells me much more about the actual issue than getting <em>connection timeout</em>.</p>
<hr />
<blockquote>
<p>[...] but where failure to connect to any one would still allow for a majority of the functionality to work. Here I think checking for connectivity to the backing services is ill advised. In this case, assuming there is no extensive "boot" or "warm up" processing, liveness and readiness are equivalent.</p>
</blockquote>
<p>It depends on the use case. In application code you can react much quicker to problems that happen to backing services, so I would use this approach whenever I can, and only use readiness for checking backing services when it can't be handled differently.</p>
<hr />
<p>So for me the liveness probe answers the question: "Is this application still running?"
And the readiness probe answers the question: "Is this application <em>ready to handle/capable of handling</em> the traffic?"</p>
<p>And it's up to you to define what it means to "still run" and "be able to handle traffic".</p>
<p>But usually if the application is running, it is also able to handle the traffic, so in this case liveness and readiness are indeed equal.</p>
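<p>For illustration, a minimal sketch of how the two probes might be declared on a container — the paths, port and timings are hypothetical and depend entirely on what your service exposes:</p>
<pre><code>containers:
  - name: backend
    image: <your-image>
    livenessProbe:
      httpGet:
        path: /healthz   # "is the process alive?"
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready     # "can this replica take traffic right now?"
        port: 8080
      periodSeconds: 5
</code></pre>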
| Matt |
<p>I am running K8S cluster on-premise (Nothing in the cloud) with one K8S Master and two worker nodes.</p>
<ul>
<li>k8s-master : 192.168.100.100</li>
<li>worker-node-1 : 192.168.100.101</li>
<li>worker-node-2 : 192.168.100.102</li>
</ul>
<p>I used kubernetes / ingress-nginx for routing traffic to my simple App.
These are my pods running on both worker nodes:</p>
<pre><code>[root@k8s-master ingress]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-685445b9db-b7nql 1/1 Running 0 44m 10.5.2.7 worker-node-2 <none> <none>
default hello-685445b9db-ckndn 1/1 Running 0 44m 10.5.2.6 worker-node-2 <none> <none>
default hello-685445b9db-vd6h2 1/1 Running 0 44m 10.5.1.18 worker-node-1 <none> <none>
default ingress-nginx-controller-56c75d774d-p7whv 1/1 Running 1 30h 10.5.1.14 worker-node-1 <none> <none>
kube-system coredns-74ff55c5b-s8zss 1/1 Running 12 16d 10.5.0.27 k8s-master <none> <none>
kube-system coredns-74ff55c5b-w6rsh 1/1 Running 12 16d 10.5.0.26 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 14 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-flannel-ds-76mt8 1/1 Running 1 30h 192.168.100.102 worker-node-2 <none> <none>
kube-system kube-flannel-ds-bfnjw 1/1 Running 10 16d 192.168.100.101 worker-node-1 <none> <none>
kube-system kube-flannel-ds-krgzg 1/1 Running 13 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-proxy-6bq6n 1/1 Running 1 30h 192.168.100.102 worker-node-2 <none> <none>
kube-system kube-proxy-df8fn 1/1 Running 13 16d 192.168.100.100 k8s-master <none> <none>
kube-system kube-proxy-z8q2z 1/1 Running 10 16d 192.168.100.101 worker-node-1 <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 12 16d 192.168.100.100 k8s-master <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-799cd98cf6-zh8xs 1/1 Running 9 16d 192.168.100.101 worker-node-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-hvxgm 1/1 Running 10 16d 10.5.1.17 worker-node-1 <none> <none>
</code></pre>
<p>And these are the services running on my cluster:</p>
<pre><code>[root@k8s-master ingress]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello NodePort 10.105.236.241 <none> 80:31999/TCP 30h
ingress-nginx-controller NodePort 10.110.141.41 <none> 80:30428/TCP,443:32682/TCP 30h
ingress-nginx-controller-admission ClusterIP 10.109.15.31 <none> 443/TCP 30h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
</code></pre>
<p>And this is the ingress description:</p>
<pre><code>[root@k8s-master ingress]# kubectl describe ingress ingress-hello
Name: ingress-hello
Namespace: default
Address: 10.110.141.41
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/hello hello:80 (10.5.1.18:80,10.5.2.6:80,10.5.2.7:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>The issue is that when accessing the first node by visiting the worker-node-1 IP address with the ingress controller port = <strong>30428</strong>, <strong><a href="http://192.168.100.101:30428" rel="nofollow noreferrer">http://192.168.100.101:30428</a></strong>, it's working fine with no problems.
While accessing worker-node-2 by visiting its IP with the same ingress port <strong>30428</strong>, it's <strong>NOT RESPONDING</strong> from outside the node and also from inside the node when accessing the URL: <strong><a href="http://192.168.100.102:30428" rel="nofollow noreferrer">http://192.168.100.102:30428</a></strong>.
I also tried executing a telnet command (inside worker node 2), with no luck either:</p>
<pre><code>[root@worker-node-2 ~]# telnet 192.168.100.102 30428
Trying 192.168.100.102...
</code></pre>
<p>The most interesting thing is the port is shows up in netstat command, as I am executing this command from inside the Node-2 , showing ingress Port:30428 is in <strong>LISTEN</strong> state:</p>
<pre><code>[root@worker-node-2 ~]# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1284/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:32682 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1856/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1020/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1016/cupsd
tcp 0 0 127.0.0.1:41561 0.0.0.0:* LISTEN 1284/kubelet
tcp 0 0 0.0.0.0:30428 0.0.0.0:* LISTEN 2578/kube-proxy
tcp 0 0 0.0.0.0:31999 0.0.0.0:* LISTEN 2578/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1284/kubelet
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::10256 :::* LISTEN 2578/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1020/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1016/cupsd
udp 0 0 0.0.0.0:5353 0.0.0.0:* 929/avahi-daemon: r
udp 0 0 0.0.0.0:44997 0.0.0.0:* 929/avahi-daemon: r
udp 0 0 192.168.122.1:53 0.0.0.0:* 1856/dnsmasq
udp 0 0 0.0.0.0:67 0.0.0.0:* 1856/dnsmasq
udp 0 0 0.0.0.0:111 0.0.0.0:* 1/systemd
</code></pre>
<p>Based on my understanding, all worker nodes must expose the NodePort for the "ingress controller" port, which = 30428?</p>
<p><strong>Edited:</strong>
I found that <strong>"ingress-nginx-controller-56c75d774d-p7whv"</strong> is deployed only on node-1.
Do I need to make sure that the ingress-nginx controller is running on all nodes? How do I achieve that, if this statement is true?</p>
| Faris Rjoub | <p>Kubernetes networking (kube-proxy to be more specific) uses iptables to control the network connections between pods and nodes. Since CentOS 8 uses <code>nftables</code> instead of <code>iptables</code>, this causes networking issues.</p>
<p>Calico in v3.8.1+ included support for hosts which use iptables in NFT mode.
The solution is to set the <code>FELIX_IPTABLESBACKEND=NFT</code> option. This will tell Calico to use the nftables backend.</p>
<blockquote>
<p>This parameter controls which variant of iptables binary Felix uses.
Set this to <code>Auto</code> for auto detection of the backend. If a specific
backend is needed then use <code>NFT</code> for hosts using a netfilter backend
or <code>Legacy</code> for others. [Default: <code>Legacy</code>]</p>
</blockquote>
<p>Please visit this Calico page to check how to <a href="https://docs.projectcalico.org/reference/felix/configuration" rel="nofollow noreferrer">configure Felix</a>.
For more reading please visit <a href="https://github.com/projectcalico/calico/issues/2322#issuecomment-515239775" rel="nofollow noreferrer">this GitHub issue</a>.</p>
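<p>If Calico is installed as the usual <code>calico-node</code> DaemonSet, one way to set this option (the namespace and DaemonSet name are assumptions — adjust them to your install) could be:</p>
<pre><code># let Felix auto-detect the backend
kubectl set env daemonset/calico-node -n kube-system FELIX_IPTABLESBACKEND=Auto

# or pin it explicitly to the nftables backend
kubectl set env daemonset/calico-node -n kube-system FELIX_IPTABLESBACKEND=NFT
</code></pre>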
| acid_fuji |
<p><strong>Problem:</strong></p>
<p>I upgraded from AWS EKS cluster to v1.23 from v1.22 and all of a sudden, all the pods that had Persistent Volume Claim (PVC) and Persistent Volume (PV) started failing with the errors like <code>FailedAttachVolume AttachVolume.Attach failed</code> and <code>FailedMount MountVolume.WaitForAttach failed</code> for the AWS EBS volumes.</p>
<p>Pods were giving the following error: <code>Unable to attach or mount volumes timed out waiting for the condition</code></p>
<p><strong>Solutions Tried:</strong></p>
<ul>
<li>I tried adding AWS EBS CSI Driver add-on in the AWS EKS cluster but still it didn't work.</li>
<li>I tried removing the annotation for migration to this new provisioner on PVCs but that didn't work either.</li>
<li>I also tried adding storage class for <code>gp3</code> AWS EBS volume type with AWS EBS CSI Driver as the new provisioner as I was using <code>gp2</code> but that didn't work either.</li>
</ul>
<p><strong>Note:</strong> AWS EBS volumes were of type <code>gp2</code>.</p>
| Abdullah Khawer | <p>After adding <code>AmazonEBSCSIDriverPolicy</code> AWS IAM policy to the AWS IAM role that is attached to all the AWS EKS nodes (AWS EC2 instances) and then adding the <code>AWS EBS CSI Driver</code> add-on in the AWS EKS cluster, errors were resolved and PVC got attached successfully. I don't see any issues related to Persistent Volume Claims (PVC), Persistent Volumes (PV), AWS EBS volumes, and pods anymore.</p>
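<p>For reference, a rough sketch of the equivalent AWS CLI steps — the role name, cluster name and region are placeholders, and it is worth double-checking the exact policy ARN in the IAM console:</p>
<pre><code># attach the managed policy to the role used by the EKS nodes (or to a dedicated IRSA role)
aws iam attach-role-policy \
  --role-name <eks-node-role> \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

# install the EBS CSI driver add-on on the cluster
aws eks create-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-ebs-csi-driver \
  --region <region>
</code></pre>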
<p><strong>Note:</strong> I already had an AWS IAM OpenID Connect (OIDC) provider for my AWS EKS cluster which is a prerequisite for this. In your case, there could be some other issue and the resolution steps might differ so please check the referred document.</p>
<p>Reference: <a href="https://repost.aws/knowledge-center/eks-troubleshoot-ebs-volume-mounts" rel="nofollow noreferrer">How do I troubleshoot issues with my EBS volume mounts in Amazon EKS?</a></p>
| Abdullah Khawer |
<p>I have different applications running in the same Kubernetes Cluster.</p>
<p>I would like multiple domains to access my Kubernetes Cluster and be redirected depending the domain.
For each domain I would like different annotations/configuration.</p>
<p>Without the annotations I have an ingress deployment such as:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
type: LoadBalancer
tls:
- hosts:
- foo.bar.dev
- bar.foo.dev
secretName: tls-secret
rules:
- host: foo.bar.dev
http:
paths:
- backend:
serviceName: foobar
servicePort: 9000
path: /(.*)
- host: bar.foo.dev
http:
paths:
- backend:
serviceName: varfoo
servicePort: 80
path: /(.*)
</code></pre>
<p>But They need to have multiple configuration, for example, one need to have the following annotation</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
</code></pre>
<p>And another would have this one</p>
<pre><code> nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
</code></pre>
<p><strong>Those configurations are not compatible</strong>, and I can't find a way to specify a configuration by host.</p>
<p>I also understand that it's impossible to have 2 Ingress serving External HTTP request.</p>
<p>So what am I not understanding / doing wrong ?</p>
| BastienSander | <blockquote>
<p>I also understand that it's impossible to have 2 Ingress serving External HTTP request</p>
</blockquote>
<p>I am not sure where you've found this but you totally can do this.</p>
<p>You should be able to create two separate ingress objects like following:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor-bar
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
tls:
- hosts:
- bar.foo.dev
secretName: tls-secret-bar
rules:
- host: bar.foo.dev
http:
paths:
- backend:
serviceName: barfoo
servicePort: 80
path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor-foo
namespace: myapps
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
spec:
tls:
- hosts:
- foo.bar.dev
secretName: tls-secret-foo
rules:
- host: foo.bar.dev
http:
paths:
- backend:
serviceName: foobar
servicePort: 9000
path: /(.*)
</code></pre>
<p>This is a completely valid ingress configuration, and most probably the only valid one that will solve your problem.</p>
<p>Each ingress object configures one domain.</p>
| Matt |
<p>So this has been working forever. I have a few simple services running in GKE and they refer to each other via the standard service.namespace DNS names.</p>
<p>Today all DNS name resolution stopped working. I haven't changed anything, although this may have been triggered by a master upgrade.</p>
<pre><code>/ambassador # nslookup ambassador-monitor.default
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'ambassador-monitor.default': Try again
/ambassador # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local c.snowcloud-01.internal google.internal
nameserver 10.207.0.10
options ndots:5
</code></pre>
<p>Master version 1.14.7-gke.14</p>
<p>I can talk cross-service using their IP addresses, it's just DNS that's not working.</p>
<p>Not really sure what to do about this...</p>
| user2013791 | <p>The easiest way to verify if there is a problem with your Kube DNS is to look at the logs in Stackdriver [https://cloud.google.com/logging/docs/view/overview].</p>
<p>You should be able to find DNS resolution failures in the logs for the pods, with a filter such as the following:</p>
<pre><code>resource.type="container"
("UnknownHost" OR "lookup fail" OR "gaierror")
</code></pre>
<p>Be sure to check logs for each container. Because the exact names and numbers of containers can change with the GKE version, you can find them like so:</p>
<pre><code>kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
  jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}' | sort -u
kubectl get pods -n kube-system -l k8s-app=kube-dns
</code></pre>
<p>Has the pod been restarted frequently? Look for OOMs in the node console. The nodes for each pod can be found like so:</p>
<pre><code>kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
  jsonpath='{range .items[*]}{.spec.nodeName} pod={.metadata.name}{"\n"}{end}'
</code></pre>
<p>The <code>kube-dns</code> pod contains four containers:</p>
<ul>
<li><code>kube-dns</code> process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests,</li>
<li><code>dnsmasq</code> adds DNS caching to improve performance,</li>
<li><code>sidecar</code> provides a single health check endpoint while performing dual health checks (for <code>dnsmasq</code> and <code>kubedns</code>). It also collects dnsmasq metrics and exposes them in the Prometheus format,</li>
<li><code>prometheus-to-sd</code> scraping the metrics exposed by <code>sidecar</code> and sending them to Stackdriver.</li>
</ul>
<p>By default, the <code>dnsmasq</code> container accepts 150 concurrent requests. Requests beyond this are simply dropped and result in failed DNS resolution, including resolution for <code>metadata</code>. To check for this, view the logs with the following filter:</p>
<pre><code>resource.type="container"
resource.labels.cluster_name="<cluster-name>"
resource.labels.namespace_id="kube-system"
logName="projects/<project-id>/logs/dnsmasq"
"Maximum number of concurrent DNS queries reached"
</code></pre>
<p>If legacy stackdriver logging of cluster is disabled, use the following filter:</p>
<pre><code>resource.type="k8s_container"
resource.labels.cluster_name="<cluster-name>"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="dnsmasq"
"Maximum number of concurrent DNS queries reached"
</code></pre>
<p>If Stackdriver logging is disabled, execute the following:</p>
<pre><code>kubectl logs --tail=1000 --namespace=kube-system -l k8s-app=kube-dns -c dnsmasq | grep 'Maximum number of concurrent DNS queries reached'
</code></pre>
<p>Additionally, you can try to use the command <code>dig ambassador-monitor.default @10.207.0.10</code> from each node to verify if this is only impacting one node. If it is, you can simply re-create the impacted node.</p>
| Ilia Borovoi |
<p>I followed this tutorial <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a> to issue an SSL certificate for my Ingress using cert-manager and Let's Encrypt, and I ran into this error: <code>Issuing certificate as Secret does not exist</code>. Is my configuration wrong? It's a Minikube local cluster.</p>
<p>staging_issuer.yaml</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: cert-manager
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: email_address
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-read-timeout: "12h"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- frontend.info
- backend.info
secretName: echo-tls
rules:
- host: frontend.info
http:
paths:
- backend:
serviceName: frontend
servicePort: 80
- host: backend.info
http:
paths:
- backend:
serviceName: backend
servicePort: 8080
</code></pre>
<p>kubectl describe certificate</p>
<pre><code>Name: echo-tls
Namespace: default
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1beta1
Kind: Certificate
Metadata:
Creation Timestamp: 2021-01-26T09:29:54Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1alpha2
Fields Type: FieldsV1
Manager: controller
Operation: Update
Time: 2021-01-26T09:29:55Z
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: echo-ingress
UID: <UID>
Resource Version: 423812
UID: <UID>
Spec:
Dns Names:
frontend.info
backend.info
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: echo-tls
Status:
Conditions:
Last Transition Time: 2021-01-26T09:29:54Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Last Transition Time: 2021-01-26T09:29:54Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: False
Type: Ready
Next Private Key Secret Name: echo-tls-hg5tt
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 7h56m cert-manager Issuing certificate as Secret does not exist
Normal Generated 7h56m cert-manager Stored new private key in temporary Secret resource "echo-tls-hg5tt"
Normal Requested 7h56m cert-manager Created new CertificateRequest resource "echo-tls-hmz86
</code></pre>
| efgdh | <p>Let's start by answering your question about the event:</p>
<pre class="lang-yaml prettyprint-override"><code>Events:
Type Reason Age From Message
Normal Issuing 7h56m cert-manager Issuing certificate as Secret does not exist
</code></pre>
<p>This is not an error and it is not a blocking factor. As you can see in the <code>type</code> section, this is marked as <code>Normal</code>. The event type that you should worry about is <code>Warning</code>, like here:</p>
<pre class="lang-yaml prettyprint-override"><code>Events:
Type Reason Age From Message
Warning Unhealthy 2m (x2 over 2m) kubelet, ip-XXX-XXX-XXX-XXX.us-west-2.compute.internal Readiness probe failed: Get http://XXX.XXX.XXX.XXX:YYY/healthz: dial tcp connect: connection refused
</code></pre>
<hr />
<p>Now coming to your real problem. The documentation that you provided clearly states in the prerequisites section that you need to have a domain name and a DNS A record pointing to the DigitalOcean Load Balancer used by the Ingress (in your case you would want to point it towards <code>minikube</code>). Assuming you are the owner of the two domains you mentioned in the yamls, I noticed that they point to two different IP addresses:</p>
<pre class="lang-yaml prettyprint-override"><code>$ dig frontend.info
;; ANSWER SECTION:
frontend.info. 599 IN A 104.247.81.51
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>$ dig backend.info
;; ANSWER SECTION:
backend.info. 21599 IN A 127.0.0.1
</code></pre>
<p>The domain has to point to the <code>external-ip</code> address of the machine where <code>minikube</code> is running (in my case it was a cloud virtual machine). Having this, it is still not enough, since <code>minikube</code> typically runs in its own docker container or vm. You need to make sure the traffic actually reaches your minikube cluster.</p>
<p>For that purpose I have used <code>kubectl port-forward</code> to expose the <code>nginx-controller</code> pod:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubectl port-forward pod/ingress-nginx-controller-69ccf5d9d8-mdkrr -n kube-system 80:80 --address 0.0.0.0
Forwarding from 0.0.0.0:80 -> 80
</code></pre>
<p>Let's Encrypt needs to have access to your application to prove that you are the owner of the domain. Once this is achieved, your certificate object will change its status to <code>True</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>➜ ~ k get certificate
NAME READY SECRET AGE
echo-tls True echo-tls 104m
</code></pre>
<p>Here is the final test. Please note that I was using my own domain, which I just changed into <code><your-domain></code>. In your case this would be <code>frontend.info</code> or <code>backend.info</code>.</p>
<pre class="lang-java prettyprint-override"><code>➜ ~ curl https://<your-domain> -v
-----
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=test-domain.com
* start date: Jan 27 10:09:31 2021 GMT
* expire date: Apr 27 10:09:31 2021 GMT
* subjectAltName: host "<your-domain>" matched cert's "<your-domain>"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
-----
</code></pre>
| acid_fuji |
<p>I have a simple StatefulSet with two containers. I just want to share a path by an emptyDir volume:</p>
<pre><code>volumes:
- name: shared-folder
emptyDir: {}
</code></pre>
<p>The first container is a busybox:</p>
<pre><code> - image: busybox
name: test
command:
- sleep
- "3600"
volumeMounts:
- mountPath: /cache
name: shared-folder
</code></pre>
<p>The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.</p>
<pre><code> volumeMounts:
- name: shared-folder
mountPath: /cache/$(HOSTNAME)
</code></pre>
<p><strong>Problem.</strong> The second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve it either.</p>
<p>Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?</p>
| Diego | <p>To use a mount path with an env variable you can use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment" rel="nofollow noreferrer">subPath with expanded environment variables</a> (k8s v1.17+).</p>
<p>In your case it would look like the following:</p>
<pre><code>containers:
- env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- mountPath: /cache
name: shared-folder
subPathExpr: $(MY_POD_NAME)
</code></pre>
| Matt |
<p>Following this tutorial,
<a href="https://learn.hashicorp.com/tutorials/terraform/gke?in=terraform/kubernetes" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/terraform/gke?in=terraform/kubernetes</a>
I have deployed a GKE cluster in GCloud.</p>
<p>Now when I try to schedule a deployment following this link,
<a href="https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider</a></p>
<p>It fails with,</p>
<pre><code>kubernetes_deployment.nginx: Creating...
Error: Failed to create deployment: Post "https://<ip>/apis/apps/v1/namespaces/default/deployments": x509: certificate signed by unknown authority
on kubernetes.tf line 21, in resource "kubernetes_deployment" "nginx":
21: resource "kubernetes_deployment" "nginx" {
</code></pre>
<p>My kubernetes.tf looks like this,</p>
<pre><code>terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
}
}
}
provider "kubernetes" {
load_config_file = false
host = google_container_cluster.primary.endpoint
username = var.gke_username
password = var.gke_password
client_certificate = google_container_cluster.primary.master_auth.0.client_certificate
client_key = google_container_cluster.primary.master_auth.0.client_key
cluster_ca_certificate = google_container_cluster.primary.master_auth.0.cluster_ca_certificate
}
resource "kubernetes_deployment" "nginx" {
metadata {
name = "scalable-nginx-example"
labels = {
App = "ScalableNginxExample"
}
}
spec {
replicas = 2
selector {
match_labels = {
App = "ScalableNginxExample"
}
}
template {
metadata {
labels = {
App = "ScalableNginxExample"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
port {
container_port = 80
}
resources {
limits {
cpu = "0.5"
memory = "512Mi"
}
requests {
cpu = "250m"
memory = "50Mi"
}
}
}
}
}
}
}
</code></pre>
<p>I am using MacOS to run terraform. Any help is appreciated.</p>
<p>Please note that kubectl get pods --all-namespaces is working fine, so I don't think it's an issue with kube config.</p>
<p>Thanks,
Arun</p>
| Arun A Nayagam | <p>It was because the certificates were base64-encoded; changing the provider section to the snippet below got rid of the issue.</p>
<pre><code>provider "kubernetes" {
load_config_file = false
host = google_container_cluster.primary.endpoint
username = var.gke_username
password = var.gke_password
client_certificate = base64decode(google_container_cluster.primary.master_auth.0.client_certificate)
client_key = base64decode(google_container_cluster.primary.master_auth.0.client_key)
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
}
</code></pre>
| Arun A Nayagam |
<p>Hi all, I use GKE and created a cluster in Google Cloud.
Here is my PersistentVolume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-pv
spec:
capacity:
storage: 256Mi
accessModes:
- ReadWriteOnce
hostPath:
path: /tmp/db
</code></pre>
<p>Here is the command that created persistent volume</p>
<pre><code>kubectl create -f mongo-pv.yaml
</code></pre>
<p>persistentvolumeclaim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
accessModes:
- ReadWriteOnce
volumeName: mongo-pv
resources:
requests:
storage: 256Mi
</code></pre>
<p>applied using</p>
<pre><code>kubectl create -f mongo-pvc.yaml
</code></pre>
<p>However, the command to get the PVC always shows Pending, while it should become Bound. Not sure what is wrong in the YAML files.</p>
<pre><code>kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongo-pvc Pending mongo-pv 0 standard-rwo
</code></pre>
<p><strong>Update</strong></p>
<pre><code> kubectl describe pvc mongo-pvc
Name: mongo-pvc
Namespace: default
StorageClass: standard-rwo
Status: Pending
Volume: mongo-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 0
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning VolumeMismatch 23s (x262 over 65m) persistentvolume-controller Cannot bind to requested volume "mongo-pv": storageClassName does not match
</code></pre>
| bionics parv | <p>Your PV's storage class name is empty but your PVC's storage class name is filled automatically as <code>standard-rwo</code> because of the DefaultStorageClass admission plugin.</p>
<p>These paragraphs are from <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.</p>
</blockquote>
<blockquote>
<p>PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.</p>
</blockquote>
<p>I guess you should either <strong>turn off DefaultStorageClass admission plugin and apply your manifests again</strong> or <strong>give your PV the same storage class name</strong> like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-pv
spec:
storageClassName: standard-rwo
capacity:
storage: 256Mi
accessModes:
- ReadWriteOnce
hostPath:
path: /tmp/db
</code></pre>
| tuna |
<p>I am working on bringing up the Eclipse Mosquitto broker docker image as a Kubernetes container with the below YAML configuration. However, I am unable to get any sort of logging working for this broker to enable some debugging. Is there a way to pass a "command" to ask this docker image to use a provided configuration file instead of the default one? Can anyone share a commonly used YAML file for starting the broker with persistence/volume/logging capabilities?</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: infra-pod
labels:
app: infra
spec:
containers:
- name: mosquitto-broker
image: eclipse-mosquitto
ports:
- containerPort: 1883
- containerPort: 8883
</code></pre>
| Jimmy | <p>Below you will find an example of how volumes are mounted for a mosquitto deployment. Before you start trying them out, please visit the Kubernetes documentation about <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volumes</a> to understand a bit more how they are used and which one is suitable for your environment.</p>
<pre class="lang-yaml prettyprint-override"><code> volumeMounts:
- name: mosquitto
mountPath: /srv/mqtt/config
- name: localtime
mountPath: /etc/localtime
- name: mosquitto-data
mountPath: /srv/mqtt/data
- name: mosquitto-log
mountPath: /srv/mqtt/log
</code></pre>
<pre class="lang-yaml prettyprint-override"><code> volumes:
- name: mosquitto
persistentVolumeClaim:
claimName: mosquitto
- name: mosquitto-data
persistentVolumeClaim:
claimName: mosquitto-data
- name: mosquitto-log
persistentVolumeClaim:
claimName: mosquitto-log
- name: localtime
hostPath:
path: /home/test
</code></pre>
<p>To provide some custom config you have to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">configure your pod to use a Kubernetes ConfigMap</a>. What you are doing below is adding the ConfigMap name under the <code>volumes</code> section of the Pod specification. This adds the ConfigMap data to the directory specified as <code>volumeMounts.mountPath</code>.</p>
<pre class="lang-yaml prettyprint-override"><code> volumeMounts:
- name: password-file
mountPath: /.config/mosquitto/auth/password_file.txt
subPath: password_file.txt
- name: config-file
mountPath: /.config/mosquitto/mosquitto.conf
subPath: mosquitto.conf
</code></pre>
<pre class="lang-yaml prettyprint-override"><code> ----
volumes:
- name: config-file
configMap:
name: mosquitto-config
- name: password-file
configMap:
name: mosquitto-password
---
</code></pre>
<p>In the example above there is another field used, called <code>subPath</code>, which is used to mount a specified file into the pod directory. This is used to avoid mounting the volume on top of the existing directory. You can read more about it <a href="https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i" rel="nofollow noreferrer">here</a>.</p>
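<p>For completeness, a sketch of what the <code>mosquitto-config</code> ConfigMap referenced above could contain to enable file logging and persistence — the paths follow the mounts shown earlier and the exact directives should be adjusted to your mosquitto version:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config
data:
  mosquitto.conf: |
    listener 1883
    persistence true
    persistence_location /srv/mqtt/data/
    log_dest file /srv/mqtt/log/mosquitto.log
    log_type all
</code></pre>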
<p>Please note that those yamls are purely informational and serve as an example of how to pass config and mount volumes. You will have to adjust them to your needs.</p>
| acid_fuji |
<p>I am running k8s 1.14 in azure.</p>
<p>I keep getting pod evictions on some of my pods in the cluster.</p>
<p>As an example:</p>
<pre><code>$ kubectl describe pod kube-prometheus-stack-prometheus-node-exporter-j8nkd
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m22s default-scheduler Successfully assigned monitoring/kube-prometheus-stack-prometheus-node-exporter-j8nkd to aks-default-2678****
Warning Evicted 3m22s kubelet, aks-default-2678**** The node had condition: [DiskPressure].
</code></pre>
<p>Which I can also confirm by:</p>
<pre><code>$ kubectl describe node aks-default-2678****
...
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 27 Nov 2019 22:06:08 +0100 Wed, 27 Nov 2019 22:06:08 +0100 RouteCreated RouteController created a route
MemoryPressure False Fri, 23 Oct 2020 15:35:52 +0200 Mon, 25 May 2020 18:51:40 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Fri, 23 Oct 2020 15:35:52 +0200 Sat, 05 Sep 2020 14:36:59 +0200 KubeletHasDiskPressure kubelet has disk pressure
</code></pre>
<p>Since this is a managed azure k8s cluster I don't have access to the kubelet on the nodes or the master nodes. Is there anything I can do to investigate/debug this problem without SSH access to the nodes?</p>
<p>Also I assume this comes from storage on the nodes and not from PV/PVC which have been mounted into the pods. So how do I get an overview of storage consumption on the worker nodes without SSH access?</p>
| u123 | <blockquote>
<p>So how do I get an overview of storage consumption on the worker nodes without SSH access?</p>
</blockquote>
<p>You can create a privileged pod like the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: privileged-pod
name: privileged-pod
spec:
hostIPC: true
hostNetwork: true
hostPID: true
containers:
- args:
- sleep
- "9999"
image: centos:7
name: privileged-pod
volumeMounts:
- name: host-root-volume
mountPath: /host
readOnly: true
volumes:
- name: host-root-volume
hostPath:
path: /
</code></pre>
<p>and then exec to it:</p>
<pre><code>kubectl exec -it privileged-pod -- chroot /host
</code></pre>
<p>and then you have access to the whole node, just like you would have using ssh.</p>
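<p>From there you can get an overview of what is consuming disk, for example — the exact paths vary with the container runtime and node image:</p>
<pre><code>df -h                     # filesystem usage per mount
du -sh /var/lib/docker    # image/container layers (docker runtime)
du -sh /var/lib/kubelet   # emptyDir volumes, pod logs, etc.
</code></pre>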
<p>Note: In case your k8s user has a <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">pod-security-policy</a> attached, you may not be able to do this if changing <code>hostIPC</code>, <code>hostNetwork</code> and <code>hostPID</code> is disallowed.</p>
<p>You also need to make sure that the pod gets scheduled on the specific node that you want to have access to. Use <code>.spec.nodeName: <name></code> to achieve it.</p>
| Matt |
<p>I am trying to host a qBitTorrent server with Kubernetes. I have composed a YAML for the <code>https://hub.docker.com/r/linuxserver/qbittorrent</code> docker container.</p>
<p>The problem is that it is accessible only from path /. As soon as I move it to /torrent it does not find it anymore: 404 Not Found.</p>
<h2>Steps to replicate:</h2>
<ul>
<li>apply following yamls</li>
<li><code>helm install nginx ingress-nginx/ingress-nginx</code></li>
<li>go to <code>service_ip:8080</code>, settings, WebUI, uncheck "Enable Host header validation"</li>
<li>go to <code>localhost:nginx_port/torrent</code></li>
</ul>
<h2>Result:</h2>
<ul>
<li>page not loading</li>
</ul>
<h2>Expected Result:</h2>
<ul>
<li>qBitTorrent WebUi appears and works</li>
</ul>
<h2>What I tried:</h2>
<ul>
<li>adding <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> to annotations</li>
</ul>
<p>server.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: torrent-deployment
labels:
app: torrent
spec:
replicas: 1
selector:
matchLabels:
pod-label: torrent-pod
template:
metadata:
labels:
pod-label: torrent-pod
spec:
containers:
- name: linuxserver
image: linuxserver/qbittorrent:amd64-latest
---
apiVersion: v1
kind: Service
metadata:
name: torrent-service
labels:
app: torrent
spec:
selector:
pod-label: torrent-pod
ports:
- port: 8080
name: torrent-deployment
</code></pre>
<p>ingress.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: torrent-ingress
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: torrent
spec:
rules:
- http:
paths:
- path: /torrent
pathType: Prefix
backend:
service:
name: torrent-service
port:
number: 8080
</code></pre>
| Paul Schuldesz | <p>Thanks to @matt_j I have found a workaround. I wrote a YAML for nginx myself and added the configuration from the post mentioned by matt (<a href="https://github.com/qbittorrent/qBittorrent/wiki/NGINX-Reverse-Proxy-for-Web-UI" rel="nofollow noreferrer">https://github.com/qbittorrent/qBittorrent/wiki/NGINX-Reverse-Proxy-for-Web-UI</a>) and it worked.</p>
<p>These are the YAMLs I came up with:</p>
<p>server.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
namespace: nginx
spec:
selector:
matchLabels:
pod-label: nginx
template:
metadata:
labels:
pod-label: nginx
spec:
containers:
- name: nginx
image: nginx:latest
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
replicas: 1
# status:
---
apiVersion: v1
kind: Service
metadata:
namespace: nginx
name: nginx
labels:
app: nginx
spec:
selector:
pod-label: nginx
ports:
- port: 80
name: nginx
</code></pre>
<p>config.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
namespace: nginx
data:
nginx.conf: |
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
http {
server {
server_name 10.152.183.95;
listen 80;
location /torrent/ {
proxy_pass http://torrent-service.qbittorrent:8080/;
#proxy_http_version 1.1;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
#proxy_cookie_path / "/; Secure";
}
}
}
events {
worker_connections 1024;
}
</code></pre>
| Paul Schuldesz |
<p>I am trying to <code>exec</code> into a POD running on K8S using the below command:</p>
<pre><code>kubectl exec -it entry-log -n my-ns -- /bin/bash
</code></pre>
<p>I am able to get inside the POD file system and wanted to edit the file <code>log.conf</code> using either the <code>nano</code> or <code>vim</code> commands, but neither is available here. Also, I see its prompt as <code>bash-4.2</code>.</p>
<pre><code>bash-4.2$ cd code/
bash-4.2$ nano log.conf
bash: nano: command not found
</code></pre>
<p>How can I edit the said file, please suggest. Thanks!</p>
| user584018 | <p>What is the image of your container?</p>
<p>If your image is, for example, <code>ubuntu:focal</code> or <code>ubuntu:jammy</code>, it doesn't have commands like <code>nano</code> or <code>vim</code> by default. You should install these either in your Dockerfile for your image or in the running container using:</p>
<pre class="lang-bash prettyprint-override"><code>apt-get update && apt-get install nano -y
</code></pre>
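<p>The <code>bash-4.2</code> prompt suggests the image may not be Debian/Ubuntu based, so <code>apt-get</code> might not exist there. An editor-free workaround is to copy the file out with <code>kubectl cp</code>, edit it locally, and copy it back — the path below is a placeholder for wherever <code>log.conf</code> actually lives in the container:</p>
<pre><code>kubectl cp my-ns/entry-log:<path-to>/log.conf ./log.conf
# edit ./log.conf locally with any editor, then push it back
kubectl cp ./log.conf my-ns/entry-log:<path-to>/log.conf
</code></pre>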
| tuna |
<p>Deleting kube-apiserver from the kubernetes-master does not prevent kubectl from querying pods. I always understood that kube-apiserver is responsible for communication with the master.</p>
<p>My question: how can kubectl still able to query pods while kube-apiserver is still restarting? Is there any official documentation that covers this behavior?</p>
<p><a href="https://i.stack.imgur.com/hG60M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hG60M.png" alt="enter image description here" /></a></p>
| kaleb | <p>Your understanding is correct. The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">Kubernetes API</a> server validates and configures data for the api objects which include pods, services, replication controllers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact. So if your <code>api-server</code> pod encounters some issues, you will not be able to get your client communicating with it.</p>
<p>What is happening is that when you delete the <code>api-server</code> pod, it is immediately recreated, hence your client is able to connect and fetch the data.</p>
<p>To provide an example I have simulated the api-server pod failure by fiddling a bit with <code>kube-apiserver.yaml</code> file in the <code>/etc/kubernetes/manifests</code>:</p>
<pre class="lang-sh prettyprint-override"><code>➜ manifests pwd
/etc/kubernetes/manifests
</code></pre>
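<p>One way to trigger such a failure — the kubelet stops a static pod as soon as its manifest disappears from this directory — is to temporarily move the manifest out (this is only an illustration of the kind of change involved):</p>
<pre><code>sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# and later, to bring the api-server back:
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
</code></pre>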
<p>Immediately after I did that, I was no longer able to connect to the <code>api-server</code>:</p>
<pre class="lang-py prettyprint-override"><code>➜ manifests kubectl get pods -A
The connection to the server 10.128.15.230:6443 was refused - did you specify the right host or port?
</code></pre>
<p>Getting to those manifests in Docker Desktop can be tricky, depending on where you run it. Please have a look at <a href="https://stackoverflow.com/questions/64758012/location-of-kubernetes-config-directory-with-docker-desktop-on-windows">this</a> case, where the answer shows a solution to that.</p>
| acid_fuji |
<p>I have a group of service accounts in namespace <code>prometheus</code> and I have a cluster role for reading all pods in my cluster. How can I build <code>ClusterRoleBinding</code> to do it?</p>
| faoxis | <p>If you go to <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">K8s docs about Using RBAC Authorization</a></p>
<p>And scroll down to examples, you can see this one:</p>
<blockquote>
<p>For all service accounts in the "qa" namespace:</p>
<pre><code>subjects:
- kind: Group
name: system:serviceaccounts:qa
apiGroup: rbac.authorization.k8s.io
</code></pre>
</blockquote>
<hr />
<p>You can now take it and apply to your usecase:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: <some-fancy-name>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: <clusterrolename>
subjects:
- kind: Group
name: system:serviceaccounts:prometheus
apiGroup: rbac.authorization.k8s.io
</code></pre>
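<p>The same binding can also be created imperatively (the binding and ClusterRole names are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create clusterrolebinding <some-fancy-name> \
  --clusterrole=<clusterrolename> \
  --group=system:serviceaccounts:prometheus
</code></pre>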
<hr />
<p>There is also one other example worth noticing:</p>
<blockquote>
<ol start="3">
<li>Grant a role to all service accounts in a namespace</li>
</ol>
<p>If you want all applications in a namespace to have a role, no matter
what service account they use, you can grant a role to the service
account group for that namespace.</p>
<p>For example, grant read-only permission within "my-namespace" to all
service accounts in that namespace:</p>
<pre><code>kubectl create rolebinding serviceaccounts-view \
--clusterrole=view \
--group=system:serviceaccounts:my-namespace \
--namespace=my-namespace
</code></pre>
</blockquote>
| Matt |
<p>I have this service file in my chart. How can I allow a JDBC connection from outside Kubernetes, for example from DBeaver? I tried to configure a NodePort, but it keeps failing. Can someone assist here?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ include "ignite.fullname" . }}
labels:
app: {{ include "ignite.fullname" . }}
spec:
ports:
- name: jdbc
port: 11211
targetPort: 11211
- name: spi-communication
port: 47100
targetPort: 47100
- name: spi-discovery
port: 47500
targetPort: 47500
- name: jmx
port: 49112
targetPort: 49112
- name: sql
port: 10800
targetPort: 10800
- name: rest
port: 8080
targetPort: 8080
- name: thin-clients
port: 10900
targetPort: 10900
clusterIP: None
selector:
</code></pre>
<p>This is the ignite services I want to try to connect even just to create user
<code>$ kubectl describe svc ignite</code></p>
<pre><code>Name: ignite
Namespace: production
Labels: app=ignite
Annotations: <none>
Selector: app=ignite
Type: ClusterIP
IP: None
Port: jdbc 11211/TCP
TargetPort: 11211/TCP
Endpoints: 10.233.112.245:11211,10.233.112.246:11211
Port: spi-communication 47100/TCP
TargetPort: 47100/TCP
Endpoints: 10.233.112.245:47100,10.233.112.246:47100
Port: spi-discovery 47500/TCP
TargetPort: 47500/TCP
Endpoints: 10.233.112.245:47500,10.233.112.246:47500
Port: jmx 49112/TCP
TargetPort: 49112/TCP
Endpoints: 10.233.112.245:49112,10.233.112.246:49112
Port: sql 10800/TCP
TargetPort: 10800/TCP
Endpoints: 10.233.112.245:10800,10.233.112.246:10800
Port: rest 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.112.245:8080,10.233.112.246:8080
Port: thin-clients 10900/TCP
TargetPort: 10900/TCP
Endpoints: 10.233.112.245:10900,10.233.112.246:10900
Session Affinity: None
Events: <none>
</code></pre>
<p>I tried to add nodeport like below, but it's not saving it. What is wrong?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ include "ignite.fullname" . }}
labels:
app: {{ include "ignite.fullname" . }}
spec:
ClusterIP:
ports:
- name: jdbc
port: 11211
targetPort: 11211
- name: spi-communication
port: 47100
targetPort: 47100
- name: spi-discovery
port: 47500
targetPort: 47500
- name: jmx
port: 49112
targetPort: 49112
- name: sql
port: 10800
targetPort: 10800
nodedport: 30008
- name: rest
port: 8080
targetPort: 8080
- name: thin-clients
port: 10900
targetPort: 10900
selector:
app: {{ include "ignite.fullname" . }}
sessionAffinity: None
type: NodePort
</code></pre>
| NoamiA | <p>To summarize our discussion in the comments:</p>
<p>The solution was to change the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> to a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort Service</a>.</p>
<p>The problem was that editing the service in place resulted in an error because some Service fields are immutable (error: <code>Invalid value: "": field is immutable</code>), so the Service had to be recreated.</p>
<p>The way to do that was to use the <code>--force</code> flag with helm, which replaces the resource instead of patching it.</p>
<pre><code>> helm upgrade --help | grep force
--force force resource updates through a replacement strategy
</code></pre>
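<p>As a rough sketch (the release and chart names are placeholders for whatever you actually use):</p>
<pre><code>helm upgrade my-ignite ./ignite-chart --namespace production --force
</code></pre>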
| Matt |
<p>We have an Openshift environment on our company.<br>
We're trying to maximize the resources between our data scientists using jupyterhub.<br>
Is there an option for assigning more resources <strong>dynamicly</strong> per demand (if there are free resources avaliable). </p>
| AS IL | <p>You can take a look at setting resource restrictions. (Quite counterintuitive, I agree).</p>
<p>In Kubernetes (and therefore in OpenShift) you can set resource <code>requests</code> and <code>limits</code>.</p>
<p>Resource <code>requests</code> are the minimum a pod is guaranteed from the scheduler on the node it runs on. Resource <code>Limits</code> on the other hand give you the capability to allow your pod to exceed its requested resources up to the specified limit. </p>
<p>What is the difference between not setting resource <code>requests</code> and <code>limits</code> vs setting them?</p>
<h3>Setting resource <code>requests</code> and <code>limits</code></h3>
<ul>
<li>scheduler is aware of resources utilized. New resources can be scheduled according to the resources they require.</li>
<li>scheduler can rebalance pods across the nodes if a node hits maximum resource utilization</li>
<li>pods can not exceed the <code>limits</code> specified</li>
<li>pods are guaranteed to get at least the resources specified in <code>requests</code></li>
</ul>
<h3><strong>NOT</strong> setting resource <code>requests</code> and <code>limits</code></h3>
<ul>
<li>scheduler is aware of resources utilized. New resources (e.g. pods) can be scheduled on a best-guess basis. It is not guaranteed that those new pods get the minimum resources they require to run stably.</li>
<li>scheduler is not able to rebalance resources without at least <code>requests</code></li>
<li>pods can utilize memory / cpu without restrictions</li>
<li>pods are not guaranteed any memory / cpu time</li>
</ul>
<hr>
<h2>How to set up resource <code>requests</code> and <code>limits</code></h2>
<ul>
<li><a href="https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html</a></li>
</ul>
<p>In the end it should look something like this</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
containers:
- image: openshift/hello-openshift
name: hello-openshift
resources:
requests:
cpu: 100m
memory: 200Mi
ephemeral-storage: 1Gi
limits:
cpu: 200m
memory: 400Mi
ephemeral-storage: 2Gi
</code></pre>
<hr>
<p>Additional information can be found here</p>
<ul>
<li><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits</a></li>
</ul>
| ckaserer |
<p>I'm trying to create a Nginx server using Kubernetes and the official docker image. Unfortunately, when I'm trying to mount a custom config file, nothing happen. My container works just fine but with it's default configuration file. Moreover the another mount on /etc (the lets encrypt folder) doesn't work too. Nevertheless the certbot mount works just fine...
(If I check inside the container /etc/nginx/nginx.conf it's not the file I'm trying to mount, and /etc/letsencrypt doesn't exist)
I link my deployment file just below..
If someone has an idea and want to help it would be delightful !</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.19.6
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
volumeMounts:
- name: letsencrypt
mountPath: /etc/letsencrypt
readOnly: true
volumeMounts:
- name: certbot
mountPath: /var/www/certbot
volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/nginx.conf
- name: letsencrypt
nfs:
server: 192.168.2.9
path: /volume1/nginx/letsencrypt
- name: certbot
nfs:
server: 192.168.2.9
path: /volume1/nginx/certbot
</code></pre>
<p><strong>Edit :</strong>
To solve this problem I had to put all my volume mount inside a single volumeMount section and to remove the reference to file in the volume section, like this :</p>
<pre><code> volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
- name: letsencrypt
mountPath: /etc/letsencrypt
readOnly: true
- name: certbot
mountPath: /var/www/certbot
volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/
- name: letsencrypt
nfs:
server: 192.168.2.9
path: /volume1/nginx/letsencrypt
- name: certbot
nfs:
server: 192.168.2.9
path: /volume1/nginx/certbot
</code></pre>
| tellierflexus | <p>This is not working because you are incorrectly referencing your <code>Volumes</code>. There is no need to reference the file name in the <code>volumes</code> definition itself:</p>
<pre class="lang-yaml prettyprint-override"><code> volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/nginx.conf
</code></pre>
<p>Once I applied your config my pod was in <code>CreateContainerConfigError</code> status with
<code>failed to prepare subPath for volumeMount "nginx-config" of container "nginx"</code> error.</p>
<p>The correct yaml should in this case look like this:</p>
<pre class="lang-yaml prettyprint-override"><code> volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/
</code></pre>
<p>I have added a line <code>#This is my custom config</code> just to confirm that my custom file was loaded, and here are the results:</p>
<pre class="lang-sh prettyprint-override"><code>➜ keti nginx-deployment-66c5547c7c-nsfzv -- cat /etc/nginx/nginx.conf
</code></pre>
<pre class="lang-json prettyprint-override"><code>#This is my custom config
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
.....
</code></pre>
| acid_fuji |
<p>I have existing deployments in k8s, I would like to update container, I updated docker image tag (new unique id) in deployment and run:</p>
<pre><code>kubectl apply -f testdeploy.yml --namespace=myapp
</code></pre>
<p>Output is: <code>deployment.apps/fakename configured</code></p>
<p>But nothing happens.</p>
<p>When I run <code>kubectl get pods --namespace=myapp</code> I can see only one old pod with old age.</p>
<p>Sometimes is working, sometimes not, why?</p>
<p>What is wrong?</p>
| nirebam368 | <p>Try describing the deployment and checking the events:</p>
<pre><code>kubectl describe deployment <deployment-name> --namespace=myapp
</code></pre>
<p>or</p>
<pre><code>kubectl get events --namespace=myapp
</code></pre>
<p>to understand what is happening.</p>
<p>Also check whether a new ReplicaSet has been created for the changed deployment container image:</p>
<pre><code>kubectl get rs -n myapp
</code></pre>
<p>and compare the expected number of replicas, for example in the output of <code>kubectl get rs</code>:</p>
<pre><code>NAME DESIRED CURRENT READY AGE
<deployment-name>-58ffbb8b76 0 0 0 10s
</code></pre>
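<p>Watching the rollout directly can also show whether the new ReplicaSet is progressing or stuck (the deployment name is a placeholder):</p>
<pre><code>kubectl rollout status deployment/<deployment-name> -n myapp
</code></pre>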
<p>A few more details would be helpful to understand why nothing happens when you try to deploy.</p>
<p>Kubernetes has nice documentation on this; check:</p>
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/</a></p>
| Srikrishna B H |
<p>I have a network <a href="https://hub.docker.com/r/openspeedtest/latest" rel="nofollow noreferrer">speed tester</a> service and deployment in my cluster. I would like to display the widget on a window in my frontend react app within my k8s cluster ... I used iframe as follows.</p>
<pre><code>const SpeedTest = (props) => {
return (
<div>
<Container>
<iframe className="speedTestFrame" src={`..path..`}></iframe>
<br />
</Container>
</div>
);
};
</code></pre>
<p>My svc looks like this:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: speedtest
spec:
ports:
- name: "10000"
port: 10000
targetPort: 8080
selector:
app: speedtest
</code></pre>
<p>and part of my ingress looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: speed-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /speedtest
pathType: Exact
backend:
serviceName: speedtest
servicePort: 10000
</code></pre>
<p>Now, when I use it (speed tester) as my default root <code>/</code> with only one ingress in the project, it works fine. However, my root <code>/</code> is for the frontend react app. If I set it on its own ingress like above, it says network error and does not work.</p>
<p>Is there any other way I can display this widget ?</p>
<p>How do I solve the ingress routing issue because apart from this widget and the frontend react app, I also have an admin react client app and it does not load if I place it in its own ingress as well.</p>
<p>TL;DR How do I load two react apps and a static container whose path is the root?</p>
| Denn | <p><strong>How do I load two react apps and a static container whose path is the root?</strong></p>
<p>Assuming the <code>root</code> path is a must-have, a Kubernetes Ingress will allow you to do that, on the condition that you list each app under a different host. Below you can find an example of how this might look for your use case:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-example
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: frontend.example.com
http:
paths:
- path: /
backend:
serviceName: front-end
servicePort: 80
- host: admin.example.com
http:
paths:
- path: /
backend:
serviceName: admin-react
servicePort: 80
- host: speedtest.example.com
http:
paths:
- path: /
backend:
serviceName: speedtest
servicePort: 4200
</code></pre>
<p>Things get more complicated when you use different hosts because of the <a href="https://www.w3.org/Security/wiki/Same_Origin_Policy" rel="nofollow noreferrer">same-origin policy</a>. It's a security policy enforced on client-side web applications (like web browsers) to prevent interactions between resources from different origins. While useful for preventing malicious behavior, this security measure also prevents legitimate interactions between known origins.</p>
<p>To allow that to happen you will need to enable <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="nofollow noreferrer">cross-origin resource sharing (CORS)</a>.</p>
<p>What is CORS?</p>
<blockquote>
<p>CORS is a method to permit cross-origin HTTP requests in JavaScript,
i.e. requests made by JS running on one domain to a page on a different domain. This is usually disallowed for security reasons (it could allow one page to send requests using the user's credentials to another page); CORS provides a way to safely permit such requests.</p>
</blockquote>
<p>This can be handled in two ways: either by configuring your application to send CORS headers, or by configuring the ingress to do so (a sketch of the ingress-side annotations follows the links below). For more info please go to:</p>
<ul>
<li><a href="https://torchbox.github.io/k8s-ts-ingress/cors/#:%7E:text=CORS%20is%20a%20method%20to,page%20on%20a%20different%20domain.&text=There%20are%20two%20ways%20to,to%20do%20so%20for%20you." rel="nofollow noreferrer">Configuring Cross-Origing Resource
Sharing</a></li>
<li><a href="https://www.moxio.com/blog/12/how-to-make-a-cross-domain-request-in-javascript-using-cors" rel="nofollow noreferrer">How to make a cross domain request in JavaScript using
CORS</a></li>
</ul>
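<p>As a rough sketch of the ingress-side option: ingress-nginx has CORS annotations you can set on the Ingress object (the allowed origin below is a placeholder for whichever host serves your frontend):</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  annotations:
    # ask ingress-nginx to add CORS headers to responses
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://frontend.example.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
</code></pre>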
| acid_fuji |
<p>I'm currently using <code>kubectl create -f clusterRole.yaml</code> , I was wondering if I can use helm to install it automatically with my chart.</p>
<p>I was looking at the helm documentation, and it used <code>kubectl create -f</code> for the clusterRole file. Is there any reason that this can't be done through helm? Is it because this concerns with access privilege issues?</p>
| allen | <p>As already mentioned in the comments, you can install your RBAC roles using your helm chart. As a matter of fact, many helm charts do configure Roles/ClusterRoles at install time. Here's an example from the Elasticsearch <a href="https://github.com/elastic/helm-charts/blob/master/elasticsearch/templates/role.yaml" rel="nofollow noreferrer">helm chart</a>, which configures a <code>Role</code> (and a matching <code>RoleBinding</code>) at install time:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
{{- if eq .Values.podSecurityPolicy.name "" }}
- {{ $fullName | quote }}
{{- else }}
- {{ .Values.podSecurityPolicy.name | quote }}
{{- end }}
verbs:
- use
{{- end -}}
</code></pre>
<p>Another example with clusterRole can be found <a href="https://raw.githubusercontent.com/helm/charts/master/stable/kube2iam/templates/clusterrole.yaml" rel="nofollow noreferrer">here</a>.</p>
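<p>As a minimal sketch (the helper name and the rules are placeholders), dropping a file such as <code>templates/clusterrole.yaml</code> into your own chart is enough for <code>helm install</code> to create it together with the rest of the release:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "mychart.fullname" . }}-pod-reader
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
</code></pre>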
<p>To sum up: if your context allows you to install the desired RBAC objects (or anything else) with <code>kubectl</code>, then you will be able to do so with helm as well.</p>
| acid_fuji |
<p>I am new to Kubernetes, so maybe a silly question.<br>
I am trying to deploy statefulset of ElasticSearch with 3 pod replicas. I have defined Statefulset with pvc in spec.<br>
This pvc has storage class which is served by a hostPath volume. </p>
<pre><code>volumeClaimTemplates:
- metadata:
name: beehive-pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 1Gi
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: beehive-pv
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
hostPath:
path: /home/abc
</code></pre>
<p>I have few doubts.<br>
1) Would the above setup/PV serve the /host/abc directory on each node separately, i.e. would every pod's data be stored on its corresponding node/host path? Also, would k8s show one volume bound to multiple PVCs?<br>
2) As I am using statefulset, I am assuming that the once pod-{i} is scheduled on node-{i}, it will always be scheduled there in every case (e.g. restart).<br>
3) Is above setup right way to implement such case where I need to store the data on host local directory. Or local persistent volume is better? I could not get the actual difference between the two.
4) Do I need to create local-storage storage class manually? (Above setup runs fine in docker for windows setup without creating storage class)
5) I may have other containers also which need to store the data under /home/abc directory only. So, I will be using subPath while mounting the volume in container. Do you see any issue here?</p>
<p>Pleas help.</p>
| NumeroUno | <p>hostPath volumes work well only on single-node clusters. If you have a multi-node environment, you should use a Local Persistent Volume instead; it pins the volume to a specific node via node affinity, so the scheduler keeps the pod on the node that actually holds its data.</p>
<p>These blog posts explain the Local Persistent Volume.</p>
<p>Official blog -
<a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/</a></p>
<p>another reference - <a href="https://vocon-it.com/2018/12/20/kubernetes-local-persistent-volumes/" rel="nofollow noreferrer">https://vocon-it.com/2018/12/20/kubernetes-local-persistent-volumes/</a></p>
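<p>A minimal sketch of what the Local Persistent Volume variant could look like for your case (the node name, path and sizes are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer    # bind only once a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: beehive-pv-node1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /home/abc
  nodeAffinity:                            # pins the volume to the node that holds the data
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
</code></pre>
<p>With <code>WaitForFirstConsumer</code>, each StatefulSet pod is bound to the PV on the node it was scheduled to, and it stays attached to that node's data across restarts.</p>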
| Salman Memon |
<p>I am new to the Kubernetes and trying to deploy a single replica kafka instance on the single node minikube cluster.</p>
<p>Here is the zookeeper service/deployment yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: zookeeper-cluster
labels:
component: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
component: zookeeper
status:
loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
component: zookeeper
name: zookeeper
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
component: zookeeper
template:
metadata:
labels:
component: zookeeper
spec:
containers:
- image: zookeeper:3.4.13
name: zookeeper
ports:
- containerPort: 2181
resources:
limits:
memory: "256Mi"
cpu: "100m"
volumeMounts:
- mountPath: /conf
name: zookeeper-claim0
- mountPath: /data
name: zookeeper-claim1
- mountPath: /datalog
name: zookeeper-claim2
restartPolicy: Always
volumes:
- name: zookeeper-claim0
persistentVolumeClaim:
claimName: zookeeper-claim0
- name: zookeeper-claim1
persistentVolumeClaim:
claimName: zookeeper-claim1
- name: zookeeper-claim2
persistentVolumeClaim:
claimName: zookeeper-claim2
status: {}
</code></pre>
<p>and here is the Kafka service/deployment yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kafka-cluster
labels:
component: kafka
spec:
ports:
- name: "9092"
port: 9092
targetPort: 9092
selector:
component: kafka
status:
loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
component: kafka
name: kafka
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
component: kafka
template:
metadata:
labels:
component: kafka
spec:
containers:
- args:
- start-kafka.sh
env:
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: LISTENER_BOB
- name: KAFKA_ADVERTISED_LISTENERS
value: LISTENER_BOB://:9092
- name: KAFKA_LISTENERS
value: LISTENER_BOB://:9092
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: LISTENER_BOB:PLAINTEXT
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_LOG_DIRS
value: /kafka/kafka-logs
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-cluster:2181
image: wurstmeister/kafka:2.12-2.4.1
name: kafka
ports:
- containerPort: 9092
resources:
limits:
memory: "256Mi"
cpu: "200m"
volumeMounts:
- mountPath: /kafka/kafka-logs
name: kafka-claim0
restartPolicy: Always
volumes:
- name: kafka-claim0
persistentVolumeClaim:
claimName: kafka-claim0
status: {}
</code></pre>
<p>When trying to access the kafka from another application on kafka-cluster:9092, which is also running as a deployment, it throws an UnresolvedAddressException. where kafka-6799c65d58-f6tbt:9092 is the pod name</p>
<pre><code>java.io.IOException: Can't resolve address: **kafka-6799c65d58-f6tbt:9092**
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:64)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1035)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:920)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:508)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.nio.channels.UnresolvedAddressException: null
at java.base/sun.nio.ch.Net.checkAddress(Net.java:130)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:675)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 9 common frames omitted
</code></pre>
<p>Am I making any mistake while configuring it? or is there any alternative to it?</p>
| Nishit Jain | <p>It looks like the Kafka broker is advertising its own hostname (kafka-6799c65d58-f6tbt) as the FQDN, which is the same as the pod name. Names of pods created by a Deployment cannot be resolved by DNS.</p>
<p>If you take a look at any Kafka helm chart, e.g. <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">this one</a>, you will see that they use StatefulSets. StatefulSets give pods stable, DNS-resolvable names. Take a look <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">here at the k8s docs</a> on how this works; a minimal sketch follows below.</p>
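<p>A rough sketch of those two pieces (names are placeholders), reusing your existing container spec:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None              # headless: DNS resolves directly to the pod IPs
  selector:
    component: kafka
  ports:
    - port: 9092
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless  # gives pods stable names like kafka-0.kafka-headless
  replicas: 1
  selector:
    matchLabels:
      component: kafka
  template:
    metadata:
      labels:
        component: kafka
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka:2.12-2.4.1
          ports:
            - containerPort: 9092
</code></pre>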
<p>You could also try setting KAFKA_ADVERTISED_LISTENERS to :</p>
<pre><code>- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KAFKA_ADVERTISED_LISTENERS
value: "LISTENER_BOB://$(MY_POD_IP):9092/"
</code></pre>
<p>But it doesn't scale well when changing the number of replicas.</p>
| Matt |
<p>I have an application running inside minikube K8 cluster. It’s a simple REST endpoint. The issue is after deployment I am unable to access that application from my local computer.</p>
<p>Using <code>http://{node ip}:{node port}</code> endpoint.</p>
<p>However, if I do:</p>
<pre><code>kubectl port-forward (actual pod name) 8000:8000
</code></pre>
<p>The application becomes accessible at: <code>127.0.0.1:8000</code> from my local desktop.</p>
<p>Is this the right way?
I believe this isn't the right way? as I am directly forwarding my traffic to the pod and this port forwarding won't remain once this pod is deleted.</p>
<p>What am I missing here and what is the right way to resolve this?</p>
<p>I have also configured a <code>NodePort</code> service, which should handle this but I am afraid it doesn’t seem to be working:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: rest-api
name: rest-api-np
namespace: rest-api-namespace
spec:
type: NodePort
ports:
- port: 8000
selector:
app: rest-api
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rest-api
name: rest-api-deployment
namespace: rest-api-namespace
spec:
replicas: 1
selector:
matchLabels:
app: rest-api
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: rest-api
spec:
containers:
- image: oneImage:latest
name: rest-api
</code></pre>
| Maven | <p>You are having issues because your service is placed in <code>default</code> namespace while your deployment is in <code>rest-api-namespace</code> namespace.</p>
<p>I have deployed your yaml files, and when I describe the service there are no endpoints:</p>
<pre class="lang-yaml prettyprint-override"><code>➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>The solution is to create the service in the same namespace as the deployment. Once you do that, an IP address and port will appear in the <code>Endpoints</code> field:</p>
<pre class="lang-yaml prettyprint-override"><code>➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Alternative way is to add <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">endpoints</a> manually:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service # please note that endpoints and service needs to have the same name
subsets:
- addresses:
- ip: 192.0.2.42 #ip of the pod
ports:
- port: 8000
</code></pre>
| acid_fuji |
<p>I would like to configure custom DNS in CoreDNS (to bypass NAT loopback issue, meaning that within the network, IP are not resolved the same than outside the network).</p>
<p>I tried to modify ConfigMap for CoreDNS with a 'fake' domain just to test, but it does not work.
I am using minik8s</p>
<p>Here the config file of config map coredns:</p>
<pre><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
consul.local:53 {
errors
cache 30
forward . 10.150.0.1
}
kind: ConfigMap
</code></pre>
<p>Then I try to resolve this address using busy box, but it does not work.</p>
<pre><code>$kubectl exec -ti busybox -- nslookup test.consul.local
> nslookup: can't resolve 'test.consul.local'
command terminated with exit code 1
</code></pre>
<p>Even kubernetes DNS is failing</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
| Woody | <p>I've reproduced your scenario and it works as intended. </p>
<p>Here I'll describe two different ways to use custom DNS on Kubernetes. The first is at the Pod level: you can customize which DNS server your pod will use. This is useful in specific cases where you don't want to change this configuration for all pods.</p>
<p>To achieve this, you need to add some optional fields. To know more about it, please read <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">this</a>.
Example: </p>
<pre><code>kind: Pod
metadata:
name: busybox-custom
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
dnsPolicy: "None"
dnsConfig:
nameservers:
- 8.8.8.8
searches:
- ns1.svc.cluster-domain.example
- my.dns.search.suffix
options:
- name: ndots
value: "2"
- name: edns0
restartPolicy: Always
</code></pre>
<pre><code>$ kubectl exec -ti busybox-custom -- nslookup cnn.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: cnn.com
Address 1: 2a04:4e42::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42:200::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.129.67
Address 7: 151.101.193.67
Address 8: 151.101.1.67
</code></pre>
<pre><code>$ kubectl exec -ti busybox-custom -- nslookup kubernetes.default
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
<p>As you can see, this method creates problems resolving internal DNS names.</p>
<p>The second way to achieve that is to change the DNS at the cluster level. This is the way you chose, and here is my working CoreDNS ConfigMap for comparison:</p>
<pre><code>$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
</code></pre>
<p>As you can see, I don't have the <code>consul.local:53</code> entry.</p>
<blockquote>
<p><a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a> is a service networking solution to connect and secure services
across any runtime platform and public or private cloud</p>
</blockquote>
<p>This kind of setup is not common and I don't think you need to include this entry in your setup. <strong>This might be your issue: when I add this entry, I face the same issues you reported.</strong></p>
<pre><code>$ kubectl exec -ti busybox -- nslookup cnn.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cnn.com
Address 1: 2a04:4e42:200::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.193.67
Address 7: 151.101.1.67
Address 8: 151.101.129.67
</code></pre>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<p>Another main problem is that you are debugging DNS using the latest busybox image. I highly recommend avoiding any version newer than 1.28, as it has known <a href="https://github.com/docker-library/busybox/issues/48" rel="nofollow noreferrer">problems</a> with name resolution.</p>
<p>The best busybox image you can use to troubleshoot DNS is <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/admin/dns/busybox.yaml" rel="nofollow noreferrer">1.28</a>, as <a href="https://stackoverflow.com/users/1288818/oleg-butuzov">Oleg Butuzov</a> recommended in the comments.</p>
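<p>For a quick throwaway check with a known-good image (the name to resolve is up to you):</p>
<pre><code>kubectl run dnstest --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
</code></pre>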
| Mark Watney |
<p>Scenario:</p>
<ul>
<li><p>Tableau application;</p>
</li>
<li><p>Postgres on a cloud;</p>
</li>
<li><p>Kubernetes on another cloud, running an application based on Alpine image (different cloud than Postgres).</p>
</li>
</ul>
<p>What a I need:</p>
<ul>
<li>Access Postgres from Tableau using Kubernetes as a kind of router;
So I need to send a request to my Kubernetes cluster, from tableau, and my Kubernetes cluster need to redirect the requisition to my Postgres host, and Postgres must to answer back to my kubernetes cluster after that my Kubernetes cluster must send de answer from Postgres to Tableau.</li>
</ul>
<p><strong>Important restrictions:</strong></p>
<ul>
<li><p>Tableau can access my kubernetes cluster but cannot access my Postgres host directly;</p>
</li>
<li><p>My kubernetes cluster can access my Postgres host.</p>
</li>
</ul>
<hr />
<p><strong>Next steps</strong>
Now I was able to make it work by using Thomas answer, using the following code:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
ports:
- port: 5432
targetPort: 5432
nodePort: 30004
---
apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: **111.111.111.111** ** < need change this to hostname
ports:
- port: 5432
</code></pre>
<p>Everything works fine with numerical IP, but I need to put my Postgres DNS instead, something like:</p>
<pre><code>subsets:
- addresses:
- ip: mypostgres.com
ports:
- port: 5432
</code></pre>
| LuizBuffa | <p>You can achieve this by creating a Service object without selectors and then manually creating the Endpoints for it. The Service needs to be exposed outside the cluster, either via the <code>NodePort</code> or the <code>LoadBalancer</code> type:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my-service #Name of the service must match the name of the endpoints
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30007
</code></pre>
<p>Services don’t link to pods directly. There is another object in between called endpoints. Because of this you are able to define them manually.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service #Name of te endpoint must match the name of the service
subsets:
- addresses:
- ip: 172.217.212.100 # This is the IP of the endpoints that the service will forward connections to.
ports:
- port: 80
</code></pre>
<p>Since you are going to expose your Postgres, some sort of security measure has to be taken in order to secure it, e.g. an <a href="https://www.springboard.com/blog/what-is-whitelisting/#:%7E:text=IP%20Whitelists,updated%20by%20the%20site%20administrator." rel="nofollow noreferrer">IP whitelist</a>.</p>
<p>For more reading please visit <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">/Services without selectors</a> .</p>
| acid_fuji |
<p>I've deployed few services and found one service to be behaving differently to others. I configured it to listen on 8090 port (which maps to 8443 internally), but the request works only if I send on port 8080. Here's my yaml file for the service (stripped down to essentials) and there is a deployment which encapsulates the service and container</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: uisvc
namespace: default
labels:
helm.sh/chart: foo-1
app.kubernetes.io/name: foo
app.kubernetes.io/instance: rb-foo
spec:
clusterIP: None
ports:
- name: http
port: 8090
targetPort: 8080
selector:
app.kubernetes.io/component: uisvc
</code></pre>
<p>After installing the helm, when I run <code>kubectl get svc</code>, I get the following output</p>
<pre><code>fooaccess ClusterIP None <none> 8888/TCP 119m
fooset ClusterIP None <none> 8080/TCP 119m
foobus ClusterIP None <none> 6379/TCP 119m
uisvc ClusterIP None <none> 8090/TCP 119m
</code></pre>
<p>However, when I ssh into one of the other running containers and issue a curl request on 8090, I get "Connection refused". If I curl to "http://uisvc:8080", then I am getting the right response. The container is running a spring boot application which by default listens on 8080. The only explanation I could come up with is somehow the port/targetPort is being ignored in this config and other pods are directly reaching the spring service inside.</p>
<p>Is this behaviour correct? Why is it not listening on 8090? How should I make it work this way?</p>
<p>Edit: Output for <code>kubectl describe svc uisvc</code></p>
<pre><code>Name: uisvc
Namespace: default
Labels: app.kubernetes.io/instance=foo-rba
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rba
helm.sh/chart=rba-1
Annotations: meta.helm.sh/release-name: foo
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=uisvc
Type: ClusterIP
IP: None
Port: http 8090/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.8:8080
Session Affinity: None
Events: <none>
</code></pre>
| Wander3r | <p>This is expected behavior since you used a <code>headless service</code>.</p>
<p>Headless Services are used as a service discovery mechanism, so instead of returning a single <code>DNS A record</code>, the <code>DNS server</code> will return multiple <code>A records</code> for your service, each pointing to the IP of an individual pod that backs the service. So you do a simple <code>DNS A record</code> lookup and get the IPs of all of the pods that are part of the service.</p>
<p>Since a <code>headless service</code> doesn't create <code>iptables</code> rules but creates <code>dns records</code> instead, you interact directly with your pod rather than going through a proxy, and the <code>port</code>/<code>targetPort</code> mapping of the Service is never applied. So if you resolve <code><servicename:port></code> you will get <code><podN_IP:port></code> and your connection goes to the pod directly, on whatever port the container itself listens on (8080 here). As long as all of this is in the same namespace you don't have to resolve it by the full dns name.</p>
<p>With several pods, DNS will give you all of them, just in random order (or in round-robin order). The order depends on the DNS server implementation and settings.</p>
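<p>If you actually want the 8090-to-8080 translation to happen, a sketch under the assumption that you don't need the headless DNS behaviour is simply dropping <code>clusterIP: None</code>, so a regular ClusterIP service (and kube-proxy) applies the <code>port</code>/<code>targetPort</code> mapping:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  # no "clusterIP: None" here -> normal ClusterIP service
  ports:
    - name: http
      port: 8090        # what clients connect to
      targetPort: 8080  # what the container listens on
  selector:
    app.kubernetes.io/component: uisvc
</code></pre>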
<p>For more reading please visit:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Services-netowrking/headless-services</a></li>
<li><a href="https://stackoverflow.com/questions/54488280/how-do-i-get-individual-pod-hostnames-in-a-deployment-registered-and-looked-up-i/56778273#56778273">This stack questions with great answer explaining how headless services work</a></li>
</ul>
| acid_fuji |
<p>I want to have two instances of same POD with an environment variable with different values in them.
How can we acheive this ? </p>
<p>THanks</p>
| Chandu | <p>You can achieve what you want using one pod containing 2 different containers. </p>
<p>Here is an example on how to achieve that: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox1
image: busybox:1.28
env:
- name: VAR1
value: "Hello I'm VAR1"
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
- name: busybox2
image: busybox:1.28
env:
- name: VAR2
value: "VAR2 here"
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>We are creating 2 containers, one with <code>VAR1</code> and the second with <code>VAR2</code>. </p>
<pre><code>$ kubectl exec -ti busybox -c busybox1 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
TERM=xterm
VAR1=Hello I'm VAR1
KUBERNETES_PORT_443_TCP_ADDR=10.31.240.1
KUBERNETES_SERVICE_HOST=10.31.240.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
HOME=/root
</code></pre>
<pre><code>$ kubectl exec -ti busybox -c busybox2 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
TERM=xterm
VAR2=VAR2 here
KUBERNETES_PORT=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.31.240.1
KUBERNETES_SERVICE_HOST=10.31.240.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root
</code></pre>
<p>As you can see, they have the same hostname (inheritance from Pod name) and different variables. </p>
| Mark Watney |
<p>We are using the stable prometheus-operator helm chart, <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml" rel="nofollow noreferrer">see this link for the source</a>.</p>
<p>We use our own <code>values.yaml</code>, which works OK; in the values.yaml we are configuring prometheus (mem, cpu, etc.) and the alertmanager.</p>
<p>Now I need to add the prometheus alertmanager config, but I am not sure how to provide it via the values.yaml (I tried, it doesn’t work).</p>
<p>Any idea how to pass the config of the alert manager ? </p>
<p>This is the value.yaml</p>
<pre><code>grafana:
enabled: true
alertmanager:
enabled: false
alertmanagerSpec:
replicas: 3
</code></pre>
<p>Now I need to provide in addition file which contain the alert manager rules</p>
<p>like the following:</p>
<p>file: <code>alerts.yaml</code></p>
<pre><code>
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
creationTimestamp: null
labels:
prometheus: prometheus
role: alert-rules
name: prometheus-prometheus-rules
namespace: mon
spec:
groups:
- name: ./prometheus.rules
rules:
- alert: CRITICAL - nodes Disk Pressure
      expr: 'kube_node_labels{label_workern_cloud_io_group="ds"} * on(node)kube_node_status_condition{condition="DiskPressure", status="true"} == 1'
for: 5m
labels:
severity: CRITICAL
</code></pre>
<p>How should I also pass the <code>alerts.yaml</code> via the helm installation?</p>
<p><code>helm install prom stable/prometheus-operator -n mon -f values.yaml</code> </p>
<p>Should I create my own chart and put it in the templates folder? If so, what is the recommended way to do that cleanly?</p>
| NSS | <p>There is no way to reference an external yaml file while running <code>helm install</code>.</p>
<p>The best way to achieve this is to copy the chart locally and include your file in its templates folder.</p>
<p>From helm documentation we can read: </p>
<blockquote>
<h3>Templates</h3>
<p>The most important piece of the puzzle is the <em>templates/</em>
directory. This is where Helm finds the YAML definitions for your
Services, Deployments and other Kubernetes objects. If you already
have definitions for your application, all you need to do is replace
the generated YAML files for your own. What you end up with is a
working chart that can be deployed using the <em>helm install</em> command.</p>
</blockquote>
<pre><code>$ git clone https://github.com/helm/charts.git
$ cp alerts.yaml ./charts/stable/prometheus-operator/templates
$ helm install prom ./charts/stable/prometheus-operator -n mon -f values.yaml
</code></pre>
| Mark Watney |
<p>I am new to Kubernetes in GCE-GKE, I was trying to build and deploy the nodeJS app in GCE-GKE cluster using the skaffold.Yaml. I can see image got build and also deployed withoutr any issue but when I tried to access GET the index.ts file on browser, I can't. I really don't understand what might have gone wrong or I may have missed something at LB or ingress-nginx.</p>
<p>Below id the skaffold.yaml </p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
# local:
# push: false
googleCloudBuild:
projectId: xxxxxxxxxxxx
artifacts:
- image: us.gcr.io/xxxxxxxxxxx/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>Below is ingress-srv.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
</code></pre>
<p>Below is auth-depl.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: us.gcr.io/xxxxxxxxxxxxx/auth
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>Below is package.json</p>
<pre><code>{
"name": "auth",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "ts-node-dev --poll src/index.ts"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@types/express": "^4.17.6",
"express": "^4.17.1",
"nodemon": "^2.0.4",
"typescript": "^3.9.5"
},
"devDependencies": {
"ts-node-dev": "^1.0.0-pre.44"
}
}
</code></pre>
<p>Below is index.ts</p>
<pre><code>import express from 'express';
import { json } from 'body-parser';
const app = express();
app.use(json());
// /api/users/currentuser
app.get('/users', (req, res) => {
res.send('Hello');
});
app.listen(3000, () => {
console.log('Great');
console.log('listening on port 3000!!!!!');
});
</code></pre>
<p>Below is docker file</p>
<pre><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
| Aadil Nabi | <p>Which tools do you use to manage the domain configuration?</p>
<p>Step 1: As far as I know, after you set up nginx on your k8s cluster, you need to add a DNS record mapping a domain (or subdomain) of your domain (ticketing.dev) to the external IP or external DNS name of your k8s cluster that nginx has generated for you (I normally use an A record).</p>
<p>Step 2: After that, when you have deployed your application, you need to make the domain of your app (the domain name you specify in your ingress yaml file) a CNAME record pointing to the above A record.</p>
<p>You can verify your domain configuration by using the <code>nslookup</code> command.
If you do not use a proxy for your domain, both of the above records (the A and the CNAME record) will resolve to the same external IP of your k8s cluster.</p>
<p>The purpose of the 2 steps above is to let the DNS server know where (which IP) to resolve to when it gets a request for any of your subdomains (domains). That is why you can use the <code>nslookup</code> command to verify your configuration.</p>
<p>Below is the tool I have used to manage DNS for our domain:
<a href="https://www.namecheap.com/support/knowledgebase/article.aspx/9607/2210/how-to-set-up-dns-records-for-your-domain-in-cloudflare-account" rel="nofollow noreferrer">how to set up DNS records for your domain in a Cloudflare account</a></p>
| Tho Quach |
<p>We have Helm 3.0.3 and 1.18 k8s and since one year we did not face any issue like as below before. We deploy several microservices via helm to k8s and all works fine so far. But even if we did not change anything for service field we are receiving error like as below.</p>
<p>Here is my command how I deploy to k8s. When I uninstall the service in k8s and start re-build it works ok but when I need to push new changes ı again face this error.</p>
<pre><code>+ helm upgrade --install --debug --force xx-ui-dev --values values.dev.yaml --namespace dev --set image.tag=608 .
</code></pre>
<hr />
<p>Error</p>
<pre><code>history.go:52: [debug] getting history for release xx-ui-dev
upgrade.go:120: [debug] preparing upgrade for xx-ui-dev
upgrade.go:128: [debug] performing update for xx-ui-dev
upgrade.go:292: [debug] creating upgraded release for xx-ui-dev
client.go:173: [debug] checking 7 resources for changes
client.go:432: [debug] Replaced "xx-ui-dev" with kind NetworkPolicy for kind NetworkPolicy
client.go:432: [debug] Replaced "xx-ui-dev" with kind ServiceAccount for kind ServiceAccount
client.go:432: [debug] Replaced "xx-ui-dev-auth" with kind Secret for kind Secret
client.go:432: [debug] Replaced "xx-ui-dev-config" with kind ConfigMap for kind ConfigMap
client.go:205: [debug] error updating the resource "xx-ui-dev":
failed to replace object: Service "xx-ui-dev" is invalid: spec.clusterIP: Invalid value: "": field is immutable
client.go:432: [debug] Replaced "xx-ui-dev" with kind Deployment for kind Deployment
client.go:432: [debug] Replaced "xx-ui-dev" with kind HorizontalPodAutoscaler for kind HorizontalPodAutoscaler
upgrade.go:351: [debug] warning: Upgrade "xx-ui-dev" failed: failed to replace object: Service "xx-ui-dev" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error: UPGRADE FAILED: failed to replace object: Service "xx-ui-dev" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.go:84: [debug] failed to replace object: Service "xx-ui-dev" is invalid: spec.clusterIP: Invalid value: "": field is immutable
</code></pre>
<hr />
<p>service.yaml</p>
<pre><code> spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: 50003
protocol: TCP
name: http
selector:
app.kubernetes.io/name: {{ include "xx-ui.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
tier: backend
</code></pre>
<hr />
<pre><code>values.dev.yaml
service:
type: ClusterIP
port: 80
</code></pre>
| semural | <p>When using <code>--force</code> with <code>helm upgrade</code>, helm uses a replace strategy instead of a patch.</p>
<p>Have a look at the following <a href="https://github.com/helm/helm/blob/master/pkg/kube/client.go#L423-L450" rel="noreferrer">helm code</a>:</p>
<pre><code>if force {
var err error
obj, err = helper.Replace(target.Namespace, target.Name, true, target.Object)
...
} else {
patch, patchType, err := createPatch(target, currentObj)
...
// send patch to server
obj, err = helper.Patch(target.Namespace, target.Name, patchType, patch, nil)
}
</code></pre>
<p>The replace strategy is what causes the error you see: the replacement manifest has no <code>clusterIP</code> set, and that field is immutable on an existing Service.
Have a look at <a href="https://github.com/kubernetes/kubectl/issues/798" rel="noreferrer">this kubectl issue</a> if you are wondering why this happens.</p>
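<p>A minimal workaround sketch, assuming you don't strictly need the replacement semantics, is to drop <code>--force</code> so helm patches the existing Service instead of replacing it:</p>
<pre><code>helm upgrade --install --debug xx-ui-dev --values values.dev.yaml --namespace dev --set image.tag=608 .
</code></pre>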
| Matt |
<p>I'm trying to figure out which the best approach to verify the network policy configuration for a given cluster.
<a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">According to the documentation</a></p>
<blockquote>
<p>Network policies are implemented by the network plugin, so you must be
using a networking solution which supports NetworkPolicy - simply
creating the resource without a controller to implement it will have
no effect.</p>
</blockquote>
<p>Assuming I have access only through kubectl to my cluster, what should I do to ensure the network policy resource deployed into the cluster will be honored?</p>
<p>I'm aware about the CNI available and the <a href="https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/" rel="nofollow noreferrer">related matrix capabilities</a>.<br>
I know you could check for the pod deployed under kube-system that are related to those CNI and verify the related capabilities using for ex. the matrix that I shared, but I wonder if there's a more structured approach to verify the current CNI installed and the related capabilities.</p>
<p>Regarding the "controller to implement it", is there a way to get the list of add-on/controller related to the network policy?</p>
| Crixo | <blockquote>
<p>Which the best approach to verify the network policy configuration for
a given cluster?</p>
</blockquote>
<p>If you have access to the pods, you can run tests to check whether your NetworkPolicies are effective. There are two ways to check it (see the connectivity sketch after this list):</p>
<ul>
<li>Reading your NetworkPolicy using kubectl (<code>kubectl get networkpolicies</code>). </li>
<li>Testing your endpoints to check if NetworkPolicies are effective. </li>
</ul>
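<p>A quick way to exercise the second point is to run a throwaway pod and probe a service that the policy should (or should not) allow; the service name and port below are placeholders:</p>
<pre><code>kubectl run np-test --rm -it --image=busybox:1.28 --restart=Never -- \
  wget -qO- --timeout=2 http://my-service:80
</code></pre>
<p>If the NetworkPolicy is enforced by your CNI, the blocked request times out; if the CNI ignores NetworkPolicies, it succeeds.</p>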
<blockquote>
<p>I wonder if there's a more structured approach to verify the current
CNI installed and the related capabilities.</p>
</blockquote>
<p>There is no structured way to check your CNI. You need to understand how your CNI works to be able to identify it on your cluster. For Calico for example, you can identify it by checking if calico pods are running. (<code>kubectl get pods --all-namespaces --selector=k8s-app=calico-node</code>)</p>
<blockquote>
<p>Regarding the "controller to implement it", is there a way to get the
list of add-on/controller related to the network policy?</p>
</blockquote>
<p>"controller to implement it" is a reference to the CNI you are using. </p>
<p>There is a tool called <a href="https://orca.tufin.io/netpol/?yaml=apiVersion:%20networking.k8s.io/v1%0Akind:%20NetworkPolicy%0Ametadata:%0A3name:%20test-network-policy%0A3namespace:%20default%0Aspec:%0A3podSelector:%0A5matchLabels:%0A7role:%20db%0A3policyTypes:%0A3-%20Ingress%0A3-%20Egress%0A3ingress:%0A3-%20from:%0A5-%20ipBlock:%0A9cidr:%20172.17.0.0/16%0A9except:%0A9-%20172.17.1.0/24%0A5-%20namespaceSelector:%0A9matchLabels:%0A11project:%20myproject%0A5-%20podSelector:%0A9matchLabels:%0A11role:%20frontend%0A5ports:%0A5-%20protocol:%20TCP%0A7port:%206379%0A3egress:%0A3-%20to:%0A5-%20ipBlock:%0A9cidr:%2010.0.0.0/24%0A5ports:%0A5-%20protocol:%20TCP%0A7port:%205978" rel="nofollow noreferrer">Kubernetes Network Policies Viewer</a> that allows your to see graphically your NetworkPolicy. This is not connected to your question but it might help you to visualize your NetworkPolicies and understand what they are doing. </p>
| Mark Watney |
<p>I have high video encoding tasks running inside pods. These tasks run on getting requests from users and are very high CPU intensive. I want to make sure that the pod with the least CPU usage should receive in the incoming requests. Is there a way in Kubernetes to balance my load based on the percentage of CPU usage?</p>
| Nikhil Pareek | <p>AFAIK there is no such thing available in Kubernetes out of the box. The only idea that comes to my mind is a custom solution where the application is designed so that once it detects it has reached its maximum CPU threshold, it fails its readiness probe. This tells Kubernetes to temporarily remove the pod from the Service endpoints and lets new requests be directed to another pod. With only the readiness probe failing, Kubernetes will still keep the already existing connections.</p>
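<p>A rough sketch of that idea (the <code>/ready</code> endpoint is hypothetical and would have to be implemented by your application so that it returns a non-200 status while the pod is saturated):</p>
<pre class="lang-yaml prettyprint-override"><code>readinessProbe:
  httpGet:
    path: /ready        # hypothetical endpoint: return 503 while the pod is busy encoding
    port: 8080
  periodSeconds: 5
  failureThreshold: 2
</code></pre>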
| acid_fuji |
<p>I have several .NET Core applications which shut down for no obvious reason. It looks like this has been happening since the implementation of health checks, but I'm not able to see the killing commands in Kubernetes.</p>
<p><strong>cmd</strong></p>
<pre><code>kubectl describe pod mypod
</code></pre>
<p><strong>output</strong> (restart count is this high because of a daily shutdown in the evening; stage environment)</p>
<pre><code>Name: mypod
...
Status: Running
...
Controlled By: ReplicaSet/mypod-deployment-6dbb6bcb65
Containers:
myservice:
State: Running
Started: Fri, 01 Nov 2019 09:59:40 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 01 Nov 2019 07:19:07 +0100
Finished: Fri, 01 Nov 2019 09:59:37 +0100
Ready: True
Restart Count: 19
Liveness: http-get http://:80/liveness delay=10s timeout=1s period=5s #success=1 #failure=10
Readiness: http-get http://:80/hc delay=10s timeout=1s period=5s #success=1 #failure=10
...
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 18m (x103 over 3h29m) kubelet, aks-agentpool-40946522-0 Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 18m (x29 over 122m) kubelet, aks-agentpool-40946522-0 Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>These are the pods logs</p>
<p><strong>cmd</strong></p>
<pre><code>kubectl logs mypod --previous
</code></pre>
<p><strong>output</strong></p>
<pre><code>Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
Application is shutting down...
</code></pre>
<p><strong>corresponding log from azure</strong>
<a href="https://i.stack.imgur.com/65rxf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/65rxf.png" alt="azure-log"></a></p>
<p><strong>cmd</strong></p>
<pre><code>kubectl get events
</code></pre>
<p><strong>output</strong> (what I'm missing here is the killing-event. My assumption is that the pod was not restarted, caused by multiple failed health-checks) </p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
39m Normal NodeHasSufficientDisk node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasSufficientDisk
39m Normal NodeHasSufficientMemory node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasSufficientMemory
39m Normal NodeHasNoDiskPressure node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasNoDiskPressure
39m Normal NodeReady node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeReady
39m Normal CREATE ingress/my-ingress Ingress default/ebizsuite-ingress
39m Normal CREATE ingress/my-ingress Ingress default/ebizsuite-ingress
7m2s Warning Unhealthy pod/otherpod2 Readiness probe failed: Get http://10.244.0.158:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
7m1s Warning Unhealthy pod/otherpod2 Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m Warning Unhealthy pod/otherpod2 Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
44m Warning Unhealthy pod/otherpod1 Liveness probe failed: Get http://10.244.0.151:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5m35s Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
8m8s Warning Unhealthy pod/mypod Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
8m7s Warning Unhealthy pod/mypod Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p><strong>curl from another pod</strong> (I've executed this in a very long loop every second and have never received anything other than a 200 OK)</p>
<pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/hc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"status":"Healthy","totalDuration":"00:00:00.0647250","entries":{"self":{"data":{},"duration":"00:00:00.0000012","status":"Healthy"},"warmup":{"data":{},"duration":"00:00:00.0000007","status":"Healthy"},"TimeDB-check":{"data":{},"duration":"00:00:00.0341533","status":"Healthy"},"time-blob-storage-check":{"data":{},"duration":"00:00:00.0108192","status":"Healthy"},"time-rabbitmqbus-check":{"data":{},"duration":"00:00:00.0646841","status":"Healthy"}}}100 454 0 454 0 0 6579 0 --:--:-- --:--:-- --:--:-- 6579
</code></pre>
<p><strong>curl</strong></p>
<pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/liveness
Healthy % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7 0 7 0 0 7000 0 --:--:-- --:--:-- --:--:-- 7000
</code></pre>
| masterchris_99 | <p>I think you can:</p>
<ol>
<li><p>Modify the livenessProbe and readinessProbe to check only <code>http://:80</code>, dropping the path from the URL.</p></li>
<li><p>Temporarily disable the livenessProbe and readinessProbe (enabled=false).</p></li>
<li><p>Increase the probe delay to 5 or 10 minutes; after that you can <code>kubectl exec -it <pod-name> sh/bash</code> into the pod and debug. You can use <code>netstat</code> to check whether the service you expect is actually listening on port 80. Finally, run the same check the probes do with <code>curl -v http://localhost</code>; if that returns anything other than 200, that is why your pods keep restarting (see the sketch after this list).</p></li>
</ol>
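<p>For point 3, a sketch of relaxed probe settings (the values are only an example to buy you debugging time, not a production recommendation):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /liveness
    port: 80
  initialDelaySeconds: 300   # gives you time to exec into the pod
  timeoutSeconds: 5          # the original 1s timeout is easy to exceed under load
  periodSeconds: 30
  failureThreshold: 10
readinessProbe:
  httpGet:
    path: /hc
    port: 80
  initialDelaySeconds: 300
  timeoutSeconds: 5
  periodSeconds: 30
  failureThreshold: 10
</code></pre>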
<p>Hope this helps you.</p>
| Tho Quach |
<p>I recently finished a project where I created an app consisting of several docker containers. The purpose of the app was to collect some data and save it to a database and also allow user interactions over a simple web GUI. The app was hosted on four different Raspberry Pis and it was possible to collect data from all physical machines through an API. Further, you could do some simple machine learning tasks like calculating anomalies in the sensor data of the Pis.</p>
<p>Now I'm trying to take the next step and use Kubernetes for some load balancing and remote updates. My main goal is to remotely update all Raspberries from my master node. Which, in theory, would be a very handy feature. Also I want to share the resources of the Pis within the cluster for calculations.</p>
<p>I read a lot about Kubernetes, Minikube, K3s, Kind and all the different approaches to set up a Kubernetes cluster, but feel like I am missing "a last puzzle piece".</p>
<p>So from what I understood I need an approach which allows me to set up a local (because all machines are lying on my desk/ no cloud needed) multi-node cluster. My master node would be (ideally) my laptop, running Ubuntu in a virtual machine. My Raspberries would be my slave/worker nodes. If I want to update my cluster I can use the Kubernetes remote update functionality.</p>
<p>So my question out of this would be: does it make sense to use several Raspberries as nodes in a Kubernetes cluster and to manage them from one master node (laptop), and do you have any suggestions about the way to achieve this setup?</p>
<p>I usually don't like questions not containing any specific code or concrete questions of my own, but I feel like a simple hint could accelerate my project notably. If it's the wrong place please feel free to delete this question.</p>
<p>Best regards </p>
| nail | <p>You didn't mention which rpi models you are using, but I assume you are not using rpi zeros. </p>
<blockquote>
<p>My main goal is to remotely update all Raspberries from my master node.</p>
</blockquote>
<p>Assuming that by that you mean updating your applications running in Kubernetes installed on the rpis, then keep reading. Otherwise ignore all I wrote; what you probably need is Ansible or another similar provisioning/configuration-management/application-deployment tool.</p>
<p>Now answering to your question:</p>
<blockquote>
<p>Does it make sense to use several Raspberries as nodes in a Kubernetes cluster</p>
</blockquote>
<p>yes, this is why people created k3s, so such a setup is possible using fewer resources.</p>
<blockquote>
<p>and to manage them from one master node (laptop) </p>
</blockquote>
<p>assuming you will be using it for learning purposes, then why not. It is possible, but just be aware that when the master node goes down (e.g. when you turn off your laptop), the whole cluster goes down (or at least api-server communication, so you won't be able to change the cluster's state). Also make sure you are using a bridged networking interface for your VM so it is visible in your local network as a standalone instance.</p>
<blockquote>
<p>and do you have any suggestions about the way to achieve this setup.</p>
</blockquote>
<p>installing k3s on all nodes would be the easiest in your case. There are plenty of resources on the internet explaining how to achieve it.</p>
<hr>
<p>One last thing I would like to explain is the thing with updates.</p>
<p>Speaking of kubernetes updates, you need to know that kubernetes doesn't update itself automatically. You need to explicitly update it. A new k8s version is being released every 3 months and it sometimes "breaks" things, so backward compatibility is not guaranteed (always read the <a href="https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG" rel="nofollow noreferrer">changelog</a> before updating because rollbacks may not be possible unless you backed up your etcd cluster earlier).</p>
<p>Speaking of updating applications - to run your app all you do is send yaml files describing your application to k8s and it handles the rest. So if you want to update your app, just update the container image tag to the newer version and k8s will handle the rollout. <a href="https://www.weave.works/blog/kubernetes-deployment-strategies" rel="nofollow noreferrer">Read here</a> more about update strategies in k8s.</p>
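<p>For example, assuming a hypothetical deployment called <code>my-app</code> with a container named <code>app</code>, an update can be as simple as:</p>
<pre><code># point the deployment at the new image tag; k8s rolls out the change for you
kubectl set image deployment/my-app app=myregistry/my-app:v2

# watch the rollout progress
kubectl rollout status deployment/my-app
</code></pre>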
| Matt |
<p>I'm trying to get information for a cron job so I can grab the current release of service.</p>
<p>So when I run <code>kubectl get pods</code> I get:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
cron-backfill-1573451940-jlwwj 0/1 Completed 0 33h
test-pod-66df8ccd5f-jvmkp 1/1 Running 0 16h
</code></pre>
<p>When I run <code>kubectl get pods --selector=job-name=cron-backfill</code> I get:</p>
<pre><code>No resources found in test namespace.
</code></pre>
<p>But when I run <code>kubectl get pods --selector=app=test-pod</code> I get:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
test-pod-66df8ccd5f-jvmkp 1/1 Running 0 16h
</code></pre>
<p>which is what I want. I figured since the first pod is a cron job there must be some other command used to check for those, but no luck.</p>
<p>I tried looking through the k8s docs here <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/</a> but can't find something that seems to work.</p>
| olivercollins-inhome | <p>You need to</p>
<pre><code>kubectl describe pods cron-backfill-1573451940-jlwwj
</code></pre>
<p>And then you can see the <code>Labels:</code> part</p>
<p>EX:</p>
<pre><code>Labels: app=<app-name>
controller-uid=<xxxxxxxxxx>
job-name=cron-backfill-1573451940-jlwwj
release=<release-name>
</code></pre>
<p>Finally, you can use the following command to get your pods:</p>
<pre><code>kubectl get pods --selector=job-name=cron-backfill-1573451940-jlwwj
</code></pre>
<p>Hope this helps you!</p>
| Tho Quach |
<p>I had this working at one time, now it doesn't work anymore.
What I'm trying to do is create a <code>dask cluster</code> on <code>microk8s kubernetes</code>.
According to the <code>Helm</code> website: <code>https://hub.helm.sh/charts/dask/dask</code>, to deploy the cluster I must type in the following: </p>
<pre><code>helm repo add dask https://helm.dask.org/
helm repo update
helm install --name my-release dask/dask
</code></pre>
<p>However, when I run <code>microk8s kubectl get svc</code> I don't see an external IP assigned:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 40h
my-dask-jupyter ClusterIP 10.152.183.219 <none> 80/TCP 12m
my-dask-scheduler ClusterIP 10.152.183.89 <none> 8786/TCP,80/TCP 12m
</code></pre>
<p>When I run the following I get a <code>null</code> value: </p>
<pre><code> echo http://$DASK_SCHEDULER_UI_IP:$DASK_SCHEDULER_UI_PORT -- Dask dashboard
echo http://$JUPYTER_NOTEBOOK_IP:$JUPYTER_NOTEBOOK_PORT -- Jupyter notebook
</code></pre>
<p>Please help, I think there may be some setup I need to perform with microk8s?
Thanks, </p>
| Vinh Tran | <p>If you have a look at the dask helm-chart repo on GitHub you can find this commit:
<a href="https://github.com/dask/helm-chart/commit/9f9aa245cf686363b517c9013eb9b3488e46fab0" rel="nofollow noreferrer">Make ClusterIP the default service type</a>.</p>
<p>It looks like ClusterIP is now the default.</p>
<p>If you want to overwrite it use <code>--set</code> e.g.:</p>
<pre><code>helm install --name my-release dask/dask --set scheduler.serviceType=LoadBalancer
</code></pre>
<p>or clone the repo from GitHub and change the default values in the values.yaml file (see the sketch below).</p>
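<p>A rough sketch of such an override file, based on the chart's documented <code>serviceType</code> options (double-check the keys against the chart version you are using):</p>
<pre><code># my-values.yaml
scheduler:
  serviceType: LoadBalancer
jupyter:
  serviceType: LoadBalancer
</code></pre>
<p>and then install with:</p>
<pre><code>helm install --name my-release dask/dask -f my-values.yaml
</code></pre>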
| Matt |
<p>I have a simple kubernetes setup: 1 pod, 1 service with LoadBalancer and things just work. But I don't want to pay extra for a load balancer when I have only 1 pod.</p>
<p>I tried to switch to NodePort, but I'm not able to access the service on the right ports (because they are remapped to 30000+ ports). I have a service that I'd like to access on port 443, but I can't, so what can I do in this case?</p>
<p>My service would be <code>https://server.aaa.com</code>, but if the port is remapped to 30000, I need to use <code>https://server.aaa.com:30000</code>? Is there any way to stay on port 443?</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
| 07mm8 | <p>If you wish to avoid paying for the load balancer I would suggest taking a look at <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p>
<blockquote>
<p>MetalLB hooks into your Kubernetes cluster, and provides a network
load-balancer implementation. In short, it allows you to create
Kubernetes services of type “LoadBalancer” in clusters that don’t run
on a cloud provider, and thus cannot simply hook into paid products to
provide load-balancers.</p>
</blockquote>
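<p>Once MetalLB is installed, a minimal Layer 2 configuration sketch (for the ConfigMap-based MetalLB versions; the address range below is an assumption and must be a free range in your own network) looks like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # free IPs for MetalLB to hand out
</code></pre>
<p>With that in place, a Service of <code>type: LoadBalancer</code> gets an external IP from this pool, so your service can stay on port 443.</p>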
<p>This would be a better solution than using <code>nodePort</code>, which has significant downsides for production use. Changing the <code>nodePort</code> range is also not recommended since it might get into conflict with other ports (<a href="https://stackoverflow.com/questions/63698150/in-kubernetes-why-nodeport-has-default-port-range-from-30000-32767/63704276#63704276">you may find more information on why here</a>). However, if you want to do it, there is nothing blocking you from doing that.</p>
<p>Still, it is worth noting that there is no good alternative to cloud-native LBs. The possible substitutes have issues that make them less convenient to use and less secure.</p>
<p>Lastly, you may want to check the Kubernetes <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">host namespaces</a>:</p>
<ul>
<li><p><strong>HostNetwork</strong> - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device,
services listening on localhost, and could be used to snoop on
network activity of other pods on the same node.</p>
</li>
<li><p><strong>HostPorts</strong> - Provides a list of ranges of allowable ports in the host network namespace. Defined as a list of <code>HostPortRange</code>, with
<code>min</code>(inclusive) and <code>max</code>(inclusive). Defaults to no allowed host
ports.</p>
</li>
</ul>
<p>However, <code>HostPort</code> has a couple of downsides, as it limits the scheduling options for your pod: only hosts with vacancies for your chosen port can be used. If the host where your pods are running becomes unreachable, K8s will reschedule them to different nodes. So if the IP address of your workload changes, clients of your application will lose access to the pod (the same thing will happen if you restart the pod).</p>
<p>You may want to read <a href="https://medium.com/@maniankara/kubernetes-tcp-load-balancer-service-on-premise-non-cloud-f85c9fd8f43c#:%7E:text=NodePort%20vs%20HostPort%20vs%20HostNetwork&text=It%20specifies%20the%20port%20number,access%20to%20nodes%20network%20namespace." rel="nofollow noreferrer">this document</a> where the author compares <code>hostNetwork</code>, <code>hostPorts</code> and <code>nodePort</code>.</p>
| acid_fuji |
<p>I have created a GKE cluster on GCP.</p>
<p>Kubernetes logs from the kubectl logs command are different from /var/log/containers</p>
<p>kubectl</p>
<pre><code>{"method":"GET","path":"/healthz","format":"*/*","controller":"Public::PublicPagesController","action":"healthz","status":204,"duration":0.39,"view":0.0,"request_id":"ca29b519-d1e8-49a2-95ae-e5f23b60c36f","params":{},"custom":null,"request_time":"2022-04-27T15:25:43.780+00:00","process_id":6,"@version":"vcam-backend-vvcam-72_shareholder_event-rc16","@timestamp":"2022-04-27T15:25:43.780Z","message":"[204] GET /healthz (Public::PublicPagesController#healthz)"}
</code></pre>
<p>And in the logs in /var/log/containers, something adds a timestamp at the beginning of my container log lines:</p>
<pre><code>2022-04-27T15:25:43.780523421Z stdout F {"method":"GET","path":"/healthz","format":"*/*","controller":"Public::PublicPagesController","action":"healthz","status":204,"duration":0.39,"view":0.0,"request_id":"ca29b519-d1e8-49a2-95ae-e5f23b60c36f","params":{},"custom":null,"request_time":"2022-04-27T15:25:43.780+00:00","process_id":6,"@version":"vcam-backend-vvcam-72_shareholder_event-rc16","@timestamp":"2022-04-27T15:25:43.780Z","message":"[204] GET /healthz (Public::PublicPagesController#healthz)"}
</code></pre>
<p>I want my application logs to be consistent: I want them in JSON format like the logs from the kubectl command, so I can parse and analyze them further.</p>
<p>I want to remove this part: <code>2022-04-27T15:25:43.780523421Z stdout F</code></p>
<p>Has anybody met this problem? How can I make the container logs the same as the kubectl command logs?</p>
<p>GKE Version:</p>
<pre><code>Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10-gke.2000", GitCommit:"0823380786b063c3f71d5e7c76826a972e30550d", GitTreeState:"clean", BuildDate:"2022-03-17T09:22:22Z", GoVersion:"go1.16.14b7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Docker daemon.json</p>
<pre><code>{
"pidfile": "/var/run/docker.pid",
"iptables": false,
"ip-masq": false,
"log-level": "warn",
"bip": "169.254.123.1/24",
"mtu": 1460,
"storage-driver": "overlay2",
"live-restore": true,
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
</code></pre>
<p>Note: I notice that the timestamp at the beginning of the log line only shows when we add the <code>docker logs -t</code> option, <a href="https://docs.docker.com/engine/reference/commandline/logs/" rel="nofollow noreferrer">docs here</a>.
But I still do not know how to fix this problem in a GKE cluster.</p>
| Tho Quach | <p>This problem is related to Container Runtime Interface (CRI). You can read about CRI <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p>For now, I still cannot change the log format the way I want, so I just adapt to this new format. This format is called the <code>CRI log format</code>; the default CRI of a GKE cluster always produces logs in this format, and log aggregation applications adapt to this new CRI log format too:</p>
<ul>
<li>This is PR of <code>Grafana/Loki</code> to support CRI log format: <a href="https://github.com/grafana/loki/pull/365" rel="nofollow noreferrer">PR</a></li>
<li>Fluent-bit created a new parser for the CRI log format: <a href="https://docs.fluentbit.io/manual/v/1.8/installation/kubernetes#container-runtime-interface-cri-parser" rel="nofollow noreferrer">Docs</a></li>
</ul>
<p>So I think you need to change the way you approach this problem: if we cannot change the log format the way we want, we can use applications that support this log format (see the parser sketch below).</p>
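<p>For reference, a sketch of the CRI parser as documented for Fluent Bit, which splits the <code>time stream tag</code> prefix from the original message:</p>
<pre><code>[PARSER]
    # documented Fluent Bit parser for the CRI log format
    Name        cri
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
</code></pre>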
<p>Note: I'm not sure, but I think this problem comes from <code>Kubernetes is removing support for Docker as a container runtime</code> - <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">docs</a>, and the new container runtime produces this CRI log format.</p>
| Tho Quach |
<p>I am new to Kubernetes. I am trying to launch a Kubernetes cluster on my local Mac machine. I am using the following command to launch Kubernetes:</p>
<pre><code>minikube start --vm-driver=hyperkit
</code></pre>
<p>I am getting following error:</p>
<pre><code>/usr/local/bin/kubectl is version 1.14.7, and is incompatible with Kubernetes 1.17.0.
You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster
</code></pre>
<p>Now while executing following command: </p>
<pre><code>minikube kubectl
</code></pre>
<p>It is not doing anything, just showing basic commands along with their usages.</p>
<p>And while trying to upgrade kubetctl it is showing it is already up to date.</p>
<p>I have not found any solution for this. Any idea regarding how to fix this ?</p>
| Joy | <p>The best solution for you is to <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos" rel="nofollow noreferrer">update kubectl manually</a>. To perform this you need to download the binary: </p>
<p><a href="https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/darwin/amd64/kubectl" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/darwin/amd64/kubectl</a></p>
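<p>For example, with curl:</p>
<pre><code>$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/darwin/amd64/kubectl
</code></pre>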
<p>Change permissions of kubectl to be executable:</p>
<pre><code>$ chmod +x ./kubectl
</code></pre>
<p>And move it to /usr/local/bin/, overwriting the old one: </p>
<pre><code>$ sudo mv ./kubectl $(which kubectl)
</code></pre>
<p>To check the effects, run:</p>
<pre><code>$ kubectl version
</code></pre>
| Mark Watney |
<p>I created a kubernetes cluster 1 master and 2 worker nodes 2 months ago,
today one worker node started to fail and I don't know why. I think nothing unusual happened to my worker.</p>
<p>I used flannel and kubeadm to create the cluster and it was working very well.</p>
<p>If I describe the node:</p>
<pre><code>tommy@bxybackend:~$ kubectl describe node bxybackend-node01
Name: bxybackend-node01
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=bxybackend-node01
kubernetes.io/os=linux
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:ca:97:82:50:10"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.168.10.4
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 03 Nov 2019 09:41:48 -0600
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 11 Dec 2019 11:17:05 -0600 Wed, 11 Dec 2019 10:37:19 -0600 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 Dec 2019 11:17:05 -0600 Wed, 11 Dec 2019 10:37:19 -0600 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 Dec 2019 11:17:05 -0600 Wed, 11 Dec 2019 10:37:19 -0600 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 11 Dec 2019 11:17:05 -0600 Wed, 11 Dec 2019 10:37:19 -0600 KubeletNotReady Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Addresses:
InternalIP: 10.168.10.4
Hostname: bxybackend-node01
Capacity:
cpu: 12
ephemeral-storage: 102684600Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 14359964Ki
pods: 110
Allocatable:
cpu: 12
ephemeral-storage: 94634127204
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 14257564Ki
pods: 110
System Info:
Machine ID: 3afa24bb05994ceaaf00e7f22b9322ab
System UUID: 80951742-F69F-6487-F2F7-BE2FB7FEFBF8
Boot ID: 115fbacc-143d-4007-90e4-7fdcb5462680
Kernel Version: 4.15.0-72-generic
OS Image: Ubuntu 18.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.7
Kubelet Version: v1.17.0
Kube-Proxy Version: v1.17.0
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-amd64-sslbg 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 8m31s
kube-system kube-proxy-c5gxc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m52s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (0%) 100m (0%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SystemOOM 52m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 12170
Normal NodeHasNoDiskPressure 52m (x12 over 38d) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 52m (x12 over 38d) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasSufficientPID
Normal NodeNotReady 52m (x6 over 23d) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeNotReady
Normal NodeHasSufficientMemory 52m (x12 over 38d) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasSufficientMemory
Warning ContainerGCFailed 52m (x3 over 6d23h) kubelet, bxybackend-node01 rpc error: code = DeadlineExceeded desc = context deadline exceeded
Normal NodeReady 52m (x13 over 38d) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeReady
Normal NodeAllocatableEnforced 43m kubelet, bxybackend-node01 Updated Node Allocatable limit across pods
Warning SystemOOM 43m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 9699
Warning SystemOOM 43m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 12639
Warning SystemOOM 43m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 16194
Warning SystemOOM 43m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 19618
Warning SystemOOM 43m kubelet, bxybackend-node01 System OOM encountered, victim process: dotnet, pid: 12170
Normal Starting 43m kubelet, bxybackend-node01 Starting kubelet.
Normal NodeHasSufficientMemory 43m (x2 over 43m) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 43m (x2 over 43m) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasSufficientPID
Normal NodeNotReady 43m kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeNotReady
Normal NodeHasNoDiskPressure 43m (x2 over 43m) kubelet, bxybackend-node01 Node bxybackend-node01 status is now: NodeHasNoDiskPressure
Normal Starting 42m kubelet, bxybackend-node01 Starting kubelet.
</code></pre>
<p>If I watch syslog in the worker: </p>
<pre><code>Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.552152 19331 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.1.0/24
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.552162 19331 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.552352 19331 docker_service.go:355] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.1.0/24,},}
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.552600 19331 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.1.0/24
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.555142 19331 kubelet_node_status.go:70] Attempting to register node bxybackend-node01
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.652843 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d6b534db-c32c-491b-a665-cf1ccd6cd089-kube-proxy") pod "kube-proxy-c5gxc" (UID: "d6b534db-c32c-491b-a665-cf1ccd6cd089")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753179 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d6b534db-c32c-491b-a665-cf1ccd6cd089-xtables-lock") pod "kube-proxy-c5gxc" (UID: "d6b534db-c32c-491b-a665-cf1ccd6cd089")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753249 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/d6b534db-c32c-491b-a665-cf1ccd6cd089-lib-modules") pod "kube-proxy-c5gxc" (UID: "d6b534db-c32c-491b-a665-cf1ccd6cd089")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753285 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-ztrh4" (UniqueName: "kubernetes.io/secret/d6b534db-c32c-491b-a665-cf1ccd6cd089-kube-proxy-token-ztrh4") pod "kube-proxy-c5gxc" (UID: "d6b534db-c32c-491b-a665-cf1ccd6cd089")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753316 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/6a2299cf-63a4-4e96-8b3b-acd373de12c2-run") pod "kube-flannel-ds-amd64-sslbg" (UID: "6a2299cf-63a4-4e96-8b3b-acd373de12c2")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753342 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/6a2299cf-63a4-4e96-8b3b-acd373de12c2-cni") pod "kube-flannel-ds-amd64-sslbg" (UID: "6a2299cf-63a4-4e96-8b3b-acd373de12c2")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753461 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/6a2299cf-63a4-4e96-8b3b-acd373de12c2-flannel-cfg") pod "kube-flannel-ds-amd64-sslbg" (UID: "6a2299cf-63a4-4e96-8b3b-acd373de12c2")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753516 19331 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-ts2qt" (UniqueName: "kubernetes.io/secret/6a2299cf-63a4-4e96-8b3b-acd373de12c2-flannel-token-ts2qt") pod "kube-flannel-ds-amd64-sslbg" (UID: "6a2299cf-63a4-4e96-8b3b-acd373de12c2")
Dec 11 11:20:10 bxybackend-node01 kubelet[19331]: I1211 11:20:10.753531 19331 reconciler.go:156] Reconciler: start to sync state
Dec 11 11:20:12 bxybackend-node01 kubelet[19331]: I1211 11:20:12.052813 19331 kubelet_node_status.go:112] Node bxybackend-node01 was previously registered
Dec 11 11:20:12 bxybackend-node01 kubelet[19331]: I1211 11:20:12.052921 19331 kubelet_node_status.go:73] Successfully registered node bxybackend-node01
Dec 11 11:20:13 bxybackend-node01 kubelet[19331]: E1211 11:20:13.051159 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:16 bxybackend-node01 kubelet[19331]: E1211 11:20:16.051264 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:18 bxybackend-node01 kubelet[19331]: E1211 11:20:18.451166 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:21 bxybackend-node01 kubelet[19331]: E1211 11:20:21.251289 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:25 bxybackend-node01 kubelet[19331]: E1211 11:20:25.019276 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:46 bxybackend-node01 kubelet[19331]: E1211 11:20:46.772862 19331 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Dec 11 11:20:46 bxybackend-node01 kubelet[19331]: F1211 11:20:46.772895 19331 csi_plugin.go:281] Failed to initialize CSINodeInfo after retrying
Dec 11 11:20:46 bxybackend-node01 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Dec 11 11:20:46 bxybackend-node01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
</code></pre>
| Tommy | <p>During your kubeadm install you are supposed to run the following command to hold the kubelet, kubeadm and kubectl packages and prevent them from getting upgraded mistakenly. </p>
<pre><code>$ sudo apt-mark hold kubelet kubeadm kubectl
</code></pre>
<p>I've reproduced your scenario and what happened to your cluster is that 3 days ago a new version of Kubernetes was released (v1.17.0) and your kubelet got upgraded accidentally. </p>
<p>On the new Kubernetes some changes were made to <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#changes" rel="nofollow noreferrer">CSI</a> and that's why you have some problems on this node. </p>
<p>I suggest you drain this node, set up a new one with Kubernetes 1.16.2 and join it to your cluster. </p>
<p>To drain this node you need to run:</p>
<pre><code>$ kubectl drain bxybackend-node01 --delete-local-data --force --ignore-daemonsets
</code></pre>
<p>Optionally you can downgrade your kubelet to previous version using the following command: </p>
<pre><code>$ sudo apt-get install kubelet=1.16.2-00
</code></pre>
<p>Don't forget to mark your kubelet to prevent it from being upgraded again: </p>
<pre><code>$ sudo apt-mark hold kubelet
</code></pre>
<p>You can use the command <code>apt-mark showhold</code> to list all held packages and make sure kubelet, kubeadm and kubectl are on hold. </p>
<p><strong>To upgrade from 1.16.x to 1.17.x follow this <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">guide</a> from Kubernetes Documentation. I've validated it and it works as intended.</strong> </p>
| Mark Watney |
<p>I have got an authentication service. This service is behind an ingress (GKE in my case) for external API calls. When the signup function of the authentication service is called, it will send an email for email verification purposes. The link in this email has to point to the IP of the ingress. In order to achieve that, my authentication service has to know the IP of the ingress. How can this be configured dynamically in k8s without storing the ingress IP address in a config file?</p>
<p>Many thanks in advance
Regards</p>
| lampalork | <p>Since by default GKE allocates an <a href="https://cloud.google.com/compute/docs/ip-addresses#ephemeraladdress" rel="nofollow noreferrer">ephemeral external IP address</a>, the simplest solution is to reserve a <a href="https://cloud.google.com/compute/docs/ip-addresses#reservedaddress" rel="nofollow noreferrer">static IP address</a>. This can be done with a new one, or you can promote an existing ephemeral IP to a static one. With this solution the IP address is known in advance, but the drawback is that the IP would have to be hardcoded into the application.</p>
<p>To avoid hardcoding this you could use nslookup to find the IP address for that specific host. For this you should update your DNS records with an address (A) record pointing to your reserved static IP address. Please refer to your DNS service's documentation on setting DNS A records to <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#configuring_your_domain_name_records" rel="nofollow noreferrer">configure your domain name</a>.</p>
<p>For more reading check <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#optional_configuring_a_static_ip_address" rel="nofollow noreferrer">how to configure static ip address</a>.</p>
<hr />
<p>An alternative way would also be to <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api" rel="nofollow noreferrer">access the Kubernetes REST API directly</a> and fetch the IP address from there. Depending on your architecture and application design, this will require appropriate authentication towards the API.</p>
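<p>For example, if the pod's service account is allowed to read Ingress objects, something along these lines returns the address (the ingress name here is an assumption):</p>
<pre><code>kubectl get ingress my-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>
<p>The same status field can be read from inside a pod through the REST API or a client library instead of kubectl.</p>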
| acid_fuji |
<p>It seems to listen on 80 by default - sensible - but if I wanted it to listen for requests on (for example) 8000, how would I specify this?</p>
<p>For clarity, this is via the nginx controller enabled via <code>minikube addons enable ingress</code>.</p>
| user1381745 | <blockquote>
<p>Ingress exposes HTTP and HTTPS routes from outside the cluster to
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a>
within the cluster.</p>
</blockquote>
<p>It means that it'll use the default ports for HTTP and HTTPS. </p>
<p>From the documentation we can read: </p>
<blockquote>
<p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Service.Type=NodePort</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Service.Type=LoadBalancer</a>.</p>
</blockquote>
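<p>As a sketch of that approach, a NodePort Service for a hypothetical app listening on 8000 could look like this (note that the node port itself must fall in the default 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8000          # port of the Service inside the cluster
    targetPort: 8000    # container port
    nodePort: 30080     # externally reachable on <node-ip>:30080
</code></pre>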
| Mark Watney |
<p>I have the following:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: SomeServiceAccount
</code></pre>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: SomeClusterRole
rules:
- apiGroups:
- "myapi.com"
resources:
- 'myapi-resources'
verbs:
- '*'
</code></pre>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: SomeClusterRoleBinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: SomeClusterRole
subjects:
- kind: ServiceAccount
name: SomeServiceAccount
</code></pre>
<p>But it throws:
<code>The ClusterRoleBinding "SomeClusterRoleBinding" is invalid: subjects[0].namespace: Required value</code></p>
<p>I thought the whole point of <code>"Cluster"RoleBinding</code> is that it's not limited to a single namespace. Can anyone explain this?</p>
<p>Kubernetes version <code>1.13.12</code>
Kubectl version <code>v1.16.2</code>
Thanks.</p>
| fardin | <p>You are not required to set a namespace while creating a ServiceAccount; the case here is that you are required to specify the namespace of your ServiceAccount when you refer to it while creating a ClusterRoleBinding to select it. </p>
<blockquote>
<p>ServiceAccounts are namespace scoped subjects, so when you refer to
them, you have to specify the namespace of the service account you
want to bind. <a href="https://github.com/kubernetes/kubernetes/issues/29177#issuecomment-240712588" rel="noreferrer">Source</a></p>
</blockquote>
<p>In your case you can, for example, just use the <code>default</code> namespace for the subject while creating your ClusterRoleBinding, as shown below. </p>
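<p>Applied to your manifest, that would look like this (assuming the ServiceAccount lives in the <code>default</code> namespace):</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: SomeClusterRoleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: SomeClusterRole
subjects:
- kind: ServiceAccount
  name: SomeServiceAccount
  namespace: default
</code></pre>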
<p>By doing this you are not tying your ClusterRoleBinding to any namespace, as you can see in this real-world example: </p>
<pre><code>$ kubectl get clusterrolebinding.rbac.authorization.k8s.io/tiller -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"tiller"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"kind":"ServiceAccount","name":"tiller","namespace":"kube-system"}]}
creationTimestamp: "2019-11-18T13:47:59Z"
name: tiller
resourceVersion: "66715"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller
uid: 085ed826-0a0a-11ea-a665-42010a8000f7
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
</code></pre>
| Mark Watney |
<p>Over the last few hours, I have been trying to create a Kubernetes cluster in GCP, but could not successfully create it. The error says "Unable to create the cluster". For some time, I tried using the web console and later switched to using gcloud. But nothing worked.</p>
<p>Can someone help me here? What is the mistake I am committing?</p>
<p>Regards
Raj</p>
| Rajasekar | <p>There was an outage: <a href="https://status.cloud.google.com/incidents/sqeWSRmcrJZyE2zSrJ74" rel="nofollow noreferrer">https://status.cloud.google.com/incidents/sqeWSRmcrJZyE2zSrJ74</a></p>
<p>It should now be resolved for most users. If you are still having trouble, it's recommended to get in touch with the support team.</p>
| MrPsycho |
<p>When I launch the <code>SparkPi</code> example on a self-hosted kubernetes cluster, the executor pods are quickly created -> have an error status -> are deleted -> are replaced by new executor pods.</p>
<p>I tried the same command on Google Kubernetes Engine with success. I checked the RBAC <code>rolebinding</code> to make sure that the service account has the right to create pods.</p>
<p>Guessing when the next executor pod will be ready, I can see using <code>kubectl describe pod <predicted_executor_pod_with_number></code> that the pod is actually created:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1s default-scheduler Successfully assigned default/examplepi-1563878435019-exec-145 to slave-node04
Normal Pulling 0s kubelet, slave-node04 Pulling image "myregistry:5000/imagery:c5b8e0e64cc98284fc4627e838950c34ccb22676.5"
Normal Pulled 0s kubelet, slave-node04 Successfully pulled image "myregistry:5000/imagery:c5b8e0e64cc98284fc4627e838950c34ccb22676.5"
Normal Created 0s kubelet, slave-node04 Created container executor
</code></pre>
<p>This is my <code>spark-submit</code> call:</p>
<pre class="lang-sh prettyprint-override"><code>/opt/spark/bin/spark-submit \
--master k8s://https://mycustomk8scluster:6443 \
--name examplepi \
--deploy-mode cluster \
--driver-memory 2G \
--executor-memory 2G \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///opt/spark/work-dir/log4j.properties \
--conf spark.kubernetes.container.image=myregistry:5000/imagery:c5b8e0e64cc98284fc4627e838950c34ccb22676.5 \
--conf spark.kubernetes.executor.container.image=myregistry:5000/imagery:c5b8e0e64cc98284fc4627e838950c34ccb22676.5 \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.driver.pod.name=pi-driver \
--conf spark.driver.allowMultipleContexts=true \
--conf spark.kubernetes.local.dirs.tmpfs=true \
--class com.olameter.sdi.imagery.IngestFromGrpc \
--class org.apache.spark.examples.SparkPi \
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar 100
</code></pre>
<p>I expect the required executors (2) to be created. If the driver script cannot create them, I would at least expect some log to be able to diagnose the issue.</p>
| Jean-Denis Giguère | <p>The issue was related to the Hadoop + Spark integration. I was using the Spark binary without Hadoop (<code>spark-2.4.3-bin-without-hadoop.tgz</code>) + Hadoop 3.1.2. The configuration using environment variables seemed to be problematic for the Spark executor.</p>
<p>I compiled Spark with Hadoop 3.1.2 to solve this issue. See: <a href="https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn</a>.</p>
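<p>In case it helps, a build command following the pattern from the linked documentation looks roughly like this (the exact profiles and flags depend on your Spark version, so verify them against the build docs):</p>
<pre><code># build Spark against Hadoop 3.1.2 with Kubernetes support enabled
./build/mvn -Pkubernetes -Dhadoop.version=3.1.2 -DskipTests clean package
</code></pre>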
| Jean-Denis Giguère |
<p>For AWS cloud, I can create a Kubernetes ingress yaml containing</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":
{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/certificate-arn: <<<my-cert-arn>>>
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/shield-advanced-protection: "true"
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
alb.ingress.kubernetes.io/tags: environment=prod,client=templar-order,name=templar-prod-app
alb.ingress.kubernetes.io/target-type: ip
</code></pre>
<p>and the <code>tags</code> come through in the AWS console, but the load balancer name is not set.</p>
<p>I've <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/" rel="nofollow noreferrer">read the docs</a>. What annotation can I use to set the load balancer name, here:</p>
<p><a href="https://i.stack.imgur.com/mudia.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mudia.png" alt="screenshot of AWS console for LBs" /></a></p>
| New Alexandria | <p>Unfortunately this feature is not yet supported, so you can't change the LB name using an annotation.</p>
<p>The name is being generated <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/master/internal/alb/generator/name.go#L23" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-golang prettyprint-override"><code> func (gen *NameGenerator) NameLB(namespace string, ingressName string) string {
.....
}
</code></pre>
<p>However, there is a <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1483" rel="nofollow noreferrer">feature request</a> on GitHub which looks promising. You might want to follow that issue for updates.</p>
| acid_fuji |