Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I've been trying to create an EKS cluster with the vpc-cni addon because of the pod limit on m5.xlarge instances (57 pods). After creation I can see the setting is passed to the launch template object, but when describing a node it still reports the previous (wrong?) allocatable pod count.</p>
<p>ClusterConfig:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: exchange-develop
region: us-east-1
version: '1.21'
managedNodeGroups:
- name: default
labels:
worker: default
instanceType: m5.xlarge
desiredCapacity: 2
minSize: 2
maxSize: 4
tags:
'k8s.io/cluster-autoscaler/enabled': 'true'
'k8s.io/cluster-autoscaler/exchange-develop': 'owned'
iam:
attachPolicyARNs:
- arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
- arn:aws:iam::658464581062:policy/eks-csi-driver-policy
- arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
- arn:aws:iam::658464581062:policy/ALBIngressControllerIAMPolicy
- arn:aws:iam::658464581062:policy/ExternalDNSPlicy
- arn:aws:iam::658464581062:policy/eks-cluster-autoscaler
maxPodsPerNode: 110
availabilityZones: ['us-east-1c', 'us-east-1d']
iam:
withOIDC: true
vpc:
cidr: 10.10.0.0/16
#autoAllocateIPv6: true
# disable public access to endpoint and only allow private access
clusterEndpoints:
publicAccess: true
privateAccess: true
addons:
- name: vpc-cni
version: '1.10.1'
</code></pre>
<p>Launch template with redacted data:</p>
<pre><code>MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=***
--
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/sh
set -ex
sed -i -E "s/^USE_MAX_PODS=\"\\$\{USE_MAX_PODS:-true}\"/USE_MAX_PODS=false/" /etc/eks/bootstrap.sh
KUBELET_CONFIG=/etc/kubernetes/kubelet/kubelet-config.json
echo "$(jq ".maxPods=110" $KUBELET_CONFIG)" > $KUBELET_CONFIG
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
set -ex
B64_CLUSTER_CA=<>
API_SERVER_URL=<>
K8S_CLUSTER_DNS_IP=<>
/etc/eks/bootstrap.sh exchange-develop --kubelet-extra-args '--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/cluster-name=exchange-develop,alpha.eksctl.io/nodegroup-name=default,eks.amazonaws.com/nodegroup-image=ami-00836a7940260f6dd,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=default,worker=default,eks.amazonaws.com/sourceLaunchTemplateId=lt-0037c1eab7037898d --max-pods=58' --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL --dns-cluster-ip $K8S_CLUSTER_DNS_IP --use-max-pods false
</code></pre>
<p>Node description:</p>
<pre><code>Name: ip-10-10-19-34.ec2.internal
Roles: <none>
Labels: alpha.eksctl.io/cluster-name=exchange-develop
alpha.eksctl.io/nodegroup-name=default
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=m5.xlarge
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=default
eks.amazonaws.com/nodegroup-image=ami-00836a7940260f6dd
eks.amazonaws.com/sourceLaunchTemplateId=lt-0037c1eab7037898d
eks.amazonaws.com/sourceLaunchTemplateVersion=1
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
kubernetes.io/arch=amd64
kubernetes.io/hostname=<<
kubernetes.io/os=linux
node.kubernetes.io/instance-type=m5.xlarge
topology.kubernetes.io/region=us-east-1
topology.kubernetes.io/zone=us-east-1c
worker=default
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Dec 2021 10:22:20 -0300
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Dec 2021 11:18:31 -0300 Thu, 02 Dec 2021 10:22:18 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Dec 2021 11:18:31 -0300 Thu, 02 Dec 2021 10:22:18 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Dec 2021 11:18:31 -0300 Thu, 02 Dec 2021 10:22:18 -0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Dec 2021 11:18:31 -0300 Thu, 02 Dec 2021 10:22:40 -0300 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.10.19.34
ExternalIP: <<
Hostname: <<
InternalDNS: <<
ExternalDNS: <<
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 4
ephemeral-storage: 83873772Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15921236Ki
pods: 58
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 3920m
ephemeral-storage: 76224326324
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 14904404Ki
pods: 58
System Info:
Machine ID: ec28ac2717ec395cdf5b4e37f7672569
System UUID: ec28ac27-17ec-395c-df5b-4e37f7672569
Boot ID: 50b3d3d9-5dfa-40b6-99c8-20873632c7ca
Kernel Version: 5.4.156-83.273.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.5-eks-bc4871b
Kube-Proxy Version: v1.21.5-eks-bc4871b
ProviderID: aws:///<<<
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system aws-node-9z7pw 25m (0%) 0 (0%) 0 (0%) 0 (0%) 61m
kube-system kube-proxy-2slc8 100m (2%) 0 (0%) 0 (0%) 0 (0%) 61m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 125m (3%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>
</code></pre>
<p>See allocatable pods 58...</p>
<p>So, what is the correct way of using eksctl to create a cluster with vpc-cni and pass the maxPodsPerNode argument to ec2 launch template?</p>
<p>EDIT:</p>
<p>Other things I've tried:</p>
<ul>
<li><p>Created the cluster from scratch with the vpc-cni addon and 2 managedNodeGroups, one with maxPodsPerNode and one without; both take the value of 58 and not 110</p>
</li>
<li><p>Add another nodegroup with eksctl create nodegroup, still 58</p>
</li>
<li><p>Add another nodegroup with EKS AWS UI, still 58</p>
</li>
</ul>
<p>Eksctl version 0.75.0
Kubectl version 1.21.2</p>
| paltaa | <p>For a managedNodeGroup you need to specify the AMI ID (which also requires an <code>overrideBootstrapCommand</code>, as shown below):</p>
<p><code>aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id --region us-east-1 --query "Parameter.Value" --output text</code></p>
<pre><code>managedNodeGroups:
- name: default
...
maxPodsPerNode: 110
ami: ami-00836a7940260f6dd
overrideBootstrapCommand: |
#!/bin/bash
/etc/eks/bootstrap.sh exchange-develop --kubelet-extra-args '--node-labels=eks.amazonaws.com/nodegroup=default,eks.amazonaws.com/nodegroup-image=ami-00836a7940260f6dd'
</code></pre>
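<p>After recreating the nodegroup with that config, a quick way to confirm the new limit actually landed on the nodes (a small sketch using kubectl only) is:</p>
<pre><code># list each node with its allocatable pod count; it should now report 110
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.pods}{"\n"}{end}'
</code></pre>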
| gohm'c |
<p>I don't have code for this as I am trying to understand it theoretically.</p>
<p><strong>Current state:</strong>
a PV and PVC get dynamically created by a Helm chart. This PV and PVC use the default storage class with a Delete reclaim policy.</p>
<p><strong>Future state:</strong>
I want to attach a new PVC with a different storage class (with a Retain policy) to the existing PV and convert that PV to the Retain policy.</p>
<p>Is this possible?</p>
| shan | <p>Your question isn't entirely clear: are you trying to attach another PVC to an existing PV?
If so, that is not possible.</p>
<p>If you want to unclaim the previous PVC and claim the PV with a new PVC, that is also not possible, unless the PV is using the <strong>Recycle</strong> policy.</p>
<p>In any case, if you remove a PVC while the PV's reclaim policy is Delete, the PV will remove itself; if you change it to Retain, the PV will not be deleted, but it also won't be automatically claimable by a new PVC.</p>
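<p>For the part of the question about converting the existing PV to Retain: the reclaim policy itself is mutable and can be patched in place, for example (<code><pv-name></code> is a placeholder):</p>
<pre><code># switch an existing PV from Delete to Retain so it survives deletion of its PVC
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>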
| Daniel Karapishchenko |
<p>I'm having trouble getting my Kube-registry up and running on cephfs. I'm using rook to set this cluster up. As you can see, I'm having trouble attaching the volume. Any idea what would be causing this issue? any help is appreciated.</p>
<p><strong>kube-registry.yaml</strong></p>
<pre><code> apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cephfs-pvc
namespace: kube-system
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
spec:
replicas: 3
selector:
matchLabels:
k8s-app: kube-registry
template:
metadata:
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2
imagePullPolicy: Always
resources:
limits:
cpu: 100m
memory: 100Mi
env:
# Configuration reference: https://docs.docker.com/registry/configuration/
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_HTTP_SECRET
value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
livenessProbe:
httpGet:
path: /
port: registry
readinessProbe:
httpGet:
path: /
port: registry
volumes:
- name: image-store
persistentVolumeClaim:
claimName: cephfs-pvc
readOnly: false
</code></pre>
<p><strong>StorageClass.yaml</strong></p>
<pre><code> apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
# clusterID is the namespace where operator is deployed.
clusterID: rook-ceph
# CephFS filesystem name into which the volume shall be created
fsName: myfs
# Ceph pool into which the volume shall be created
# Required for provisionVolume: "true"
pool: myfs-data0
# Root path of an existing CephFS volume
# Required for provisionVolume: "false"
# rootPath: /absolute/path
# The secrets contain Ceph admin credentials. These are generated automatically by the operator
# in the same namespace as the cluster.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Deletea
</code></pre>
<p><strong>kubectl describe pods --namespace=kube-system kube-registry-58659ff99b-j2b4d</strong></p>
<pre><code> Name: kube-registry-58659ff99b-j2b4d
Namespace: kube-system
Priority: 0
Node: minikube/192.168.99.212
Start Time: Wed, 25 Nov 2020 13:19:35 -0500
Labels: k8s-app=kube-registry
kubernetes.io/cluster-service=true
pod-template-hash=58659ff99b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/kube-registry-58659ff99b
Containers:
registry:
Container ID:
Image: registry:2
Image ID:
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Liveness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
REGISTRY_HTTP_ADDR: :5000
REGISTRY_HTTP_SECRET: Ple4seCh4ngeThisN0tAVerySecretV4lue
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
Mounts:
/var/lib/registry from image-store (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nw4th (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
image-store:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cephfs-pvc
ReadOnly: false
default-token-nw4th:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nw4th
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 13m (x3 over 13m) default-scheduler running "VolumeBinding" filter plugin for pod "kube-registry-58659ff99b-j2b4d": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 13m default-scheduler Successfully assigned kube-system/kube-registry-58659ff99b-j2b4d to minikube
Warning FailedMount 2m6s (x5 over 11m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[image-store], unattached volumes=[image-store default-token-nw4th]: timed out waiting for the condition
Warning FailedAttachVolume 59s (x6 over 11m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-6eeff481-eb0a-4269-84c7-e744c9d639d9" : attachdetachment timeout for volume 0001-0009-rook-c
</code></pre>
<p><strong>ceph provisioner logs, I restarted my cluster so the name will be different but output is the same</strong></p>
<pre><code> I1127 18:27:19.370543 1 csi-provisioner.go:121] Version: v2.0.0
I1127 18:27:19.370948 1 csi-provisioner.go:135] Building kube configs for running in cluster...
I1127 18:27:19.429190 1 connection.go:153] Connecting to unix:///csi/csi-provisioner.sock
I1127 18:27:21.561133 1 common.go:111] Probing CSI driver for readiness
W1127 18:27:21.905396 1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
I1127 18:27:22.060963 1 leaderelection.go:243] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
I1127 18:27:22.122303 1 leaderelection.go:253] successfully acquired lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1127 18:27:22.323990 1 controller.go:820] Starting provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 18:27:22.324061 1 clone_controller.go:66] Starting CloningProtection controller
I1127 18:27:22.324205 1 clone_controller.go:84] Started CloningProtection controller
I1127 18:27:22.325240 1 volume_store.go:97] Starting save volume queue
I1127 18:27:22.426790 1 controller.go:869] Started provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 19:08:39.850493 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:39.851034 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"
I1127 19:08:43.670226 1 controller.go:1420] provision "kube-system/cephfs-pvc" class "rook-cephfs": volume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" provisioned
I1127 19:08:43.670262 1 controller.go:1437] provision "kube-system/cephfs-pvc" class "rook-cephfs": succeeded
E1127 19:08:43.692108 1 controller.go:1443] couldn't create key for object pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06: object has no meta: object does not implement the Object interfaces
I1127 19:08:43.692189 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:43.692205 1 controller.go:1326] provision "kube-system/cephfs-pvc" class "rook-cephfs": persistentvolume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" already exists, skipping
I1127 19:08:43.692220 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned
</code></pre>
| Xcer | <p>In the pasted YAML for your StorageClass, you have:</p>
<pre><code>reclaimPolicy: Deletea
</code></pre>
<p>Was that a paste issue? Regardless, this is likely what is causing your problem.</p>
<p>I just had this exact problem with some of my Ceph RBD volumes, and the reason for it was that I was using a StorageClass that had</p>
<pre><code>reclaimPolicy: Delete
</code></pre>
<p>However, the <code>cephcsi</code> driver was not configured to support it (and I don't think it actually supports it either).</p>
<p>Using a <code>StorageClass</code> with</p>
<pre><code>reclaimPolicy: Retain
</code></pre>
<p>fixed the issue.</p>
<p>To check this on your cluster, run the following:</p>
<pre><code>$ kubectl get sc rook-cephfs -o yaml
</code></pre>
<p>And look for the line that starts with <code>reclaimPolicy: </code></p>
<p>Then, look at the <code>csidriver</code> your StorageClass is using. In your case it is <code>rook-ceph.cephfs.csi.ceph.com</code></p>
<pre><code>$ kubectl get csidriver rook-ceph.cephfs.csi.ceph.com -o yaml
</code></pre>
<p>And look for the entries under <code>volumeLifecycleModes</code></p>
<pre><code>apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
creationTimestamp: "2020-11-16T22:18:55Z"
name: rook-ceph.cephfs.csi.ceph.com
resourceVersion: "29863971"
selfLink: /apis/storage.k8s.io/v1beta1/csidrivers/rook-ceph.cephfs.csi.ceph.com
uid: a9651d30-935d-4a7d-a7c9-53d5bc90c28c
spec:
attachRequired: true
podInfoOnMount: false
volumeLifecycleModes:
- Persistent
</code></pre>
<p>If the only entry under <code>volumeLifecycleModes</code> is <code>Persistent</code>, then your driver is not configured to support <code>reclaimPolicy: Delete</code>.</p>
<p>If instead you see</p>
<pre><code>volumeLifecycleModes:
- Persistent
- Ephemeral
</code></pre>
<p>Then your driver should support <code>reclaimPolicy: Delete</code></p>
| Matt Wilder |
<p>I have installed a Kubernetes 1.20 cluster and I am trying to install the nginx ingress controller, but according to the Kubernetes docs</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>it is stable for v1.19. Does it work with version 1.20?</p>
| Pedram Ezzati | <p>Yes, it works with k8s version 1.19+ using API version networking.k8s.io/v1; Ingress is a stable feature from Kubernetes version 1.19 onward.</p>
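<p>For reference, a minimal Ingress against the stable <code>networking.k8s.io/v1</code> API (names and host below are placeholders) looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # matches the installed nginx ingress controller
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
</code></pre>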
| Pulak Kanti Bhowmick |
<p>I've installed minikube on VirtualBox on a Windows 10 Home machine.</p>
<p>I am trying to run command: <code>minikube tunnel</code> but I get an error:</p>
<pre><code>Status:
machine: minikube
pid: 10896
route: 10.96.0.0/12 -> 192.168.99.101
minikube: Running
services: []
errors:
minikube: no errors
router: error adding route: Error en la adición de la ruta: El objeto ya existe.
, 3
loadbalancer emulator: no errors
</code></pre>
<p>This is the error message (translated):</p>
<pre><code>Error in route addition: Object exists already.
</code></pre>
<p>I would like to know why I get an error in the router section.</p>
<p>Thanks in advance</p>
| Cesar Miguel | <p>Solution: this worked for me.</p>
<p>Run <code>minikube tunnel</code> in PowerShell, running PowerShell as administrator:</p>
<pre><code>PS C:\Users\QL752LU> minikube tunnel
Status:
machine: minikube
pid: 9272
route: 10.96.0.0/12 -> 192.168.59.100
minikube: Running
services: [dockerml]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Status:
machine: minikube
pid: 9272
route: 10.96.0.0/12 -> 192.168.59.100
minikube: Running
services: [dockerml]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Status:
</code></pre>
| sainathpawar |
<p>I see a lot of posts in Stackoverflow relating to this subject, but I think they're not exactly the same.</p>
<p>Currently we have AWS ALBs with HTTPS listeners with multiple rules, and each listener rule is a <code>/path/*</code>. I know how to model that with Ingress Controller and Ingress objects now.</p>
<p>However, our ALBs have 2 certificate ARNs to serve two different domains. I know the Ingress has this annotation now.</p>
<pre><code>alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
</code></pre>
<p>But how do I add another certificate?</p>
| Chris F | <p>You can refer to <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/#certificate-arn" rel="nofollow noreferrer">this</a> syntax mentioned under multiple certificates, which works fine with Helm.</p>
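<p>Based on that documentation, the same annotation accepts a comma-separated list, so a second certificate can be added like this (the ARNs below are placeholders):</p>
<pre><code>alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/cert-for-domain-one,arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/cert-for-domain-two
</code></pre>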
| amitd |
<p><code>Kubectl</code> allows you to create ad hoc jobs based on existing crons.</p>
<p>This works great but in the documentation there is no specification for passing arguments upon creation of the job.</p>
<p>Example:</p>
<pre><code>kubectl -n my-namespace create job --from=cronjob/myjob my-job-clone
</code></pre>
<p>Is there any way I can pass arguements to this job upon creation?</p>
| Nicholas Porter | <p>Although <code>kubectl</code> currently does not allow you to use the --from flag and specify a command in the same clause, you can work around this limitation by getting the yaml from a dry run and using <code>yq</code> to apply a patch to it.</p>
<p>For example:</p>
<pre><code># get the original yaml file
kubectl create job myjob --from cronjob/mycronjob --dry-run=client --output yaml > original.yaml
# generate a patch with your new arguments
yq new 'spec.template.spec.containers[0].args[+]' '{INSERT NEW ARGS HERE}' > patch.yaml
# apply the patch
yq merge --arrays update patch.yaml original.yaml > final.yaml
# create job from the final yaml
kubectl create -f final.yaml
</code></pre>
| Anthony Pan |
<p>I am trying to migrate a PostgreSQL database from a performance NAS service to a cheaper general NAS service. Now I want to create a new PV and PVC and make my Kubernetes StatefulSet bind to the new PVC. I tried to edit the PVC binding in the StatefulSet but it gives me this error:</p>
<pre><code>The StatefulSet "reddwarf-postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
</code></pre>
<p>So is it impossible to change the legacy StatefulSet to bind to a new PVC? Do I have to create a new StatefulSet and delete the legacy PostgreSQL StatefulSet? What is the smoothest way to migrate the StatefulSet storage? I have already copied all the data and file structure from the legacy performance NAS service to the new NAS service.</p>
| Dolphin | <p>Most fields become immutable once deployed; for immutable fields you can only delete and re-deploy. But this is not necessarily a bad thing in your case. You can leverage the fact that when you delete a StatefulSet, its PVC/PV are not automatically deleted. So you can create a new StatefulSet backed by new PVC/PV using your new storage, then move the backup data onto these newly created volumes. Then you delete the StatefulSet and update it with the command for your PostgreSQL to run. Finally you re-deploy your StatefulSet and it will reuse the populated PVC/PV.</p>
<p>Another common approach is to write <code>initContainers</code> that check whether your pod is fresh and populate the volume with backup data if needed. You need to make sure your restore script is idempotent in this case.</p>
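<p>For the second approach, a rough sketch of such an <code>initContainers</code> block (the image, paths and volume names are assumptions, adjust to your chart) could look like this inside the StatefulSet pod template:</p>
<pre><code>initContainers:
- name: restore-if-empty
  image: busybox
  # idempotent: only copy the backup in while the data directory is still empty
  command:
  - sh
  - -c
  - |
    if [ -z "$(ls -A /var/lib/postgresql/data)" ]; then
      cp -a /backup/. /var/lib/postgresql/data/
    else
      echo "data already present, skipping restore"
    fi
  volumeMounts:
  - name: data          # the PVC backed by the new storage class
    mountPath: /var/lib/postgresql/data
  - name: backup        # wherever the copied NAS backup is mounted
    mountPath: /backup
    readOnly: true
</code></pre>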
| gohm'c |
<p>I have a Kubernetes pod running Laravel but I get the following permission error on load:</p>
<p><a href="https://i.stack.imgur.com/zAidq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zAidq.png" alt="Permission denied" /></a></p>
<p>If I access the pod interactively and run chmod on /storage:</p>
<p><a href="https://i.stack.imgur.com/bidFk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bidFk.png" alt="chmod on storage" /></a></p>
<p>The laravel app works. How can I get this command to run on deployment? I've tried the following but I get a 502 nginx error:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
replicas: 1
selector:
matchLabels:
container: app
template:
metadata:
labels:
container: app
spec:
containers:
- name: app
image: my/toolkit-app:test
command: ["chmod -R 777 /storage"]
securityContext:
runAsUser: 0
ports:
- containerPort: 80
imagePullSecrets:
- name: my-cred
</code></pre>
| Lee | <p>You can use a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="noreferrer">PostStart</a> Container hook.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
replicas: 1
selector:
matchLabels:
container: app
template:
metadata:
labels:
container: app
spec:
containers:
- name: app
image: my/toolkit-app:test
securityContext:
runAsUser: 0
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "chmod -R 777 /storage"]
imagePullSecrets:
- name: my-cred
</code></pre>
<p>One thing to consider:</p>
<blockquote>
<p>This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.</p>
</blockquote>
| Daniel Karapishchenko |
<p>database-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgres-db
spec:
replicas:
selector:
matchLabels:
app: postgres-db
template:
metadata:
labels:
app: postgres-db
spec:
containers:
- name: postgres-db
image: postgres:latest
ports:
- protocol: TCP
containerPort: 1234
env:
- name: POSTGRES_DB
value: "classroom"
- name: POSTGRES_USER
value: temp
- name: POSTGRES_PASSWORD
value: temp
</code></pre>
<p>database-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database-service
spec:
selector:
app: postgres-db
ports:
- protocol: TCP
port: 1234
targetPort: 1234
</code></pre>
<p>I want to use this database-service URL in another deployment, so I tried to add it to a ConfigMap</p>
<p>my-configMap.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: classroom-configmap
data:
database_url: database-service
</code></pre>
<p>[Not working] Expected: <code>database_url: database-service</code> (to be replaced with the corresponding service URL)</p>
<p><code>ERROR - Driver org.postgresql.Driver claims to not accept jdbcUrl, database-service</code></p>
<pre><code>$ kubectl describe configmaps classroom-configmap
</code></pre>
<p>Output :</p>
<pre><code>Name: classroom-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
database_url:
----
database-service
BinaryData
====
Events: <none>
</code></pre>
| Shiru99 | <p>According to the error you are having:</p>
<p><code>Driver org.postgresql.Driver claims to not accept jdbcUrl</code></p>
<p>It seems that there are a few issues with that URL, and the latest PSQL driver may complain.</p>
<ol>
<li><code>jdbc:postgres:</code> isn't right, use <code>jdbc:postgresql:</code> instead</li>
<li>Do not embed credentials as <code>jdbc:postgresql://<username>:<password>@...</code>; use parameters instead: <code>jdbc:postgresql://<host>:<port>/<dbname>?user=<username>&password=<password></code></li>
<li>In some cases you have to force an SSL connection by adding the <code>sslmode=require</code> parameter</li>
</ol>
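<p>Putting that together with the Service from the question, the ConfigMap could carry the full JDBC URL instead of just the service name (a sketch, assuming everything runs in the <code>default</code> namespace):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: classroom-configmap
data:
  # host is the Service DNS name, port 1234 matches database-service
  database_url: jdbc:postgresql://database-service.default.svc.cluster.local:1234/classroom?user=temp&password=temp
</code></pre>
<p>Credentials would normally live in a Secret rather than a ConfigMap; the point here is only the shape of the URL the driver expects.</p>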
| Pepe T. |
<p>I have created a cluster on GCE and I am trying to register it in the GKE console.</p>
<p>I have created a service account with the roles:</p>
<p><strong>roles/owner,
roles/editor,
roles/gkehub.connect</strong></p>
<p>But, when I try to register my remote-cluster on the GKE console, I am getting the error below. Could someone help me to get out of this?</p>
<p><em>gcloud container hub memberships register remote-cluster --context=remote-cluster --service-account-key-file=./workdir/gkehub-7c3ea7087141.json</em></p>
<p><strong>ERROR: (gcloud.container.hub.memberships.register) Failed to check if the user is a cluster-admin: Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)</strong></p>
<p>Thanks in advance! </p>
| madhu | <p>Installing this SDK plugin solved my problem:</p>
<pre><code>sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin
</code></pre>
| Lingqing Xu |
<p>In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels? If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full. If that's true, why does K8s want us to specify the same labels twice? Either I'm missing something or this violates the DRY principle.</p>
<p>The only thing I can think of would be creating a Deployment with matchLabels "key: A" & "key: B" that simultaneously puts existing/un-owned pods that have label "key: A" into the Deployment while at the same time any new pods get label "key: B". But even then, it feels like any label in the template metadata should automatically be in the selector matchLabels.</p>
<p>K8s docs give the following example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
| Marc Swingler | <p><code>...In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels?</code></p>
<p>Example when doing <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments" rel="nofollow noreferrer">canary deployment</a>.</p>
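<p>A minimal sketch of that situation (names are illustrative): the selector only needs to be a <em>subset</em> of the template labels, so the pods can carry extra labels that never appear in <code>matchLabels</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary          # keeps this Deployment's pods distinct from the stable one
  template:
    metadata:
      labels:
        app: nginx            # shared label a Service can select on
        track: canary
        version: "1.16"       # extra label, not present in matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
</code></pre>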
<p><code>...If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full.</code></p>
<p>Your deployment will not proceed; it will fail with the error message <code>"selector" does not match template "labels"</code>. No pod will be created.</p>
<p><code>...it feels like any label in the template metadata should automatically be in the selector matchLabels.</code></p>
<p>Labels under template.metadata are used for many purposes and not only by the Deployment; for example, labels can be added by the CNI relating to the pod IP on the fly. Labels meant for the selector should be minimal and specific.</p>
| gohm'c |
<p>I want to check whether pods in the cluster are running as <code>privileged pods</code>, which can indicate a possible security issue, so I check for
<code>privileged: true</code></p>
<p>However, under the
<code>securityContext:</code> spec there are additional fields like</p>
<ul>
<li><code>allowPrivilegeEscalation</code></li>
<li><code>RunAsUser</code></li>
<li><code>ProcMount</code></li>
<li><code>Capabilities</code>
etc</li>
</ul>
<p>These may be risky (I'm not sure about that).</p>
<p>My question is: in case the pod is marked as <code>privileged: false</code> and the other fields are true, like in the following example, does this indicate a security issue? Can such pods perform operations on <strong>other pods</strong>, access external data, etc.?</p>
<p><strong>For example</strong>, the following configuration indicates that the pod is not privileged but has <code>allowPrivilegeEscalation: true</code>:</p>
<pre><code>securityContext:
allowPrivilegeEscalation: true
privileged: false
</code></pre>
<p>I want to know which <code>securityContext</code> combination in a pod config can <strong>control other</strong> <code>pods/processes</code> in the cluster.</p>
| PJEM | <p>The <code>securityContext</code> is more related to the container itself and to certain kinds of access to the host machine.</p>
<p><code>allowPrivilegeEscalation</code> allows a process to gain more permissions than its parent process. This is more related to setuid/setgid flags in binaries, but inside a container there is not much to worry about.</p>
<p>You can only control other containers on the host machine from inside a container if you have a <code>hostPath</code> volume, or something like that, allowing you to reach a <code>.sock</code> file such as <code>/run/crio/crio.sock</code> or the <code>docker.sock</code>. It is pretty obvious that, if you are concerned about this, allowing requests to the Docker API over the network should be disabled.</p>
<p>Of course, all of this access is ruled by DAC and MAC restrictions. This is why podman <strong>uidmap</strong> is better, because root inside the container does not have the same root ID outside the container.</p>
<p>From the Kubernetes point of view, you don't need this kind of privilege; all you need is a <code>ServiceAccount</code> and the correct RBAC permissions to control other things inside Kubernetes. A <code>ServiceAccount</code> bound to a <code>cluster-admin</code> <code>ClusterRole</code> can do anything in the API and much more, like adding ssh keys to the hosts.</p>
<p>If you are concerned about pods executing things in Kubernetes or in the host, just force the use of <code>nonRoot</code> containers, avoid indiscriminate use of <code>hostPath</code> volumes, and control your RBAC.</p>
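<p>As a concrete illustration of those restrictions (a minimal sketch; the image and UID are arbitrary):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: restricted-example
spec:
  securityContext:
    runAsNonRoot: true              # kubelet refuses to start the container as UID 0
    runAsUser: 1000
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false   # blocks setuid/setgid based escalation
      capabilities:
        drop: ["ALL"]
</code></pre>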
<p>Openshift uses a very nice restriction by default:</p>
<ul>
<li>Ensures that pods cannot run as privileged</li>
<li>Ensures that pods cannot mount host directory volumes</li>
<li>Requires that a pod is run as a user in a pre-allocated range of UIDs (openshift feature, random uid)</li>
<li>Requires that a pod is run with a pre-allocated MCS label (selinux related)</li>
</ul>
<p>I don't answer exactly what you want, because I shifted the attention to RBAC, but I hope this can give you a nice idea.</p>
| Hector Vido |
<p>When mounting glusterfs on servers where kubernetes is installed via kubespray, an error occurs:</p>
<pre><code>Mount failed. Please check the log file for more details.
[2020-12-20 11:40:42.845231] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --volfile-server=kube-pv01 --volfile-id=/replicated /mnt/replica/)
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2020-12-20 11:40:42
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8.8
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x7e)[0x7f084d99337e]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x334)[0x7f084d99cac4]
/lib/x86_64-linux-gnu/libc.so.6(+0x33060)[0x7f084bfe2060]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_ports_reserved+0x13a)[0x7f084d99d12a]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_process_reserved_ports+0x8e)[0x7f084d99d35e]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0xc09b)[0x7f08481ef09b]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(client_bind+0x9d)[0x7f08481ef48d]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0x98d3)[0x7f08481ec8d3]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_reconnect+0xc9)[0x7f084d75e0f9]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_start+0x39)[0x7f084d75e1c9]
/usr/sbin/glusterfs(glusterfs_mgmt_init+0x159)[0x5604fe77df79]
/usr/sbin/glusterfs(glusterfs_volumes_init+0x44)[0x5604fe778e94]
/usr/sbin/glusterfs(main+0x811)[0x5604fe7754b1]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f084bfcf2e1]
/usr/sbin/glusterfs(_start+0x2a)[0x5604fe7755ea]
---------
</code></pre>
<pre><code>[11:41:47] [[email protected] ~ ]# lsb_release -a
Distributor ID: Debian
Description:    Debian GNU/Linux 9.12 (stretch)
Release:        9.12
Codename:       stretch
</code></pre>
<pre><code>Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
</code></pre>
<p>On servers without kubespray it is mounted successfully.
How do I fix this error?</p>
| Александр Остапенко | <p>Solved: upgrading to Debian 10 fixed the glusterfs mount error on the servers where Kubernetes was installed via kubespray.</p>
| Александр Остапенко |
<p>I have created 3 CronJobs in Kubernetes. The format is exactly the same for every one of them except the names. These are the following specs:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: test-job-1 # for others it's test-job-2 and test-job-3
namespace: cron-test
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: test-job-1 # for others it's test-job-2 and test-job-3
image: busybox
imagePullPolicy: IfNotPresent
command:
- "/bin/sh"
- "-c"
args:
- cd database-backup && touch $(date +%Y-%m-%d:%H:%M).test-job-1 && ls -la # for others the filename includes test-job-2 and test-job-3 respectively
volumeMounts:
- mountPath: "/database-backup"
name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
volumes:
- name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
persistentVolumeClaim:
claimName: test-job-1-pvc # for others it's test-job-2-pvc and test-job-3-pvc
</code></pre>
<p>And also the following Persistent Volume Claims and Persistent Volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-job-1-pvc # for others it's test-job-2-pvc or test-job-3-pvc
namespace: cron-test
spec:
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
resources:
requests:
storage: 1Gi
volumeName: test-job-1-pv # depending on the name it's test-job-2-pv or test-job-3-pv
storageClassName: manual
volumeMode: Filesystem
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
namespace: cron-test
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/database-backup"
</code></pre>
<p>So all in all there are 3 CronJobs, 3 PersistentVolumes and 3 PersistentVolumeClaims. I can see that the PersistentVolumeClaims and PersistentVolumes are bound correctly to each other. So <code>test-job-1-pvc</code> <--> <code>test-job-1-pv</code>, <code>test-job-2-pvc</code> <--> <code>test-job-2-pv</code> and so on. Also the pods associated with each PVC are the corresponding pods created by each CronJob. For example <code>test-job-1-1609066800-95d4m</code> <--> <code>test-job-1-pvc</code> and so on. After letting the cron jobs run for a bit I create another pod with the following specs to inspect <code>test-job-1-pvc</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: data-access
namespace: cron-test
spec:
containers:
- name: data-access
image: busybox
command: ["sleep", "infinity"]
volumeMounts:
- name: data-access-volume
mountPath: /database-backup
volumes:
- name: data-access-volume
persistentVolumeClaim:
claimName: test-job-1-pvc
</code></pre>
<p>Just a simple pod that keeps running all the time. When I get inside that pod with <code>exec</code> and see inside the <code>/database-backup</code> directory I see all the files created from all the pods created by the 3 CronJobs.</p>
<p><strong>What I expected to see</strong></p>
<p>I expected to see only the files created by <code>test-job-1</code>.</p>
<p>Is this something expected to happen? And if so how can you separate the PersistentVolumes to avoid something like this?</p>
| David Prifti | <p>I suspect this is caused by the PersistentVolume definition: if you really only changed the name, all volumes are mapped to the same folder on the host.</p>
<pre><code> hostPath:
path: "/database-backup"
</code></pre>
<p>Try giving each volume a unique folder, e.g.</p>
<pre><code> hostPath:
path: "/database-backup/volume1"
</code></pre>
| timsmelik |
<p>Is it possible to obtain Kubernetes logs for a dedicated time range?</p>
<p>All I can do right now is to make a dump of roughly the last hour of logs for a single pod using the <code>kubectl logs > dump.log</code> command.</p>
<p>But for debugging reasons, it's necessary to obtain the logs for the last week. I was unable to find any way to do this with the built-in Kubernetes logs.</p>
<p>My only thought is to attach some external service like Kibana for log collection, but maybe built-in Kubernetes facilities allow this?</p>
<p>Thank you.</p>
| Valerii Kulykov | <p><code>...the last-hour log for the single pod</code></p>
<p>To retrieve the last 1 hour of logs you can do <code>kubectl logs <pod> --since=1h</code>. From the kubectl help, for more options:</p>
<blockquote>
<p>--since=0s: Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be
used.</p>
<p>--since-time='': Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.</p>
<p>--tail=-1: Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise 10, if a selector is
provided.</p>
</blockquote>
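<p>So for roughly the last week one could combine that with a redirect (the timestamp below is just an example, and the kubelet's log rotation may not keep lines that far back):</p>
<pre><code>kubectl logs <pod> --since=168h > dump.log
# or anchored to an absolute RFC3339 date
kubectl logs <pod> --since-time="2021-11-25T00:00:00Z" > dump.log
</code></pre>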
| gohm'c |
<p>I have this specific use case in which we remotely create Kubernetes clusters on a significant number of machines. When we run <code>kubeadm init</code>, at the end the join command gets printed as:</p>
<p><code>kubeadm join [IPv6-Address]:6443 --token TOKEN_VALUE --discovery-token-ca-cert-hash CERT_HASH</code></p>
<p>In order to programmatically join worker nodes we have a script that needs both the <code>TOKEN_VALUE</code> and the <code>CERT_HASH</code>.</p>
<p>Right now I'm acquiring the <code>TOKEN_VALUE</code> with the following command: <code>sudo kubeadm token list | awk 'NR == 2 {print $1}'</code>. However, I haven't found an easy way (or any way at all) to obtain the <code>CERT_HASH</code>.</p>
<p>Any help or pointer would be appreciated.</p>
| nazar | <p>For those with the same problem, there doesn't seem to be a super clean or easy way to get it. But after looking in a few places, the command that worked for me is <code>openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1</code></p>
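<p>As a side note (not part of the original answer), kubeadm can also regenerate a complete join command, hash included, which may be easier to script:</p>
<pre><code># prints a full "kubeadm join ... --token ... --discovery-token-ca-cert-hash sha256:..." line
# (note: this creates a new bootstrap token as a side effect)
sudo kubeadm token create --print-join-command
</code></pre>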
| nazar |
<p>I'm setting up an on-prem k8s cluster. For tests I use a single-node cluster on a VM set up with kubeadm.
My requirements include running an MQTT cluster (VerneMQ) in k8s with external access via Ingress (Istio).</p>
<p>Without deploying ingress, I can connect (mosquitto_sub) via NodePort or LoadBalancer service.</p>
<p>Istio was installed using <code>istioctl install --set profile=demo</code></p>
<h2>The problem</h2>
<p>I am trying to access the VerneMQ broker from outside the cluster. An Ingress (Istio Gateway) seems like the perfect solution in this case, but I can't establish a TCP connection to the broker (neither via the ingress IP, nor directly via the svc/vernemq IP).</p>
<p>So, how do I establish this TCP connection from an external client through the Istio ingress?</p>
<h2>What I tried</h2>
<p>I've created two namespaces:</p>
<ul>
<li>exposed-with-istio – with istio proxy injection</li>
<li>exposed-with-loadbalancer - without istio proxy</li>
</ul>
<p>Within <code>exposed-with-loadbalancer</code> namespace I deployed vernemq with LoadBalancer Service. It works, this is how I know VerneMQ can be accessed (with <code>mosquitto_sub -h <host> -p 1883 -t hello</code>, host is ClusterIP or ExternalIP of the svc/vernemq). Dashboard is accessible at host:8888/status, 'Clients online' increments on the dashboard.</p>
<p>Within <code>exposed-with-istio</code> I deployed vernemq with a ClusterIP Service, an Istio Gateway and a VirtualService.
Immediately after istio proxy injection, mosquitto_sub can't subscribe through the svc/vernemq IP, nor through the istio ingress (gateway) IP. The command just hangs forever, constantly retrying.
Meanwhile the vernemq dashboard endpoint is accessible through both the service IP and the istio gateway.</p>
<p>I guess istio proxy must be configured for mqtt to work.</p>
<p>Here is istio-ingressgateway service:</p>
<p><code>kubectl describe svc/istio-ingressgateway -n istio-system</code></p>
<pre><code>Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=installed-state
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.7.0
release=istio
Annotations: Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: 10.100.213.45
LoadBalancer Ingress: 192.168.100.240
Port: status-port 15021/TCP
TargetPort: 15021/TCP
Port: http2 80/TCP
TargetPort: 8080/TCP
Port: https 443/TCP
TargetPort: 8443/TCP
Port: tcp 31400/TCP
TargetPort: 31400/TCP
Port: tls 15443/TCP
TargetPort: 15443/TCP
Session Affinity: None
External Traffic Policy: Cluster
...
</code></pre>
<p>Here are debug logs from istio-proxy
<code>kubectl logs svc/vernemq -n test istio-proxy</code></p>
<pre><code>2020-08-24T07:57:52.294477Z debug envoy filter original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug envoy filter tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug envoy filter http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug envoy filter [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug envoy pool creating a new connection
2020-08-24T07:57:52.294671Z debug envoy pool [C5646] connecting
2020-08-24T07:57:52.294684Z debug envoy connection [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug envoy connection [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug envoy pool queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug envoy conn_handler [C5645] new connection
2020-08-24T07:57:52.294768Z debug envoy connection [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug envoy connection [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug envoy pool [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug envoy connection [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug envoy connection [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug envoy conn_handler [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug envoy pool [C5646] connection destroyed
</code></pre>
<p>These are logs from istio-ingressgateway. IP <code>10.244.243.205</code> belongs to the VerneMQ pod, not the service (probably as intended).</p>
<pre><code>2020-08-24T08:48:31.536593Z debug envoy filter [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug envoy filter [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug envoy pool creating a new connection
2020-08-24T08:48:31.536778Z debug envoy pool [C13237] connecting
2020-08-24T08:48:31.536784Z debug envoy connection [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug envoy connection [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug envoy pool queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug envoy conn_handler [C13236] new connection
2020-08-24T08:48:31.537181Z debug envoy connection [C13237] connected
2020-08-24T08:48:31.537204Z debug envoy pool [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug envoy filter TCP:onUpstreamEvent(), requestedServerName:
2020-08-24T08:48:31.537880Z debug envoy misc Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug envoy connection [C13237] remote close
2020-08-24T08:48:31.537913Z debug envoy connection [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug envoy pool [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug envoy connection [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug envoy connection [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug envoy conn_handler [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug envoy pool [C13237] connection destroyed
</code></pre>
<h3>My configurations</h3>
<p><strong>vernemq-istio-ingress.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: exposed-with-istio
labels:
istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vernemq
namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
namespace: exposed-with-istio
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["endpoints", "deployments", "replicasets", "pods"]
verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: endpoint-reader
namespace: exposed-with-istio
subjects:
- kind: ServiceAccount
name: vernemq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
name: vernemq
namespace: exposed-with-istio
labels:
app: vernemq
spec:
selector:
app: vernemq
type: ClusterIP
ports:
- port: 4369
name: empd
- port: 44053
name: vmq
- port: 8888
name: http-dashboard
- port: 1883
name: tcp-mqtt
targetPort: 1883
- port: 9001
name: tcp-mqtt-ws
targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vernemq
namespace: exposed-with-istio
spec:
replicas: 1
selector:
matchLabels:
app: vernemq
template:
metadata:
labels:
app: vernemq
spec:
serviceAccountName: vernemq
containers:
- name: vernemq
image: vernemq/vernemq
ports:
- containerPort: 1883
name: tcp-mqtt
protocol: TCP
- containerPort: 8080
name: tcp-mqtt-ws
- containerPort: 8888
name: http-dashboard
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 9100-9109 # shortened
env:
- name: DOCKER_VERNEMQ_ACCEPT_EULA
value: "yes"
- name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
value: "on"
- name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
value: "3,4,5"
- name: DOCKER_VERNEMQ_allow_register_during_netsplit
value: "on"
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
value: "9100"
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
value: "9109"
- name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
value: "1"
</code></pre>
<p><strong>vernemq-loadbalancer-service.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Namespace
metadata:
name: exposed-with-loadbalancer
---
... the rest it the same except for namespace and service type ...
</code></pre>
<p><strong>istio.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: vernemq-destination
namespace: exposed-with-istio
spec:
host: vernemq.exposed-with-istio.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: vernemq-gateway
namespace: exposed-with-istio
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 31400
name: tcp
protocol: TCP
hosts:
- "*"
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: vernemq-virtualservice
namespace: exposed-with-istio
spec:
hosts:
- "*"
gateways:
- vernemq-gateway
http:
- match:
- uri:
prefix: /status
route:
- destination:
host: vernemq.exposed-with-istio.svc.cluster.local
port:
number: 8888
tcp:
- match:
- port: 31400
route:
- destination:
host: vernemq.exposed-with-istio.svc.cluster.local
port:
number: 1883
</code></pre>
<p>Does the Kiali screenshot imply that the ingressgateway only forwards HTTP traffic to the service and eats all TCP?
<a href="https://i.stack.imgur.com/etR3t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/etR3t.png" alt="Kiali graph tab" /></a></p>
<h2>UPD</h2>
<p>Following the suggestion, here's the output:</p>
<blockquote>
<p>** But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.</p>
</blockquote>
<p><code>istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio</code></p>
<p>first with <code>| grep 8888</code> and then <code>| grep 1883</code>:</p>
<pre><code>0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
</code></pre>
<pre><code>10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
</code></pre>
<pre><code>... Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000 App: HTTP Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000 ALL Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369 ALL Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 9001 ALL Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 9090 App: HTTP Route: 9090
0.0.0.0 9090 ALL PassthroughCluster
10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 9411 App: HTTP ...
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 App: TCP TLS Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 App: TCP TLS Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15010 App: HTTP Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.106.166.154 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 App: HTTP Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.100.213.45 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.100.213.45 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0 20001 App: HTTP Route: 20001
0.0.0.0 20001 ALL PassthroughCluster
10.100.213.45 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
</code></pre>
<blockquote>
<p>** Furthermore please run: istioctl proxy-config endpoints and istioctl proxy-config routes .</p>
</blockquote>
<p><code>istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio</code> with <code>| grep 1883</code>:</p>
<pre><code>10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
</code></pre>
<pre><code>ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.101.200.113:9411 HEALTHY OK zipkin
10.106.166.154:15012 HEALTHY OK xds-grpc
10.211.55.14:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053 HEALTHY OK outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443 HEALTHY OK outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369 HEALTHY OK outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888 HEALTHY OK outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001 HEALTHY OK outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053 HEALTHY OK outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369 HEALTHY OK inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888 HEALTHY OK inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001 HEALTHY OK inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
127.0.0.1:44053 HEALTHY OK inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS HEALTHY OK sds-grpc
</code></pre>
<p><code>istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio</code></p>
<pre><code>NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021 istio-ingressgateway.istio-system /*
istiod.istio-system.svc.cluster.local:853 istiod.istio-system /*
20001 kiali.istio-system /*
15010 istiod.istio-system /*
15014 istiod.istio-system /*
vernemq.exposed-with-istio.svc.cluster.local:4369 vernemq /*
vernemq.exposed-with-istio.svc.cluster.local:44053 vernemq /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system /*
8888 vernemq /*
80 istio-egressgateway.istio-system /*
80 istio-ingressgateway.istio-system /*
80 tracing.istio-system /*
grafana.istio-system.svc.cluster.local:3000 grafana.istio-system /*
9411 zipkin.istio-system /*
9090 kiali.istio-system /*
9090 prometheus.istio-system /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
* /stats/prometheus*
InboundPassthroughClusterIpv4 * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
InboundPassthroughClusterIpv4 * /*
* /healthz/ready*
</code></pre>
| Artsiom the Brave | <p>I had the same problem when using VerneMQ over an istio gateway. The problem was that the VerneMQ process resets the TCP connection if the listener.tcp.default contains the default value of 127.0.0.1:1883. I fixed it by using DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT with "0.0.0.0:1883".</p>
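<p>For reference, the fix above could be applied in the VerneMQ container spec roughly like this (a minimal sketch; the container name and image are assumptions, only the environment variable and value come from the fix):</p>
<pre><code>containers:
- name: vernemq
  image: vernemq/vernemq
  env:
  - name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
    value: "0.0.0.0:1883"
</code></pre>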
| Markus Past |
<p>We are using Istio Envoy based Rate limiting (with Kubernetes & Docker) as specified in <a href="https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/" rel="nofollow noreferrer">this documentation</a>.</p>
<p>Although I was able to set it up for local and global rate limiting in the Kubernetes cluster, I am unable to achieve the following:</p>
<ol>
<li><p>Rate limit a Service only for POST requests, while GET requests should go through unencumbered.</p>
</li>
<li><p>Rate limit a Service only for a certain time duration (e.g. 9 AM to 5 PM EST) and work normally at other times.</p>
</li>
</ol>
<p>Is the above possible in current Istio functionalities?</p>
| Sid | <p>I will try to answer both of your questions below.</p>
<h4>1. Rate limit a Service only for a specific request method</h4>
<p>We can use the <a href="https://www.envoyproxy.io/docs/envoy/v1.14.5/api-v2/api/v2/route/route_components.proto#envoy-api-msg-route-ratelimit-action-headervaluematch" rel="nofollow noreferrer">header_value_match</a> rate limit actions.</p>
<p>I created a single <code>rate_limits filter</code> with one <code>action</code> that matches any request with method <strong>GET</strong>:<br />
<strong>NOTE:</strong> For the sake of simplicity, I have only given an important part of the configuration.</p>
<p>Envoy rate_limits filter configuration:</p>
<pre><code>...
value:
rate_limits:
- actions:
- header_value_match:
descriptor_value: get
headers:
- name: :method
prefix_match: GET
...
</code></pre>
<p>Next, I created a <code>ratelimit service configuration</code> that matches descriptors with key <code>header_match</code> and value <code>get</code>. It will provide a limit of 1 request per minute:</p>
<pre><code>...
descriptors:
- key: header_match
rate_limit:
unit: minute
requests_per_unit: 1
value: get
...
</code></pre>
<p>After applying the above configuration, we can check whether it will be possible to use the <strong>GET</strong> method more than once within 1 minute:</p>
<pre><code>$ curl "http://$GATEWAY_URL/productpage" -I -X GET
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 5179
server: istio-envoy
date: Tue, 11 Jan 2022 09:57:33 GMT
x-envoy-upstream-service-time: 120
$ curl "http://$GATEWAY_URL/productpage" -I -X GET
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Tue, 11 Jan 2022 09:57:35 GMT
server: istio-envoy
content-length: 0
</code></pre>
<p>As we can see, after the second request, we received the HTTP <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429" rel="nofollow noreferrer">429 Too Many Requests</a> response status code which indicates that the user has sent too many requests in a given amount of time. It means that everything works as expected.</p>
<p>I recommend you to read the <a href="https://www.aboutwayfair.com/tech-innovation/understanding-envoy-rate-limits" rel="nofollow noreferrer">Understanding Envoy Rate Limits</a> article which contains a lot of useful information.</p>
<h4>2. Rate limit a Service only for a certain time duration (e.g. 9 AM to 5 PM EST) and work normally at other times.</h4>
<p>Unfortunately, I cannot find any suitable option to configure this behavior. I think that <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> can be used as a workaround, which will run periodically on a given schedule and will create/delete the appropriate configuration responsible for rate-limiting. In short, you can use a Bash script that creates/deletes a configuration responsible for rate-limiting and then you can mount that script in a volume to the <code>CronJob</code> Pod.
I have already described a similar use of <code>CronJob</code> <a href="https://stackoverflow.com/a/68406717/14801225">here</a> and believe it can help you.</p>
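<p>A rough sketch of that CronJob workaround is shown below. All names, the schedule and the <code>rate-limit.yaml</code> manifest are hypothetical; the idea is simply to apply the rate-limit configuration at 9 AM and have a mirror-image CronJob delete it at 5 PM:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: enable-rate-limit
spec:
  schedule: "0 9 * * *"   # 9 AM; mind the cluster timezone
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rate-limit-toggler   # needs RBAC to apply the config
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command: ["sh", "-c", "kubectl apply -f /manifests/rate-limit.yaml"]
</code></pre>
<p>A second CronJob with <code>schedule: "0 17 * * *"</code> running <code>kubectl delete -f /manifests/rate-limit.yaml</code> would disable it again outside business hours.</p>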
| matt_j |
<p>I'm trying to edit services created via helm chart and when changing from NodePort to ClusterIP I get this error</p>
<pre class="lang-sh prettyprint-override"><code>The Service "<name>" is invalid: spec.ports[0].nodePort: Fordbidden: may not be used when 'type' is 'ClusterIP'
</code></pre>
<p>I've seen solutions from other people where they just run <code>kubectl apply -f service.yaml --force</code> - but I'm not using kubectl but helm to do it - any thoughts? If it was just one service I would just update/re-deploy manually, but there are xx of them.</p>
| potatopotato | <p>Found the answer to my exact question in here <a href="https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_upgrading_service_type_change.html" rel="nofollow noreferrer">https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_upgrading_service_type_change.html</a></p>
<p>In short, they suggest one of the following:</p>
<hr />
<p>There are three methods you can use to avoid the service conversion issue above. You will only need to perform one of these methods:</p>
<ul>
<li>Method 1: Installing the new version of the helm chart with a different release name and update all clients to point to the new probe service endpoint if required. Then delete the old release. This is the recommended method but requires a re-configuration on the client side.</li>
<li>Method 2: Manually changing the service type using kubectl edit svc. This method requires more manual steps but preserves the current service name and previous revisions of the helm chart. After performing this workaround, users should be able to perform a helm upgrade.</li>
<li>Method 3: Deleting and purging the existing helm release, and then install the new version of helm chart with the same release name.</li>
</ul>
| potatopotato |
<p>I have an AWS <strong>EC2</strong> instance (EC2-A) and <strong>Amazon Managed Blockchain</strong> running in <strong>VPC (VPC-A)</strong>.</p>
<ul>
<li>This EC2-A instance has some files and certificates (required for executing transactions in the blockchain)</li>
<li>EC2-A has EBS storage which can be mounted on only one EC2 instance at one time.</li>
<li>Transactions can be only executed to the blockchain network from the EC2-A, since they're are in the same VPC-A.</li>
</ul>
<p>I have an aws <strong>EKS (Kubernetes cluster) running in VPC-B.</strong></p>
<p>How can I <em>access the files and certificates of EC2-A from a pod in my k8s cluster?</em> Also, I have another <strong>pod</strong> which will be a blockchain client <strong>executing transactions in the blockchain network, which is in VPC-A.</strong></p>
<p>Both these VPC-A and VPC-B are in the same aws account.</p>
| Niraj Kumar | <p>Mounting a folder or files from an EC2 instance into a pod running in EKS is not supported. For your use case, you can easily share folders/files using EFS, if not S3. If you are only allowed to do pod-to-EC2 communication, you need a way for these resources to reach each other, either by public IP or by VPC peering. Then you can run sftp, scp, or any off-the-shelf file sharing software you know best for the file exchange.</p>
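<p>If EFS fits, a minimal sketch of statically provisioning a shared filesystem into the pod could look like this (the filesystem ID, names and size are placeholders, and the EFS CSI driver is assumed to be installed in the cluster; the same filesystem can be mounted on EC2-A with the regular EFS mount helper so both sides see the certificates):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-certs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-certs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
</code></pre>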
| gohm'c |
<p>I was going by this update for EKS <a href="https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/" rel="nofollow noreferrer">https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/</a> and this blog from AWS <a href="https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/</a>.</p>
<p>This is a very cryptic line which never confirms whether EKS encrypts secrets by default or not:</p>
<blockquote>
<p>In EKS, we operate the etcd volumes encrypted at disk-level using AWS-managed encryption keys.</p>
</blockquote>
<p>I did understand that:</p>
<ul>
<li>KMS with EKS will provide envelope encryption, like encrypting the DEK using a CMK.</li>
<li>But it never mentions whether, if I don't use this feature (of course, KMS will cost), EKS encrypts data by default.</li>
</ul>
<p>Because Kubernetes by default does not encrypt data. <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Source</a></p>
<blockquote>
<p>Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.</p>
</blockquote>
| Jatin Mehrotra | <p>I think I found it; the blog and the update post by AWS are very cryptic.</p>
<p>According to <a href="https://docs.aws.amazon.com/eks/latest/userguide/clusters.html" rel="nofollow noreferrer">docs</a> and console :-</p>
<blockquote>
<p>All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS.</p>
</blockquote>
<p>Using KMS with EKS adds an additional layer of encryption: envelope encryption of Secrets with a customer-managed key. It allows you to deploy a defense-in-depth strategy for Kubernetes applications by encrypting Kubernetes Secrets with a KMS key that you define and manage.</p>
<p><a href="https://i.stack.imgur.com/CVMJG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CVMJG.png" alt="enter image description here" /></a></p>
| Jatin Mehrotra |
<p>I was looking for a way to stream the logs of all pods of a specific deployment of mine.<br />
So, some days ago I've found <a href="https://stackoverflow.com/a/56258727/12603421">this</a> SO answer giving me a magical command:</p>
<pre><code>kubectl logs -f deployment/<my-deployment> --all-containers=true
</code></pre>
<p>However, I've just discovered, after a lot of time debugging, that this command actually shows the logs of just one pod, and not all of the deployment.
So I went to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">Kubectl's official documentation</a> and found nothing relevant on the topic, just the following phrase above the example that uses the deployment, as a kind of selector, for log streaming:</p>
<pre><code> ...
# Show logs from a kubelet with an expired serving certificate
kubectl logs --insecure-skip-tls-verify-backend nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
</code></pre>
<p>So why is it that the first example says "Show logs" while the other two say "Return snapshot logs"?</p>
<p>Is it because of this "snapshot" that I can't retrieve logs from all the pods of the deployment?
I've searched a lot for more in-depth documentation on streaming logs with kubectl but couldn't find any.</p>
| Teodoro | <p>To return the logs of all pods of a deployment, you can use the same selector as the deployment. You can retrieve the deployment's selector like this: <code>kubectl get deployment <name> -o jsonpath='{.spec.selector}' --namespace <namespace></code>, then retrieve the logs using that selector: <code>kubectl logs --selector <key1=value1,key2=value2> --namespace <namespace></code>.</p>
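<p>A concrete (hypothetical) example for a deployment labelled <code>app=my-app</code> in namespace <code>prod</code>:</p>
<pre><code># show the deployment's selector labels
kubectl get deployment my-app -n prod -o jsonpath='{.spec.selector.matchLabels}'
# stream logs from every pod matching that selector, prefixed with the pod name
kubectl logs -f --selector app=my-app -n prod --prefix
</code></pre>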
| gohm'c |
<p>Is there a way to disable service links globally? There's a field in <code>podSpec</code>:</p>
<pre><code>enableServiceLinks: false
</code></pre>
<p>but it's <code>true</code> by default. I couldn't find anything in kubelet to kill it. Or is there some cool admission webhook toolchain I could use?</p>
| nmiculinic | <p>You can use the Kubernetes-native policy engine called <a href="https://kyverno.io/" rel="nofollow noreferrer">Kyverno</a>. Kyverno policies can validate, <strong>mutate</strong> (see: <a href="https://kyverno.io/docs/writing-policies/mutate/" rel="nofollow noreferrer">Mutate Resources</a>), and generate Kubernetes resources.</p>
<p>A Kyverno policy is a collection of rules that can be applied to the entire cluster (<code>ClusterPolicy</code>) or to the specific namespace (<code>Policy</code>).</p>
<hr />
<p>I will create an example to illustrate how it may work.</p>
<p>First we need to install Kyverno, you have the option of installing Kyverno directly from the latest release manifest, or using Helm (see: <a href="https://kyverno.io/docs/introduction/#quick-start" rel="nofollow noreferrer">Quick Start guide</a>):</p>
<pre><code>$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
</code></pre>
<p>After successful installation, we can create a simple <code>ClusterPolicy</code>:</p>
<pre><code>$ cat strategic-merge-patch.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: strategic-merge-patch
spec:
rules:
- name: enableServiceLinks_false_globally
match:
resources:
kinds:
- Pod
mutate:
patchStrategicMerge:
spec:
enableServiceLinks: false
$ kubectl apply -f strategic-merge-patch.yaml
clusterpolicy.kyverno.io/strategic-merge-patch created
$ kubectl get clusterpolicy
NAME BACKGROUND ACTION READY
strategic-merge-patch true audit true
</code></pre>
<p>This policy adds <code>enableServiceLinks: false</code> to the newly created Pod.</p>
<p>Let's create a Pod and check if it works as expected:</p>
<pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl get pod app-1 -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
</code></pre>
<p>It also works with <code>Deployments</code>, <code>StatefulSets</code>, <code>DaemonSets</code> etc.:</p>
<pre><code>$ kubectl create deployment deploy-1 --image=nginx
deployment.apps/deploy-1 created
$ kubectl get pod deploy-1-7cfc5d6879-kfdlh -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
</code></pre>
<p>More examples with detailed explanations can be found in the <a href="https://kyverno.io/docs/writing-policies/" rel="nofollow noreferrer">Kyverno Writing Policies documentation</a>.</p>
| matt_j |
<p>I have deployed an Azure AKS cluster using Azure CNI behind Application Gateway. Currently, internal pod communication is via http. I am trying to make this communication secure by implementing SSL. I didn't find any optimal solution after skimming through MSDN and Kubernetes documentation. Is there any way this can be achieved?</p>
| Issues2021 | <p>The CNI won't automatically encrypt the communication between pods on its own.
You could use external tools like Linkerd or Istio, which can encrypt traffic between pods.</p>
<p>Linkerd and Istio will encrypt traffic with mTLS out of the box.</p>
<p><a href="https://linkerd.io/2/features/automatic-mtls/" rel="nofollow noreferrer">https://linkerd.io/2/features/automatic-mtls/</a></p>
<p><a href="https://istio.io/v1.4/docs/tasks/security/authentication/auto-mtls/" rel="nofollow noreferrer">https://istio.io/v1.4/docs/tasks/security/authentication/auto-mtls/</a></p>
| Andriy Bilous |
<p>I have a custom kubernetes operator written in Go. When writing unittests I came across the need to have a fake manager initialised. The manager is created with NewManager() function of the "sigs.k8s.io/controller-runtime" package. When run locally that is not a problem, but it is when running in a pipeline.</p>
<pre><code>import (
ctrl "sigs.k8s.io/controller-runtime"
// other imports
)
var _ = Describe("ProvideNewController", func() {
var (
reconciler reconcile.Reconciler
customUtils controllers.CustomUtils // local utils package
mockCC *controllers.MockCommandClient
errResult error
)
When("ProvideNewController is called", func() {
BeforeEach(func() {
logger := logr.Discard()
recorder := record.NewFakeRecorder(100)
mockCC = &controllers.MockCommandClient{}
customUtils = controllers.CustomUtils{
CommandClient: mockCC,
}
scheme := scheme.Scheme
scheme.AddKnownTypes(operatorV1alpha1.GroupVersion, &operatorV1alpha1.Custom{})
managerOptions := ctrl.Options{
Scheme: scheme,
Logger: ctrl.Log.WithName("controllers").WithName("custom"),
WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
LeaderElectionID: "someValue",
}
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithRuntimeObjects().Build()
reconciler = NewReconciler(fakeClient, customUtils, logger, scheme, recorder)
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), managerOptions)
if err != nil {
common.FailTestSetup(err)
}
errResult = ProvideNewController(mgr, reconciler)
})
It("should set up a controller with the specified resource types", func() {
Expect(errResult).ToNot(HaveOccurred())
})
})
})
</code></pre>
<p>Now the line <code>mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), managerOptions)</code> crashes the tests completely and ungracefully because ctrl.GetConfigOrDie() fails to read a kubeconfig file, which is expected because it really isn't present on the pod of the CI/CD runner.</p>
<p>I'm new to Go, so I'm not aware of all the options there are. From what I've seen, if I was to mock the whole struct and provide it mocked like that as an argument for the NewManager, I would've had to mock all the methods that struct implements and I don't see that as a good approach.</p>
<p>Is there a way for me to mock the config file or provide it programmatically, so I don't have to edit the pipeline and provide it from there?</p>
| Богдан Божић | <p>And so it came to be that I am answering my own question. <code>GetConfigOrDie()</code> returns a rest.Config{} object, so instead of using the function call as an argument and mocking or faking everything around it, or passing a fake config file into the CI/CD, I just passed an empty <code>&rest.Config{}</code> as an argument for the function call and it worked. I'm kinda ashamed for not trying this as a first option...</p>
<pre><code>mgr, err := ctrl.NewManager(&rest.Config{}, managerOptions)
</code></pre>
| Богдан Божић |
<p>I get this error message:</p>
<pre><code>Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
</code></pre>
<p>Now, I thought the deployment made a claim to persistent storage, so these are the files I've applied, in order:</p>
<p>First, the persistent volume to /data, as that is persistent on minikube (<a href="https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/</a>):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: small-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
</code></pre>
<p>Then, for my nginx deployment I made a claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Before the service, I run the deployment, which is the one giving me the error above; it looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginxapp
name: nginxapp
spec:
replicas: 1
volumes:
- persistentVolumeClaim:
claimName: nginx-claim
selector:
matchLabels:
app: nginxapp
template:
metadata:
labels:
app: nginxapp
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/data/www"
name: nginx-claim
</code></pre>
<ol>
<li><p>Where did I go wrong? Isn't it deployment -> volume claim -> volume?</p>
</li>
<li><p>Am I doing it right? The persistent volume is pod-wide (?) and therefore generically named, but the claim is per deployment? That's why I named it <code>nginx-claim</code>. I might be mistaken here, but that shouldn't break this simple run, though.</p>
</li>
<li><p>In my deployment I set <code>mountPath: "/data/www"</code>; should this follow the directory already set in the persistent volume definition, or does it build on top of that? So in my case, do I get <code>/data/data/www</code>?</p>
</li>
</ol>
| u314 | <p>Try changing it to:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-claim
spec:
storageClassName: local-storage # <-- changed
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>At your deployment spec add:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginxapp
name: nginxapp
spec:
replicas: 1
selector:
matchLabels:
app: nginxapp
template:
metadata:
labels:
app: nginxapp
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: nginx-claim
mountPath: "/data/www"
volumes:
- name: nginx-claim # <-- added
persistentVolumeClaim:
claimName: nginx-claim
</code></pre>
| gohm'c |
<p>I have a problem with the Kubernetes Dashboard.
I am currently using the managed Kubernetes service AKS and created a Kubernetes cluster with the following setup:</p>
<ul>
<li>Kubernetes-Version 1.20.9</li>
<li>1 Worker Node with Size Standard_DS2_v2</li>
</ul>
<p>It starts successfully with the automatic configuration of <strong>coredns</strong>, <strong>corednsautoscaler</strong>, <strong>omsagent-rs</strong>, <strong>tunnelfront</strong> and the <strong>metrics-server</strong>.</p>
<p>After that i applied three deployments for my services, which all are deployed successfully.</p>
<p>Now, i want to get access to the Kubernetes Dashboard. I used the instruction which is described on <a href="https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard</a>.</p>
<p>After that I call <strong>kubectl proxy</strong> to access the dashboard via the url: <a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/." rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.</a></p>
<p>After I use my kubeconfig file to sign in to the Kubernetes Dashboard, I get the following output, and neither CPU nor memory usage is displayed.</p>
<p><a href="https://i.stack.imgur.com/coSnY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/coSnY.png" alt="enter image description here" /></a></p>
<p>When I execute kubectl describe on the kubernetes-dashboard pod, I get the following:
<a href="https://i.stack.imgur.com/FUus0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FUus0.png" alt="enter image description here" /></a></p>
<p>And the logs from the pod say the following:</p>
<pre><code>Internal error occurred: No metric client provided. Skipping metrics.
2021/12/11 19:23:04 [2021-12-11T19:23:04Z] Outcoming response to 127.0.0.1:43392 with 200 status code
2021/12/11 19:23:04 Internal error occurred: No metric client provided. Skipping metrics.
2021/12/11 19:23:04 [2021-12-11T19:23:04Z] Outcoming response to 127.0.0.1:43392 with 200 status code
2021/12/11 19:23:04 Internal error occurred: No metric client provided. Skipping metrics.
</code></pre>
| k_gokhan1905 | <p><code>... I used the instruction which is described on https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard.</code></p>
<p>The dashboard needs a way to "cache" a small window of metrics collected from the metrics server. The instruction provided there doesn't have this enabled. You can run the following to install/upgrade kubernetes-dashboard with metrics scraper enabled:</p>
<pre><code>helm upgrade -i my-release kubernetes-dashboard/kubernetes-dashboard \
--set=service.externalPort=8080,resources.limits.cpu=200m,metricsScraper.enabled=true
</code></pre>
| gohm'c |
<p>I get a postStart hook error: <strong>command 'gcsfuse -o nonempty config_files_1_bucket /home/test123/' exited with 126</strong>.
I am adding the gcsfuse command in my yaml file like this:</p>
<pre><code>lifecycle:
  postStart:
exec:
command:
- gcsfuse
- -o
- nonempty
- config_files_1_bucket
- /home/test123
preStop:
exec:
command:
- fusermount
- -u
- /home/test123
</code></pre>
| Lakshay Narang | <p>Same here. It used to work (on gcsfuse version 0.28.1) but it does not anymore (0.32.0). The reason is that they removed support for the nonempty option (<a href="https://github.com/appsembler/roles/pull/85" rel="nofollow noreferrer">https://github.com/appsembler/roles/pull/85</a>) because it is no longer supported in fusermount v3.0.
Simply remove the nonempty option and it should work.</p>
<p>More here : <a href="https://github.com/GoogleCloudPlatform/gcsfuse/issues/424" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gcsfuse/issues/424</a></p>
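<p>Applied to the manifest from the question, the hooks would then look roughly like this (same bucket and mount path as above, just without the removed option):</p>
<pre><code>lifecycle:
  postStart:
    exec:
      command: ["gcsfuse", "config_files_1_bucket", "/home/test123"]
  preStop:
    exec:
      command: ["fusermount", "-u", "/home/test123"]
</code></pre>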
| franfran |
<p>I have a pod that is essentially a plugin for an apiserver. It's a near-zero-workload pod whose task is to externalize watches to another pubsub facility (it serves as a bridge from one API to another).
To reduce the latency and the number of real network connections, I thought it might make sense to always schedule its 1-replica deployment onto the same machine that runs the apiserver itself. It turns out that this is a master node. The pod takes almost no RAM and CPU; it is a pure streaming pod without any endpoints, just a bridge from k8s watches to something else. How can I do that?</p>
| xakepp35 | <p>If your intention is only to run a specific pod on the master node and <strong>not</strong> open up the master node, you should implement <code>tolerations</code> and <code>nodeSelector</code>. The sample below will always run busybox on the master node:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
run: busybox
spec:
restartPolicy: Never
nodeSelector:
<a unique label on your master node>: <the label value>
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: busybox
image: busybox
imagePullPolicy: IfNotPresent
command: ["ash","-c","sleep 3600"]
</code></pre>
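<p>If the master node doesn't already carry a unique label you want to reuse, one option is to add your own and reference it in the <code>nodeSelector</code> (the label key/value below are just examples):</p>
<pre><code># find the master node
kubectl get nodes -l node-role.kubernetes.io/master
# add a custom label to it
kubectl label node <master-node-name> dedicated=apiserver-bridge
</code></pre>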
| gohm'c |
<p>Let's say I have a deployment. For some reason it's not responding after some time. Is there any way to tell Kubernetes to roll back to the previous version automatically on failure?</p>
| Healthy Bowl | <p>You mentioned that:</p>
<blockquote>
<p>I've a deployment. For some reason it's not responding after sometime.</p>
</blockquote>
<p>In this case, you can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">liveness and readiness</a> probes:</p>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.</p>
</blockquote>
<blockquote>
<p>The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
<p>The above probes may prevent you from deploying a corrupted version, however liveness and readiness probes aren't able to rollback your Deployment to the previous version. There was a similar <a href="https://github.com/kubernetes/kubernetes/issues/23211" rel="noreferrer">issue</a> on Github, but I am not sure there will be any progress on this matter in the near future.</p>
<p>If you really want to automate the rollback process, below I will describe a solution that you may find helpful.</p>
<hr />
<p>This solution requires running <code>kubectl</code> commands from within the Pod.
In short, you can use a script to continuously monitor your Deployments, and when errors occur you can run <code>kubectl rollout undo deployment DEPLOYMENT_NAME</code>.</p>
<p>First, you need to decide how to find failed Deployments. As an example, I'll check Deployments that perform the update for more than 10s with the following command:<br />
<strong>NOTE:</strong> You can use a different command depending on your need.</p>
<pre><code>kubectl rollout status deployment ${deployment} --timeout=10s
</code></pre>
<p>To constantly monitor all Deployments in the <code>default</code> Namespace, we can create a Bash script:</p>
<pre><code>#!/bin/bash
while true; do
sleep 60
deployments=$(kubectl get deployments --no-headers -o custom-columns=":metadata.name" | grep -v "deployment-checker")
echo "====== $(date) ======"
for deployment in ${deployments}; do
if ! kubectl rollout status deployment ${deployment} --timeout=10s 1>/dev/null 2>&1; then
echo "Error: ${deployment} - rolling back!"
kubectl rollout undo deployment ${deployment}
else
echo "Ok: ${deployment}"
fi
done
done
</code></pre>
<p>We want to run this script from inside the Pod, so I converted it to <code>ConfigMap</code> which will allow us to mount this script in a volume (see: <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="noreferrer">Using ConfigMaps as files from a Pod</a>):</p>
<pre><code>$ cat check-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: check-script
data:
checkScript.sh: |
#!/bin/bash
while true; do
sleep 60
deployments=$(kubectl get deployments --no-headers -o custom-columns=":metadata.name" | grep -v "deployment-checker")
echo "====== $(date) ======"
for deployment in ${deployments}; do
if ! kubectl rollout status deployment ${deployment} --timeout=10s 1>/dev/null 2>&1; then
echo "Error: ${deployment} - rolling back!"
kubectl rollout undo deployment ${deployment}
else
echo "Ok: ${deployment}"
fi
done
done
$ kubectl apply -f check-script-configmap.yml
configmap/check-script created
</code></pre>
<p>I've created a separate <code>deployment-checker</code> ServiceAccount with the <code>edit</code> Role assigned and our Pod will run under this ServiceAccount:<br />
<strong>NOTE:</strong> I've created a Deployment instead of a single Pod.</p>
<pre><code>$ cat all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: deployment-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: deployment-checker-binding
subjects:
- kind: ServiceAccount
name: deployment-checker
namespace: default
roleRef:
kind: ClusterRole
name: edit
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: deployment-checker
name: deployment-checker
spec:
selector:
matchLabels:
app: deployment-checker
template:
metadata:
labels:
app: deployment-checker
spec:
serviceAccountName: deployment-checker
volumes:
- name: check-script
configMap:
name: check-script
containers:
- image: bitnami/kubectl
name: test
command: ["bash", "/mnt/checkScript.sh"]
volumeMounts:
- name: check-script
mountPath: /mnt
</code></pre>
<p>After applying the above manifest, the <code>deployment-checker</code> Deployment was created and started monitoring Deployment resources in the <code>default</code> Namespace:</p>
<pre><code>$ kubectl apply -f all-in-one.yaml
serviceaccount/deployment-checker created
clusterrolebinding.rbac.authorization.k8s.io/deployment-checker-binding created
deployment.apps/deployment-checker created
$ kubectl get deploy,pod | grep "deployment-checker"
deployment.apps/deployment-checker 1/1 1
pod/deployment-checker-69c8896676-pqg9h 1/1 Running
</code></pre>
<p>Finally, we can check how it works. I've created three Deployments (<code>app-1</code>, <code>app-2</code>, <code>app-3</code>):</p>
<pre><code>$ kubectl create deploy app-1 --image=nginx
deployment.apps/app-1 created
$ kubectl create deploy app-2 --image=nginx
deployment.apps/app-2 created
$ kubectl create deploy app-3 --image=nginx
deployment.apps/app-3 created
</code></pre>
<p>Then I changed the image for the <code>app-1</code> to the incorrect one (<code>nnnginx</code>):</p>
<pre><code>$ kubectl set image deployment/app-1 nginx=nnnginx
deployment.apps/app-1 image updated
</code></pre>
<p>In the <code>deployment-checker</code> logs we can see that the <code>app-1</code> has been rolled back to the previous version:</p>
<pre><code>$ kubectl logs -f deployment-checker-69c8896676-pqg9h
...
====== Thu Oct 7 09:20:15 UTC 2021 ======
Ok: app-1
Ok: app-2
Ok: app-3
====== Thu Oct 7 09:21:16 UTC 2021 ======
Error: app-1 - rolling back!
deployment.apps/app-1 rolled back
Ok: app-2
Ok: app-3
</code></pre>
| matt_j |
<p><a href="https://i.stack.imgur.com/SOtjT.png" rel="nofollow noreferrer">1 Master : 10.166.232.164
2 Worker : 10.166.232.165, 10.166.232.166</a></p>
<p><a href="https://i.stack.imgur.com/OWqHJ.png" rel="nofollow noreferrer">Deploy 3 replica pods for 2 worker nodes</a></p>
<p><a href="https://i.stack.imgur.com/ORX0j.png" rel="nofollow noreferrer">Nodeport service</a></p>
<p>The problem is that I can access the pod with <code>curl podIP:8080</code>, but exec'ing into a pod and accessing clusterIP:NodePort does not work.</p>
<pre><code>kubectl exec -it network-example2-84c98c7b4d-d7wnr /bin/bash -- curl 10.98.10.159:8080
=> curl: (7) Failed to connect to 10.98.10.159 port 8080: Connection refused
kubectl exec -it network-example2-84c98c7b4d-d7wnr /bin/bash -- curl 10.98.10.159:23060
=> no answer (maybe timeout error)
</code></pre>
<p>Is it a firewall problem, or the CNI?</p>
<p>I'm using weave-net and haven't changed any config.</p>
| YoungDo Park | <p>A closer look at your screenshot indicates you have set <code>externalTrafficPolicy</code> to <code>Local</code>. Try:</p>
<p><code>curl 10.166.232.165:32060 or curl 10.166.232.166:32060</code></p>
<p>"Local" means only the node which has the pod running will response to you, otherwise your request will be drop. Change to "Cluster" if you wish all the nodes will response to you regardless if it has the pod running.</p>
| gohm'c |
<p>Background: I have approximately 50 nodes "behind" a namespace, meaning that a given Pod in this namespace can land on any of those 50 nodes.</p>
<p>The task is to test if an outbound firewall rule (in a FW outside the cluster) has been implemented correctly. Therefore I would like to test a command <strong>on each potential node</strong> in the namespace which will tell me if I can reach my target from the given node. (using <code>curl</code> for such test but that is besides the point for my question)</p>
<p>I can create a small containerized app which will exit 0 on success. Then next step would be execute this on each potential node and harvest the result. How to do that?</p>
<p>(I don't have access to the nodes directly, only indirectly via Kubernetes/OpenShift. I only have access to the namespace-level, not the cluster-level.)</p>
| peterh | <p>The underlying node firewall settings are NOT controlled by K8s network policies. To test network connectivity from a namespace you only need to run one pod in that namespace. To test the firewall settings of a node you would typically ssh into the node and execute a command there - while this is also possible with K8s, it would require the pod to run with root privileges, which is not applicable to you since you only have access to a single namespace.</p>
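<p>For the connectivity check itself, a throwaway pod in the namespace is usually enough, e.g. (image, namespace and target URL are placeholders):</p>
<pre><code>kubectl run fw-test -n <your-namespace> --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -sS -m 5 https://external-target.example.com
</code></pre>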
| gohm'c |
<p>I feel like I am misunderstanding RAM based <code>emptyDir</code> volumes in Kubernetes.</p>
<ol>
<li><p>Let's suppose my Kubernetes node has a total of 100GB. If I have 4 different emptyDirs that have <code>emptyDir.medium</code> set to "Memory", by default will they all have 50GBs of memory? In that case what happens when the total amount of memory used in my 4 emptyDirs exceeds 100GB?</p>
</li>
<li><p>I know in general RAM is fast, but what are some examples for the downsides? From the <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">official documentation</a>, I see the below but I don't quite understand the statement. My understanding is that if a Pod crashes, files on emptyDirs using disk will still be deleted. Will the files be kept upon node reboot if they are stored in disk? Also what do they mean by <code>count against container memory limit</code>?</p>
</li>
</ol>
<pre><code>While tmpfs is very fast, be aware that unlike disks,
tmpfs is cleared on node reboot and any files you
write count against your container's memory limit
</code></pre>
| Kun Hwi Ko | <p><code>...by default will they all have 50GBs of memory?</code></p>
<p>Yes. You can exec into the pod and check with <code>df -h</code>. If your cluster has <code>SizeMemoryBackedVolumes</code> feature enabled, you can specify the size.</p>
<p><code>...what happens when the total amount of memory used in my 4 emptyDirs exceeds 100GB?</code></p>
<p>You won't get that chance, because the moment the total amount of memory used by all emptyDirs reaches 50GB, your pod will be evicted. <strong>A single emptyDir does not need to reach 50GB on its own for the eviction to happen</strong>.</p>
<p><code>...don't quite understand the statement.</code></p>
<p>It means you will not get back the data stored on the emptyDir after a node reboot.</p>
<p><code>count against container memory limit</code></p>
<p>It means the amount of memory that you consume via the emptyDir is added to the amount of memory your container uses, and the sum is checked against <code>resources.limits.memory</code>.</p>
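<p>If you prefer to cap the tmpfs yourself rather than rely on the node default, a sketch with an explicit <code>sizeLimit</code> (this relies on the <code>SizeMemoryBackedVolumes</code> feature mentioned above):</p>
<pre><code>volumes:
- name: cache
  emptyDir:
    medium: Memory
    sizeLimit: 1Gi
</code></pre>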
| gohm'c |
<p>I have a 4-node (on 4 different machines) Kubernetes cluster running on containerd (set up with kubeadm), and I need to run some local images on this cluster, but I can't find a way to create a local containerd registry to push my images to and have Kubernetes pull from.</p>
<p>So I tried to install Docker to create a private Docker registry, but I faced a warning saying that there was a conflict with containerd, so I'm stuck and don't know any other alternatives.</p>
| Gabriel Sandoli | <p>You can build the image with Docker and then transfer it to containerd.</p>
<p>Sample steps:</p>
<p>a. Build dockerfile</p>
<pre><code>docker build -t yourimagename .
</code></pre>
<p>b. Save image as a tar file:</p>
<pre><code>docker save yourimagename > yourimagename.tar
</code></pre>
<p>c. Import to containerd registry</p>
<pre><code>ctr -n=k8s.io image import yourimagename.tar
</code></pre>
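<p>Once imported, the pod spec only has to reference the image by the same name and avoid pulling from a remote registry, for example:</p>
<pre><code>containers:
- name: app
  image: yourimagename:latest
  imagePullPolicy: Never   # use the locally imported image, never pull
</code></pre>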
| Furkan |
<p>I have a backend service deployed on a private GKE cluster, and I want to execute this CronJob, but every time I get the following error: <code>Pod errors: Error with exit code 127</code>.</p>
<pre><code> apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: call-callendar-api-demo
spec:
schedule: "*/15 * * * *"
jobTemplate:
spec:
template:
spec:
nodeSelector:
env: demo
containers:
- name: call-callendar-api-demo
image: busybox
command: ["/bin/sh"]
args: ["-c", 'curl -X POST "curl -X POST "https://x.x.x/api/v1/cal/add_link" -H "accept: application/json" -d "" >/dev/null 2>&1" -H "accept: application/json" -d "" >/dev/null 2>&1']
restartPolicy: Never
</code></pre>
<p>Any suggestions why this CronJob, which is deployed in the same namespace as my backend service, is giving me this error? Also, there are no logs in the container :( By the way, I have basic auth; could that be a reason?</p>
<p>Edit: Logs from the pod after removing <code>>/dev/null</code>:</p>
<pre><code>textPayload: "curl: (3) URL using bad/illegal format or missing URL
textPayload: "
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: application
"
</code></pre>
| Шурбески Христијан | <p>The command is wrong, and I changed the image to one that includes <code>curl</code>. It is supposed to look like this:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: demo
spec:
schedule: "*/15 * * * *"
jobTemplate:
spec:
template:
spec:
nodeSelector:
env: demo
containers:
          - name: demo
            image: curlimages/curl # changed the image
command: ["/bin/sh"]
args: ["-c", 'curl -X POST "https://x.x.x/api/v1/cal/addy_link" -H "accept: application/json" -d "" >/dev/null 2>&1']
restartPolicy: Never
</code></pre>
<p>It solved my problem.</p>
| Шурбески Христијан |
<p>My goal is to have a gitlab CI/CD pipeline that builds my conda packages for me. For very large projects, conda is so slow that gitlab times out, so we are using mamba instead. Gitlab uses a Kubernetes runner, and what I've noticed is that my docker container works fine when I build/run it locally on my machine, but when the Kubernetes executor runs it, the conda environment doesn't have the required packages installed for some reason.</p>
<p>The Docker image gets generated from this Dockerfile:</p>
<pre><code>FROM ubuntu:focal
SHELL ["/bin/bash", "-l", "-c"]
RUN apt-get update && apt-get install -y wget
# Install mamba
RUN wget -q -P /root/ https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
RUN sh /root/Mambaforge-Linux-x86_64.sh -b
RUN /root/mambaforge/bin/conda shell.bash hook > /root/activatehook.sh
# Create an environment and install curl and numpy
RUN source /root/activatehook.sh && mamba create -n build-env -y -c conda-forge python=3.9
RUN source /root/activatehook.sh && conda activate build-env && mamba install curl numpy
</code></pre>
<p>Now if I build that <em><strong>locally</strong></em>, I can run <code>sudo docker run <my image> /bin/bash -c "source /root/activatehook.sh && conda activate build-env && mamba info && mamba list"</code>, and I see (among other things) that:</p>
<ul>
<li>The active environment is <code>build-env</code></li>
<li><code>curl</code> is installed</li>
<li><code>numpy</code> is installed</li>
</ul>
<p>Now I move that into my gitlab CI script:</p>
<pre class="lang-yaml prettyprint-override"><code>stages:
- test-stage
test-job:
stage: test-stage
tags:
- kubernetes
image: <my-image>
script:
- /bin/bash -c "source /root/activatehook.sh && conda activate build-env && mamba info && mamba list"
</code></pre>
<p>When this runs, the output from gitlab indicates that:</p>
<ul>
<li>The active environment is <code>build-env</code></li>
<li><code>curl</code> is installed</li>
<li><code>numpy</code> is <em><strong>not</strong></em> installed!</li>
</ul>
<p>I can't figure out where to go with this. The conda environment exists and is active, and one of the packages in it is properly installed, but the other is not. Furthermore, when I pull the image to my local host and run the same command manually, both <code>curl</code> and <code>numpy</code> are installed as expected!</p>
<p>Also important: I am aware of the mambaforge docker image. I have tried something like this:</p>
<pre><code>FROM condaforge/mambaforge
RUN mamba create -y --name build-env python=3.9
RUN mamba install -n build-env -y -c conda-forge curl numpy==1.21
</code></pre>
<p>In this case, I get a similar result, except that, when run from the Kubernetes runner, <em><strong>neither curl nor numpy are installed</strong></em>! If I pull the image to my local host, again, the environment is fine (both packages are correctly installed). Can anyone help explain this behavior?</p>
| maldata | <p>The issue is that the "source" command cannot be executed. To resolve this problem, you can use ENTRYPOINT in the Dockerfile. You can verify the presence of the numpy library in the mamba list by following these steps.</p>
<p>Firstly you can create <code>entrypoint.sh</code> file:</p>
<pre><code>#!/bin/bash
source /root/activatehook.sh
conda activate build-env
exec "$@"
</code></pre>
<p>Copy entrypoint file into the container. (I have enriched the content of the Dockerfile)</p>
<p>Dockerfile:</p>
<pre><code>FROM ubuntu:focal
SHELL ["/bin/bash", "-c"]
WORKDIR /root/
RUN apt-get update && \
apt-get install -y wget
RUN wget -q https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
RUN bash Mambaforge-Linux-x86_64.sh -b -p /root/mambaforge
RUN rm /root/Mambaforge-Linux-x86_64.sh
ENV PATH="/root/mambaforge/bin:$PATH"
RUN conda shell.bash hook > /root/activatehook.sh
RUN source /root/activatehook.sh && \
conda create -n build-env -y -c conda-forge python=3.9 && \
echo "source /root/activatehook.sh && conda activate build-env" >> /root/.bashrc && \
/root/mambaforge/bin/conda install -n build-env -c conda-forge -y curl numpy
COPY entrypoint.sh /root/entrypoint.sh
RUN chmod +x /root/entrypoint.sh
RUN apt-get -y autoclean && \
apt-get -y autoremove && \
rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/root/entrypoint.sh"]
</code></pre>
<p>gitlab CI script:</p>
<pre><code>stages:
- test-stage
test-job:
stage: test-stage
tags:
- kubernetes
image: <your-image>
script:
- /root/entrypoint.sh /bin/bash -c "mamba info && mamba list"
</code></pre>
| Furkan |
<p>I am using <code>@kubernetes/client-node</code> to access Kubernetes server API. I can get all the Pods from default using:</p>
<pre><code>const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
k8sApi.listNamespace().then((res) => { // or using listAllNamespacedPods
console.log(res.body);
});
</code></pre>
<p>and the body of the response from the above code looks like this:</p>
<p><a href="https://i.stack.imgur.com/wV30C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wV30C.png" alt="Response From loadFromDefault" /></a></p>
<p>but when I am using <code>kc.loadFromFile('pathToKubeConfigFile')</code>, it is unable to read it (the <code>config.yaml</code> which is saved inside <code>.kube</code> folder).
I have checked all the paths to certificates and keys files inside this file and they are correct.</p>
<pre><code>import { KubeConfig, CoreV1Api } from '@kubernetes/client-node';
const kc = new KubeConfig();
kc.loadFromFile('./config/k8sConfig.yaml');
const k8sApi = kc.makeApiClient(CoreV1Api);
k8sApi.listPodForAllNamespaces().then((res) => {
console.log(res.body);
});
</code></pre>
<p>and I need to return all the active Kubernetes Jobs (or their pods). Can anyone please suggest how to achieve this?</p>
| shivam | <p>As the problem has already been resolved in the comments section, I decided to provide a Community Wiki answer just for better visibility to other community members. I would also like to describe how to return all active Kubernetes Jobs using the <a href="https://github.com/kubernetes-client/javascript#javascript-kubernetes-client-information" rel="nofollow noreferrer">Javascript Kubernetes Client</a></p>
<h5>Using the loadFromFile() method.</h5>
<p>When using the <code>loadFromFile()</code> method, it's important to make sure that the <code>kubeconfig</code> file is correct. In case the <code>kubeconfig</code> file is invalid, we may get various error messages such as:</p>
<pre><code>Error: ENOENT: no such file or directory, open '.kube/confi'
</code></pre>
<p>or</p>
<pre><code>Error: unable to verify the first certificate
</code></pre>
<p>The exact error message depends on what is incorrect in the <code>kubeconfig</code> file.</p>
<h5>List all/active Kubernetes Jobs.</h5>
<p>To list all Kubernetes Jobs, we can use the <a href="https://github.com/kubernetes-client/javascript/blob/1d5d4660f99807e3d3b02dd0984d0b980f279ff9/src/gen/api/batchV1Api.ts#L950" rel="nofollow noreferrer">listJobForAllNamespaces()</a> method.</p>
<p>I've created the <code>listAllJobs.js</code> script to demonstrate how it works:</p>
<pre><code>$ cat listAllJobs.js
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromFile('.kube/config')
const k8sApi = kc.makeApiClient(k8s.BatchV1Api);
k8sApi.listJobForAllNamespaces().then((res) => {
res.body.items.forEach(job => console.log(job.metadata.name));
});
$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
job-1 0/1 3s 3s
job-2 1/1 10s 48m
job-3 1/1 10s 48m
$ node listAllJobs.js
job-1
job-2
job-3
</code></pre>
<p>To list only active Jobs, we need to slightly modify the <code>res.body.items.forEach(job => console.log(job.metadata.name));</code> line to check if the Job is active:</p>
<pre><code>$ cat listActiveJobs.js
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromFile('.kube/config')
const k8sApi = kc.makeApiClient(k8s.BatchV1Api);
k8sApi.listJobForAllNamespaces().then((res) => {
res.body.items.forEach(job => job.status.active >= 1 && console.log(job.metadata.name));
});
$ kubectl get jobs
NAME COMPLETIONS
job-1 0/1
job-2 1/1
job-3 1/1
$ node listActiveJobs.js
job-1
</code></pre>
| matt_j |
<p>I noticed a strange behavior while experimenting with <code>kubectl run</code>:</p>
<ul>
<li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c "ls -lah"</code> > <strong>OK</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine \
--restart=Never --quiet -i --rm \
--command -- /bin/sh -c "ls -lah"
</code></pre>
</li>
<li><p>When command to be executed is passed in <code>--overrides</code> with <code>"command": [ "ls", "-lah" ]</code> > <strong>OK</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine \
--restart=Never \
--overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nodejs"
},
"spec": {
"volumes": [
{
"name": "host-volume",
"hostPath": {
"path": "/home/dferlay/Sources/df-sdc/web/themes/custom/"
}
}
],
"containers": [
{
"name": "nodejs",
"image": "busybox",
"command": [
"ls",
"-lah"
],
"workingDir": "/app",
"volumeMounts": [
{
"name": "host-volume",
"mountPath": "/app"
}
],
"terminationMessagePolicy": "FallbackToLogsOnError",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"securityContext": {
"runAsUser": 1000,
"runAsGroup": 1000
}
}
}
' \
--quiet -i --rm
</code></pre>
</li>
<li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c "ls -lah"</code> and <code>--overrides</code> is used for something else (volume for instance) > <strong>KO</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine --restart=Never \
--overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nodejs"
},
"spec": {
"volumes": [
{
"name": "host-volume",
"hostPath": {
"path": "/home/dferlay/Sources/df-sdc/web/themes/custom/"
}
}
],
"containers": [
{
"name": "nodejs",
"image": "busybox",
"workingDir": "/app",
"volumeMounts": [
{
"name": "host-volume",
"mountPath": "/app"
}
],
"terminationMessagePolicy": "FallbackToLogsOnError",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"securityContext": {
"runAsUser": 1000,
"runAsGroup": 1000
}
}
}
' \
--quiet -i --rm --command -- /bin/sh -c "ls -lah"
</code></pre>
</li>
</ul>
<p>So it looks like using <code>--overrides</code> prevents <code>--command</code> from being used.</p>
<p>However, I specifically need <code>--command</code> to bypass the array format expected by <code>--overrides</code> (i.e. <code>"command": [ "ls", "-lah" ]</code>), because in my use case the command is a placeholder and cannot be known in advance.</p>
<ul>
<li>How can I do that? Is there something I'm missing?</li>
</ul>
<p>FYI: <code>kubectl version=v1.23.1+k3s2</code></p>
| David | <p>You can bypass the array format by using the <code>args</code> field:</p>
<pre><code>"command": [
"sh",
"-c"
],
"args": [ "pwd && id && node YOUR_COMMAND" ]
</code></pre>
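<p>Sticking with the container from the question's overrides, the relevant fragment could then be sketched like this, with the placeholder command injected as a single shell string via <code>args</code> (the <code>ls -lah</code> here is just an example value):</p>
<pre><code>"containers": [
  {
    "name": "nodejs",
    "image": "busybox",
    "command": ["/bin/sh", "-c"],
    "args": ["ls -lah"],
    "workingDir": "/app"
  }
]
</code></pre>
<p>That keeps the dynamic part a single string inside the <code>--overrides</code> JSON, so the separate <code>--command</code> flag is no longer needed.</p>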
| Anton Stukanov |
<p>What is Minimum Viable Pod (MVP) in kubernetes?</p>
<p>I've tried to google it but nothing useful came up...</p>
<p>I heard about the MVP concept when I saw <a href="https://github.com/BretFisher/kubernetes-mastery/blob/f81ab571d61bef2b4d383f121d6c394a217d4176/k8s/nginx-1-without-volume.yaml" rel="nofollow noreferrer">this</a> yaml file and couldn't figure out what MVP means and why this pod is an MVP!</p>
| Ali | <p>The simplest possible manifest that you need to write in order to run a pod.</p>
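<p>For illustration, such a minimal manifest is usually just a name plus one container with a name and an image, something like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mvp-pod
spec:
  containers:
  - name: app
    image: nginx
</code></pre>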
| gohm'c |
<p>I am trying to start a deployment but I am getting this error</p>
<pre><code>error: error validating "httpd-basic-deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers): invalid type for io.k8s.api.core.v1.PodSpec.containers: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>of the below pod definition file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ebay-app
spec:
selector:
matchLabels:
environment: dev
app: ebay
replicas: 1
template:
metadata:
labels:
environment: dev
app: ebay
spec:
volumes:
- name: volume
hostPath:
path: /mnt/data
containers:
name: container1-nginx
image: nginx
volumeMounts:
name: volume
mountPath: /var/nginx-data
name: container2-tomcat
image: tomcat
nodeSelector:
boardType: x86vm
</code></pre>
<p>I tried listing the containers again:</p>
<pre><code> volumes:
- name: volume
hostPath:
path: /mnt/data
containers:
- name: container1-nginx
image: nginx
volumeMounts:
name: volume
mountPath: /var/nginx-data
- name: container2-tomcat
image: tomcat
nodeSelector:
boardType: x86vm
</code></pre>
<p>that results in different error</p>
<pre><code>error: error validating "httpd-basic-deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts): invalid type for io.k8s.api.core.v1.Container.volumeMounts: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>What am I doing wrong?</p>
| Ciasto piekarz | <p>Each entry under <code>volumeMounts</code> should also start with <code>-</code>; it indicates the start of an array item. Change it as shown below.</p>
<pre><code>volumeMounts:
- name: volume
mountPath: /var/nginx-data
</code></pre>
<p>Have a look at <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="noreferrer">this example yaml</a> to create pod that has two containers and shares same volume. In this example, its clear where to use <code>-</code> symbol and where not.</p>
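<p>Applied to your manifest, the pod template <code>spec</code> could look roughly like this (a sketch based on your snippet; note that <code>nodeSelector</code> belongs directly under <code>spec</code>, at the same level as <code>containers</code>):</p>
<pre><code>    spec:
      volumes:
      - name: volume
        hostPath:
          path: /mnt/data
      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: volume
          mountPath: /var/nginx-data
      - name: container2-tomcat
        image: tomcat
      nodeSelector:
        boardType: x86vm
</code></pre>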
| HarishVijayamohan |
<p>I built a ceph cluster with kubernetes and it created an OSD block on the <code>sdb</code> disk.
I deleted the ceph cluster and cleaned up all the kubernetes instances that were created by the ceph cluster, but it didn't delete the OSD block which is mounted on sdb.</p>
<p><a href="https://i.stack.imgur.com/9I1NB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9I1NB.png" alt="enter image description here" /></a></p>
<p>I am a beginner in kubernetes. How can I remove the OSD block from <code>sdb</code>?
And why does the OSD block take up all the disk space?</p>
| Jack Xu | <p>I found a way to remove the OSD block from the disk on Ubuntu 18.04:</p>
<p>Use this command to show the logical volume information:</p>
<p><code>$ sudo lvm lvdisplay</code></p>
<p>Then you will get the log like this:</p>
<p><a href="https://i.stack.imgur.com/wWFal.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wWFal.png" alt="enter image description here" /></a></p>
<p>Then execute this command to remove the OSD block volume.</p>
<p><code>$ sudo lvm lvremove <LV Path></code></p>
<p>Check if we have removed the volume successfully.</p>
<p><code>$ lsblk</code></p>
| Jack Xu |
<p>K8s network policies allow specifying CIDRs, but I'd like to specify DNS name.</p>
<p>On a high level I'd see it working the following way:</p>
<ul>
<li>There's a whitelist of allowed hosts</li>
<li>k8s intercepts IP resolution requests and checks whether host is whitelisted</li>
<li>if yes, resolved IPs are temporarily added to network policy thus allowing for egress traffic</li>
</ul>
<p>Is there any way to achieve this functionality?</p>
| Pavel Voronin | <p>vpc-cni does not implement k8s network policies. You need to replace vpc-cni with one of the EKS-compatible CNIs of your choice <a href="https://docs.aws.amazon.com/eks/latest/userguide/alternate-cni-plugins.html" rel="nofollow noreferrer">here</a> that supports using FQDNs in the policy. Note that an upgrade may be required (e.g. to Calico Enterprise) to get this feature.</p>
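<p>As one concrete illustration, Cilium (one of the alternate CNIs on that list) expresses this kind of whitelist with <code>toFQDNs</code> egress rules. This is a rough sketch only; the labels and domain are placeholders, and the extra DNS rule is needed so the agent can observe lookups and learn the resolved IPs:</p>
<pre><code>apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-whitelisted-fqdn
spec:
  endpointSelector:
    matchLabels:
      app: my-app                      # placeholder: pods this policy applies to
  egress:
  - toFQDNs:
    - matchName: "api.example.com"     # placeholder: whitelisted host
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
</code></pre>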
| gohm'c |
<p>When describing a node, there are history conditions that show up.</p>
<pre><code>Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 10 Aug 2021 10:55:23 +0700 Tue, 10 Aug 2021 10:55:23 +0700 CalicoIsUp Calico is running on this node
MemoryPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 16 Aug 2021 12:02:18 +0700 Mon, 16 Aug 2021 11:54:02 +0700 KubeletNotReady PLEG is not healthy: pleg was last seen active 11m17.462332922s ago; threshold is 3m0s
</code></pre>
<p>I have 2 questions:</p>
<ol>
<li>I think those conditions only show the latest status. How can I access the full history of the previous conditions?</li>
<li>Can you suggest a tool that converts node conditions into something like pod events for centralized logging?</li>
</ol>
| Đinh Anh Huy | <p>You're right, the <code>kubectl describe node <NODE_NAME></code> command shows the current condition status (<code>False</code>/<code>True</code>).</p>
<p>You can monitor Nodes events using the following command:</p>
<pre><code># kubectl get events --watch --field-selector involvedObject.kind=Node
LAST SEEN TYPE REASON OBJECT MESSAGE
3m50s Warning EvictionThresholdMet node/kworker Attempting to reclaim inodes
44m Normal NodeHasDiskPressure node/kworker Node kworker status is now: NodeHasDiskPressure
</code></pre>
<p>To view only status related events, you can use <code>grep</code> with the previous command:</p>
<pre><code># kubectl get events --watch --field-selector involvedObject.kind=Node | grep "status is now"
44m Normal NodeHasDiskPressure node/kworker Node kworker status is now: NodeHasDiskPressure
</code></pre>
<p>By default, these events are retained for <a href="https://github.com/kubernetes/kubernetes/blob/da53a247633cd91bd8e9818574279f3b04aed6a5/cmd/kube-apiserver/app/options/options.go#L71-L72" rel="noreferrer">1 hour</a>. However, you can run the <code>kubectl get events --watch --field-selector involvedObject.kind=Node</code> command from within a Pod and collect the output from that command using a log aggregation system like <a href="https://grafana.com/oss/loki/" rel="noreferrer">Loki</a>. I've described this approach with a detailed explanation <a href="https://stackoverflow.com/a/68212477/14801225">here</a>.</p>
| matt_j |
<p>How can I clean up the failed and completed pods created by a Kubernetes Job automatically, without using a CronJob? I want to keep only the last pod created by the Job.</p>
<p>How can we accomplish that?</p>
| cloudbud | <p><code>...clean up the failed and completed pods created by kubernetes job automatically without using cronjob</code></p>
<p>If you set <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">ttlSecondsAfterFinished</a> to the same period as the Job schedule, you should see only the last pod until the next Job starts. You can prolong the duration to keep more pods in the system this way rather than waiting until they are explicitly deleted.</p>
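<p>For example, a Job spec with this field could look like the sketch below (name, image, and the TTL value are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-example
spec:
  ttlSecondsAfterFinished: 3600   # finished Job and its pods are removed ~1 hour after completion
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: busybox
        command: ["sh", "-c", "echo done"]
</code></pre>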
| gohm'c |
<p>I have a mysqldb pod and need to add a log entry when we run the script restorebackup.sh. The script is inside the pod and I need this log in the general logs of the pod (to access it with <code>kubectl logs services/mysqldb</code>).</p>
<p>Is there any way to do this?</p>
| Vitor Estevam | <p>Generally <code>kubectl logs</code> shows the first process's stdout (pid=1),
so you could try writing your logs to /proc/1/fd/1 in your pod.</p>
<p>An example command in pod:</p>
<pre><code>echo hello >> /proc/1/fd/1
</code></pre>
<p>Then you will able to see this <code>hello</code> by <code>kubectl logs</code>.</p>
<p>For your script <code>restorebackup.sh</code>, maybe you could try <code>sh restorebackup.sh >> /proc/1/fd/1</code> to redirect all of its output.</p>
| Ji Bin |
<p>I am trying to redeploy the exact same existing image, but after changing a secret in the Azure Vault. Since it is the same image, <code>kubectl apply</code> doesn't deploy it. I tried to force the deploy by adding a <code>--force=true</code> option. Now the deploy took place and the new secret value is visible in the config map in the dashboard, but not in the environment inside the API container (checked from a <code>kubectl exec</code> console prompt).</p>
<p>Below is one of the 3 deploy manifests (YAML files) for the service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tube-api-deployment
namespace: tube
spec:
selector:
matchLabels:
app: tube-api-app
replicas: 3
template:
metadata:
labels:
app: tube-api-app
spec:
containers:
- name: tube-api
image: ReplaceImageName
ports:
- name: tube-api
containerPort: 80
envFrom:
- configMapRef:
name: tube-config-map
imagePullSecrets:
- name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
name: api-service
namespace: tube
spec:
ports:
- name: api-k8s-port
protocol: TCP
port: 8082
targetPort: 3000
selector:
app: tube-api-app
</code></pre>
| MCC21 | <p>I think this happens because, when we update a ConfigMap, only the files in volumes referencing it are updated; environment variables injected via <code>envFrom</code> are only read when the container starts. It’s then up to the pod container process to detect that the files have changed and reload them. Currently, there is no built-in way to signal an application when a new version of a ConfigMap is deployed. It is up to the application (or some helper script) to watch the config files for changes and reload them.</p>
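<p>If the goal is simply to make the pods pick up the new ConfigMap values, a common workaround is to trigger a new rollout after updating it, for example with <code>kubectl rollout restart deployment/tube-api-deployment -n tube</code>, so the containers are recreated with the fresh environment.</p>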
| MCC21 |
<p>I am trying to initialize my container but I keep on getting <code>directory or file does not exist</code> error on the following script. I essentially need to get config files from repo into the folder.</p>
<p>I am not looking for alternative solutions by placing the exact file, I need this to be a solution for an arbitrary number of files.</p>
<p>Looking into the alpine docker, the <code>/usr/share</code> folder should exist.</p>
<p>This is the <code>initContainer</code> used:</p>
<pre class="lang-yaml prettyprint-override"><code> initContainers:
- name: init-config
image: alpine/git
command:
- "git clone https://github.com/coolacid/docker-misp.git /usr/repo && cp -a /usr/repo/docker-misp/server-configs/ /usr/share"
volumeMounts:
- mountPath: /usr/share
name: misp-app-config
</code></pre>
<p>How do I properly move the files into a volume?</p>
<p>Edit 1: I am using the <code>alpine/git</code> image.</p>
| Zerg Overmind | <p>First of all, I recommend using:</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["git clone https://github.com/coolacid/docker-misp.git /usr/repo && cp -a /usr/repo/server-configs/ /usr/share"]
</code></pre>
<p>instead of:</p>
<pre><code>command:
- "git clone https://github.com/coolacid/docker-misp.git /usr/repo && cp -a /usr/repo/docker-misp/server-configs/ /usr/share"
</code></pre>
<p>Additionally, you shouldn't specify the <code>docker-misp</code> directory as all the contents of the <code>docker-misp.git</code> repository have been cloned to <code>/usr/repo</code>:</p>
<pre><code># git clone https://github.com/coolacid/docker-misp.git /usr/repo
Cloning into '/usr/repo'...
remote: Enumerating objects: 905, done.
remote: Counting objects: 100% (160/160), done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 905 (delta 144), reused 136 (delta 136), pack-reused 745
Receiving objects: 100% (905/905), 152.64 KiB | 2.50 MiB/s, done.
Resolving deltas: 100% (444/444), done.
# ls /usr/repo
LICENSE README.md build-docker-compose.yml docker-compose.yml examples modules server server-configs
</code></pre>
<hr />
<p>I've prepared an example to illustrate how it works:<br />
<strong>NOTE:</strong> I used <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> but you probably want to use a different <a href="https://kubernetes.io/docs/concepts/storage/volumes/#volume-types" rel="nofollow noreferrer">type of volumes</a>:</p>
<pre><code>$ cat app-1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: app-1
name: app-1
spec:
initContainers:
- name: init-config
image: alpine/git
command: ["/bin/sh","-c"]
args: ["git clone https://github.com/coolacid/docker-misp.git /usr/repo && cp -a /usr/repo/server-configs/ /usr/share"]
volumeMounts:
- mountPath: /usr/share
name: misp-app-config
containers:
- image: nginx
name: web-1
volumeMounts:
- mountPath: "/work-dir"
name: misp-app-config
volumes:
- name: misp-app-config
emptyDir: {}
$ kubectl apply -f app-1.yml
pod/app-1 created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-1 1/1 Running 0 11s
</code></pre>
<p>We can check if the <code>server-configs</code> directory has been copied to the <code>web-1</code> Pod:</p>
<pre><code>$ kubectl exec -it app-1 -- sh
Defaulted container "web-1" out of: web-1, init-config (init)
# ls /work-dir
server-configs
# ls /work-dir/server-configs
email.php
</code></pre>
<p>As you can see in the example above, everything works as expected.</p>
| matt_j |
<p>I'm very new to k8s and the related stuff, so this may be a stupid question: How to change the pod name?</p>
<p>I am aware the pod name seems to be set in the helm file; in my values.yaml I have this:</p>
<pre><code>...
hosts:
- host: staging.application.com
paths:
...
- fullName: application
svcPort: 80
path: /*
...
</code></pre>
<p>Since the application is running in both the prod and staging environments, and the pod name is just something like <code>application-695496ec7d-94ct9</code>, I can't tell which pod is for prod or staging and can't tell whether a request comes from prod or not. So I changed it to:</p>
<pre><code>hosts:
- host: staging.application.com
paths:
...
- fullName: application-staging
svcPort: 80
path: /*
</code></pre>
<p>I deployed it to staging, the pod was updated/recreated automatically, but the pod name still remains the same. I was confused about that, and I don't know what is missing. I'm not sure if it is related to the <code>fullnameOverride</code>, but it's empty so it should be fine.</p>
| Meilan | <p><code>...the pod name still remains the same</code></p>
<p>The code snippet in your question is likely the helm values for the Ingress. In this case it is not related to the Deployment or the Pod.</p>
<p>Look into the helm template that defines the Deployment spec for the pod, search for the <code>name</code> field and see which helm value is assigned to it:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox # <-- change this & you will see the pod name change along with it. The helm syntax surrounding this field will tell you how the name is constructed/assigned
labels:
app: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox
imagePullPolicy: IfNotPresent
command: ["ash","-c","sleep 3600"]
</code></pre>
<p>Save the spec and apply it, then check with <code>kubectl get pods --selector app=busybox</code>. You should see 1 pod whose name has the <code>busybox</code> prefix. Now if you open the file, change the name to <code>custom</code>, re-apply and get again, you will see 2 pods with <strong>different</strong> name prefixes. Clean up with <code>kubectl delete deployment busybox custom</code>.</p>
<p>This example shows how the name of the Deployment is used for pod(s) underneath. You can paste your helm template surrounding the name field to your question for further examination if you like.</p>
| gohm'c |
<p>From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in Google Cloud where I can get that information? Or a kubectl command to run? It happens every couple of months or so, maybe less frequently than that.</p>
| Jason M | <p>It's also a good thing to check your cluster and node-pool operations.</p>
<ol>
<li>Check the cluster operation in cloud shell and run the command:</li>
</ol>
<pre><code>gcloud container operations list
</code></pre>
<ol start="2">
<li>Check the age of the nodes with the command</li>
</ol>
<pre><code>kubectl get nodes
</code></pre>
<ol start="2">
<li>Check and analyze your deployment on how it reacts to operations such as cluster upgrade, node-pool upgrade & node-pool auto-repair. You can check the cloud logging if your cluster upgrade or node-pool upgrades using queries below:</li>
</ol>
<p>Please note you have to add your cluster and node-pool name in the queries.</p>
<p>Control plane (master) upgraded:</p>
<pre><code>resource.type="gke_cluster"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal")
(protoPayload.metadata.operationType="UPGRADE_MASTER"
OR protoPayload.response.operationType="UPGRADE_MASTER")
resource.labels.cluster_name=""
</code></pre>
<p>Node-pool upgraded</p>
<pre><code>resource.type="gke_nodepool"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal")
protoPayload.metadata.operationType="UPGRADE_NODES"
resource.labels.cluster_name=""
resource.labels.nodepool_name=""
</code></pre>
| Reid123 |
<p>I have an symfony application and three kubernetes clusters : dev, staging prod.</p>
<p>I'm using env files to set the environment. I build docker images with the right .env and deploy through kubectl. But as a matter of fact, the value of APP_ENV isn't taken into account, even though the other params are (the database connection for example).</p>
<p>My staging env is set to "test" but composer still installs dev dependencies and logs go to /var/log/dev.log. In these log lines I also have an app_environment value which should be set to "test" according to the .env file. Still I find "local" instead.</p>
<p>What am I doing wrong ?</p>
<p>My staging .env file :</p>
<pre><code># * .env contains default values for the environment variables needed by the app
# * .env.local uncommitted file with local overrides
# * .env.$APP_ENV committed environment-specific defaults
# * .env.$APP_ENV.local uncommitted environment-specific overrides
###> symfony/framework-bundle ###
APP_ENV=test
APP_SECRET=XXXXXXX
#TRUSTED_PROXIES=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
#TRUSTED_HOSTS='^(localhost|example\.com)$'
###< symfony/framework-bundle ###
###> doctrine/doctrine-bundle ###
# Format described at https://www.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/configuration.html#connecting-using-a-url
# For an SQLite database, use: "sqlite:///%kernel.project_dir%/var/data.db"
# For a PostgreSQL database, use: "postgresql://db_user:[email protected]:5432/db_name?serverVersion=11&charset=utf8"
# IMPORTANT: You MUST configure your server version, either here or in config/packages/doctrine.yaml
DATABASE_URL=postgresql://root:root@postgres:5432/dbb
###< doctrine/doctrine-bundle ###
###> nelmio/cors-bundle ###
#CORS_ALLOW_ORIGIN=^https?://(localhost|127\.0\.0\.1)(:[0-9]+)?$
CORS_ALLOW_ORIGIN=^https://my-domain.com
###< nelmio/cors-bundle ###
###> lexik/jwt-authentication-bundle ###
JWT_PUBLIC_KEY=%kernel.project_dir%/config/jwt/public.pem
###< lexik/jwt-authentication-bundle ###
###> nzo/elk-bundle ###
ELK_APP_NAME=my-app
ELK_APP_ENVIRONMENT=test
###< nzo/elk-bundle ###
</code></pre>
<p>Docker file :</p>
<pre><code>FROM php:7.4-fpm
WORKDIR /app
# Install selected extensions (git, git-flow and other stuff)
RUN apt-get update \
&& apt-get install -y --no-install-recommends zlib1g-dev libpq-dev git git-flow libicu-dev libxml2-dev libzip-dev\
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl \
&& docker-php-ext-configure pgsql -with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo pdo_pgsql pgsql \
&& docker-php-ext-install zip xml \
&& apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
# Install composer
RUN curl -sS https://getcomposer.org/installer | php && mv composer.phar /usr/local/bin/composer \
&& apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
COPY ./my-project/ /usr/src/my-project
COPY docker-dist/back/config/php/php.ini /usr/local/etc/php/
COPY docker-entrypoint.sh /entrypoint.sh
RUN composer install --working-dir=/usr/src/my-project
# WORKDIR is /var/www/html (inherited via "FROM php")
# "/entrypoint.sh" will populate it at container startup from /usr/src/my-project
VOLUME /var/www/app
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
</code></pre>
| Brice Le Roux | <p>It appears the warmup wasn't working properly.</p>
<p>I had a closer look at my docker-entrypoint and managed to run composer install and cache clearing inside it.</p>
<p>Problem solved.</p>
<p>Thanks everyone.</p>
<p>entrypoint.sh :</p>
<pre><code>set -e
cd /var/www/app
tar cf - --one-file-system -C /usr/src/myapp . | tar xf -
rm -rf var/cache var/log
composer install
php bin/console cache:clear --env=dev
chown -R www-data:www-data .
exec "$@"```
</code></pre>
| Brice Le Roux |
<p>I'm trying to work with Istio from Go, and am using the Kubernetes and Istio go-client code.</p>
<p>The problem I'm having is that I can't specify <code>ObjectMeta</code> or <code>TypeMeta</code> in my Istio-<code>ServiceRole</code> object. I can only specify <code>rules</code>, which are inside the <code>spec</code>.</p>
<p>Below you can see what I got working: </p>
<pre class="lang-golang prettyprint-override"><code>import (
v1alpha1 "istio.io/api/rbac/v1alpha1"
)
func getDefaultServiceRole(app nais.Application) *v1alpha1.ServiceRole {
return &v1alpha1.ServiceRole{
Rules: []*v1alpha1.AccessRule{
{
Ports: []int32{2},
},
},
}
}
</code></pre>
<p>What I would like to do is have this code work:</p>
<pre class="lang-golang prettyprint-override"><code>func getDefaultServiceRole(app *nais.Application) *v1alpha1.ServiceRole {
return &v1alpha1.ServiceRole{
TypeMeta: metav1.TypeMeta{
Kind: "ServiceRole",
APIVersion: "v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: app.Name,
Namespace: app.Namespace,
},
Spec: v1alpha1.ServiceRole{
Rules: []*v1alpha1.AccessRule{
{
Ports: []int32{2},
},
},
},
    }
}
</code></pre>
<p>Can anyone point me in the right direction?</p>
| Kyrremann | <p>Istio now supports:</p>
<pre><code>import (
istiov1alpha3 "istio.io/api/networking/v1alpha3"
istiogov1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
)
VirtualService := istiogov1alpha3.VirtualService{
TypeMeta: metav1.TypeMeta{
Kind: "VirtualService",
APIVersion: "networking.istio.io/v1alpha3",
},
ObjectMeta: metav1.ObjectMeta{
Name: "my-name",
},
Spec: istiov1alpha3.VirtualService{},
}
</code></pre>
<p>Where <code>istiov1alpha3.VirtualService{}</code> is an istio object.</p>
| user13245577 |
<p>I have set up an EFK stack in my K8s cluster. Currently fluentd is <strong>scraping</strong> logs from all the containers.</p>
<p>I want it to only scrape logs from containers <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>.</p>
<p>If I had some prefix such as <code>A-app</code>, I could do something like the below.</p>
<pre><code>"fluentd-inputs.conf": "# HTTP input for the liveness and readiness probes
<source>
@type http
port 9880
</source>
# Get the logs from the containers running in the node
<source>
@type tail
path /var/log/containers/*-app.log // what can I put here for multiple different containers
# exclude Fluentd logs
exclude_path /var/log/containers/*fluentd*.log
pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
tag kubernetes.*
read_from_head true
<parse>
@type json
</parse>
</source>
# enrich with kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
</code></pre>
| confusedWarrior | <p>To scrape logs only from specific Pods, you can use:</p>
<pre><code>path /var/log/containers/POD_NAME_1*.log,/var/log/containers/POD_NAME_2*.log,.....,/var/log/containers/POD_NAME_N*.log
</code></pre>
<p>To scrape logs from specific containers in specific Pods, you can use:</p>
<pre><code>path /var/log/containers/POD_NAME_1*CONTAINER_NAME*.log,/var/log/containers/POD_NAME_2*CONTAINER_NAME*.log,.....,/var/log/containers/POD_NAME_N*CONTAINER_NAME*.log
</code></pre>
<hr />
<p>I've created a simple example to illustrate how it works.</p>
<p>To scrape logs from <code>web-1</code> container from <code>app-1</code> Pod and logs from all containers from <code>app-2</code> Pod, you can use:</p>
<pre><code>path /var/log/containers/app-1*web-1*.log,/var/log/containers/app-2*.log
$ kubectl logs -f fluentd-htwn5
...
2021-08-20 13:37:44 +0000 [info]: #0 starting fluentd worker pid=18 ppid=7 worker=0
2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-1_default_web-1-ae672aa1405b91701d130da34c54ab3106a8fc4901897ebbf574d03d5ca64eb8.log
2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-2-64c99b9f5b-tm6ck_default_nginx-cd1bd7617f04000a8dcfc1ccd01183eafbce9d0155578d8818b27427a4062968.log
2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-2-64c99b9f5b-tm6ck_default_frontend-1-e83acc9e7fc21d8e3c8a733e10063f44899f98078233b3238d6b3dc0903db560.log
2021-08-20 13:37:44 +0000 [info]: #0 fluentd worker is now running worker=0
...
</code></pre>
| matt_j |
<p>I am trying to deploy ArgoCD and applications located in subfolders through Terraform in an AKS cluster.</p>
<p>This is my Folder structure tree:</p>
<p><em>I'm using app of apps approach, so first I will deploy ArgoCD (this will manage itself as well) and later ArgoCD will let me SYNC the cluster-addons and application manually once installed.</em></p>
<pre><code>apps
cluster-addons
AKV2K8S
Cert-Manager
Ingress-nginx
application
application-A
argocd
override-values.yaml
Chart
</code></pre>
<p>When I run the command "helm install ..." manually in the AKS cluster, everything is installed fine.
ArgoCD is installed, and when I later access ArgoCD I see that the rest of the applications are missing (not yet synced) and I can sync them manually.</p>
<p><strong>However, if I want to install it through Terraform, only ArgoCD is installed and it looks like it does not "detect" the override_values.yaml file</strong>:</p>
<p>I mean, ArgoCD and the ArgoCD application set controller are installed in the cluster, but ArgoCD does not "detect" the values.yaml files that are customized for my AKS cluster. If I run "helm install" manually on the cluster everything works, but not through Terraform.</p>
<pre><code>resource "helm_release" "argocd_applicationset" {
name = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
chart = "argocd-applicationset"
namespace = "argocd"
version = "1.11.0"
}
resource "helm_release" "argocd" {
name = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
namespace = "argocd"
version = "3.33.6"
values = [
"${file("values.yaml")}"
  ]
}
</code></pre>
<p>values.yaml file is located in the folder where I have the TF code to install argocd and argocd applicationset.</p>
<p>I tried to change the name of the file "values.yaml" to "override_values.yaml" but I got the same issue.</p>
<p><strong>I have changed many things in the override_values.yaml file, so I cannot use "set" inside the TF code...</strong></p>
<p>Also, I tried adding:</p>
<pre><code> values = [
"${yamlencode(file("values.yaml"))}"
]
</code></pre>
<p>but I get this error in "apply" step in the pipeline:</p>
<pre><code>error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {} "argo-cd:\r\n ## ArgoCD configuration\r\n ## Ref: https://github.com/argoproj/argo-cd\r\n
</code></pre>
<p>Probably because it is not a JSON file? Does it make sense to convert this file into a JSON one?</p>
<p>Any idea if I can pass this override values yaml file through terraform?</p>
<p>If not, please may you post a clear/full example with mock variables on how to do that using Azure pipeline?</p>
<p>Thanks in advance!</p>
| X T | <p>The issue was with the values indentation in the TF code.</p>
<p>The issue was resolved when I fixed that:</p>
<pre><code>resource "helm_release" "argocd_applicationset" {
name = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
chart = "argocd-applicationset"
namespace = "argocd"
version = "1.11.0"
}
resource "helm_release" "argocd" {
name = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
namespace = "argocd"
version = "3.33.6"
  values = [file("values.yaml")]
}
</code></pre>
<p>It also works fine with quoting, i.e. <code>"${file("values.yaml")}"</code>.</p>
| X T |
<p>Currently, some of my pods are using ephemeral volumes (those created by Docker at the container level). I want some form of persistent data stored in specific directories on the host (like bind mounts in Docker) such that my pods can restart without losing data. Persistent Volume Claims seem to be the best way to do this. How should I go about transitioning my existing pods to use PVCs that copy across data from the existing ephemeral volumes?</p>
| ExplosiveFridge | <ol>
<li>On the host, create the directory to hold your data. Eg. <code>mkdir /local_data</code></li>
<li>Copy the data to the local directory. Eg. <code>kubectl cp <namespace>/<pod>:/path/in/the/container /local_data</code></li>
<li>Check and check again all your data is intact in <code>/local_data</code></li>
<li>Create a new pod with the following spec.</li>
</ol>
<p>Example:</p>
<pre><code>kind: Pod
...
spec:
...
  nodeName: <name> # <-- if you have multiple nodes, this ensures your pod always runs on the host that holds the data
  containers:
  - name: ...
    ...
    volumeMounts:
    - name: local-data
      mountPath: /path/in/the/container
    ...
  volumes:
  - name: local-data
    hostPath:
      path: /local_data
      type: Directory
</code></pre>
<p>Apply and check if your pod runs as expected</p>
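<p>If you would rather end up with an actual PVC (as in your question) instead of a bare hostPath, the same host directory can also be exposed as a local PersistentVolume and claimed. This is only a sketch; names and sizes are placeholders, and it assumes a no-provisioner StorageClass like the one shown:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-data-pv
spec:
  capacity:
    storage: 5Gi                     # placeholder size
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /local_data                # the directory prepared in step 1
  nodeAffinity:                      # pins the volume to the node that holds the data
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["<node-name>"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>The pod then mounts the claim (<code>persistentVolumeClaim: claimName: local-data-pvc</code>) instead of the hostPath volume shown above.</p>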
| gohm'c |
<p>As we know, by default when we create a Strimzi Kafka user, the user gets its own user.crt & user.key created as a Kubernetes secret, but I want to use my own user.crt & user.key. Is that feasible?</p>
<p>Rather than creating the user first and then replacing with our own keys, do we have an option to pass our own crt and key at user-creation time? Can we specify them somehow in the deployment file?</p>
<p>From official doc: I got this <a href="https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str" rel="nofollow noreferrer">https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str</a> but it's for <code>kind:Kafka</code> not for <code>kind:KafkaUser</code> as we know <code>kind:KafkaUser</code> is used for user creation.</p>
| vegeta | <p>Am answering my question myself!</p>
<p><strong>STEP1:</strong></p>
<pre><code>kubectl -n <namespace> create secret generic <ca-cert-secret> --from-file=ca.crt=<ca-cert-file>
</code></pre>
<p><em>Eg:</em></p>
<pre><code>kubectl -n kafka create secret generic custom-strimzi-user --from-file=ca.crt=ca-decoded.crt --from-file=user.crt=user-decoded.crt --from-file=user.key=user-decoded.key -o yaml
</code></pre>
<p><strong>STEP2:</strong> </p>
<pre><code>kubectl -n <namespace> label secret <ca-cert-secret> strimzi.io/kind=<Kafka or KafkaUser> strimzi.io/cluster=<my-cluster>
</code></pre>
<p><em>Eg:</em></p>
<pre><code>kubectl -n kafka label secret custom-strimzi-user strimzi.io/kind=KafkaUser strimzi.io/cluster=kafka
</code></pre>
<p><strong>STEP3</strong>: Now to Enable ACL & TLS for above created user: </p>
<p>Apply the officially provided Strimzi <strong>create user yaml</strong> deployment file (<strong>kind</strong>: <strong>KafkaUser</strong>) after replacing the user name with the one created above, then execute:</p>
<pre><code>kubectl apply -f kafka-create-user.yml
</code></pre>
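<p>For reference, the user-creation manifest referred to above is roughly of this shape. This is a sketch only; the exact apiVersion and ACL syntax depend on your Strimzi release, and the name must match the secret created in STEP1 & STEP2:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2   # use the API version matching your Strimzi release
kind: KafkaUser
metadata:
  name: custom-strimzi-user            # same name as the secret labelled above
  namespace: kafka
  labels:
    strimzi.io/cluster: kafka
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls: []                           # add your ACL rules here
</code></pre>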
<p><strong>Note:</strong> If we run <code>kubectl apply -f kafka-create-user.yml</code> before creating the custom user as in STEP1 & STEP2, then Strimzi creates a user with its own <code>user.crt</code> & <code>user.key</code>.</p>
<p>FYI, what I shared above is for a user's custom <strong>crt</strong> & <strong>key</strong>; for the operator cluster CA (crt & key) we have the official doc here: <a href="https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str" rel="nofollow noreferrer">https://strimzi.io/docs/master/#installing-your-own-ca-certificates-str</a></p>
<p>Regards,
Sudhir Tataraju</p>
| vegeta |
<p>I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?</p>
<p>I am basically trying to check which IP the request is sent from, any data sent and any other discerning information.</p>
<p>I tried using prometheus but have not had any luck so far.</p>
| Aditya | <p>You can enable <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">Auditing</a> on your cluster. For a specific resource, use <code>resourceNames</code> in the audit policy to specify the resource name.</p>
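<p>Audit events record, among other things, the <code>sourceIPs</code> of each request, and at the <code>RequestResponse</code> level also the request body. A minimal policy sketch (the resource names are placeholders) that you would pass to the API server via <code>--audit-policy-file</code>, together with <code>--audit-log-path</code> for the log destination:</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""                    # core API group
    resources: ["pods"]
    resourceNames: ["my-pod"]    # placeholder: the specific resource to track
- level: Metadata                # everything else: just who/what/from where
</code></pre>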
| gohm'c |
<p>I'm trying to inject my secrets from Google Secret Manager into Kubernetes Pod as environment variable.</p>
<p>I need put it as environment variable to my NodeJS application can read it.</p>
<p>I tried the solution from <a href="https://stackoverflow.com/questions/63923379/how-to-inject-secret-from-google-secret-manager-into-kubernetes-pod-as-environme">How to inject secret from Google Secret Manager into Kubernetes Pod as environment variable?</a> but it did not work for me.</p>
<p>I also tried to set up an init container, but it puts the secrets into the pod as files.</p>
<p>Any idea?</p>
<p>Thanks</p>
| Alejandro Sotillo | <p>Checkout <a href="https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp</a>. It's a CSI driver for mounting Secret Manager secrets.</p>
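<p>Note that the CSI driver mounts the secrets into the pod as files by default; to get them as environment variables you would typically combine it with the Secrets Store CSI driver's sync-as-Kubernetes-Secret feature and then reference the synced Secret from the pod with a regular <code>env</code>/<code>secretKeyRef</code> entry.</p>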
| Sandro B |
<p>I'm trying to give a group of users permission to scale a specific set of deployments in kubernetes 1.20</p>
<p>I've tried using the API reference doc here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#patch-scale-deployment-v1-apps" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#patch-scale-deployment-v1-apps</a>
to set resource names like so:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubeoperator-cr
rules:
... #irrelevant rules omitted
- apiGroups: ["apps"]
resources:
- /namespaces/my-namespace-name/deployments/my-deployment-name/scale
- deployments/my-deployment-name/scale
verbs:
- update
- patch
</code></pre>
<p>This doesn't work:</p>
<pre><code>$ kubectl scale deployments -n my-namespace-name my-deployment-name --replicas 3
Error from server (Forbidden): deployments.apps "my-deployment-name" is forbidden: User "kubeoperatorrole" cannot patch resource "deployments/scale" in API group "apps" in the namespace "my-namespace-name"
</code></pre>
<p>The only way I can get the scale command to work is to grant the permission for all deployments (which is <strong>not</strong> what I want) like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubeoperator-cr
rules:
... #irrelevant rules omitted
- apiGroups: ["apps"]
resources:
- deployments/scale
verbs:
- update
- patch
</code></pre>
<pre><code>$ kubectl scale deployments -n my-namespace-name my-deployment-name --replicas 3
deployment.apps/my-deployment-name scaled
</code></pre>
<p>What is the correct syntax for specifying a specific deployment resource by name, or is this not possible?
The deployments I'm targeting cannot be moved to an isolated namespace.</p>
| conmanworknor | <p>Try:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubeoperator-cr
rules:
- apiGroups: ["apps"]
resources:
- deployments/scale
resourceNames: ["my-deployment-name"] # <-- name of your deployment here
verbs:
- update
- patch
</code></pre>
| gohm'c |
<p>I want to get all events that occurred in a Kubernetes cluster into some Python dictionary, maybe using some API to extract data from events that occurred in the past. I found on the internet that it is possible by storing all the data of kube-watch on Prometheus and accessing it later. I am unable to figure out how to set it up and see all past pod events in Python. Any alternative solutions to access past events are also appreciated. Thanks!</p>
| Dragnoid99 | <p>I'll describe a solution that is not complicated and I think meets all your requirements.
There are tools such as <a href="https://github.com/heptiolabs/eventrouter" rel="nofollow noreferrer">Eventrouter</a> that take Kubernetes events and push them to a user specified sink. However, as you mentioned, you only need Pods events, so I suggest a slightly different approach.</p>
<p>In short, you can run the <code>kubectl get events --watch</code> command from within a Pod and collect the output from that command using a log aggregation system like <a href="https://grafana.com/oss/loki/" rel="nofollow noreferrer">Loki</a>.</p>
<p>Below, I will provide a detailed step-by-step explanation.</p>
<h3>1. Running kubectl command from within a Pod</h3>
<p>To display only Pod events, you can use:</p>
<pre><code>$ kubectl get events --watch --field-selector involvedObject.kind=Pod
</code></pre>
<p>We want to run this command from within a Pod. For security reasons, I've created a separate <code>events-collector</code> ServiceAccount with the <code>view</code> Role assigned and our Pod will run under this ServiceAccount.<br />
<strong>NOTE:</strong> I've created a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> instead of a single Pod.</p>
<pre><code>$ cat all-in-one.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: events-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: events-collector-binding
subjects:
- kind: ServiceAccount
name: events-collector
namespace: default
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: events-collector
name: events-collector
spec:
selector:
matchLabels:
app: events-collector
template:
metadata:
labels:
app: events-collector
spec:
serviceAccountName: events-collector
containers:
- image: bitnami/kubectl
name: test
command: ["kubectl"]
args: ["get","events", "--watch", "--field-selector", "involvedObject.kind=Pod"]
</code></pre>
<p>After applying the above manifest, the <code>event-collector</code> was created and collects Pod events as expected:</p>
<pre><code>$ kubectl apply -f all-in-one.yml
serviceaccount/events-collector created
clusterrolebinding.rbac.authorization.k8s.io/events-collector-binding created
deployment.apps/events-collector created
$ kubectl get deploy,pod | grep events-collector
deployment.apps/events-collector 1/1 1 1 14s
pod/events-collector-d98d6c5c-xrltj 1/1 Running 0 14s
$ kubectl logs -f events-collector-d98d6c5c-xrltj
LAST SEEN TYPE REASON OBJECT MESSAGE
77s Normal Scheduled pod/app-1-5d9ccdb595-m9d5n Successfully assigned default/app-1-5d9ccdb595-m9d5n to gke-cluster-2-default-pool-8505743b-brmx
76s Normal Pulling pod/app-1-5d9ccdb595-m9d5n Pulling image "nginx"
71s Normal Pulled pod/app-1-5d9ccdb595-m9d5n Successfully pulled image "nginx" in 4.727842954s
70s Normal Created pod/app-1-5d9ccdb595-m9d5n Created container nginx
70s Normal Started pod/app-1-5d9ccdb595-m9d5n Started container nginx
73s Normal Scheduled pod/app-2-7747dcb588-h8j4q Successfully assigned default/app-2-7747dcb588-h8j4q to gke-cluster-2-default-pool-8505743b-p7qt
72s Normal Pulling pod/app-2-7747dcb588-h8j4q Pulling image "nginx"
67s Normal Pulled pod/app-2-7747dcb588-h8j4q Successfully pulled image "nginx" in 4.476795932s
66s Normal Created pod/app-2-7747dcb588-h8j4q Created container nginx
66s Normal Started pod/app-2-7747dcb588-h8j4q Started container nginx
</code></pre>
<h3>2. Installing Loki</h3>
<p>You can install <a href="https://github.com/grafana/loki" rel="nofollow noreferrer">Loki</a> to store logs and process queries. Loki is like Prometheus, but for logs :). The easiest way to install Loki is to use the <a href="https://github.com/grafana/helm-charts/tree/main/charts/loki-stack" rel="nofollow noreferrer">grafana/loki-stack</a> Helm chart:</p>
<pre><code>$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
...
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install loki grafana/loki-stack
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 76s
loki-promtail-hm8kn 1/1 Running 0 76s
loki-promtail-nkv4p 1/1 Running 0 76s
loki-promtail-qfrcr 1/1 Running 0 76s
</code></pre>
<h3>3. Querying Loki with LogCLI</h3>
<p>You can use the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/" rel="nofollow noreferrer">LogCLI</a> tool to run LogQL queries against a Loki server. Detailed information on installing and using this tool can be found in the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/#installation" rel="nofollow noreferrer">LogCLI documentation</a>. I'll demonstrate how to install it on Linux:</p>
<pre><code>$ wget https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
$ unzip logcli-linux-amd64.zip
Archive: logcli-linux-amd64.zip
inflating: logcli-linux-amd64
$ mv logcli-linux-amd64 logcli
$ sudo cp logcli /bin/
$ whereis logcli
logcli: /bin/logcli
</code></pre>
<p>To query the Loki server from outside the Kubernetes cluster, you may need to expose it using the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource:</p>
<pre><code>$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: loki-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: loki
servicePort: 3100
path: /
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/loki-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
loki-ingress <none> * <PUBLIC_IP> 80 19s
</code></pre>
<p>Finally, I've created a simple python script that we can use to query the Loki server:<br />
<strong>NOTE:</strong> We need to set the <code>LOKI_ADDR</code> environment variable as described in the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/#example" rel="nofollow noreferrer">documentation</a>. You need to replace the <code><PUBLIC_IP></code> with your Ingress IP.</p>
<pre><code>$ cat query_loki.py
#!/usr/bin/env python3
import os
os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"
os.system("logcli query '{app=\"events-collector\"}'")
$ ./query_loki.py
...
2021-07-02T10:33:01Z {} 2021-07-02T10:33:01.626763464Z stdout F 0s Normal Pulling pod/backend-app-5d99cf4b-c9km4 Pulling image "nginx"
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.836755152Z stdout F 0s Normal Scheduled pod/backend-app-5d99cf4b-c9km4 Successfully assigned default/backend-app-5d99cf4b-c9km4 to gke-cluster-1-default-pool-328bd2b1-288w
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.649954267Z stdout F 0s Normal Started pod/web-app-6fcf9bb7b8-jbrr9 Started container nginx2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.54819851Z stdout F 0s Normal Created pod/web-app-6fcf9bb7b8-jbrr9 Created container nginx
2021-07-02T10:32:59Z {} 2021-07-02T10:32:59.414571562Z stdout F 0s Normal Pulled pod/web-app-6fcf9bb7b8-jbrr9 Successfully pulled image "nginx" in 4.228468876s
...
</code></pre>
| matt_j |
<p>In Kubernetes, environment variables from a ConfigMap do not change the max_connections property in a PostgreSQL pod. How do you change the Postgres max_connections configuration via environment variables in Kubernetes?</p>
<p>I've tried the following parameters to configure Postgres.</p>
<p>The problem is, I can use the DB, USER and PASSWORD parameters and the values are set as expected. But I need to change the max_connections configuration. I did some research and it looks like PGOPTIONS is the right choice for sending configuration changes. Even though I tried PGOPTIONS and other variations, there is no impact on the max_connections value. When I connect to postgresql and execute a SHOW MAX_CONNECTIONS query, it shows 100 even though I specify 1000 in the environment configuration values.</p>
<p>I am using Kubernetes 1.14 in digitalocean. </p>
<h3>ConfigMap</h3>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config-demo
labels:
app: postgres
data:
POSTGRES_DB: demopostgresdb
POSTGRES_USER: demopostgresadmin
POSTGRES_PASSWORD: demopostgrespwd
PGOPTIONS: "-c max_connections=1000 -c shared_buffers=1024MB"
POSTGRES_OPTIONS: "-c max_connections=1000 -c shared_buffers=1024MB"
PG_OPTIONS: "-c max_connections=1000 -c shared_buffers=1024MB"
MAX_CONNECTIONS: "1000"
</code></pre>
<h3>Statefulset</h3>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: pgdemo
spec:
serviceName: "postgres"
replicas: 2
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:latest
envFrom:
- configMapRef:
name: postgres-config-demo
env:
- name: PGOPTIONS
value: "-c max_connections=1000 -c shared_buffers=1024MB"
- name: "MAX_CONNECTIONS"
value: "1000"
ports:
- containerPort: 5432
name: postgredb
volumeMounts:
- name: postgredb
mountPath: /var/lib/postgresql/data
subPath: postgres
volumeClaimTemplates:
- metadata:
name: postgredb
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: do-block-storage
resources:
requests:
storage: 3Gi
</code></pre>
<p>I am expecting the max_connections value to be 1000,
but it stays at the default value of 100.</p>
<p>There is no error in any log.</p>
| okproject | <p>The PostgreSQL docker image only supports a few env parameters: <code>POSTGRES_PASSWORD</code>, <code>POSTGRES_USER</code>, <code>POSTGRES_DB</code>, <code>POSTGRES_INITDB_ARGS</code>, <code>POSTGRES_INITDB_WALDIR</code>, <code>POSTGRES_HOST_AUTH_METHOD</code>, <code>PGDATA</code>.</p>
<p>If you want to modify other database configuration parameters, you can use <code>args</code> in kubernetes.</p>
<p><strong>Deployment yaml file fraction</strong></p>
<pre><code> containers:
- args:
- -c
- max_connections=1000
- -c
- shared_buffers=1024MB
envFrom:
- configMapRef:
name: postgres-config
image: postgres
imagePullPolicy: IfNotPresent
name: postgres
</code></pre>
| Lance76 |
<p>Some context:</p>
<p>I have little experience with CI/CD and have managed a fast-paced, growing application since it first saw the light of day. It is composed of several microservices across different environments. Devs are constantly pushing new code to DEV, but they frequently forget to send new values from their local <code>.env</code> over to the OpenShift cloud, regardless of whether this is a brand new environment or an existing one.</p>
<p>The outcome? Services that fail because they lack to have their secrets updated.</p>
<p>I understand the underlying issue is lack of communication between both us DevOps staff and devs themselves. But I've been trying to figure out some sort of process that would make sure we are not missing anything. Maybe something like a "before takeoff checklist" (yes, like the ones pilots do in a real flight preparation): if the chack fails then the aircraft is not ready to takeoff.</p>
<p>So the question is for everyone out there that practices DevOps. How do you guys deal with this?</p>
<p>Does anyone automates this within Openshift/kubernetes, for example? From your perspective and experience, would you suggest any tools for that, or simply enforce communication?</p>
| the_piper | <p>Guess no checklist or communication would work for team that <code>...frequently forget about sending new values from their local .env ove...</code>, which you must have already done.</p>
<p>Your pipeline step should check for service availability before proceeding to the next step, e.g. does the service have an endpoint registered within an acceptable time; no endpoint means the backing pod(s) did not enter the ready state as expected. In that case, roll back, send a notification to the team responsible for the service/application, and exit cleanly.</p>
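<p>In practice that check can be as simple as gating the stage on <code>kubectl rollout status deployment/<name> --timeout=120s</code> and/or verifying that <code>kubectl get endpoints <service></code> returns addresses, and failing the stage (with a rollback) when it times out.</p>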
<p>There's no fixed formula for CI/CD, especially for human error. Checks & balances at every step are the least you can do to trigger early warnings and avoid a disastrous deployment.</p>
| gohm'c |
<p>Hey, it's been quite a few days struggling to get the sample Bookinfo app running. I am new to Istio and trying to understand it. I followed this <a href="https://www.docker.com/blog/getting-started-with-istio-using-docker-desktop/" rel="nofollow noreferrer">demo</a> of another way of setting up Bookinfo. I am using minikube in a VirtualBox machine with docker as the driver. I set MetalLB as the loadBalancer for the <strong>ingress-gateway</strong>; here is the configmap I used for MetalLB:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: custom-ip-space
protocol: layer2
addresses:
- 192.168.49.2/28
</code></pre>
<p>the <code>192.168.49.2</code> is the result of the command: <code>minikube ip</code></p>
<p>The ingressgateway yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- route:
- destination:
host: productpage
port:
number: 9080
</code></pre>
<p>and the output command of <code>kubectl get svc -n istio-system</code>:</p>
<pre><code>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.111.105.179 <none> 3000/TCP 34m
istio-citadel ClusterIP 10.100.38.218 <none> 8060/TCP,15014/TCP 34m
istio-egressgateway ClusterIP 10.101.66.207 <none> 80/TCP,443/TCP,15443/TCP 34m
istio-galley ClusterIP 10.103.112.155 <none> 443/TCP,15014/TCP,9901/TCP 34m
istio-ingressgateway LoadBalancer 10.97.23.39 192.168.49.0 15020:32717/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32199/TCP,15030:30010/TCP,15031:30189/TCP,15032:31134/TCP,15443:30748/TCP 34m
istio-pilot ClusterIP 10.108.133.31 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 34m
istio-policy ClusterIP 10.100.74.207 <none> 9091/TCP,15004/TCP,15014/TCP 34m
istio-sidecar-injector ClusterIP 10.97.224.99 <none> 443/TCP,15014/TCP 34m
istio-telemetry ClusterIP 10.101.165.139 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 34m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 34m
jaeger-collector ClusterIP 10.111.188.83 <none> 14267/TCP,14268/TCP,14250/TCP 34m
jaeger-query ClusterIP 10.103.148.144 <none> 16686/TCP 34m
kiali ClusterIP 10.111.57.222 <none> 20001/TCP 34m
prometheus ClusterIP 10.107.204.95 <none> 9090/TCP 34m
tracing ClusterIP 10.104.88.173 <none> 80/TCP 34m
zipkin ClusterIP 10.111.162.93 <none> 9411/TCP 34m
</code></pre>
<p>and when trying to curl <code>192.168.49.0:80/productpage</code> I am getting :</p>
<pre><code>* Trying 192.168.49.0...
* TCP_NODELAY set
* Immediate connect fail for 192.168.49.0: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
myhost@k8s:~$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
</code></pre>
<p>and before setting up the metalLB, I was getting connection refused!</p>
<p>Any solution for this please? It's been 5 days struggling to fix it.</p>
<p>I followed the steps <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">here</a> and all steps are ok!</p>
| marOne | <p>In my opinion, this is a problem with the <code>MetalLB</code> configuration.</p>
<p>You are trying to give <code>MetalLB</code> control over IPs from the <code>192.168.49.2/28</code> network.<br />
We can calculate for <code>192.168.49.2/28</code> network: <strong>HostMin</strong>=<code>192.168.49.1</code> and <strong>HostMax</strong>=<code>192.168.49.14</code>.</p>
<p>As we can see, your <code>istio-ingressgateway</code> LoadBalancer Service is assigned the address <code>192.168.49.0</code> and I think that is the cause of the problem.</p>
<p>I recommend changing from <code>192.168.49.2/28</code> to a range, such as <code>192.168.49.10-192.168.49.20</code>.</p>
<hr />
<p>I've created an example to illustrate you how your configuration can be changed.</p>
<p>As you can see, at the beginning I had the configuration exactly like you (I also couldn't connect to the server using the <code>curl</code> command):</p>
<pre><code>$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.109.75.19 192.168.49.0
$ curl 192.168.49.0:80/productpage
curl: (7) Couldn't connect to server
</code></pre>
<p>First, I modified the <code>config</code> <code>ConfigMap</code>:<br />
<strong>NOTE:</strong> I changed <code>192.168.49.2/28</code> to <code>192.168.49.10-192.168.49.20</code></p>
<pre><code>$ kubectl edit cm config -n metallb-system
</code></pre>
<p>Then I restarted all the controller and speaker <code>Pods</code> to force <code>MetalLB</code> to use new config (see: <a href="https://github.com/metallb/metallb/issues/348#issuecomment-442218138" rel="nofollow noreferrer">Metallb ConfigMap update</a>).</p>
<pre><code>$ kubectl delete pod -n metallb-system --all
pod "controller-65db86ddc6-gf49h" deleted
pod "speaker-7l66v" deleted
</code></pre>
<p>After some time, we should see a new <code>EXTERNAL-IP</code> assigned to the <code>istio-ingressgateway</code> <code>Service</code>:</p>
<pre><code>kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP AGE
istio-ingressgateway LoadBalancer 10.106.170.227 192.168.49.10
</code></pre>
<p>Finally, we can check if it works as expected:</p>
<pre><code>$ curl 192.168.49.10:80/productpage
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
...
</code></pre>
| matt_j |
<p>I have a local kubernetes cluster (minikube), that is trying to load images from my local Docker repo.</p>
<p>When I do a "docker images", I get:</p>
<pre><code>cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
</code></pre>
<p>Given I know the above images are there, I run some helm commands which uses these images, but I get the below error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
</code></pre>
<p>Anyone know how I can fix this? Seems the biggest problem is <code>Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution</code></p>
| Mike K. | <p>Since minikube is being used, you can refer to their documentation.
It is recommended that if an <code>imagePullPolicy</code> is being used, it needs to be set to <code>Never</code>. If set to <code>Always</code>, it will try to reach out and pull from the network.</p>
<p>From docs: <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a>
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."</p>
| arctic |
<p>I am trying to access a secret on GCP Secrets and I get the following error :</p>
<pre><code>in get_total_results "api_key": get_credentials("somekey").get("somekey within key"), File
"/helper.py", line 153, in get_credentials response = client.access_secret_version(request={"name": resource_name})
File "/usr/local/lib/python3.8/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/client.py",
line 1136, in access_secret_version response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 285, in retry_wrapped_func return retry_target( File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py",
line 188, in retry_target return target() File "/usr/local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py",
line 69, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) File "<string>",
line 3, in raise_from google.api_core.exceptions.PermissionDenied:
403 Request had insufficient authentication scopes.
</code></pre>
<p>The code is fairly simple:</p>
<pre><code>def get_credentials(secret_id):
project_id = os.environ.get("PROJECT_ID")
resource_name = f"projects/{project_id}/secrets/{secret_id}/versions/1"
client = secretmanager.SecretManagerServiceClient()
response = client.access_secret_version(request={"name": resource_name})
secret_string = response.payload.data.decode("UTF-8")
secret_dict = json.loads(secret_string)
return secret_dict
</code></pre>
<p>So, what I have is a cloud function, which is deployed using Triggers, and uses a service account which has the Owner role.</p>
<p>The cloud function triggers a Kubernetes Job and creates a container, which downloads a repo inside the container and executes it.</p>
<p>Dockerfile is:</p>
<pre><code>FROM gcr.io/project/repo:latest
FROM python:3.8-slim-buster
COPY . /some_dir
WORKDIR /some_dir
COPY --from=0 ./repo /a_repo
RUN pip install -r requirements.txt & pip install -r a_repo/requirements.txt
ENTRYPOINT ["python3" , "main.py"]
</code></pre>
| dotslash227 | <p>The GCE instance might not have the correct authentication scope.</p>
<p>From: <a href="https://developers.google.com/identity/protocols/oauth2/scopes#secretmanager" rel="noreferrer">https://developers.google.com/identity/protocols/oauth2/scopes#secretmanager</a></p>
<p><code>https://www.googleapis.com/auth/cloud-platform</code> is the required scope.</p>
<p>When creating the GCE instance you need to select the option that gives the instance the correct scope to call out to cloud APIs: <a href="https://i.stack.imgur.com/CQFjz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CQFjz.png" alt="from the cloud console" /></a></p>
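<p>As a rough sketch (the cluster, pool and instance names below are placeholders; scopes can only be set when the node pool or instance is created):</p>
<pre><code># GKE: create a node pool whose nodes carry the cloud-platform scope
gcloud container node-pools create scoped-pool \
  --cluster=my-cluster \
  --scopes=https://www.googleapis.com/auth/cloud-platform

# Plain GCE instance: set the scope when creating the VM
gcloud compute instances create my-instance \
  --scopes=https://www.googleapis.com/auth/cloud-platform
</code></pre>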
| Sandro B |
<p>I want to run <a href="https://github.com/dptij/example-1" rel="nofollow noreferrer">this Spring Boot application</a> inside Minikube so that I can reach port <code>8080</code> in my browser (once the application is running).</p>
<p>To do so, I do following steps.</p>
<ol>
<li>Start Docker.</li>
<li>Run <code>mvn spring-boot:build-image</code>.</li>
<li>Start Minikube using <code>minikube start</code>.</li>
<li>Create a Minikube deployment using <code>kubectl create deployment example-1-engine-1 --image=example-1-engine-1</code>.</li>
<li>Expose port 8080 using <code>kubectl expose deployment example-1-engine-1 --type=NodePort --port=8080</code> and <code>kubectl port-forward service/example-1-engine-1 8080:8080</code>.</li>
<li>Start the application in Minikube using <code>minikube service example-1-engine-1</code>.</li>
</ol>
<p>Following output appears:</p>
<p><a href="https://i.stack.imgur.com/FeKif.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FeKif.png" alt="Output of Minikube in the command line" /></a></p>
<p>However, the browser cannot open the web application on port 8080 (browser cannot connect to the application at <a href="http://127.0.0.1:50947/" rel="nofollow noreferrer">http://127.0.0.1:50947/</a>).</p>
<p><strong>What am I doing wrong and how can I make sure that the application in question runs inside Minikube and I can access the web application at port 8080?</strong></p>
<p><strong>Update 1:</strong> Here is the output of <code>kubectl get pods</code>:</p>
<pre><code>C:\Users\XXXXXXX>kubectl get pods
NAME READY STATUS RESTARTS AGE
example-1-engine-1-d9fc48785-zdvkf 0/1 ImagePullBackOff 0 2d18h
hello-minikube-6ddfcc9757-wj6c2 1/1 Running 1 5d
</code></pre>
<p><strong>Update 2:</strong> I tried to enter <code>eval $(minikube -p minikube docker-env)</code>. For this purpose, I first opened the shell of Minikube container in Docker.</p>
<p><a href="https://i.stack.imgur.com/HUT30.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUT30.png" alt="Screenshot of the button for opening the shell in Minikube" /></a></p>
<p>In the window that appeared thereafter, I entered <code>eval $(minikube -p minikube docker-env)</code> and got the response <code>minikube: not found</code>.</p>
<p><a href="https://i.stack.imgur.com/ZZWda.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZZWda.png" alt="Screenshot of the error message "minikube: not found"" /></a></p>
| Glory to Russia | <p>In my opinion, there are two possible issues with your case.</p>
<h3>1. Using a locally built Docker image</h3>
<p>You want to run a locally built Docker image in Kubernetes, but it doesn't work out-of-the-box.
Generally, you have two options:</p>
<ol>
<li><p>Use <code>docker image push</code> command to share your image to the <a href="https://hub.docker.com/" rel="nofollow noreferrer">Docker Hub</a> registry or to a self-hosted one as described in the <a href="https://docs.docker.com/engine/reference/commandline/push/" rel="nofollow noreferrer">Docker documentation</a>.</p>
</li>
<li><p>Use <code>eval $(minikube docker-env)</code> command to point your terminal to use the docker daemon inside minikube as described in the <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="nofollow noreferrer">minikube documentation</a>.</p>
</li>
</ol>
<h3>2. Using a Docker image with a specific tag</h3>
<p>As I can see, your image has specific tag <code>1.0-SNAPSHOT</code>:</p>
<pre><code>$ docker images
REPOSITORY TAG
example-1-engine-1 1.0-SNAPSHOT
</code></pre>
<p>You need to specify this tag in your <code>kubectl create deployment</code> command:</p>
<pre><code>$ kubectl create deployment example-1-engine-1 --image=example-1-engine-1:1.0-SNAPSHOT
</code></pre>
<hr />
<p>I've deployed your <a href="https://github.com/dptij/example-1" rel="nofollow noreferrer">Spring Boot application</a>, to show that it works as expected for me.</p>
<p>First, to point my terminal to use the docker daemon inside minikube I ran:</p>
<pre><code>$ eval $(minikube -p minikube docker-env)
</code></pre>
<p>Then I created Docker image and deployed <code>example-1-engine-1</code> app:</p>
<pre><code>$ mvn spring-boot:build-image
...
[INFO] Successfully built image 'docker.io/library/example-1-engine-1:1.0-SNAPSHOT'
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
...
$ kubectl create deployment example-1-engine-1 --image=example-1-engine-1:1.0-SNAPSHOT
deployment.apps/example-1-engine-1 created
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
example-1-engine-1-c98f8c569-xq2cv 1/1 Running 0 13s
</code></pre>
<p>Finally, I exposed the <code>example-1-engine-1</code> <code>Deployment</code> as in your example:</p>
<pre><code>$ kubectl expose deployment example-1-engine-1 --type=NodePort --port=8080
service/example-1-engine-1 exposed
$ kubectl port-forward service/example-1-engine-1 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
$ minikube service example-1-engine-1
|-----------|--------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------------|-------------|---------------------------|
| default | example-1-engine-1 | 8080 | http://192.168.49.2:32766 |
|-----------|--------------------|-------------|---------------------------|
Opening service default/example-1-engine-1 in default browser...
</code></pre>
<p>In the open browser tab, I saw:</p>
<p><a href="https://i.stack.imgur.com/5lcIF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5lcIF.png" alt="enter image description here" /></a></p>
<p>Additionally, I saw the same page at <code>http://127.0.0.1:8080</code>.</p>
<hr />
<p>Unfortunately, I don't have a Windows machine (as in your example) and used Linux instead, so I wasn't able to reproduce your issue exactly.</p>
| matt_j |
<p>I have a K3s setup with calico pods [<code>calico-node-</code> & <code>calico-kube-controllers-</code>] running. On uninstalling K3s, calico pods get deleted but I see that <code>calicoctl</code> and <code>iptables -S</code> commands still running and shows data.</p>
<p>I want to delete calico (including <em>calicoctl</em> and <em>Iptables</em> created by calico) completely. Which commands will help me to do so ?</p>
<p><strong>K3s uninstalltion command:</strong> <code>/usr/local/bin/k3s-uninstall.sh</code> deletes all k3s pods including calico, but <code>calicoctl</code> and <code>iptables -S</code> still works.</p>
<p><strong>PS:</strong> I have already tried a few things -</p>
<ol>
<li>Command <code>kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/calico.yaml</code> deletes the <code>calico-node-</code> but <em><code>calico-kube-controller</code></em> , <em><code>calicoctl</code></em> and <em><code>iptables -S</code></em> are still present</li>
<li><code>Kubectl delete</code> commands in <a href="https://stackoverflow.com/questions/53610641/how-to-delete-remove-calico-cni-from-my-kubernetes-cluster/53610810#53610810">this que</a> also not working for me, after executing these two commands still <em><code>calicoctl</code></em> and <em><code>iptables -S</code></em> are present</li>
</ol>
| solveit | <p><strong>Deleting calico-Iptables:</strong></p>
<p>Use the <a href="https://github.com/projectcalico/calico/blob/master/hack/remove-calico-policy/remove-calico-policy.sh" rel="nofollow noreferrer">remove-calico-policy.sh</a> script and add the lines below at the end of the script:</p>
<pre><code>echo "Flush remaining calico iptables"
iptables-save | grep -i cali | iptables -F
echo "Delete remaining calico iptables"
iptables-save | grep -i cali | iptables -X
</code></pre>
<p>This will remove the Calico iptables chains, which you can verify with <code>iptables -S</code>.</p>
<p><strong>Note:</strong> Run this script only after uninstalling K3S.</p>
<p><strong>Deleting calicoctl:</strong></p>
<p>Simply run the <code>sudo rm $(which calicoctl)</code> command; it will find and delete the <code>calicoctl</code> binary.</p>
| gaurav sinha |
<p>I have created the EKS cluster.</p>
<p>Then follow the document (<a href="https://eksctl.io/usage/eksctl-karpenter/" rel="nofollow noreferrer">https://eksctl.io/usage/eksctl-karpenter/</a>) to add <code>karpenter support</code>,</p>
<pre><code> metadata:
name: eks-dev
region: ap-southeast-2
version: "1.22"
+ tags:
+ karpenter.sh/discovery: eks-dev
+iam:
+ withOIDC: true # required
+karpenter:
+ version: '0.9.0'
managedNodeGroups:
- name: spot
</code></pre>
<p>but when I upgrade it, nothing happens.</p>
<pre><code>$ eksctl upgrade cluster -f eks-dev.yaml --approve
2022-06-07 21:08:25 [!] NOTE: cluster VPC (subnets, routing & NAT Gateway) configuration changes are not yet implemented
2022-06-07 21:08:25 [ℹ] no cluster version update required
2022-06-07 21:08:26 [ℹ] re-building cluster stack "eksctl-eks-dev-cluster"
2022-06-07 21:08:26 [✔] all resources in cluster stack "eksctl-eks-dev-cluster" are up-to-date
2022-06-07 21:08:26 [ℹ] checking security group configuration for all nodegroups
2022-06-07 21:08:26 [ℹ] all nodegroups have up-to-date cloudformation templates
$
</code></pre>
<p>The note is about ignoring changes to the VPC, but the Karpenter change is not related to the VPC.</p>
<p>So how can I fix this issue?</p>
| Bill | <p>Support for <code>karpenter</code> in eksctl only applies to a <strong>new</strong> cluster; it has no effect on an existing cluster. You can manually install Karpenter on an existing cluster by following <a href="https://karpenter.sh/v0.10.1/getting-started/getting-started-with-eksctl/#create-the-karpenternode-iam-role" rel="nofollow noreferrer">this guide</a>.</p>
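<p>As a rough sketch only — the exact chart values come from the linked guide and differ between Karpenter versions; the endpoint, account ID, IAM role and instance profile below are placeholders you must replace with the ones created for your cluster:</p>
<pre><code># the charts.karpenter.sh repo hosts the pre-OCI chart versions such as 0.10.x
helm repo add karpenter https://charts.karpenter.sh
helm repo update
helm upgrade --install karpenter karpenter/karpenter \
  --namespace karpenter --create-namespace \
  --version 0.10.1 \
  --set clusterName=eks-dev \
  --set clusterEndpoint=https://&lt;your-cluster-endpoint&gt; \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::&lt;account-id&gt;:role/KarpenterControllerRole-eks-dev \
  --set aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-eks-dev
</code></pre>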
| gohm'c |
<p>I am working my way through a kubernetes tutorial using GKE, but it was written with Azure in mind - tho it has been working ok so far.</p>
<p>The first part where it has not worked has been with exercises regarding coreDNS - which I understand does not exist on GKE - it's kubedns only?</p>
<p>Is this why I can't get a pod endpoint with:</p>
<pre><code>export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
</code></pre>
<p>and then curl:</p>
<pre><code>curl http://$PODIP:8080
</code></pre>
<p>My deployment is definitely on the right port:</p>
<pre><code>ports:
- containerPort: 8080
</code></pre>
<p>And, in fact, the deployment for the tut is from a google sample.</p>
<p>Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?</p>
<p>Deployment yaml is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-customdns
spec:
replicas: 3
selector:
matchLabels:
app: hello-world-customdns
template:
metadata:
labels:
app: hello-world-customdns
spec:
containers:
- name: hello-world
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
dnsPolicy: "None"
dnsConfig:
nameservers:
- 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
name: hello-world-customdns
spec:
selector:
app: hello-world-customdns
ports:
- port: 80
protocol: TCP
targetPort: 8080
</code></pre>
| Davtho1983 | <p>To give a deeper insight into what Gari commented: when exposing a service outside your cluster, the service must be configured as <strong>NodePort</strong> or <strong>LoadBalancer</strong>, since <strong>ClusterIP</strong> only exposes the Service on a cluster-internal IP, making the service reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud and is not part of the cluster, which is why you're not getting any response. To change this, update your yaml file as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-customdns
spec:
replicas: 3
selector:
matchLabels:
app: hello-world-customdns
template:
metadata:
labels:
app: hello-world-customdns
spec:
containers:
- name: hello-world
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
dnsPolicy: "None"
dnsConfig:
nameservers:
- 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
name: hello-world-customdns
spec:
selector:
app: hello-world-customdns
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8080
</code></pre>
<p>After redeploying your service, you can run the command <code>kubectl get all -o wide</code> in <strong>cloud shell</strong> to validate that a NodePort type service has been created with a node port and target port.</p>
<p>To test your deployment, send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:</p>
<p><code>curl <node_IP_address>:<Node_port></code></p>
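<p>For example (the service name comes from the manifest above; the exact NodePort value will be whatever Kubernetes assigned):</p>
<pre><code># external IPs of the nodes (EXTERNAL-IP column)
kubectl get nodes -o wide

# assigned NodePort, shown as e.g. 80:3XXXX/TCP
kubectl get svc hello-world-customdns

curl <node_EXTERNAL_IP>:<assigned_NodePort>
</code></pre>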
| Cesar |
<p>Until now I've been using Docker for a hand-made hosting solution on single VPCs, with fail2ban installed on the host and watching the Docker logs from the Nginx containers (each server can host multiple websites, served through an Nginx proxy).</p>
<p>I wonder how it would be possible to achieve same feature with Kubernetes, especially blocking POST requests to /wp-admin access after X attempts?</p>
<p>I thought about building a custom Docker image for Nginx proxy (Ingress in K8s), including Fail2ban; but maybe there's a simpler solution: Network Policies ?</p>
| Bazalt | <p>That's an old question probably resolved by the author, but for other community members I decided to provide an answer with a few clarifications.</p>
<p>I have tried to find a <code>fail2ban</code> solution that can help with this case. Unfortunately, I did not find anything suitable and easy to use at the same time.<br />
It may be reasonable to create a <a href="https://github.com/fail2ban/fail2ban/issues" rel="nofollow noreferrer">GitHub issue</a> for <code>fail2ban</code> integration with Kubernetes.</p>
<p>Below are some other solutions that may help you:</p>
<h3><a href="https://github.com/SpiderLabs/ModSecurity" rel="nofollow noreferrer">ModSecurity</a></h3>
<p>Using <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a> to expose HTTP and HTTPS routes from outside the cluster to services within the cluster may be a good starting point for you.</p>
<p>As we can see in the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress documentation</a>:</p>
<blockquote>
<p>You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect. You may need to deploy an Ingress controller such as ingress-nginx.</p>
</blockquote>
<p>In the NGINX Ingress Controller you can use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/#modsecurity-web-application-firewall" rel="nofollow noreferrer">ModSecurity</a> as a third party addons:</p>
<blockquote>
<p>ModSecurity is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note this will enable ModSecurity for all paths, and each path must be disabled manually.</p>
</blockquote>
<p>You can enable the <a href="https://github.com/SpiderLabs/ModSecurity" rel="nofollow noreferrer">OWASP Core Rule Set</a> by setting the following annotation at the ingress level (more information can be found in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#modsecurity" rel="nofollow noreferrer">NGINX ModSecurity configuration</a> documentation):</p>
<pre><code>nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
</code></pre>
<p>It seems possible to use <code>ModSecurity</code> as a <strong>Brute-Force Authentication Protection</strong> as described in this article:
<a href="https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/defending-wordpress-logins-from-brute-force-attacks/" rel="nofollow noreferrer">Defending WordPress Logins from Brute Force Attacks</a>.</p>
<p>Additionally, it is worth mentioning that NGINX Ingress Controller has many <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="nofollow noreferrer">annotations</a> that can be used to mitigate <strong>DDoS Attacks</strong> e.g.:</p>
<blockquote>
<p>nginx.ingress.kubernetes.io/limit-whitelist: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.</p>
</blockquote>
<blockquote>
<p>nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</p>
</blockquote>
<blockquote>
<p>nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.</p>
</blockquote>
<h3><a href="https://wordpress.org/plugins/" rel="nofollow noreferrer">WordPress Plugins</a></h3>
<p>As you are using WordPress, you can use many WordPress Plugins.
For example, the <a href="https://wordpress.org/plugins/web-application-firewall/" rel="nofollow noreferrer">Web Application Firewall</a> plugin offers a <code>Real Time IP Blocking</code> feature.</p>
<h3><a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/" rel="nofollow noreferrer">Web Application Firewall (WAF)</a></h3>
<p>Whether you use an onprem or cloud environment, you can use a specialized firewall (WAF) and DDoS mitigation service such as <a href="https://cloud.google.com/armor/" rel="nofollow noreferrer">Google Cloud Armor</a> (see <a href="https://cloud.google.com/blog/products/identity-security/new-waf-capabilities-in-cloud-armor" rel="nofollow noreferrer">Cloud Armor for on-prem and cloud workloads</a>).</p>
| matt_j |
<p>I have a private project on Gitlab with CI/CD set up to push/pull docker images from Google Container Registry and to deploy my software to Kubernetes Engine in GCP. </p>
<p>Is there a way to make my project public without worrying about the secrets used to connect to GCP getting leaked? In particular, I'm worried that when my repository is public anyone would be able to add a line like <code>echo $GCP_REPOSITORY_SECRET</code> somewhere in the <code>.gitlab-ci.yml</code> file, push their branch and view the output of the CI to "discover" my secret. Does Gitlab have a mechanism to prevent this? More fundamentally, are there best practices to keep deployment secrets secret for public repositories?</p>
| Paymahn Moghadasian | <p>Masked variables are ridiculously easy to unmask...</p>
<pre class="lang-sh prettyprint-override"><code>echo ${MASKED_VARIABLE::1} ${MASKED_VARIABLE:1} // mind the gap \!
</code></pre>
<p>You may want to <em>PROTECT</em> them instead;
AND, make sure that only truly trusted devs can push to your protected branches.</p>
| notGitLab |
<p>I am working on an angular application and deployed it in kubernetes. I can access my application through Nginx Ingress.</p>
<p>I am using angular router to enable navigation through different components in my app.</p>
<p><em>Using the deployed application i tried to navigate through different components, when I click refresh on the browser or directly access a specific url path, I get 404 Not Found Page.</em></p>
<p>For example, if one accesses URL <code>mycompany.domain.com</code>, it shows the home component. In my angular router I have a <code>/user</code> path that points to user component.</p>
<p>Upon navigating to user menu, my new URL will now be <code>mycompany.domain.com/user</code> - and it is all working as expected.
However if I refresh the current page, it will become 404 Not Found Page, which is the problem.</p>
<p><strong>My few thoughts:</strong></p>
<ol>
<li>The router is part of the SPA, and of course will be loaded once the SPA is loaded.</li>
<li>The url path <code>/user</code> is only known by the router in the SPA - and so when we try to access the mycompany.domain.com/user directly, the server does not find any resource matching to it.</li>
<li>The only one who can understand the <code>/user</code> url path is my SPA - which is not loaded yet because the server already decided that the resource is not found.</li>
</ol>
<p>So I concluded (but still to try) the problem can occur anywhere I deploy my SPA regardless my ingress or server configuration.</p>
<p><strong>My solution</strong> is to use the Angular router <code>useHash</code> option. It means that my navigation path comes after a # and is treated as a URL fragment, like this: mycompany.domain.com/#/user. In this case, my server will not try to interpret the fragment, as it is meant to be understood by the page. I was inspired to do so by the Vue.js router.</p>
<p><strong>My questions are:</strong></p>
<ol>
<li>Is my understanding (and conclusion) correct?</li>
<li>Is there any other solution? Because Angular by default doesn't use the hash and I am sure that there is a reason for that because it is not making sense if its doesn't work when deployed?</li>
<li>Can URL Rewriting help me? I have tried to look for it myself the usage is not matching with my conclusions.</li>
</ol>
<p>I am not an SPA expert; I am just starting out and would appreciate it if someone could correct me and answer.</p>
| Azel | <p><strong>Save this code as web.config, then paste the web.config file into the dist folder</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.web>
<compilation targetFramework="4.0" />
<customErrors mode="On" redirectMode="ResponseRewrite">
<error statusCode="404" redirect="/index.html" />
</customErrors>
</system.web>
<system.webServer>
<httpErrors errorMode="Custom">
<remove statusCode="404"/>
<error statusCode="404" path="/index.html" responseMode="ExecuteURL"/>
</httpErrors>
</system.webServer>
</configuration>
</code></pre>
| Uchchas |
<p>I would like to know how does the AKS cluster autoscaler choses in which node pool to add a node in a multiple node pool environment</p>
<p>For instance, if I have a node pool tainted for specific applications, will the autoscaler automatically detect the taint and only scale that node pool up if there are pending pods which can be scheduled on its nodes? Or will it scale a random node pool in the cluster?</p>
<p>There is nothing about it on <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler</a>.</p>
| Dzack | <p>Below are my test results:</p>
<p><strong>Scenario1:</strong></p>
<p>If there are multiple node pools and all of them have taints applied, then the cluster autoscaler will scale only the particular node pool whose taints match the tolerations of the pending pods, i.e. it will scale only the node pool that matches the corresponding taints/tolerations.</p>
<p><strong>Scenario2:</strong></p>
<p>If you have 3 node pools and a taint is applied to only one of them, then once that node pool is full the pending pods can go to the other node pools (on which no taints were applied), and there is a high possibility that the other node pools will be auto-scaled as well!</p>
<p><strong>Please Note:</strong> Taints & tolerations alone will not guarantee that pods stick to the corresponding node pools. But if you apply taints/tolerations along with node affinity, that will make sure the pods are deployed only on the corresponding node pools (see the sketch below)!</p>
<p>All those conclusions are based upon the tests which I did locally in my AKS cluster!</p>
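<p>A minimal sketch of combining the two, assuming a node pool whose nodes are labelled <code>agentpool=clients</code> and tainted with <code>dedicated=clients:NoSchedule</code> — adjust the keys and values to your own pools:</p>
<pre><code>spec:
  # tolerate the taint applied to the dedicated node pool
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "clients"
    effect: "NoSchedule"
  # and require scheduling onto that pool's nodes
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - clients
</code></pre>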
| Shiva Patpi |
<p>Im looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command?</p>
<p>Looking to exclude certain namespaces as well</p>
<p>kubectl get pods gets me a list of every pod.</p>
| 404Everything | <p>Please use the command below:</p>
<pre><code>kubectl get pods --all-namespaces -o json | jq '.items | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})'
</code></pre>
<p>Output format:</p>
<pre><code>[
{
"namespace": "keda",
"NoOfPods": 3
},
{
"namespace": "kube-system",
"NoOfPods": 12
},
{
"namespace": "prod",
"NoOfPods": 1
},
{
"namespace": "stage",
"NoOfPods": 1
}
]
</code></pre>
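<p>To also exclude certain namespaces, as asked in the question, a variation like the following should work (it needs jq 1.5+ for <code>IN</code>; the excluded namespace names are just examples):</p>
<pre><code>kubectl get pods --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | IN("kube-system", "keda") | not)] | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})'
</code></pre>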
| Shiva Patpi |
<p>Has anyone come across this issue before?</p>
<pre><code>/snap/bin/microk8s
permanently dropping privs did not work: File exists
</code></pre>
<p>I get the same error when trying to run any of the other sub commands like microk8s.enable, microk8s.status, microk8s.kubectl - all same error message.</p>
<p>I tried to:</p>
<ul>
<li>run <code>strace</code> with the command to see if I can figure out what "File exists" - Nothing fruitful came of that.</li>
</ul>
<p>Any pointers will be greatly appreciated.</p>
| Okezie | <p>I had the same error message when executing a docker command after installing Docker via snap.
After logging out and back in, the error was resolved in my case.</p>
| ogavvat |
<p>I noticed a really strange bug with <code>kubectl diff</code> command.</p>
<p>I have a directory <code>k8s-test/</code> and when I run <code>kubectl diff -f k8s-test/</code> output is empty, but when I do <code>kubectl diff -f k8s-test/deployment.yaml</code> I can see the differences for that specific file in the output.</p>
<p>How can I debug this to find out the root cause?</p>
| Ivan Aracki | <p>Try: <code>kubectl diff --recursive -f k8s-test</code></p>
| gohm'c |
<p>I am having a problem where I am trying to restrict a deployment to <strike>work on</strike> <strong>avoid</strong> a specific node pool and nodeAffinity and nodeAntiAffinity don't seem to be working.</p>
<ul>
<li>We are running DOKS (Digital Ocean Managed Kubernetes) v1.19.3</li>
<li>We have two node pools: infra and clients, with nodes on both labelled as such</li>
<li>In this case, we would like to avoid deploying to the nodes labelled "infra"</li>
</ul>
<p>For whatever reason, it seems like no matter what configuration I use, Kubernetes seems to schedule randomly across both node pools.</p>
<p>See configuration below, and the results of scheduling</p>
<p><strong>deployment.yaml snippet</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
namespace: "test"
labels:
app: wordpress
client: "test"
product: hosted-wordpress
version: v1
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: wordpress
client: "test"
template:
metadata:
labels:
app: wordpress
client: "test"
product: hosted-wordpress
version: v1
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: doks.digitalocean.com/node-pool
operator: NotIn
values:
- infra
</code></pre>
<p><strong>node description snippet</strong>
<em>note the label, 'doks.digitalocean.com/node-pool=infra'</em></p>
<pre><code>kubectl describe node infra-3dmga
Name: infra-3dmga
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=s-2vcpu-4gb
beta.kubernetes.io/os=linux
doks.digitalocean.com/node-id=67d84a52-8d08-4b19-87fe-1d837ba46eb6
doks.digitalocean.com/node-pool=infra
doks.digitalocean.com/node-pool-id=2e0f2a1d-fbfa-47e9-9136-c897e51c014a
doks.digitalocean.com/version=1.19.3-do.2
failure-domain.beta.kubernetes.io/region=tor1
kubernetes.io/arch=amd64
kubernetes.io/hostname=infra-3dmga
kubernetes.io/os=linux
node.kubernetes.io/instance-type=s-2vcpu-4gb
region=tor1
topology.kubernetes.io/region=tor1
Annotations: alpha.kubernetes.io/provided-node-ip: 10.137.0.230
csi.volume.kubernetes.io/nodeid: {"dobs.csi.digitalocean.com":"222551559"}
io.cilium.network.ipv4-cilium-host: 10.244.0.139
io.cilium.network.ipv4-health-ip: 10.244.0.209
io.cilium.network.ipv4-pod-cidr: 10.244.0.128/25
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 20 Dec 2020 20:17:20 -0800
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: infra-3dmga
AcquireTime: <unset>
RenewTime: Fri, 12 Feb 2021 08:04:09 -0800
</code></pre>
<p><strong>sometimes results in</strong></p>
<pre><code>kubectl get po -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wordpress-5bfcb6f44b-2j7kv 5/5 Running 0 1h 10.244.0.107 infra-3dmga <none> <none>
</code></pre>
<p><strong>other times results in</strong></p>
<pre><code>kubectl get po -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wordpress-5bfcb6f44b-b42wj 5/5 Running 0 5m 10.244.0.107 clients-3dmem <none> <none>
</code></pre>
<p>I have tried using nodeAntiAffinity to similar effect.</p>
<p>And lastly, I have even tried creating test labels instead of using the built-in labels from Digital Ocean and I get the same affect (Affinity just doesn't seem to be working for me at all).</p>
<p>I am hoping that someone can help me resolve or even point out a silly mistake in my config, because this issue has been driving me nuts trying to solve it (and it also is a useful feature, when it works).</p>
<p>Thank you,</p>
| Joel | <p>Great news!</p>
<p>I have finally resolved this issue.</p>
<p>The problem was "user error" of course.</p>
<p>There was an extra <code>Spec</code> line further down in the config that was very hidden.</p>
<p>Originally, before switching to StatefulSets, we were using Deployments, and I had a pod Spec hostname entry which was overriding the <code>Spec</code> at the top of the file.</p>
<p>Thanks <a href="https://stackoverflow.com/users/11560878/wytrzyma%c5%82y-wiktor">@WytrzymałyWiktor</a> and <a href="https://stackoverflow.com/users/8144188/manjul">@Manjul</a> for the suggestions!</p>
| Joel |
<p>Kubernetes garbage collection, which removes unused images, runs only when disk space is low.</p>
<p>How can I run garbage collection even when disk space is available and the disk is not full?</p>
| Jayashree Madanala | <p><code>... runs only when disk space is low</code></p>
<blockquote>
<p>The kubelet performs garbage collection on unused images every <strong>five minutes and on unused containers every minute</strong>...</p>
</blockquote>
<p>You can push this further by changing the kubelet <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images" rel="nofollow noreferrer">settings</a>. For example, if you set <code>HighThresholdPercent: 60</code>, the clean-up starts when disk usage crosses 60% and the process brings it back down to <code>LowThresholdPercent: 50</code>.</p>
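<p>In the <code>KubeletConfiguration</code> API these thresholds are exposed as <code>imageGCHighThresholdPercent</code>/<code>imageGCLowThresholdPercent</code>. A sketch of the corresponding kubelet config file (restart the kubelet after changing it):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# start image garbage collection when disk usage exceeds 60%
imageGCHighThresholdPercent: 60
# keep deleting unused images until usage drops back to 50%
imageGCLowThresholdPercent: 50
</code></pre>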
| gohm'c |
<p>A bit unclear sentence <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#completion-mode" rel="nofollow noreferrer">in the doco</a>:</p>
<blockquote>
<p>Note that, although rare, more than one Pod could be started for the same index, but only one of them will count towards the completion count.</p>
</blockquote>
<p>So say it happens: there are two pods in an Indexed Job with the same index.
Now, one of them fails (with <code>restartPolicy = "Never"</code>) and the other succeeds. Assume that all other pods succeed. Will the job fail overall? Or does it depend on which of the pods sharing the same index finished first - the succeeded or the failed one? Or is it totally indeterminate?</p>
| Mikha | <p>The first sentence is important:</p>
<blockquote>
<p>The Job is considered complete when there is one successfully completed Pod <strong>for each index</strong>.</p>
</blockquote>
<p>There can be duplicated index but for each index, only <strong>one</strong> (the one that reached Completed first) will be counted for <code>spec.completions</code>.</p>
| gohm'c |
<p>I have to deploy on my Kubernetes cluster two deployments that use the same service to communicate, but the two deployments are located in two different namespaces:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app1
namespace: namespace1
labels:
app: app1
spec:
replicas: 2
selector:
matchLabels:
app: app1
template:
metadata:
labels:
app: app1
spec:
containers:
- name: app1
image: eu.gcr.io/direct-variety-20998876/test1:dev
resources:
requests:
cpu: "100m"
memory: "128Mi"
ports:
- containerPort: 8000
imagePullPolicy: Always
env:
...
</code></pre>
<p>and an identical second one, but in another namespace:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app2
namespace: namespace2
labels:
app: app2
spec:
replicas: 2
selector:
matchLabels:
app: app2
template:
metadata:
labels:
app: app2
spec:
containers:
- name: app2
image: eu.gcr.io/direct-variety-20998876/test1:prod
resources:
requests:
cpu: "100m"
memory: "128Mi"
ports:
- containerPort: 8000
imagePullPolicy: Always
env:
...
</code></pre>
<p>So I have to create a common service for both deployments that spans the two namespaces.
I tried:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: apps-service
namespace: ???
spec:
selector:
app: ???
ports:
- protocol: TCP
port: 8000
targetPort: 8000
type: NodePort
</code></pre>
<p>Until now I have created one service for each app in its specific namespace, but is there a method to create a single service to manage both deployments (and then associate a single ingress with it)?</p>
<p>So many thanks in advance</p>
| Manuel Santi | <p>First, I would like to provide some general explanations.
As we can see in the <a href="https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">Ingress documentation</a>:</p>
<blockquote>
<p>You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.</p>
</blockquote>
<p><strong><a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a></strong> can be deployed in any namespace and is often deployed in a namespace separate from the application namespace.</p>
<p><strong><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">Ingress resource</a></strong> (Ingress rules) should be deployed in the same namespace as the services they point to.</p>
<p>It is possible to have one ingress controller for multiple ingress resources.</p>
<p>Deploying an <code>Ingress</code> resource in the same namespace as the <code>Services</code> it points to is the most common approach (I recommend this approach).
However, there is a way to have <code>Ingress</code> in one namespace and <code>Services</code> in other namespaces using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">externalName</a> Services.</p>
<hr />
<p>I will create an example to illustrate how it may work.</p>
<p>Suppose, I have two <code>Deployments</code> (<code>app1</code>, <code>app2</code>) deployed in two different <code>Namespaces</code> (<code>namespace1</code>, <code>namespace2</code>):</p>
<pre><code>$ cat app1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app1
name: app1
namespace: namespace1
spec:
selector:
matchLabels:
app: app1
template:
metadata:
labels:
app: app1
spec:
containers:
- image: nginx
name: nginx
$ cat app2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app2
name: app2
namespace: namespace2
spec:
selector:
matchLabels:
app: app2
template:
metadata:
labels:
app: app2
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<p>And I exposed these <code>Deployments</code> with <code>ClusterIP</code> <code>Services</code>:</p>
<pre><code>$ cat svc-app1.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: app1
name: app1
namespace: namespace1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: app1
$ cat svc-app2.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: app2
name: app2
namespace: namespace2
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: app2
</code></pre>
<p>We want to have a single <code>Ingress</code> resource in a separate <code>Namespace</code> (<code>default</code>).
First, we need to deploy Services of type ExternalName that map a Service to a DNS name.</p>
<pre><code>$ cat external-app1.yml
kind: Service
apiVersion: v1
metadata:
name: external-app1
spec:
type: ExternalName
externalName: app1.namespace1.svc
$ cat external-app2.yml
kind: Service
apiVersion: v1
metadata:
name: external-app2
spec:
type: ExternalName
externalName: app2.namespace2.svc
</code></pre>
<p>Then we can deploy Ingress resource:</p>
<pre><code>$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: app-ingress
spec:
rules:
- http:
paths:
- path: /app1
backend:
serviceName: external-app1
servicePort: 80
- path: /app2
backend:
serviceName: external-app2
servicePort: 80
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/app-ingress created
</code></pre>
<p>Finally, we can check if it works as expected:</p>
<pre><code>$ curl 34.118.X.207/app1
app1
$ curl 34.118.X.207/app2
app2
</code></pre>
<p><strong>NOTE:</strong> This is a workaround and may work differently with different ingress controllers. It is usually better to have two or more Ingress resources in different namespaces.</p>
| matt_j |
<p>I am using kubernetes 1.13.2 on bare metals (No Provider).</p>
<p>I already have a master and a worker node set up while ago, but now my new worker node cannot join to the cluster and receives "Unauthorized" message when it tries to register</p>
<p>I have renewed my token on my master, and created a new join command. But still getting "Unauthorized" response upon joining</p>
<p>After sending <strong>kubeadm join ...</strong> command, it times out</p>
<pre><code>[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-name" as an annotation
[kubelet-check] Initial timeout of 40s passed.
error uploading crisocket: timed out waiting for the condition
</code></pre>
<p>and here is what I get in <strong>journalctl -u kubelet</strong></p>
<pre><code>Apr 22 20:31:13 node-name kubelet[18567]: I0422 20:31:13.399059 18567 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Apr 22 20:31:13 node-name kubelet[18567]: I0422 20:31:13.404930 18567 kubelet_node_status.go:72] Attempting to register node node-name
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.406863 18567 kubelet_node_status.go:94] Unable to register node "node-name" with API server: Unauthorized
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.407096 18567 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-name.1597fce5edba5ee6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-name", UID:"node-name", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node node-name status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"node-name"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf27bf9d2c75d6e6, ext:897526251, loc:(*time.Location)(0x71d3440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf27bfa05821f203, ext:13556483910, loc:(*time.Location)(0x71d3440)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Unauthorized' (will not retry!)
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.409745 18567 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-name.1597fce5edba8b6c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-name", UID:"node-name", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node node-name status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"node-name"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf27bf9d2c76036c, ext:897537648, loc:(*time.Location)(0x71d3440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf27bfa0582242b8, ext:13556504573, loc:(*time.Location)(0x71d3440)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Unauthorized' (will not retry!)
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.476603 18567 kubelet.go:2266] node "node-name" not found
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.576911 18567 kubelet.go:2266] node "node-name" not found
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.630766 18567 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.631616 18567 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Unauthorized
Apr 22 20:31:13 node-name kubelet[18567]: E0422 20:31:13.632799 18567 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Unauthorized
</code></pre>
| bardia zamanian | <p>The issue was caused by Docker and/or Kubernetes version mismatch between Kubernetes nodes.<br />
The problem was resolved after reinstalling Docker and Kubernetes to the correct versions.</p>
<p>Kubernetes <code>version skew support policy</code> describes the maximum version skew supported between various Kubernetes components. For more information, see the <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/" rel="nofollow noreferrer">version-skew-policy</a> documentation.</p>
<p>The Kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG" rel="nofollow noreferrer">release notes</a> list which versions of Docker are compatible with that version of Kubernetes.
For example in the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md#external-dependencies" rel="nofollow noreferrer">CHANGELOG-1.13.md</a> you can find validated docker versions for Kubernetes 1.13.</p>
| matt_j |
<p>Hi, I have followed <a href="https://www.youtube.com/watch?v=u948CURLDJA" rel="nofollow noreferrer">this video</a> from start to end. Using <code>kubectl describe</code> to show the Service that was created yields:</p>
<pre><code>$ kubectl describe -n ingress-nginx service/ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: LoadBalancer
IP: 10.110.231.177
LoadBalancer Ingress: localhost
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32352/TCP
Endpoints: 10.1.0.12:80,10.1.0.13:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30563/TCP
Endpoints: 10.1.0.12:443,10.1.0.13:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30574
Events: <none>
</code></pre>
<p>Why did not I get a public IP address as explained by the author of the video? Is that why I am not able to access the link <a href="http://marcel.test" rel="nofollow noreferrer">http://marcel.test</a>?</p>
<p>Also while doing the same setup on AWS, the external IP stays at <code>Pending</code> for a LoadBalancer service.</p>
| Shubham Arora | <p>The hosts file I was using was incorrect. As I was using Git Bash, I edited <code>/etc/hosts</code> with vim, but on Windows the hosts file is actually under <code>C:\Windows\System32\drivers\etc</code>, so I updated that hosts file with <code>127.0.0.1 marcel.test</code> and it worked.</p>
| Shubham Arora |
<p>I have an Ansible script that installs a Kubernetes cluster and is supposed to enable the dashboard. One task is giving me issues: </p>
<pre><code>- name: Expose Dashboard UI
shell: kubectl proxy --port=8001 --address={{ hostvars['master'].master_node_ip }} --accept-hosts="^*$" >> dashboard_started.txt
args:
chdir: $HOME
creates: dashboard_started.txt
</code></pre>
<p>The problem is that this works, but the command <code>kubectl proxy</code> is blocking: you can't type in another command until you ctrl+c out of the command, at which point the dashboard is inaccessible. My Ansible script freezes from performing this command. I can successfully connect to the dashboard in my browser while Ansible is frozen. But I need Ansible to perform other tasks after this one as well. I have tried adding an ampersand <code>&</code> at the end of my command: </p>
<p><code>kubectl proxy --port=8001 --address={{ hostvars['master'].master_node_ip }} --accept-hosts="^*$" >> dashboard_started.txt &</code></p>
<p>Or</p>
<p><code>kubectl proxy --port=8001 --address={{ hostvars['master'].master_node_ip }} --accept-hosts="^*$" & >> dashboard_started.txt</code></p>
<p>And while both these commands cause Ansible to execute and pass over my task, I can't reach the dashboard. Using the <code>jobs</code> command on the machine the command is run on shows no background tasks, either for root or the Ansible user. </p>
<p>What am I doing wrong? </p>
<p>EDIT: </p>
<p>To those reading this: <strong>DO NOT DO THIS UNLESS YOU CAN ACCESS THE DASHBOARD FROM LOCALHOST</strong>. If you are running Kubernetes Dashboard in a VM or on some external server, and are trying to access it from another machine (the VM host for example), you will NOT be able to log in. See here: </p>
<p><a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md</a></p>
<blockquote>
<p>NOTE: Dashboard should not be exposed publicly using kubectl proxy
command as it only allows HTTP connection. For domains other than
localhost and 127.0.0.1 it will not be possible to sign in. Nothing
will happen after clicking Sign in button on login page.</p>
</blockquote>
| yesman | <p>You could run the task with the asynchronous option. For example:</p>
<pre><code>- name: Expose Dashboard UI
shell: "(kubectl proxy --port=8001 --address={{ hostvars['master'].master_node_ip }} --accept-hosts="^*$" >> dashboard_started.txt >/dev/null 2>&1 &)"
args:
chdir: $HOME
creates: dashboard_started.txt
async: 10
poll: 0
</code></pre>
<p>When poll is 0, Ansible will start the task and immediately move on to the next one without waiting for a result.</p>
<p>I personally added the subshell parentheses though i suppose that there is no need to use them, async itself does the trick I hope!</p>
<p>Hope it helps!</p>
<p><a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html</a></p>
| José Ángel Morena Simón |
<p>I have a microservice running in kubernetes with deployment. It is serving a Node JS application with PM2.
I recently found that whenever I deploy a new version of the application with helm, I can see PM2 exiting with code [0]:</p>
<pre><code>PM2 log: App [api.bundle:0] exited with code [0] via signal [SIGINT]
</code></pre>
<p>I tried to investigate whether there is an exception in the application, but could not find any error prior to the deployment. That leads me to ask how PM2 is getting restarted. Does Kubernetes send a kill signal to PM2 when a new deployment comes in?</p>
| Timam | <p><code>...whenever I deploy a new version of the application with helm, I can see PM2 exiting with code [0]</code></p>
<p>When you do <code>helm upgrade</code>, the command triggers a rolling update of the deployment, which replaces the existing pod(s) with new ones. During this process, the <code>signal [SIGINT]</code> was sent to inform your PM2 container that it's time to exit.</p>
<p><code>...how PM2 is getting restarted? Does Kubernetes send a kill signal to PM2 when a new deployment comes in?</code></p>
<p>Correct.</p>
| gohm'c |
<p>When I use to build Hypervisor (i.e. VMWare, Hyper-V, etc.) clusters, ALL the hardware and software has to be exactly the same. Otherwise I ran the possibility that the workload could run into a 'conflict' (i.e. the VM won't run because the hardware or OS is different), if there is a failure on one of the nodes.</p>
<p>Suppose you are building a Kubernetes cluster from miscellaneous (i.e. legacy) hardware sitting around the server room (i.e. different vendors [Dell, HPE, etc.], different CPU types [i.e. AMD, Intel, etc.], different BIOS versions, memory sizes, etc.). Do Kubernetes worker nodes have to be exactly the same hardware for the workload to balance correctly across the cluster (i.e. distribute the workload across the nodes)?</p>
<p>I would guess everything from the OS (i.e. distro/kernel, libraries, modules, and services) up would have to be same. I am just trying to find a general answer to a question that I have not seen a good answer to?</p>
| dshield | <p>Generally, it is OK to run heterogeneous hardware in Kubernetes. If all of your boxes are x86 machines, there is nothing to worry about, as your Docker images should run everywhere. For instance, it is common to mix different types of spot instances in the cloud and this works fine.</p>
<p>However, if you are mixing architecture (i.e. arm and x86) or operating systems (i.e. windows and linux) it generally makes sense to add a label indicating that. Those are typical labels in Kubernetes 1.15+:</p>
<pre><code>$ kubectl describe node xxxxx
Name: xxxxx
Roles: node
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=dell_xyz
beta.kubernetes.io/os=linux
[...]
</code></pre>
<p>You can then use those labels in your node selector in a pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: x86-pod
spec:
containers:
- name: x86-test
image: "yourrepo.io/test_repo"
nodeSelector:
beta.kubernetes.io/arch: amd64
</code></pre>
| jankantert |
<p>I want to deploy airflow in Openshift.</p>
<p>I am using this values.yaml: <a href="https://github.com/bitnami/charts/blob/master/bitnami/airflow/values.yaml" rel="nofollow noreferrer">https://github.com/bitnami/charts/blob/master/bitnami/airflow/values.yaml</a></p>
<p>I want to clone my dag files from the Bitbucket repository of my company.</p>
<p>I am modifying the /values.yaml to achieve that:</p>
<pre><code> dags:
## Enable in order to download DAG files from git repositories.
##
enabled: true
repositories:
- repository:
branch: master
name: dags
path: docker-image/dags
</code></pre>
<p>in which part should I insert the info about the secrets in this values.yaml?</p>
| Bruno Justino Praciano | <p>First of all, your <code>values.yaml</code> file should have <code>git</code> at the beginning:</p>
<pre><code>git:
dags:
## Enable in order to download DAG files from git repositories.
##
enabled: true
repositories:
- repository:
branch: master
name: dags
path: docker-image/dags
</code></pre>
<p>I assume you have a private repository and you want <code>airflow</code> to access it.<br />
As you can see in the <a href="https://github.com/bitnami/charts/tree/master/bitnami/airflow#load-dag-files" rel="nofollow noreferrer">Bitnami Apache Airflow Helm chart documentation</a>:</p>
<blockquote>
<p>If you use a private repository from GitHub, a possible option to clone the files is using a Personal Access Token and using it as part of the URL: https://USERNAME:ACCESS_TOKEN@github.com/USERNAME/REPOSITORY</p>
</blockquote>
<p>It's about <code>GitHub</code> but I belive it also works with <code>Bitbucket</code>.</p>
<p>Personal access token can be used in <code>Bitbucket Data Center</code> and <code>Server</code>. You can find out how to create it in this <a href="https://confluence.atlassian.com/bitbucketserver/personal-access-tokens-939515499.html" rel="nofollow noreferrer">Personal access tokens documentation</a>.</p>
<p><strong>NOTE:</strong> If you use <code>Bitbucket Cloud</code> it is not possible to create a personal access token, but you can create an <a href="https://support.atlassian.com/bitbucket-cloud/docs/app-passwords/" rel="nofollow noreferrer">app password</a> instead (see <a href="https://community.atlassian.com/t5/Bitbucket-questions/Personal-Access-Token-for-Bitbucket-Cloud/qaq-p/677663" rel="nofollow noreferrer">Personal Access Token for Bitbucket Cloud?</a>).<br />
I have <code>Bitbucket Cloud</code> and tested this scenario with the app password and it works as expected.</p>
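<p>A sketch of how this could look in the <code>values.yaml</code> for a Bitbucket Cloud repository using an app password (the username, app password and workspace/repository names are placeholders — avoid committing real credentials):</p>
<pre><code>git:
  dags:
    enabled: true
    repositories:
      - repository: https://USERNAME:APP_PASSWORD@bitbucket.org/WORKSPACE/REPOSITORY.git
        branch: master
        name: dags
        path: docker-image/dags
</code></pre>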
| matt_j |
<p>Can Kubernetes deployment manifest file have both -env and -envFrom keys?</p>
<p>I have set a <code>secrets.yaml</code> file to set the environment variables and also have environment variables that are hard coded.</p>
<p>Can I have both of them set using both <code>-env</code> and <code>-envFrom</code> in the YAML files?</p>
| Ajinkya16 | <p><code>Can kubernetes deployment manifest file have both -env and -envFrom keys? </code></p>
<p>Yes.</p>
<pre><code>...
envFrom:
- secretRef:
name: <name of your secret>
env:
- name: <variable name>
value: <hardcoded value>
- name: <variable name>
valueFrom:
secretKeyRef:
name: <name of your secret>
key: <If I only want this and not all the keys in the secret>
</code></pre>
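<p>For placement context, here is a minimal sketch of a full container spec combining both forms; the image, secret name and variable names are placeholders, not something taken from your manifests:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.21
        envFrom:                  # imports every key of the secret as an env var
        - secretRef:
            name: my-secret
        env:
        - name: LOG_LEVEL         # hardcoded value
          value: "info"
        - name: DB_PASSWORD       # single key taken from the same secret
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: db-password
</code></pre>
<p>If the same variable name is defined both in an explicit <code>env</code> entry and via <code>envFrom</code>, the explicit <code>env</code> entry takes precedence.</p>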
| gohm'c |
<p>I have a microservice written in Haskell; the compiler is GHC 8.8.3.
I built it with the <code>--profile</code> option and ran it with <code>+RTS -p</code>. It has been running for about 30 minutes; there is a <code><my-service>.prof</code> file but it is empty (literally 0 bytes). Previously I did this on my local machine, stopped the service with CTRL-C, and after the exit it produced a <code><my-service>.prof</code> file which was not empty.</p>
<p>So, I have 2 questions:</p>
<ol>
<li>What is the correct way to collect profiling information when a Haskell microservice runs under Kubernetes (so that I am able to read this .prof file)?</li>
<li>How to pass a runtime parameter to the Haskell runtime telling it where to save this .prof file (or some workaround if no such option exists), for 8.8.3, because I have a feeling that the file may be big and I could hit a disk space problem. Also, I don't know how to flush/read/get this file while the microservice is running. I suppose that if I am able to pass a full path for this .prof file, then I can save it somewhere on a permanent volume, "kill" the service with an <code>INT</code> signal for example, and get this .prof file from the volume.</li>
</ol>
<p>What is the usual/convenient way to get this .prof file when the service runs in Kubernetes?</p>
<p>PS. I saw some relevant options in the documentation for the newest versions, but I am on 8.8.3.</p>
| RandomB | <p>I think the only way to do live profiling with GHC is to use the eventlog. You can insert <code>Debug.Trace.traceEvent</code> into your code at the functions that you want to measure and then compile with <code>-eventlog</code> and run with <code>+RTS -l -ol <output-file-name> -RTS</code>. You can use <a href="https://hackage.haskell.org/package/ghc-events-analyze" rel="nofollow noreferrer"><code>ghc-events-analyze</code></a> to analyze and visualize the produced eventlog.</p>
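<p>A minimal sketch of the build and run commands (the module name, binary name and output path are placeholders, and you would adapt the flags to your cabal/stack setup):</p>
<pre><code># build with eventlog support; -rtsopts lets the binary accept +RTS flags at run time
ghc -O2 -eventlog -rtsopts MyService.hs -o my-service

# run and write the eventlog to a path on a mounted volume inside the pod
./my-service +RTS -l -ol /data/my-service.eventlog -RTS
</code></pre>
<p>Writing the eventlog to a persistent volume means you can copy it off with <code>kubectl cp</code> and feed it to <code>ghc-events-analyze</code> after stopping the service.</p>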
<p>The official eventlog documentation for GHC 8.8.3 is <a href="https://downloads.haskell.org/ghc/8.8.3/docs/html/users_guide/runtime_control.html#tracing" rel="nofollow noreferrer">here</a>.</p>
| Noughtmare |
<p>The bash script I'm trying to run on the K8S cluster node from a proxy server is as below:</p>
<pre><code>#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
for file in $(ls)
do
kubectl -n migration cp $file clear-nginx-deployment-d6f5bc55c-sc92s:/var/www/html
done
</code></pre>
<p>This script does not copy the data that is in the path <code>/home/ec2-user/PVs/clear-nginx-deployment</code> on the master node.</p>
<p>But it works fine when I try the same script manually on the destination cluster.</p>
<p>I am using python's <code>paramiko.SSHClient()</code> for executing the script remotely:</p>
<pre><code>def ssh_connect(ip, user, password, command, port):
try:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ip, username=user, password=password, port=port)
stdin, stdout, stderr = client.exec_command(command)
lines = stdout.readlines()
for line in lines:
print(line)
except Exception as error:
filename = os.path.basename(__file__)
error_handler.print_exception_message(error, filename)
return
</code></pre>
<p>To make sure the above function is working fine, I tried another script:</p>
<pre><code>#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
mkdir kk
</code></pre>
<p>This one runs fine with the same Python function, and creates the directory 'kk' in the desired path.
Could you please suggest the reason behind this, or suggest an alternative way to carry it out?
Thank you in advance.</p>
| kkpareek | <p>The issue is now solved.</p>
<p>Actually, the issue was related to permissions, which I got to know later. So what I did to resolve it was: first, <code>scp</code> the script to the remote machine with:</p>
<p><code>scp script.sh user@ip:/path/on/remote</code></p>
<p>And then run the following command from the local machine to run the script remotely:</p>
<p><code>sshpass -p "passowrd" ssh user@ip "cd /path/on/remote ; sudo su -c './script.sh'"</code></p>
<p>And as I mentioned in question, I am using python for this.</p>
<p>I used the <code>system</code> function in the <code>os</code> module of Python to run the above commands from my local machine to both:</p>
<ol>
<li>scp the script to remote:</li>
</ol>
<pre><code>import os
command = "scp script.sh user@ip:/path/on/remote"
os.system(command)
</code></pre>
<ol start="2">
<li>run the script remotely:</li>
</ol>
<pre><code>import os
command = "sshpass -p \"passowrd\" ssh user@ip \"cd /path/on/remote ; sudo su -c './script.sh'\""
os.system(command)
</code></pre>
| kkpareek |
<p>I have a deployment with the volumes and limits presented below.
The problem is that Kubernetes rejects creating the pod with the following error:</p>
<pre><code> pods "app-app-96d5dc969-2g6zp" is forbidden:
exceeded quota: general-resourcequota, requested: limits.ephemeral-storage=1280Mi,
used: limits.ephemeral-storage=0, limited: limits.ephemeral-storage=1Gi
</code></pre>
<p>As I've understood it, nodes have a 1Gi limit for ephemeral-storage, but what is 1280Mi?
Is it correct that Kubernetes allocates some amount of storage for each volume?</p>
<pre><code>...
spec:
containers:
resources:
limits:
cpu: 1
memory: 3Gi
ephemeral-storage: "1Gi"
requests:
cpu: 1
memory: 3Gi
ephemeral-storage: "1Gi"
volumeMounts:
- name: app-logs
mountPath: /app/log
- name: app-tmp
mountPath: /tmp
- name: app-backups
mountPath: /app/backups
- name: app-logback
mountPath: /app/config/logback.xml
subPath: logback.xml
- name: app-mdc
mountPath: /app/config/mdc.properties
subPath: mdc.properties
volumes:
- name: app-logs
emptyDir: {}
- name: app-tmp
emptyDir: {}
- name: app-backups
emptyDir: {}
- name: app-logback
configMap:
name: "{{ include "app.fullname" . }}-app-logback"
- name: app-mdc
configMap:
name: "{{ include "app.fullname" . }}-app-mdc"
</code></pre>
<p>Resource quotes for namespace:</p>
<pre><code>kubectl describe quota
Name: general-resourcequota
Namespace: app
Resource Used Hard
-------- ---- ----
configmaps 5 15
limits.cpu 0 4
limits.ephemeral-storage 0 1Gi
limits.memory 0 8Gi
pods 0 10
requests.cpu 0 2
requests.memory 0 4Gi
requests.storage 0 50Gi
services 1 20
services.loadbalancers 1 5
services.nodeports 2 5
</code></pre>
| Haster | <p>Your namespace has a quota that caps ephemeral-storage limits at 1Gi:</p>
<p><code>limits.ephemeral-storage 0 1Gi</code></p>
<p>The message says that with your deployment the namespace would exceed the limit, reaching 1.28Gi (1280Mi).</p>
<p>Reduce your limit to 700Mi to stay within the 1Gi quota and your pod will be scheduled accordingly. Note that the quota aggregates resource consumption across the namespace, not per pod.</p>
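<p>As a minimal sketch (using the 700Mi figure as an example), the adjusted <code>resources</code> block in your container spec would look like this:</p>
<pre><code>resources:
  limits:
    cpu: 1
    memory: 3Gi
    ephemeral-storage: "700Mi"
  requests:
    cpu: 1
    memory: 3Gi
    ephemeral-storage: "700Mi"
</code></pre>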
| gohm'c |
<p>I am trying to use 'Kubernetes Ingress with Traefik, CertManager, LetsEncrypt and HAProxy' for certificate management.</p>
<p>What I want to do is use certificates in my application, which is deployed on Kubernetes.</p>
<p>My application contains the following services:</p>
<blockquote>
<p>my-app1 NodePort 10.43.204.206 16686:31149/TCP</p>
</blockquote>
<blockquote>
<p>my-app2 NodePort 10.43.204.208 2746:30972/TCP</p>
</blockquote>
<p>So for my-app1 without certificates I was accessing it as <code>http://{IP}:31149/app1</code>. And with certificates I am now accessing it as <code>https://my-dns.com/app1</code>. For this I am using <a href="https://allanjohn909.medium.com/kubernetes-ingress-traefik-cert-manager-letsencrypt-3cb5ea4ee071" rel="nofollow noreferrer">this</a> link. I created the following Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prod-ingress
namespace: my-ns
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "my-dns.com"
secretName: prod-cert
rules:
- host: "my-dns.com"
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: my-app1
port:
number: 16686
</code></pre>
<p>But for my-app2, without certificates, I was accessing it as <code>https://{IP}:30972/app2</code>. So I am already using HTTPS for my-app2, but I also want to use the certificates for this service.</p>
<p>Any idea how to do this?</p>
| Rupesh Shinde | <p>So the issue was that I was deploying my application with self-signed certificates. Because of this, I was getting an issue while accessing my dashboard.</p>
<p>So now I have just disabled the self-signed certificates in my application, and I am able
to access the dashboard with the domain name <code>https://my-dns.com</code>.</p>
| Rupesh Shinde |
<p>If I have a k8s deployment file for a <code>service</code> with multiple containers like <code>api</code> and <code>worker1</code>, can I make it so that there is a configmap with a variable <code>worker1_enabled</code>, such that if my <code>service</code> is restarted, container <code>worker1</code> only runs if <code>worker1_enabled=true</code> in the configmap?</p>
| Gavin Haynes | <p>The short answer is No.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods" rel="nofollow noreferrer">k8s docs</a>, Pods in a Kubernetes cluster are used in two main ways:</p>
<blockquote>
<ul>
<li>Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.</li>
<li>Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.</li>
</ul>
</blockquote>
<p>Unless your application requires it, it is better to separate the worker and api containers into their own <code>pods</code>. So you may have one <code>deployment</code> for the worker and one for the api.</p>
<p>As for deploying the worker when <code>worker1_enabled=true</code>, that can be done with <a href="https://helm.sh/docs/chart_template_guide/getting_started/" rel="nofollow noreferrer">helm</a>. You have to create a <code>chart</code> such that the worker is deployed only when the value <code>worker1_enabled=true</code> is set, as sketched below.</p>
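<p>As a minimal sketch (the value name matches your <code>worker1_enabled</code> flag, but the chart layout, image and labels are assumptions), the chart could gate the worker Deployment like this:</p>
<pre><code># values.yaml
worker1_enabled: true
</code></pre>
<pre><code># templates/worker1-deployment.yaml
{{- if .Values.worker1_enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker1
  template:
    metadata:
      labels:
        app: worker1
    spec:
      containers:
      - name: worker1
        image: my-registry/worker1:latest   # placeholder image
{{- end }}
</code></pre>
<p>Switching the flag off at install/upgrade time (for example <code>helm upgrade --install my-app ./chart --set worker1_enabled=false</code>) then removes the worker Deployment while leaving the api untouched.</p>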
<p>Last note, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> in kubernetes is an abstract way to expose an application running on a set of Pods as a network service.</p>
| adelmoradian |
<p>Hello all, I am trying to create a <code>StatefulSet</code> which has a PVC with an Azure file share StorageClass.
When I create it, my PVC stays in the Pending state:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 51m persistentvolume-controller Failed to provision volume with StorageClass "azurefile-standard-zrs2": could not get storage key for storage account : Failed to create storage account f38f8ede8e, error: storage.AccountsClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="MaxStorageAccountsCountPerSubscriptionExceeded" Message="Subscription 0c767d4cf39 already contains 251 storage accounts in location westeurope and the maximum allowed is 250."
</code></pre>
<p>This is my manifest file:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-deployment2
namespace: test
spec:
replicas: 1
selector:
matchLabels:
app: backup
serviceName: <SERVICE_NAME>
template:
metadata:
labels:
app: backup
annotations:
backup.velero.io/backup-volumes: nginx-logs
spec:
volumes:
- name: nginx-logs1
persistentVolumeClaim:
claimName: nginx-logs1
containers:
- image: base-debian:latest
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "100Mi"
cpu: "200m"
name: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-logs1
readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nginx-logs1
namespace: test
labels:
app: backup
spec:
accessModes:
- ReadWriteOnce
storageClassName: azurefile-standard-zrs2
resources:
requests:
storage: 1Gi
</code></pre>
| Satyam Pandey | <p>As @<strong>mmiking</strong> pointed out in the comments section, you've reached the max number of storage accounts in a single subscription but <strong>only</strong> in the <code>westeurope</code> location.</p>
<p>You can see in the <a href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas#storage" rel="nofollow noreferrer">Azure resource quotas documentation</a>:</p>
<blockquote>
<p>Azure Storage has a limit of 250 storage accounts per region, per subscription. This limit includes both Standard and Premium storage accounts.</p>
</blockquote>
<p><strong>NOTE:</strong> This limit is per region per subscription. You are able to create 250 storage accounts in one region (e.g. <code>westeurope</code>) and you can still create another 250 storage accounts in a different region (e.g. <code>northeurope</code>) using the same subscription.</p>
<p>You can see your current storage account usage in a specific location using <a href="https://learn.microsoft.com/en-us/cli/azure/storage/account?view=azure-cli-latest#az_storage_account_show_usage-examples" rel="nofollow noreferrer">az storage account show-usage</a> command:</p>
<pre><code>$ az storage account show-usage --location <LOCATION_NAME> --out table --subscription <SUBSCRIPTION_ID>
CurrentValue Limit Unit
-------------- ------- ------
9 250 Count
</code></pre>
| matt_j |
<p>In HA Kubernetes clusters, we configure multiple control planes (master nodes), but how do multiple control planes sync their data? When we create a pod using the kubectl command, the request goes through the cloud load balancer to one of the control planes. I want to understand how the other control planes sync their data with the one that got the new request.</p>
| Vinay Sorout | <p>First of all, please note that the <strong>API Server</strong> is the only component that talks directly with <strong>etcd</strong>.<br />
Every change made on the Kubernetes cluster (e.g. <code>kubectl create</code>) will create an appropriate entry in the <strong>etcd</strong> database, and everything you get from a <code>kubectl get</code> command is stored in <strong>etcd</strong>.</p>
<p>In this <a href="https://medium.com/jorgeacetozi/kubernetes-master-components-etcd-api-server-controller-manager-and-scheduler-3a0179fc8186" rel="nofollow noreferrer">article</a> you can find detailed explanation of communication between <strong>API Server</strong> and <strong>etcd</strong>.</p>
<p>etcd uses the <strong>Raft</strong> protocol for leader election, and that leader handles all client requests which need cluster consensus (requests that do not require consensus can be processed by any cluster member):</p>
<blockquote>
<p>etcd is built on the Raft consensus algorithm to ensure data store consistency across all nodes in a cluster—table stakes for a fault-tolerant distributed system.</p>
</blockquote>
<blockquote>
<p>Raft achieves this consistency via an elected leader node that manages replication for the other nodes in the cluster, called followers. The leader accepts requests from the clients, which it then forwards to follower nodes. Once the leader has ascertained that a majority of follower nodes have stored each new request as a log entry, it applies the entry to its local state machine and returns the result of that execution—a ‘write’—to the client. If followers crash or network packets are lost, the leader retries until all followers have stored all log entries consistently.</p>
</blockquote>
<p>More information about <strong>etcd</strong> and <strong>raft consensus algorithm</strong> can be found in this <a href="https://www.ibm.com/cloud/learn/etcd" rel="nofollow noreferrer">documentation</a>.</p>
| matt_j |
<p>I want to expose a service that is currently running in an EKS cluster. I don't want it to be exposed to the internet, just inside the VPC.</p>
<p>What I'm looking for is to be able to access this service only through AWS API Gateway.</p>
| Alkaid | <p>This scenario can be fulfilled by using a VPC endpoint to engage an NLB that fronts the service in EKS. Here's an <a href="https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/" rel="nofollow noreferrer">example</a> of how to do it.</p>
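<p>A minimal sketch of the EKS side only (the service name, ports and selector are placeholders, and the annotations are the ones used by the in-tree AWS cloud provider) would be an internal NLB in front of the workload:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"       # provision an NLB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"  # keep it inside the VPC
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>API Gateway can then reach this internal NLB privately, which is the integration the linked example walks through.</p>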
| gohm'c |
<p>In my Kubernetes cluster I have some challenges with the Ingress. As an example, I installed Node-RED and the NGINX ingress via Helm. Node-RED is available via</p>
<ul>
<li>FQDN: <a href="http://my.server.name:31827" rel="nofollow noreferrer">http://my.server.name:31827</a></li>
<li>IP: <a href="http://10.x.x.x:31827" rel="nofollow noreferrer">http://10.x.x.x:31827</a></li>
</ul>
<p>Now I created an Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nr-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: tls-secret1
hosts:
- my.server.name
rules:
- host: my.server.name
http:
paths:
- path: /nr
backend:
serviceName: my-nodered-node-red
servicePort: 1880
</code></pre>
<p>When I do a GET on <a href="http://my.server.name/nr" rel="nofollow noreferrer">http://my.server.name/nr</a> I see only parts working; see the screenshot:</p>
<p><a href="https://i.stack.imgur.com/3gVFP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3gVFP.png" alt="enter image description here"></a></p>
<p>It looks to me that I am missing the reverse proxy settings. Normally I would place those settings in a reverse proxy configuration in nginx like this, but this is not possible because I am using the NGINX ingress controller.</p>
<pre><code>location / {
proxy_pass http://localhost:1880/;
}
</code></pre>
<p>But I do not know how to do that in Kubernetes. What am I missing? The Kubernetes version is 1.14.1.</p>
| Mchoeti | <p>Maybe too late for an answer, but I had the same problem and solved it:</p>
<p>1. Changed <code>httpRoot: '/nr'</code> in the Node-RED <code>settings.js</code> configuration file (in Kubernetes, probably defined in a PV), as @vasili-angapov mentions.</p>
<p>2. Set the Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nodered-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /nr
pathType: Prefix
backend:
service:
name: nodered
port:
number: 1880
</code></pre>
| tico errecart |
<p>I am new to Kubernetes and my question could be naive.</p>
<p>I am working for a client and I installed JupyterHub on Google Kubernetes Engine. To be able to access it externally from the client side, I used an Ingress with TLS and the hostname <code>https://my-jupyterhub/jhub</code>, and everything is working fine.</p>
<p>Now my client doesn't want to use a DNS name/hostname for some reasons and wants to access JupyterHub using the IP address only.</p>
<p>Is it possible to have TLS without a hostname in the Ingress? If not, how can I achieve this?</p>
| tank | <p>I think that you can achieve this by connecting through the external IP of the cluster, serving JupyterHub on the port you want, and opening that port.</p>
<p>You can of course set a static IP on the cluster so it stays the same; you can find information in that regard in the following links [1] [2].</p>
<p>[1] <a href="https://cloud.google.com/compute/docs/ip-addresses#reservedaddress" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/ip-addresses#reservedaddress</a></p>
<p>[2] <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip</a></p>
| Israel Z |
<p>I am in a situation where the number of nodes is reasonably large. Nodes are consumable and can be added and removed (if idle) at any time. All nodes would have the labels</p>
<blockquote>
<p>label.category=A or
label.category=B</p>
</blockquote>
<p>I want to schedule my pods onto nodes of the same category. I really do not wish to hardcode which one. All I want is for a group of pods to end up on nodes of the same category.</p>
| bioffe | <p>You may want to use Pod Topology Spread Constraints, as in the example below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
topologySpreadConstraints:
- maxSkew: <integer>
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
</code></pre>
| Sthanunathan Pattabiraman |
<p>I have an issue with Kubernetes YAML.</p>
<p>I want to execute <code>touch</code> to make a file and copy it to a folder, but the container stops.</p>
<pre><code>containers:
- name: 1st
image: alpine
volumeMounts:
- name: sharedvolume
mountPath: /usr/share/sharedvolume
command: ["/bin/sh"]
args: ["-c", "cd ~/ && touch file.txt && cp file.txt /usr/share/sharedvolume"]
</code></pre>
<p>I googled through many Stack Overflow answers and tried them, but nothing worked. I also tried to combine the args into the commands, but it still didn't work.</p>
| arsaphone | <p>First of all, please keep in mind that a container exits when its main process exits.<br />
After a container in a Pod exits, the <code>kubelet</code> restarts it or not, depending on the <code>spec.restartPolicy</code> field of the Pod (see: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">Container restart policy documentation</a>).</p>
<hr />
<p>As a workaround, you can try adding the <code>sleep infinity</code> command to the end of the <code>args</code> field (it should work in most cases).</p>
<p>Take a look at the example below:<br />
<strong>NOTE:</strong> I just want you to pay attention to the <code>sleep infinity</code> at the end of the <code>args</code> field.</p>
<pre><code>$ cat alpine.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: alpine
name: alpine
spec:
containers:
- image: alpine
name: alpine
command: ["/bin/sh"]
args: ["-c", "cd ~/ && touch file.txt && cp file.txt /usr/share/sharedvolume && sleep infinity"]
volumeMounts:
- name: sharedvolume
mountPath: /usr/share/sharedvolume
...
$ kubectl apply -f alpine.yml
pod/alpine created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
alpine 1/1 Running 0 12s
$ kubectl exec -it alpine -- sh
/ # df -h | grep "/usr/share/sharedvolume"
/dev/sdb 7.8G 36.0M 7.8G 0% /usr/share/sharedvolume
/ # ls /usr/share/sharedvolume
file.txt lost+found
</code></pre>
<p>As you can see the <code>alpine</code> Pod is running, so it works as expected.</p>
<hr />
<p>As I wrote before, adding the <code>sleep infinity</code> is a kind of workaround. It's worth considering what the main process of this container should be.<br />
Useful information on how to use the <code>alpine</code> image can be found in the <a href="https://hub.docker.com/_/alpine" rel="nofollow noreferrer">How to use this image reference</a>.</p>
| matt_j |
<p>I need to install this NGINX Ingress Controller Git release <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0</a> in my Kubernetes cluster. Can anyone share the steps on how to do it?</p>
<p>I did some research, but could not find any useful article.</p>
<p><strong>Additional information</strong></p>
<ul>
<li>I'm managing the cluster using helm. So is there a way to set it up using helm?</li>
<li>Is any other approach recommended?</li>
</ul>
| dunu008 | <p>You can display all available helm chart versions using:</p>
<pre class="lang-bash prettyprint-override"><code>helm search repo ingress-nginx --versions
</code></pre>
<pre class="lang-bash prettyprint-override"><code>NAME CHART VERSION APP VERSION DESCRIPTION
ingress-nginx/ingress-nginx 4.2.1 1.3.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.2.0 1.3.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.1.4 1.2.1 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.1.3 1.2.1 Ingress controller for Kubernetes using NGINX a...
....
</code></pre>
<p>Then choose the chart version you want (the <code>CHART VERSION</code> column), here 4.2.0:</p>
<pre class="lang-bash prettyprint-override"><code>helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--version 4.2.0
</code></pre>
| Reda E. |